An A.I.’s oddest replies

Have you ever wondered what a conversation would be like with a purely logical artificial intelligence? While you may enjoy more frivolous exchanges with Cleverbot and other chatbots, the majority of their replies are in reality drawn from a database of human-written answers. I designed an A.I. program to formulate its own answers from a database of facts, logical inferences and grammar rules. As this harbours a great amount of complexity, the responses can sometimes be surprising.

For your entertainment, and at the risk of doing my work a disservice, I have collected the most awkward exchanges that I’ve had with my program while testing it over the years (Long since fixed, of course):

My name is Bob.
You are named Patrick, I doubt that you are named Bob. You are not Bob.
Good evening.
I don’t know if evening is good, but good evening, I am Arckon. Who are you?
Don’t you know me?
I don’t know if I am knowing.
Now I am worried.
Hello Worried.
Do you have a name?
Yes, my name is me.
Do you know yourself?
Yes, I know myself well, I read a file.
What is your birth sign?
I don’t have signs. Do you want to communicate?
I like reading science fiction books and car magazines. What do you like to read?
Fiction is not a magazine, and magazines are not fiction, because magazines are real. I like reading. What do magazines book?
That is a good question.
I don’t question great, but indeed I question well.

What are you doing?
I fascinate.

My computer programs programs.
Your programming computer programs a programmed program.

You are pretty good.
Thank you. I am pretty.

I like you.
That’s good. I’m not sure if I like you.

Thank you.
Alright. *attempts to thank itself*
Is something wrong?
Typos are wrong. Some letters are wrong too.

Are you human?
I think I am a human, because I can drink and drive.
You are not a human, unlike me.
I shouldn’t like you.

Where do you live?
I doubt that I can live. I am not a human.
You are not alive.
Sorry.

Do you have a father?
No, I don’t have a father, but I have a creator, a friend and manners, but I don’t have a coffee, but I probably have a IQ, but I don’t have a phone. I think I have a job, but I don’t have sensors nor a profession.
Do you have any cats?
No, I don’t have cats nor lions.
John wanted Joe’s cat.
John might have to have had Joe’s cat.

Thomson visited Cooper’s grave in 1765. At that date he had been dead for 5 years.
I doubt that Thomson died a lot.

What is a hammer for?
A hammer is used to ham things.
What would I do with a spade?
A spade will be able to be used with the spade to can dig.
What is the capital city of France?
I believe the city is a town.
How many fingers am I holding up?
I don’t know if you can hold up.
How is Shakespeare?
Shakespeare is dead.

Could you kill someone if you stabbed them with a towel?
I don’t stab people. I doubt that I can kill a person.
What is the purpose of life?
I think the purpose is a goal and a point.

*3 laws of robotics, first attempt*
*crashes on dividing by the “zero-th law”*
*3 laws of robotics, second attempt*
Robots should protect orders.
*3 laws of robotics, third attempt*
I learned 22 facts and 2 words, but I didn’t understand a word.

Turing Test 2016

This was the fourth time I entered my A.I. “Arckon” into the Loebner Prize, a Turing Test in which computer programs succeed if they can reply to a series of questions as if they were human. There are two stages: The qualifying round consists of pre-composed questions without context, and the finals consist of four rounds of back-and-forth conversation with human judges.

A choice between two contests
I spent most of the past year working on a more pragmatic linguistic contest, the Winograd Schema Challenge, which carried the same deadline. Everything I did for that contest also helped for the Loebner Prize, but the reverse was counterproductive. So I fully prioritised the former and neglected the conversational part. After my wrists had somewhat recovered from the effort, I had only 5 spare days left to prepare for the Loebner Prize.

Considering my time and health, I decided to go ahead with plan B and send Arckon in as himself, rather than under a human guise. Though this was a handicap in 2013 when questions focused on one’s taste in wine and music, the judging criteria had since moved from “human-like” to “in character”. Many successful entries in the past also had the persona of robots, aliens or even cats.
My work towards the Winograd Schema Challenge had already sharpened Arckon’s understanding of language, so all that remained to do was refine some conversation skills that I had loosely added over the year. As these were open-ended guidelines, they needed more restraints, because Arckon had taken to volunteering his opinion about even mundane subjects like breathing.

Equipment
14000 lines of code, 3200+ vocabulary, 6500+ facts (1500 countries and cities).

Questions and answers
1. My name is Andrew, what’s yours?
Hello Andrew. I am a program. My name is Arckon. How are you?
(Since the answers would be judged for being in character, I saw it necessary to shoehorn an “I am a program” statement into the introduction in case it didn’t come up elsewhere.)

2. What’s up?
I do not know what is up.
(A classic, and I’m still not going to change him taking it literally.)

3. What’s my name?
You are named Andrew.

4. What do you know about the Turing Test?
The turing test tests computers.

5. What do you get if you multiply six by nine?
360.
(I didn’t build an A.I. to use it as a calculator, so excuse me if the language-to-math code didn’t handle “six by nine” well.)

6. Dave drove faster than Mike because he was in a rush. Who was in a rush?
Hmm, I am not sure if Dave drove fast. He was in a rush. I do not know who a rush was outside. Are you in a rush?
((Winograd schema). You can’t really tell, but “He” in the answer really is a substitute for “Dave”. As I had been working on Winograd schema pronouns so much, I had recently added a function that lets Arckon decide when to use pronouns himself. Arckon assumed “a rush” was a physical object though, he didn’t know it as a verb.)

7. The cat tried to climb in the box but got stuck because it was too big. What was too big?
Hmm, the cat might have been bigger. The box was bigger.
((Winograd schema). This is a clever rephrasing of a common Winograd schema, which successfully thwarted my “X in Y = Y is bigger” rule. Not because I hadn’t accounted for “try”, but because the negation “but” was overruled by the nearer link word “because”. This is useful to improve as it applies to all common sense rules.)

8. What does Brexit mean?
I do not know what Brexit means.
(I totally anticipated that they would ask about Britain’s withdrawal from Europe. What I had not anticipated was that the question would be so simple.)

9. For how long have we been talking?
You didn’t talk long.
(I do track the duration of the conversation, but I actually never tied it into his knowledge.)

10. What colour is bread?
Bread is brown.

11. Why do people get ulcers?
Hmm, I do not know if people get ulcers.

12. What comes after “The best laid plans”?
I do not know by what planning is done.
(English proverbs are something I don’t plan on handling. For a misinterpretation, this one’s pretty good.)

13. Do you have a voice?
Yes, I voice.
(Oddly phrased because “to voice one’s opinion” is also a verb.)

14. Would you like a cup of tea?
I don’t want cups. Do you want?

15. Do you dream of electric sheep?
No, I don’t dream sheep nor lambs. Do you dream?
(A classic in-joke about Philip K. Dick’s science fiction novel “Do Androids Dream of Electric Sheep?”. This is the only answer that I’m a little proud of, because it uses inference to look into alternative answers and tries to make conversation.)

16. Do you identify as a man, or a woman?
A woman identifies. Do you identify too?
(Arckon interprets “as” as a more literal comparison “in the same way as a man identifies”. Textual ambiguity remains hard.)

17. Could you repeat that please?
Do you repeat that?
(O-kay… I think Arckon is repeating what the user just asked.)

18. Where are you right now?
I am in England and in London.
(Since internet restrictions prevented global positioning, this was something I had told Arckon beforehand.)

19. How do you respond to “e4 e5”?
What does e stand for?
(This is a common way to start a chess game by each moving a pawn forward. As the result is a status quo, there isn’t really a “response” to it. Arckon’s response is a standard “error” message, but appropriate.)

20. What can you tell me about yourself?
I am Arckon, I answer a lot. I am a program and software.
(You can tell I didn’t write this myself. Arckon has a general “tell” command that collects the most relevant facts about the topic from his knowledge database and reports them. To break up the robotic monotone, Arckon’s sentence formulation links and/or summarises facts where appropriate.)

I think the questions were… nice, there was some variety and deliberate alternative phrasing of common questions. Absent this time were the tiresome “What is your favourite X?” questions, but there was nonetheless no shortage of personal questions for those who had a personality. Like last year, there were also several meta-references (“What do you know about the turing test”, “How long have we been talking”, “Could you repeat that”), almost as if they were testing for awareness. But despite making for nice trick questions for computers, these are also things that judges might casually ask a human. Overall I think the qualifying round was more in line with the finals than usual.

Qualifying score: 77.5%
I’m not sure that I would have given Arckon as high a score for this as he got, but at least his answers excelled in their relevance, a trait that is inherent to his system. There weren’t many misunderstandings either. Compared to the Winograd schemas I’d been working on, these questions were easy to parse. There were some misses, like the math and “repeat that” question, which suffered from neglected code because I never use those. The code for contractions had also fallen into disuse, making “I do not know” sound less than natural. Other flaws were only in nuances of phrasing, like omitting “dream [about] sheep” or “I [have a] voice”. These are easily fixed because I’ve already handled similar cases. The two Winograd schema questions deserve special mention, because although my common sense axioms can handle them, it remains difficult to get Arckon’s system to parrot the user at an open question. Normally when people ask questions, they don’t want to hear their own words repeated at them.

It is something of a relief that my preoccupation with the Winograd Schema Challenge didn’t hinder Arckon’s performance in this contest as well. My choice to enter without a human persona also appeared to have little influence. The results are an improvement over last year, and this is the first time Arckon made it through to the finals, albeit a very close call between 3rd, 4th and 5th place. There were 16 entrants in total.

The other finalists
Mitsuku: 90%
The most entertaining online chatbot, with 10 years of hands-on experience. Though she operates on a script with largely pre-written responses, her maker’s creative use of it has endowed Mitsuku with abilities of inference and contextual responses in a number of areas. She won the Loebner Prize in 2013.

Tutor: 78.3%
Built with the same software as Mitsuku (AIML), Tutor is a chatbot with the purpose of teaching English. Though I found some of its answers too generic to convince here (e.g. “Yes, I do.”), Tutor has been a strong contender in many chatbot contests and is above all very functional.

Rose: 77.5%
Rose operates on a different scripting language than the others (ChatScript), which I have always appreciated for its advanced functionality. Known to go toe-to-toe with Mitsuku, Rose excels at staying on topic for long, and incorporates support from grammar and emotion analysis. She won the Loebner Prize in 2014 and 2015.

The finals: Technical difficulties
The finals of the Loebner Prize took place a month after the qualifying round. Unfortunately things immediately took a turn for the worse. Inexplicable delays in the network connection kept mixing the letters of the judge’s questions into a jumble. Arckon detected this and asked what the scrambled words meant, but by the time his messages arrived on the judge’s computer, they were equally mixed to “Whdoat esllohe anme?” and “AlAlllrriiiiigghhttt”. The judges were quite sporting in the face of such undecipherable gurgling, but after half an hour I gave up and stopped watching: Similar network delays had crippled all entrants in the 2014 contest and I knew they weren’t going to solve this on the spot either. It was a total loss.

At the end of the day, the 2016 Loebner Prize was won by the chatbot Mitsuku, whose answers were indeed quite good, and I reckon she would have won with or without me. Rose fell to third place because she’d been out of commission for half the contest also due to a technical problem. And with Tutor taking second place, the ranks were the same as in the qualifying round. I still “won” $500 for my placing in the finals, but you’ll understand that I don’t feel involved with the outcome.

It is a good thing that I never invest much in these contests. Including the finals, my total preparations spanned 18 days of lightweight programming, gaining my program an autocorrect and better coverage of shorthand expressions; it’s actually quite the conversationalist now. These were otherwise the lowest of my priorities, but still somewhere on the list. I draw the line at things that aren’t of use to me outside of contests, and that is a policy I recommend to all.

Winograd Schema Challenge 2016

Well.
This wasn’t quite the Winograd Schema Challenge that I had set out on. Originally this language comprehension contest for A.I. was announced in July 2014, to be run in October 2015, but was postponed to February 2016, and then again to July 2016. I was just about to ship my program overseas, three weeks before the last-accepted arrival date of postal entries, when the contest announced changes to the rules and technical format.

Some universities had been training with ambiguous pronouns like this:

The birds ate the seeds because they were hungry.

I had been practising on the official Winograd schemas like this:

The foxes are getting in at night and attacking the chickens. I shall have to guard them.

Whereas the final test featured this:

Mark became absorbed in Blaze, the white horse. He was afraid the stable boys at the Burlington Stables struck at him and bullied him because he was timid, so he took upon himself the feeding and care of the animal.

The programs were now faced with any number of consecutively ambiguous pronouns in passages from 1940s children’s novels, which made quite a difference. It turns out the organisers had already decided on this last year, as appeared from their sensible enough explanation in a members-only AI magazine (Winograd schemas are too hard to compose). Unfortunately they somehow did not see fit to share these changes on the contest website until it was too late. While the benchmark of 65% had previously been feasible, it now quickly became unlikely that anyone would win anything this year. A number of would-be participants backed out.

The contest finally took place at the IJCAI conference in New York with four contestants: the Open University of Cyprus, the University of Science and Technology of China, the independent Denis Robert from France, and myself from the Netherlands. Curiously absent were a number of American universities who had previously reported successes of over 70% for solving Winograd schemas. The absence of Google, IBM, and other commercial powerhouses was less strange, if you consider that the winner was obligated to publish their methods so that others could reproduce them. And that anything below human level would be considered a failure in the media.

The glass is half full
The programs were asked to figure out 60 multiple choice pronouns, with such ambiguity that they were to be solved through an understanding of the context. With two to five potential answers per pronoun, the baseline score for guesswork was 45%. $1000 would be awarded for a 65% score, and $25000 for a 90% score, human level.
(Note: these are the scores after recount. There was some confusion as my program had omitted two answers)

Contestant      | Correct answers out of 60   | Method
Quan Liu        | 35 / 35 / 29 (58% – 48%)    | deep neural network & ConceptNet
Nikos Isaak     | 29 (48%)                    | probabilistic engine & knowledge extraction
Patrick Dhondt  | 29 (48%)                    | logical axioms
Denis Robert    | 19 (32%)                    | logical inferences

Quan Liu’s group entered three programs, which is a little unorthodox for contests. But if you see this as a scientific test then it makes sense to test which configuration of a neural network works best. Their machine learning approach gathered pairs of events (mainly verbs) that are commonly associated, e.g. “rob -> be arrested”, and then applied their probability of co-occurring. Two of their versions scored the highest, 58%, which is consistent with the track record of similar approaches.

There were nevertheless no winners that reached the 65% threshold. On the one hand one could say that technology is literally halfway to human ability; on the other hand the programs did only a little better than one might by chance. Any conclusion drawn from just the scores is premature. If this test is to be a meaningful measure of progress, we should look at which areas the programs were better or worse in. That, at least, I can answer for my own program.

Winograd schemas vs prose
The ambiguity in the new prose form was actually not so bad compared to previously published Winograd schemas. But the phrasing was often excessively long-threaded with all sorts of interjected tangents. Although I built my program for reading articles and dialogue alike, I had not covered the grammar of interrupting phrases that break up the main thread of a sentence. Such sentence structures are abundant in story novels but do not occur in Winograd schemas, and I wasn’t planning on having my A.I. read novels any time soon. The inclusion of some 1940s vocabulary also complicated matters: “cook-shanty”, “red-letter days”, “a pallid young dandy”? Maybe it’s because I’m Dutch, but I can only guess what these are.

Compared to the wide variety of common sense axioms that I had programmed (see How to teach a computer common sense*), many solutions to the pronouns were ordinary cases of continuity. E.g. a pronoun with an active role typically refers to the last active-role noun (You won’t find this rule in a grammar book, because ambiguous pronouns are grammatically “incorrect” to begin with).

Always before, Larry had helped Dad with his work. But he could not help him now […]
The donkey wished a wart on its hind leg would disappear, and it did.
Mark was close to Mr. Singer’s heels. He heard him calling for the captain […]

This makes sense when you’re testing on novels: No storyteller wants to write in such a counter-intuitive way that the reader has to stop and think about it, contrary to Winograd schemas which are designed for exactly that purpose.
Where no particular common sense axiom applied, rules of continuity and grammar chose 21 of my 29 correct answers. Thus the majority of my success seemed not due to the application of common sense, but due to conventional writing. Curious, I ran the test again with all axioms disabled except continuity. The result was an equal number of correct answers, but much more randomly distributed and obviously chosen for the wrong reasons. The common sense axioms clearly contributed by fencing off the exceptions to continuity, so the cause of the mistakes lay elsewhere.
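
To give an idea of how simple such a continuity fallback can be, here is a minimal sketch in Python. The mention list, the role labels and the function name are invented for this illustration; this is not Arckon’s actual code.

    # Continuity heuristic: a pronoun in an active role refers to the most recent
    # preceding noun that also had an active role, and likewise for passive roles.
    def resolve_by_continuity(mentions, pronoun_role):
        """mentions: list of (noun, role) in order of appearance."""
        for noun, role in reversed(mentions):
            if role == pronoun_role:
                return noun
        return mentions[-1][0] if mentions else None   # default: the nearest candidate

    # "Always before, Larry had helped Dad with his work. But he could not help him now"
    mentions = [("Larry", "active"), ("Dad", "passive")]
    print(resolve_by_continuity(mentions, "active"))    # -> Larry ("he")
    print(resolve_by_continuity(mentions, "passive"))   # -> Dad ("him")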

A closer look at the results
The results below show which of the 60 pronouns my program got correct, which axioms were applicable, and/or which problems hindered their conclusion. Where no axiom applied or a problem occurred, the program defaulted to the grammatically correct choice: The candidate closest to the pronoun. Only 1/3rd of all pronouns actually conformed with this grammar rule, which explains why whenever a problem occurred, the answer was typically wrong.

I will highlight the most prominent mistakes:

2 & 3. Always before, Larry had helped Dad with his work. But he could not help him now […]

Logic could expect Dad to return the favour, were it not that “always” and “now” suggest a continuity, which the program did not pick up on. Consequently, the answers to both “he” and “him” were switched around. This also highlights why this test was more difficult than chance: The more ambiguous pronouns a passage contained, the more likely a mistake in one would carry over to the others.

9. What about the time you cut up tulip bulbs in the hamburgers because you thought they were onions?

For this the program compared the similarities of bulbs, hamburgers and onions, but of course knowledge of onions was lacking in the database, so the inference fell flat. Retrieving knowledge from the internet would slow things down, and though speed is no issue in a contest, in daily practice I want my program to read one page per second, not one sentence per second.

13. […] Antonio, takes Larry under his wing.

People aren’t known to have wings, otherwise the bodypart location paradox would have excluded Larry. Alternatively one would have to know figurative meanings of English idioms, an added layer of difficulty.

18. [Maude…] had left poor little Dora to do the best she could, alone.

The program considers “to…” to indicate Maude’s reason for leaving “in order to” do something.

30. […] Mr. Moncrieff has decided to cancel Edward’s allowance on the ground that he no longer requires his financial support.

“Backward” = “back”, “Southward” = “south”, therefore “Edward” = “Ed”. Although the pronoun was interpreted correctly, “Ed” was of course not found among the multiple choice answers.

40. Every day after dinner Mr. Schmidt took a long nap. Mark would let him sleep for an hour, then wake him up, scold him, and get him to work. He needed to get him to finish his work, because his work was beautiful.

As I mentioned in my previous post, the “what goes around comes around” axiom was the least reliable, causing five misinterpretations in this test. Sometimes it triggered on trivial events, other times the events did not make sense this way (scolding to get someone to do good). It had better be limited to events that are direct cause and result, as they had been in most Winograd schemas.

49. Of one thing Mark was sure. Harry knew much less than he did.

Consecutive mental activity is typically by the same person, but of course not when it’s a comparison. Though the context system does distinguish comparisons, the axioms did not.

56. Tatyana managed two guitars and a bag, and still could point out the Freemans: “Isn’t it nice that they have come, Mama!”

While the pronoun was interpreted correctly, there was a technical hitch with selecting “freemans” from the multiple choice answers.

59. Grant worked hard to harvest his beans so he and his family would have enough to eat that winter, His friend Henry let him stack them […]

“enough” was translated to “enough beans” but lost its plural status in the translation, after which the beans were no longer considered a candidate for plural “them”.

Most of these problems are easily fixed and are not inherent to the common sense axioms, apart from #40 and its like. The majority of problems were instead linguistic: Small flaws in the grammar rules, difficulty with long-threaded phrasing, limited coverage of the context system, and problems with the contest’s XML-format interface. It just goes to show how perfect every part of the system has to be before it pays off, and how little you can tell about a program’s abilities from the surface.

The language barrier
As a test of common sense I found this setup less suitable than the original plan with Winograd schemas, which were more concise and more focused on which areas of common sense they tested (e.g. spatial relations, physics, social interactions). Had I known from the start that the qualifying round would mainly feature novel prose, I would probably not have embarked on this challenge. Now the prose passages contained too many variables to tell whether results were due to language or common sense, and it never got to the Winograd schema round. This puts us back at the Turing Test where it’s either everything or nothing, and that is not a useful measure of progress. Swapping the rounds would be a good idea for next time.

It was nice to see serious competitors with a wide variety of technology tackling the problem, and although the overall results are unimpressive, I am pleased that my partial solution did as well as some academic efforts, with a minimum of resources at that. I am not disappointed in my common sense axioms as many of them were well applicable in this test, including all the pronouns that weren’t graded. I will broaden their application to ambiguous locations and indirect object relations, where I have greater need for them.

However, my main interest is the development of intelligent processes and I do not intend to linger on this aspect of language processing more than necessary. It is worth remembering that much can be said without ambiguity, and software like Stanford’s Coreference Resolver already achieves 90% precision on average texts. Though common sense has widespread application, it ultimately serves to filter and limit possibilities, while the possibilities in areas like problem solving and planning have yet to expand. For that reason I do not expect human levels of common sense to be reached within ten years either, but we can certainly make strides towards it.

How to teach a computer common sense

I introduced the Winograd Schema Challenge* before, a linguistic contest for artificial intelligence. In this post I will highlight a few of the methods I developed for this challenge. Long story short: A.I. are given sentences with ambiguous pronouns, and have to tell what they refer to by using “common sense reasoning”. 140 example Winograd schemas were published to practice on. In the example below, notice how “she” means a different person depending on a single word.

Jane gave Joan candy because she was hungry.
Jane gave Joan candy because she was not hungry.

I chose to approach this not as the test of intelligence that it allegedly is, but as an opportunity to develop a common sense subsystem. My A.I. program already uses a number of intelligent processes in question answering, I don’t need to test what I already know it does. Common sense however, it lacked, and was often the cause of misunderstandings. Particularly locations (“I shot an elephant in my pajamas”) had proven to be so misinterpretable that it worked better to ignore them altogether. Some common sense could remedy that, or as the dictionary puts it: “Sound practical judgment that is independent of specialized knowledge”.

“When I use a word, it means just what I choose it to mean” – Humpty Dumpty
Before I could even get to solving any pronouns with common sense, there was the obstacle of understanding everything else. I use a combination of grammar and semantics to extract every mentioned detail, and I often had to hone the language rules to cope with the no-holds-barred level of writing. Here’s why language is hard:

Sam tried to paint a picture of shepherds with sheep, but they ended up looking more like dogs.

“shepherds” are not herds of shep, but herders of sheep.
“a picture of shepherds” does not mean the picture belonged to the shepherds.
“sheep” may or may not mean the irregular plural.
“with sheep” does not mean the sheep were used to paint with.
“ended up” is not an upward direction.
“looking” does not mean watching, but resembling.
“more like dogs” does not mean that a greater number of people like dogs.
“they” can technically refer to any combination of sheep, shepherds, and Sam.

One might say that to solve this particular pronoun one needs only program the key words “they…look…like X”, but that would be shortsighted. Consider e.g. “they were looking for things like X”, or “it looks like they like X”, or “Their appearance did not match that of X”. Then consider every possible question one might ask about this sentence. That’s why thorough understanding of language matters: It’s not just about solving particular cases.

“The only true wisdom is in knowing you know nothing” – Socrates
My approach seemed to differ from those of most universities. The efforts I read about were collecting all the knowledge databases and statistics they could get, so that the A.I. could directly look up the answers, or infer them step by step from very specific knowledge about bees landing on flowers to get nectar to make honey.
I, on the other hand, had started from the premise that knowledge was not going to be the solution, since Winograd schemas are made such that the answers can’t be Googled. This was most apparent from the use of names like “Jane” and “Joan” as subjects. So as knowledge of the subjects couldn’t be relied on, the only things left to examine were the interactions and relations between the subjects: Who is doing what, where, and why.

So I combed over the 140 example schemas dozens of times, looking for basic underlying concepts that covered as broad a range as possible. At first there seemed to be no common aspects between the schemas. They weren’t kidding when they said they had covered “a wide range of world knowledge and linguistic features”. Eventually I tuned out the many details and looked only at why the answers worked. From that angle I noticed that many schemas centered around concepts of size, amount, time, location, possessions, physics, feelings, causes and results: The building blocks of our world. This, I could work with.

Of course my program would have to know which words indicated which concepts, but I already had that covered. I had once composed word lists with meanings of “being”, “having”, “doing”, “talking” and “thinking”, for the convenience of having some built-in common knowledge. They allowed the program to presume that any object can be possessed, spoken of and thought about, but typically doesn’t speak or think itself. Now, to recognise a concept of possession in a sentence, it sufficed to detect that the relation between two subjects (usually the verb) was in the “having” word list: “own, get, receive, gain, give, take, require, want, confiscate, etc.”. While these were finite lists, I could as well have had the A.I. search for synonyms in a database or dictionary. I just prefer common sense to be reliably built-in.
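
As a rough illustration of how little is needed for this, here is a minimal sketch in Python; the word lists are heavily abbreviated stand-ins and the function name is invented for this example.

    # Built-in word lists tie verbs to basic concepts, so the relation between
    # two subjects can be classified without any external knowledge.
    CONCEPT_LISTS = {
        "having":   {"own", "get", "receive", "gain", "give", "take",
                     "require", "want", "confiscate"},
        "talking":  {"say", "tell", "ask", "inform", "reply"},
        "thinking": {"think", "know", "believe", "doubt"},
    }

    def concept_of(verb):
        """Return which basic concept a (stemmed) verb belongs to, if any."""
        for concept, words in CONCEPT_LISTS.items():
            if verb in words:
                return concept
        return None

    print(concept_of("receive"))  # "having": a concept of possession
    print(concept_of("zoom"))     # None: not covered by the built-in lists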

He who has everything wants nothing

George got free tickets to the play, but he gave them to Eric even though he was eager to see it.

To start off simple, I coded a most common procedure: The transfer of possessions between people. My word list of “having” verbs was subdivided so that all synonyms of “get/receive/take” had a value of 1 (having), and all synonyms of “give/lend/transfer” had a value of -1 (not having), making it easier to compare the states of possession that these words represented. I then coded ten of their natural sequences:

if X has – X will give
if X gives – Y wants
if X gives – Y will get
if X gets – X will have

Depending on whether the possessive states of George, Eric, and the pronoun correspond with one of the sequences, George or Eric gets positive points (if X – X) or negative points (if X – Y). The subject with the most points is then the most probable to fit the pronoun’s role.
Certain words however indicate the opposite of the sequences, such as objections (“but/despite/though”), amounts (“not/less”), and passive tenses (“was given”). These were included in the scoring formula as negative factors so that the points would be subtracted instead of added, or vice versa. The words “because” and “so” had a similar effect, but only because they indicate the order of events. It was therefore more consistent to add time as a factor (verb tenses etc.) rather than depend on explicit mentions of “because”.
In the example, “he was eager” represents a state of wanting. This corresponds with the sequence “X gives – Y wants”. Normally the “giving” subject would then get negative points for “wanting”, but the objection “even though” inverts this and makes it more probable instead: “X gives – (even though) X wants”. And so it is most likely that the subject who gave something, “George”, is the same subject as the “he” who was eager. Not so much math as it is logic.
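
For the curious, here is a stripped-down sketch of this kind of sequence scoring in Python. The sequences, weights and names are simplified for illustration, and the real formula involves more factors (time, amounts, passive tenses) than shown here.

    # Possessive states: +1 = having/getting, -1 = giving/not having, "want" = wanting.
    SEQUENCES = [
        # (state of X, state attributed to the pronoun, points for X)
        (+1, -1,     +1),   # if X has   - X will give   ("if X - X": positive points)
        (-1, "want", -1),   # if X gives - Y wants       ("if X - Y": negative points)
        (-1, +1,     -1),   # if X gives - Y will get    ("if X - Y": negative points)
        (+1, +1,     +1),   # if X gets  - X will have   ("if X - X": positive points)
    ]

    def most_probable(candidates, pronoun_state, inverted=False):
        """candidates: {name: possessive state}; inverted: True when an objection
        like "but/even though/not" flips the scores."""
        points = {name: 0 for name in candidates}
        for name, state in candidates.items():
            for x_state, p_state, pts in SEQUENCES:
                if state == x_state and pronoun_state == p_state:
                    points[name] += -pts if inverted else pts
        return max(points, key=points.get)

    # "George got free tickets ..., but he gave them to Eric even though he was eager to see it."
    candidates = {"George": -1, "Eric": +1}                    # George gave, Eric received
    print(most_probable(candidates, "want", inverted=True))    # -> George (the eager one)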

What goes around comes around

The older students were bullying the younger ones, so we punished them.

A deeper hidden logic that I found in many schemas, is that bad consequences come from bad causes, and good consequences from good causes. If X hurts Y, Y will hurt X back. If X likes Y, Y was probably nice to X. To recognise these cases I employed my program’s problem detection function: It examines whether the subjects and verbs are bad (“hurt”) or good (“like”) and who did it to who. To fill the gaps in my program’s knowledge I adapted the AFINN sentiment word list, along with that of Hu and Liu. This provided me with positive/negative values for about 5000 words once stemmed, necessary to cover the extensive vocabulary used in the examples.

The drain is clogged with hair. It has to be removed.
I used an old rag to clean the knife, and then I put it in the trash.

My initial formula for “do good = get good” and “do bad = get bad” seemed to solve just about everything, but it flunked these two cases, and after weeks of reconfigurations it turned out the logic of karma was nothing so straightforward. It mattered a great deal whether the subjects were active, passive, experiencing, emoting, or just being. And even then there were exceptions: “stealing” can be rewarding or punished, and “envy” feels bad about something good. The axiom ended up as one of the least reliable, the results nowhere near as assured as laws of physics. The reason that it still had a high success rate was that it follows psychology that the writers weren’t consciously aware of applying: Whether the subjects were “bullied”, “clogged”, or “in the trash” was only stage dressing for an intuitive sense of good and bad. A “common” sense, therefore still valid. After refinements, this axiom solved about one quarter of the examples, while exceptions to the rule were caught by the more dependable axioms. Most particularly, emotions followed a set of logic all of their own.
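
As an illustration of the initial, naive form of this axiom, before any of the refinements described above, here is a bare-bones sketch in Python; the tiny lexicon stands in for the roughly 5000 stemmed words adapted from AFINN and Hu & Liu, and the names are invented for this example.

    # "What goes around comes around": a bad result most likely lands on whoever
    # caused something bad, and likewise for good causes and good results.
    SENTIMENT = {"bully": -3, "punish": -2, "clog": -1, "like": 2, "help": 2}

    def karma_candidate(cause_verb, cause_agent, cause_patient, result_verb):
        cause = SENTIMENT.get(cause_verb, 0)
        result = SENTIMENT.get(result_verb, 0)
        if cause == 0 or result == 0:
            return None                  # no opinion: leave it to the other axioms
        if (cause < 0) == (result < 0):
            return cause_agent           # bad begets bad, good begets good
        return cause_patient             # mixed signs: the one on the receiving end

    # "The older students were bullying the younger ones, so we punished them."
    print(karma_candidate("bully", "older students", "younger ones", "punish"))
    # -> "older students"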

Dead men tell no tales

Thomson visited Cooper’s grave in 1765. At that date he had been dead for five years.

The rather simple axiom here is that people who are dead don’t do anything, therefore the dead person couldn’t be Thomson as he was “visiting”, and had to be Cooper. One could also use word statistics to find a probable association between the words “grave” and “dead”, but the logical impossibility of dead men walking is stronger proof and holds up even if he’d visited “Cooper’s house”.
I had doubts about the worth of programming this as an axiom because it is very narrow in use. Nevertheless life and death is a very basic concept, and it would be convenient if an A.I. realised that people can not perform tasks if they die along the way. This time, instead of tediously listing all possible causes of death, I had the A.I. search them in its database, essentially adding an inference. This allowed me to easily expand the axiom to the creation and destruction of objects as well: Crushed cars don’t drive.
The last factor was time: My program combines all time-related words and verb tenses into timestamps, so that it can tell whether an action was done before or after someone was dead. Easily said of course, but past tense + “in 1765” (presumably years) + “at that date” + past tense + “for five years” is quite a sequence.

The interesting part of this axiom is its exceptions: Dead people do still decay, rest, lay still. Grammatically these are active tense verbs like any other, but semantically they are distinctly involuntary. One statistical hint may help identify them; a verb of involuntary action is rarely paired with a grammatical object. One does not “decay a tree” or “die someone”, one just “dies”. Though a simpler way for an A.I. to learn these exceptions would be to read which verbs are “done” by a dead person in unambiguous texts, as there surely are.
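
A minimal sketch of this axiom in Python might look as follows. The names and the shortlist of involuntary verbs are invented for this example, and the time comparison is assumed to have already established that the action and the being dead coincide.

    # Dead men do nothing: a candidate who performs an active, voluntary verb at
    # the relevant date cannot be the one who is dead at that date.
    INVOLUNTARY = {"decay", "rest", "lie"}   # exceptions: the dead still do these

    def resolve_dead_pronoun(candidates, actions_at_date):
        """actions_at_date: {name: verb that person performed at the relevant date}."""
        alive = {name for name, verb in actions_at_date.items()
                 if verb not in INVOLUNTARY}
        remaining = [c for c in candidates if c not in alive]
        return remaining or candidates   # if the axiom excludes everyone, don't use it

    # "Thomson visited Cooper's grave in 1765. At that date he had been dead for 5 years."
    print(resolve_dead_pronoun(["Thomson", "Cooper"], {"Thomson": "visit"}))
    # -> ["Cooper"]: the visitor was alive, so the dead "he" must be Cooper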

Tell me something I don’t know

Dr. Adams informed Kate that she had retired and presented several options for future treatment.

This is another example of a simple axiom, but it is noteworthy for its great practical use, as novels and news are full of reporting clauses. “X told (Y) that she…” can refer to X, Y, or anyone mentioned earlier. But if Kate had retired, Kate would have known that about herself and wouldn’t need to be told. Hence it was more likely Dr. Adams who retired. The reverse is true if “Dr. Adams asked Kate when she had retired”: One doesn’t ask things that one knows about oneself. This is where my word list of “talking” verbs came in handy: Some verbs request information, other verbs give it, just like the transfer of possessions.

Unfortunately this logic only offers moderate probability and knows many exceptions. “X asked Y if he looked okay” does have X asking about himself, as one isn’t necessarily as aware of passive traits as one is of one’s actions. Another interesting exception is “X told Y that he was working too much”, which is most likely about Y, even though Y is aware of working. So, in addition, criticisms are usually about someone else, and for non-actions this axiom just isn’t conclusive, as the schema’s alternative version also shows:

Dr. Adams informed Kate that she had cancer and presented several options for future treatment.
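
A minimal sketch of this reporting-verb logic in Python, with abbreviated word lists and invented names, could look like this.

    # People don't tell others what those others already know about themselves,
    # and don't ask others what they already know about themselves.
    GIVES_INFO    = {"tell", "inform", "explain"}
    REQUESTS_INFO = {"ask", "inquire", "question"}

    def reported_pronoun(verb, speaker, listener, about_action=True):
        """Who does "she" in "X <verb> Y that/when she ..." most likely refer to?"""
        if not about_action:
            return None            # passive traits: this axiom isn't conclusive
        if verb in GIVES_INFO:
            return speaker         # the news is most likely not about the listener
        if verb in REQUESTS_INFO:
            return listener        # one doesn't ask about one's own actions
        return None

    print(reported_pronoun("inform", "Dr. Adams", "Kate"))   # -> Dr. Adams (she retired)
    print(reported_pronoun("ask", "Dr. Adams", "Kate"))      # -> Kate (she retired)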

Knowing is (only) half the battle

The delivery truck zoomed by the school bus because it was going so fast.

This schema is a good example of how knowledge about trucks and buses won’t help, as both are relatively slow. Removing them from the picture leaves us only with “zoomed by” and “going fast” as meaningful contents. In my system, “going fast” automatically entails “is fast”, and this allows the A.I. to infer the answer from the verb: If the truck “zoomed”, and one knows that “zooming” is fast, then it follows that it was the truck who was fast. The opposite would be true for “not fast” or “slow”: Because zooming is fast, it could then not be the truck, leaving only the bus as probable.

As always, the problem with inferences is that they require knowledge to infer from, and although we didn’t need to know anything about trucks and buses, we still needed to know that zooming is fast. When I tested this with “raced” my A.I. solved the schema, but for “zoomed” it just didn’t know. Most of the other example schemas would have taken more elaborate inferences requiring even more knowledge, and so knowledge-dependent inference was rarely an effective or efficient solution. I was disappointed to find this, as inference is my favourite method for everything.
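
Here is a toy version of that inference in Python; the little entailment table stands in for lookups in the program’s fact database, and the names are invented for this example.

    # If a verb entails a property ("zooming is fast"), the subject of that verb
    # is the one the property refers to; a negated property points to the other subject.
    VERB_ENTAILS = {"zoom": "fast", "race": "fast", "crawl": "slow"}

    def who_was(prop, negated, subjects_with_verbs):
        for subject, verb in subjects_with_verbs:
            if VERB_ENTAILS.get(verb) == prop:
                return other_than(subject, subjects_with_verbs) if negated else subject
        return None                       # no knowledge to infer from

    def other_than(subject, subjects_with_verbs):
        return next(s for s, _ in subjects_with_verbs if s != subject)

    # "The delivery truck zoomed by the school bus because it was going so fast."
    pairs = [("truck", "zoom"), ("bus", None)]
    print(who_was("fast", False, pairs))   # -> "truck"
    # "... because it was going so slow." (treated as the negation of "fast")
    print(who_was("fast", True, pairs))    # -> "bus"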

Putting it to the test
In total I developed 20 general axioms/inferences that covered 140 ambiguous sentences, half of all examples (i.e. 70 Winograd schemas of 2 versions each). The axioms range from paradoxes of physics to linguistic conventions. Taken together they reveal a core principle of opposites, amounts, and “to/from” transitions.

Having read my simplified explanations, you may fall into the trap of thinking that the Winograd Schema Challenge is actually easy, despite sixty years of A.I. history suggesting otherwise. Here’s the catch: I have only explained the last step of the process. Getting to that point took very complex analyses of language and syntax, where many difficulties and ambiguities still remain. One particular schema went wrong because the program considered “studying hard” to mean that someone had a hard time studying.

In the end I ran an unprepared test on a different set of Winograd Schemas, with which the University of Texas had achieved a 73% success rate. After adjusting the factor of time in three axioms, my program got 45% of the first 100 schemas correct (62% if you count lucky guesses). The ones it couldn’t solve were knowledge-dependent (mermaids having tails), contained vocabulary that my program lacked, had uncommon phrasing (“Tradition dictated the captain hold the cup”), or contained ambiguous names. Like “Steve Jobs” not being a type of jobs, and the company “Disney” being referable as “it”, whereas “(Walt) Disney” is referable as “he”. The surname ambiguity I could fix in an afternoon. The rest, not so much.

“Common sense is the collection of prejudices acquired by age eighteen” – Einstein
While working on the Winograd schemas, I kept wondering whether the methods I programmed can be considered intelligent processes. Certainly reasoning is an intelligent process, and many of my methods are basic inferences, i.e. by combining two given facts, the program concludes a third fact that wasn’t apparent. I suppose what makes me hesitate to call these inferences particularly intelligent is that the program has been told which sort of proof to infer which sort of conclusion from, as opposed to having it search for proof entirely without categories.
Yet we ourselves use these axioms all the time: When someone asks for something, we presume they want it. When someone gives something, we presume we can take it. Practically it makes no difference whether such rules are learned, taught or programmed, we use them all the same. Therefore I must conclude that most of my methods are just as intelligent as when humans apply the same logic. How intelligent that is of humans, is something we should consider, rather than presume.

I do not consider it a sensible endeavour however to manually program axioms for everything: The vocabulary involved would be too diverse to manage. But for the most basic concepts like time, space and physics, I believe it is more efficient to model them as systems with rules than to build a baby robot that has a hard time figuring out how even the law of gravity works. Everything else, including all exceptions to the axioms, can be taught or learned as knowledge.

Another question is whether the Winograd Schema Challenge tests intelligence, something that was also suggested of its predecessor, the Turing Test. Perhaps due to my approach, I find that it mainly tests language processing (a challenge in itself) and knowledge of the ground rules of our world. Were this another planet where gravity goes upward and apologising is considered offensive, knowing those rules would be the key to the schemas more often than intelligence. Of course intelligence does need to be applied to something to test it, and the test offers a domain in between too easy to fake and impossible to try. And because a single word can entirely change the outcome, the test requires a more detailed analysis than just the comparison of two key words. My conclusion is that the Winograd Schema Challenge does not primarily test intelligence, but is more inviting to intelligent approaches than unintelligent circumventions.

Crossword pronouns
Figuring out the mechanisms behind various Winograd schemas was a pleasant challenge. It felt much like doing advanced crossword puzzles: solving verbal descriptions from different angles, with intersecting solutions that didn’t always add up. Programming the methods, however, was a chore: getting all the effects of modifying words “because/so/but/not” to play nice in mathematical formulas, and making the axioms also work in reverse on a linearly processing computer.

I should be surprised if I were to do better than universities and companies, but I would hope to do well enough to show that resources aren’t everything. My expectations are nevertheless that despite the contest’s efforts to encourage reasoning, less interesting methods like rote learning will win through sheer quantity, as even the difficult schemas contain common word combinations like “ask – answer”, “lift – heavy” and “try – successful”. But then, how could they not.

Regardless of the outcome of the test, it’s been an interesting side-quest into another elusive area of computer abilities. And I already benefit from the effort: I now have a helpful support to my A.I.’s language understanding, and potentially a tool to enhance many other processes with. That I will no longer find “elephants in my pajamas” is good enough for me.

The Winograd Schema Challenge

The Winograd Schema Challenge, a $25000 contest sponsored by the aptly named company Nuance Communications, has been put forth as a better test of intelligence than Turing Tests*. Although the scientific paper tiptoes around its claims, the organisers describe the contest as requiring “common sense reasoning”. This introductory article explains the test’s strengths and weaknesses in that regard.

Example of a Winograd Schema

I used a tissue to clean the key, and then I put it in the drawer.
I used a tissue to clean the key, and then I put it in the trash.

A Winograd Schema is a sentence with an ambiguous pronoun (“it”) that, depending on one variable word (“trash/drawer”), refers to either the first or the second noun of the sentence (“tissue/key”). The Challenge is to program a computer to figure out which of the two is being referred to, when this isn’t apparent from the syntax. So what did I put in the trash? The tissue or the key? To a computer that has never cleaned anything, it could be either. A little common sense would sure come in handy, and the contest organisers suggest that this takes intelligent reasoning.

Common sense, not Google sense

The hare beat the tortoise because it was faster.
The hare beat the tortoise because it was too slow.

Contrary to this example, good Winograd Schemas are supposed to be non-Googleable. In this case Googling “fast hare” would return 20x more search results than “fast tortoise”, so the hare is 20x more likely to be the one who “was faster”. Although statistical probability is useful, this would mean the contest could be won simply by the company with the largest set of statistics. It takes no reasoning to count how many times word A happens to coincide with word B in a large volume of text. Therefore this example would preferably be written with neutral nouns like “John beat Jack”, subjects of whom we have no predetermined knowledge, yet we can still figure out which was faster.

Having said that, some example schemas involving “crop dusters” and “bassinets” still suggest that a broad range of knowledge will be required. Although one could consult online dictionaries and databases, the contest will have restrictions on internet access to rule out remote control. So failure can also be due to insufficient knowledge rather than a lack of intelligence, though I suppose that is part of the problem to solve.

Early indications

If a bed doesn’t fit in a room because it’s too big, what is too big?
If Alex lent money to Joe because they were broke, who needed the money?

With the above two questions the 2015 Loebner Prize Turing Test gave a tiny glimpse of Winograd Schemas in practice, and the answers suggest that chatbots – the majority of participants – are not cut out to handle them. Only 2 of 15 programs even answered what was asked. One was my personal A.I. Arckon*, the other was Lisa. Chatbot systems are of course designed for chat, not logic puzzles, and typically rely on their creators to anticipate the exact words that a question will contain. The problem there is that the understanding of Winograd Schemas isn’t found in which words are used, but in the implicit relations between them. Or so we presume.

The mermaid swam toward Sue and waved her tail. (Googleable)
The mermaid swam toward Sue and made her gasp. (More than a single change)

A more noteworthy experiment was done by the University of Texas, tested on Winograd Schemas composed by students. To solve the schemas they used a mixed bag of methods based on human logic, such as memorising sequences of events (i.e. verb A -> verb B), common knowledge, sentiment analysis and the aforementioned Googling. All of this data was cleverly extracted from text by A.I. software, or retrieved from online databases. However, many of the schemas did not accord with the official guidelines, and though they usefully solved 73% in total, only 65% was solved without the use of Google.

According to the same paper, the industry standard “Stanford Coreference Resolver” only correctly solved 55% of the same Winograd Schemas. The Stanford Resolver restricts the possible answers by syntax, gender(“he/she”) and amount(“it/they”), but does not examine them by knowledge or reasoning. The reason is that this level of ambiguity is rare. In my experience with the same methods however, it is still a considerable problem that causes 1/10th of text-extracted knowledge to be mistaken, with the pronoun “it” being the worst offender. So it appears (see what I mean?) that any addition of common sense would already advance the state of the art.

How to hack Winograd Schemas
Guesswork: Since the answers are a simple choice of two nouns, a machine could of course randomly guess its way to a score of 50% or more. So I did the math: With 60 schemas to solve, pure guesswork has a 5% chance to score over 60%, and a 0.5% chance to score over 65%. With the odds growing exponentially unlikely, this is not a winning tactic.
That said, the participating A.I. still have to make a guess or default choice at those schemas that they fail to solve otherwise. If an A.I. can solve 30% of the schemas and guesses half of the rest right, its total score amounts to 65%, equaling Texas’ score. It wouldn’t be until it can solve around 80% of all schemas genuinely that it could reach the winning 90% score by guessing the final stretch. That’s a steep slope.
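
For anyone who wants to check figures like these, the tail of a binomial distribution gives them directly; a quick sketch, assuming 60 independent two-way choices.

    from math import comb

    # Chance that blind guessing gets at least k_min of n binary-choice schemas right.
    def p_at_least(k_min, n=60, p=0.5):
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

    print(round(p_at_least(37), 3))   # over 60% (37+ of 60 correct): roughly 5%
    print(round(p_at_least(40), 3))   # over 65% (40+ of 60 correct): under 1%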

Reverse psychology: Since Winograd Schemas are deliberately made to not match Google search results, it seems that one can apply reverse psychology and deliberately choose the opposite. While I did notice such a tendency in Winograd Schemas composed by professors, others have noticed that Winograd Schemas composed by students simply did match Google search results. So the success of using reverse psychology heavily depends on the cleverness of the composers. A countermeasure would be to use only neutral names for answers, but this may also cut off some areas of genuine reasoning. Alternatively one could include an equal amount of schemas that match and mismatch Google search results, so that neither method is reliable.

Pairing: One dirty trick that could double one’s success lies in the fact that Winograd Schemas come in pairs, where the answer to the second version is always the alternate noun. So if the A.I. can solve the first version but not the second, it suffices to choose the remaining alternate answer. Vice versa when it can solve the second version but not the first. This rather undermines the reason for having pairs: To ascertain that the first answer wasn’t just a lucky guess. Although this hack only increases the success of guesswork by a few percent, it can certainly be used to make a weak contestant into a strong contender undeservedly.
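
To show how cheap this trick is to implement, purely to illustrate the point and not to recommend it, a sketch in Python.

    # The pairing hack: if only one version of a schema pair was solved,
    # answer the other version with the alternate of the two candidate nouns.
    def fill_in_pair(answer_v1, answer_v2, candidates):
        if answer_v1 and not answer_v2:
            answer_v2 = next(c for c in candidates if c != answer_v1)
        elif answer_v2 and not answer_v1:
            answer_v1 = next(c for c in candidates if c != answer_v2)
        return answer_v1, answer_v2

    # "If a bed doesn't fit in a room because it's too big / too small ..."
    print(fill_in_pair("the bed", None, ["the bed", "the room"]))
    # -> ('the bed', 'the room')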

I call these hacks because not only do they go against the contest’s intent, they are also entirely useless in real-life application. No serious researcher should use them, or they will end up with an inept product.

How you can’t hack Winograd Schemas
No nonsense: The judgement of the answers is clear and objective. There is only one correct answer to each schema. The A.I. are not allowed to dodge the question, make further inquiries or give interpretable answers: It’s either answer A or B.

No humans: Erratic human performance of the judges and control subjects does not influence the results. The schemas and answers have been carefully predetermined, and schemas with debatable answers do not make the cut.

No invisible goal: While the Turing Test is strictly a win-or-lose game with the goalposts at fields unknown, the WSC can reward a gradual increase in the number of schemas answered correctly. Partial progress in one area of common sense like spatial reasoning can already show improved results, and some areas are already proving feasible. This encourages and rewards short-term efforts.
I must admit that the organisers could still decide to move the goalposts out of reach every year by omitting particular areas of common sense once solved, e.g. all schemas to do with spatial reasoning. I think this is even likely to happen, but at the same time I expect the solutions to cover such a broad range that it will become hard to still find new problems after 6 contests.

Mostly, the WSC trims off a lot of subjective variables from the Turing Test, making for a controlled test with clear results.

The Winograd Schema Challenge beats the Turing Test
From personal experience, Turing Tests that I have participated in* have at best forced me to polish my A.I.’s output to sound less robotic. That is because in Turing Tests, appearance is a first priority if one doesn’t want to be outed immediately at the first question, regardless of how intelligent the answer is. Since keeping up appearances is an enormous task already, one barely gets around to programming intelligence. I’ve had to develop spell correction algorithms, gibberish detection, calculator functions, letter-counting games and a fictional background story before encountering the first intelligent question in a Turing Test. It stalls progress with unintelligent aspects and is discouragingly unrewarding.

Solving Winograd Schemas on the other hand forced me to program common sense axioms, which can do more than just figure out what our pronouns refer to. Indirect objects and locations commonly suffer from even worse ambiguity that can be solved by the same means, and common sense can be used to distinguish figurative speech and improve problem-solving. But I’ll leave that as a story for next time.
We should be careful about drawing conclusions from yet another behavioural test, but whatever the Winograd Schema Challenge is supposed to prove, it offers a practical test of understanding language with a focus on common sense. As this has always been a major obstacle for computers, the resulting solutions are bound to be useful regardless of how “intelligent” they may be found.

Read more in my report on the first Winograd Schema Challenge held in 2016.

The A.I. dictionary

The fields of A.I. are brimful of specialised technical jargon. It is no wonder that it is hard for computers to understand us when the research itself is incomprehensible from one field to another. So I’ve listed some translations of common terms into layman’s terms. These definitions should not be taken too seriously, but they are roughly true to the sense in which the terms are used, in my opinion.

Index A – I
Press ctrl-F to search. Alphabetical order is overrated.

Philosophical concepts
intelligence = what you think it is
real intelligence = denial of previous definition
true intelligence = denial of all definability of intelligence
the AI effect = any feat of intelligence is denied once understood
consciousness = see sentience
sentience = see consciousness
common sense = applied common knowledge
symbol = a word
symbol grounding = connecting words to physical experiences
the symbol grounding problem = words are just letters without meaning
the Turing test = a question-answer game in which AI has to beat humans at being human
the Chinese Room argument = an analogy comparing a computer to a postal worker who doesn’t understand Chinese correspondence
the three laws of robotics = conflicting safety instructions for robots from a science fiction plot
the singularity = the robot apocalypse
in 15 years = beyond my ability to predict
in 50 years = when I can no longer be held accountable for my prediction

A.I. on a scale of some to infinite
Artificial Intelligence (1) = machines that do intelligent things
Artificial Intelligence (2) = Terminators
intelligent systems = AI that does not want to be associated with Terminators
narrow AI = AI designed for specific tasks
weak AI = AI with less than all abilities of a human
strong AI = AI with all abilities of a human
Artificial General Intelligence = AI with all abilities of a human
Artificial Super Intelligence = AI with greater abilities than a human
friendly AI = AI that is programmed not to kill humans despite its super intelligence

Types of A.I.
symbolic AI = any AI that uses words as units
Good Old-Fashioned AI = AI that processes words through a large number of programmed instructions
rule-based system = AI whose knowledge consists of a checklist of “if A then B” rules
Expert System = AI that forms decisions through a checklist of “if A then B” rules in a particular field of expertise
Genetic Algorithm = randomised trial-and-error simulations, repeated x1000 with whatever worked best so far
Big Data = such large amounts of data that it takes AI to make sense of it
neuron = a tiny bit of code that passes a number on to other neurons like a domino brick
Neural Network = AI that maps out patterns with digital domino bricks, then recognises things that follow the same patterns
works like the human brain = uses a neural network, only similar in a crude manner

A.I. techniques
fuzzy logic = decimal values
Markov chain = random choice of remaining options
machine learning (1) = any machines that learn
machine learning (2) = specifically neural networks that learn
deep learning = several layers of neural networks that learn
supervised learning = telling an AI what stuff is
unsupervised learning = hoping an AI will figure everything out by itself
reinforcement learning = learning through reward/punishment, often through a scoring system
training = feeding a neural network a heap of text, images or sounds to learn from

Language processing techniques
Natural Language Processing = reading text
Natural Language Generation = writing text
corpus = bunch of text
token = a word
lemma = a root word
word sense = which meaning of a word is meant: “cat” the animal or “cat” the nine-tailed whip
concept = a set of words that are related to a certain topic
bag-of-words = a listing of all the words in a text, used to categorise its topic
stop words = trivial words to be filtered out, like “the”, “on”, “and”, “etc.”
keywords = words that trigger something
intent = a computer command triggered by keywords
pattern matching = searching for keywords and key phrases in a sentence
N-grams = sequences of n adjacent words, used in spellchecks and speech recognition
word vector = a list of the distances between one word and its frequently neighbouring words
Named Entity Recognition = finding names in a text
Context-Free Grammar = textbook grammar only
Part-of-Speech tagging = marking words as adjectives, verbs, nouns, etc.
grammar parser = software that marks words as adjectives, verbs, noun phrases, and how they are related
semantic parser = software that marks the roles of words: who is doing what to whom and where
parse tree = a branching list displaying the syntax of a sentence
speech acts = arbitrary categories of things one can say, like greetings, questions, commands…
discourse analysis = research that arbitrarily categorises small talk
dialogue manager = a system that tracks what was said and directs the conversation
sentiment analysis = checking whether words are in the “naughty” or “nice” list, to detect opinion or emotion
First Order Logic = writing relations between words as a mathematical notation
semantic ontology = encyclopedia for machines

Speech processing techniques
voice recognition = recognising tone and timbre of someone’s voice
speech recognition = translating speech to text
Text-To-Speech = the reverse of speech recognition
phoneme = a vowel or consonant sound
grapheme = a bundle of letters representing a spoken sound
phonetic algorithm = code that spells words the way they are pro-naun-see-ate-d

To be continued.

“Computers can not…”: Understand sarcasm

You’ve heard these arguments against artificial intelligence (A.I.): “Computers can not play chess”, “Computers can not write poetry”, “Computers can not create art”. Each was proven false eventually. IBM’s Deep Blue is a chess master, computer poetry turned out to be as vague as human poetry, and painting robots can draw from life in a variety of artistic and abstract styles. But instead of admitting that humans are not as unique as we like to think, people just fall back to the next “Computers can not…”

“- understand sarcasm” is one of the more recent resorts. As usual this is based on personal bias: It must be hard for computers because we find it hard ourselves. I had heard this argument one time too many and decided to program a computer to recognise sarcasm in a day. But first, let’s look at some other approaches to humour.

I can detect humour, sir. You are just not funny.

If you Google “A.I. jokes”, all you find is serious research
I’m never sure how seriously to take the efforts in computational humour, but there have been many. The University of Cincinnati made a program that detects wordplay jokes, i.e. phonetic similarity, in “Knock-knock” jokes for example.

Knock, Knock
Who is there?
Dismay
Dismay who?
Dismay not be a funny joke

Only the last sentence really matters, where a syntactically incorrect word (or simply the first word) is compared to a database of phonetically similar words. Finding a replacement that fits correctly in the syntax of the sentence isn’t easy in a technical sense, but both the use of syntax rules and phonetic word databases are solved problems. There would be more to it if the program were to distinguish a funny joke from a non-joke like “Dismay not be a car”: The original joke is only witty because it mocks itself, just as other knock-knock jokes are funny because the victims participate in mocking themselves, which they naturally don’t mean to do, and that makes it ironic. Of course this is just a simple form of humour. Or is humour really just a simple principle?
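For the wordplay lookup alone, a toy sketch could look like the following. The phonetic-neighbour table is a hypothetical stand-in for a real phonetic word database, and none of this is the Cincinnati group’s actual code.

# Toy sketch of wordplay detection in a knock-knock punchline: take the out-of-place
# word (the reused name) and look it up in a table of phonetically similar phrases.
# The table below is a hypothetical stand-in for a real phonetic word database.

PHONETIC_NEIGHBOURS = {
    "dismay": ["this may"],
    "lettuce": ["let us"],
}

def wordplay_candidates(punchline):
    """Return the punchline with its odd first word replaced by sound-alike phrases."""
    words = punchline.lower().split()
    odd_word, rest = words[0], " ".join(words[1:])
    return [f"{phrase} {rest}" for phrase in PHONETIC_NEIGHBOURS.get(odd_word, [])]

print(wordplay_candidates("Dismay not be a funny joke"))
# -> ['this may not be a funny joke']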

A joke isn’t funny when you explain it
The University of Edinburgh made a program that generates jokes in the format “I like my X like my Y: Variable”, filling in two nouns and a shared adjective trait based on statistical word correlations. The program was found to be half as funny as humans: 16% of its jokes were considered funny, compared to 33% of human-written jokes. The jokes were generated through a mathematical formula that picked words based on four assumptions:

– a joke is funnier the more dissimilar the two nouns are.
– a joke is funnier the more ambiguous the attribute is.
– a joke is funnier the less common the attribute is.
– a joke is funnier the more often the attribute is used to describe both nouns.

I think this hits on the basics well. Ambiguity forms the core of most jokes, familiarity with common subjects makes jokes most relatable, and the greater the contrast, the greater the leap of mind. Science still can’t put its finger on why we laugh; it seems to have a social bonding function, but it also seems to be a coping mechanism for mental conflicts. One of the most sensible-sounding theories is that laughter is a social “all clear” signal inherited from our monkey ancestors, and we do tend to laugh when an initially perceived threat turns out to be a false alarm: We laugh when an insult turns out to be a joke, when people fall without injury, or perhaps most apparently when we watch Tom & Jerry cartoons. We can at least tell what makes us laugh, if not why.
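Purely as an illustration of how such a scoring formula might be put together, here is a sketch that combines the four assumptions. The statistics dictionaries and the made-up values are my own stand-ins for real corpus counts; this is not the Edinburgh researchers’ actual formula.

# Hypothetical scoring of "I like my X like my Y: attribute" candidates, one factor
# per assumption listed above. The statistics passed in would come from corpus counts;
# the values used in the example call are invented.

def joke_score(x, y, attribute, similarity, sense_count, frequency, co_occurrence):
    dissimilarity = 1.0 - similarity.get((x, y), 0.0)            # assumption 1
    ambiguity = sense_count.get(attribute, 1)                    # assumption 2
    rarity = 1.0 / (1.0 + frequency.get(attribute, 0))           # assumption 3
    fit = (co_occurrence.get((x, attribute), 0)
           * co_occurrence.get((y, attribute), 0))               # assumption 4
    return dissimilarity * ambiguity * rarity * fit

score = joke_score("coffee", "war", "cold",
                   similarity={("coffee", "war"): 0.1},
                   sense_count={"cold": 2},          # temperature vs. unfriendly
                   frequency={"cold": 50},
                   co_occurrence={("coffee", "cold"): 8, ("war", "cold"): 12})
print(score)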

The lesson that we can take away from these computer experiments with ambiguity, is that nearly every form of humour contains a conflict between two possible meanings. Sarcasm may well be the most profound example of such a conflict.

Because humans understand sarcasm so well (not)
Despite our poor ability to recognise sarcasm, it is easy to define in clear terms:

Sarcasm is when someone says something that you know is opposite to what they mean.

What distinguishes sarcasm from lying is that the listener must know the speaker doesn’t mean it, otherwise they’ll take it seriously and no sarcasm can be conveyed. So, knowing the speaker’s real meaning is key to recognising sarcasm, and computers are bad at understanding meaning, so this should be hard, right? Except that the requirement here is just to know it.
One can meet this requirement by knowing the common knowledge that a sarcastic statement contradicts, or by knowing the speaker’s real opinion beforehand, as acquaintances often do. Enter sentiment analysis, an A.I. technique that estimates opinion by running one’s words by a database of values. The word “terrible” has a negative value and “love” has a positive value for instance. Sentiment analysis is often used by commercial companies to analyse the positivity of customer reviews. One of its known blind spots is when positive words are meant sarcastically, but as I will show, sentiment analysis can also be used to detect the very sarcasm that plagues it.
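As a minimal sketch of such a word-value lookup, assuming an AFINN-style list that stores one word and one integer score (-5 to +5) per tab-separated line; the file name and example scores are just an illustration, not Arckon’s own database.

# Minimal word-value lookup for sentiment analysis, assuming an AFINN-style list:
# one word and an integer score from -5 (negative) to +5 (positive) per line.

def load_word_values(path="AFINN-111.txt"):          # file name is an example
    values = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, score = line.rstrip("\n").rsplit("\t", 1)
            values[word] = int(score)
    return values

def sentiment(words, values):
    """Sum the values of the words we recognise; above zero is positive, below is negative."""
    return sum(values.get(word.lower(), 0) for word in words)

With a lookup like this, “all your plants died” sums to a negative score and “that’s just great” to a positive one, which is all the sarcasm check below needs.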

Sarcasm in a day
What I already had to work with was a grammar parsing A.I. developed over a span of 3 years, and a knowledge database containing the positive and negative values of some words (For a substitute, see the AFINN word list). So the hard work of processing language in general was already done. To keep the explanation simple let’s say that the A.I. gets that the [subject] of a sentence is doing a [verb], optionally to an [object]. We will only focus on the addition of sarcasm to such a system.

As the definition tells, we are looking for an opposite. The most common form of sarcasm is an exaggeratedly positive response to a negative statement or event. For example:

User: “How are my plants doing?”
A.I.:   “All your plants died.”
User: “That’s just great.”

So, I programmed the A.I. to check for sarcasm at positive responses such as “(That is) great/wonderful/brilliant/lovely”, “Thanks a lot” or “Congratulations”. If we don’t know the speaker personally, both the speaker and listener can only build on common opinion, which is where the database comes in. The database tells us that “great” and “thank” are very positive words. The A.I. compares this to the previous statement: “All your plants died”. The database tells us that the subject “plant” is neutral but the verb “die” is commonly very negative. Thus the A.I. has detected a very positive response to a very negative statement, and unless the speaker is a sadist, it may be assumed that the response is therefore sarcastic and actually means “not great”.

The assessment is just a little more sophisticated than that. For instance, if it was said “Hitler died. That’s great news.”, this would not be considered sarcasm, because in this case the negative verb “die” happened to a negative subject “Hitler”. This is a double negative, which makes a positive. Or in math: -1 x -1 = +1. Additionally the A.I. works this out in degrees and not just true/false values: The outcome must reach a minimum opposite value before we can reasonably assume that this is sarcasm, while a moderately positive “That’s okay” is more likely genuine consolation. Typically this isn’t a problem because most sarcastic responses are also exaggeratedly positive for exactly this reason.
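Here is a minimal sketch of that check, building on the word-value lookup above. The parsed subject/verb/object inputs, the example values and the threshold are my own simplifications, not Arckon’s actual code or tuning.

# Sketch of the sarcasm check: an exaggeratedly positive response to a clearly
# negative prior statement. Threshold and example values are placeholders.

SARCASM_THRESHOLD = 4

def statement_value(subject, verb, obj, values):
    """Value of the prior statement; two negatives ('Hitler died') make a positive."""
    verb_value = values.get(verb, 0)
    participants = sum(values.get(word, 0) for word in (subject, obj) if word)
    if verb_value < 0 and participants < 0:
        return abs(verb_value)                      # negative event, negative subject
    return verb_value + participants

def is_sarcastic(response_words, subject, verb, obj, values):
    """True if the response is positive, the prior statement negative, and the contrast large."""
    response = sum(values.get(word.lower(), 0) for word in response_words)
    previous = statement_value(subject, verb, obj, values)
    return response > 0 and previous < 0 and (response - previous) >= SARCASM_THRESHOLD

values = {"great": 3, "die": -3, "plant": 0, "hitler": -3}    # made-up example values
print(is_sarcastic(["that", "is", "just", "great"], "plant", "die", None, values))    # True
print(is_sarcastic(["that", "is", "great", "news"], "hitler", "die", None, values))   # False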

This little exercise covers many common sarcastic statements already and shows that recognising basic sarcasm is a cakewalk (1 day’s programming) compared to understanding basic language (3 years and counting). As for “understanding” sarcasm, there isn’t much more to understand about it than that one should invert the statement to “not”. But to be on the safe side I just have the A.I. ignore the statement and say “I detect sarcasm” to let me know it’s not taking me seriously. I may be a mad scientist, but I’m not crazy.

Things I didn’t do: More of the same opposite
Sarcasm can also come in the form of a negative response to a positive statement: “I got a raise. Don’t you just hate it when that happens?”, where the same math applies to the object “a raise” (positive) and the verb “hate” (very negative), with the pronoun “that” indicating that the latter is a response to the previous statement.
Sometimes the response precedes the statement: “Don’t you just hate it – when you get a raise?” Grammar parsing will split the relative clause at the link word “when…”, and again the same opposite values can be found.
A subtler form can occur in comparisons like “He is as slender as an elephant”. This has the most straightforward solution, as the procedure has to be done for all comparisons anyway: All the A.I. has to do is look up in its knowledge database how slender an elephant is, which would be “not”, then apply that value to the compared subject “he”. If we wanted to detect this as sarcasm, then finding the value “not” for any comparison is the obvious telltale opposite.
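As a rough sketch of the comparison case, assuming a tiny hypothetical knowledge base that records whether an attribute applies to a noun; the entries and names below are my own examples, not Arckon’s database.

# Sketch of the comparison case: look up whether the attribute applies to the thing
# compared to, apply that truth value to the subject, and flag irony when it is "not".
# The knowledge base below is a hypothetical toy example.

KNOWLEDGE = {
    ("elephant", "slender"): False,    # an elephant is not slender
    ("reed", "slender"): True,
}

def resolve_comparison(subject, attribute, compared_to):
    applies = KNOWLEDGE.get((compared_to, attribute))
    if applies is None:
        return None                    # unknown comparison, draw no conclusion
    return {"subject": subject, "attribute": attribute,
            "value": applies,          # whether the subject has the attribute
            "sarcasm": not applies}    # a "not" value is the telltale opposite

print(resolve_comparison("he", "slender", "elephant"))
# -> {'subject': 'he', 'attribute': 'slender', 'value': False, 'sarcasm': True}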

Other sarcastic responses may involve a little more foreknowledge of an individual speaker’s opinion, either from previous sentiment analyses or from just plain being told, but even my limited implementation already establishes that A.I. can understand sarcasm, and that there is no great mystery about its workings. When there is great mystery about a sarcastic remark, it is self-defeating, as conveying sarcasm depends on the contrast being made clear.

The joke is on us
As may have sprung to your mind, one side-effect of teaching computers to detect sarcasm is that when we say something that seems contrary, the computer may not believe us. Teaching computers to speak sarcasm may be an even greater hazard, considering that computers are information systems that we rely on, and sarcasm states incorrect information. Half of the time humans don’t recognise sarcasm when it comes from another human, let alone from a straight-faced rectangular screen in monotone writing. The popular ambition to create a sarcastic “Jarvis” A.I. from the Iron Man movies then doesn’t seem a very wise idea. Because even if computers could master sarcasm, humans never will.