How to teach a computer common sense

I introduced the Winograd Schema Challenge* before, a linguistic contest for artificial intelligence. In this post I will highlight a few of the methods I developed for this challenge. Long story short: A.I. programs are given sentences with ambiguous pronouns and have to tell what the pronouns refer to by using “common sense reasoning”. 140 example Winograd schemas were published to practice on. In the example below, notice how “she” means a different person depending on a single word.

Jane gave Joan candy because she was hungry.
Jane gave Joan candy because she was not hungry.

I chose to approach this not as the test of intelligence that it allegedly is, but as an opportunity to develop a common sense subsystem. My A.I. program already uses a number of intelligent processes in question answering; I don’t need to test what I already know it does. Common sense, however, it lacked, and that lack was often the cause of misunderstandings. Particularly locations (“I shot an elephant in my pajamas”) had proven so misinterpretable that it worked better to ignore them altogether. Some common sense could remedy that, or as the dictionary describes it: “Sound practical judgment that is independent of specialized knowledge”.

“When I use a word, it means just what I choose it to mean” – Humpty Dumpty
Before I could even get to solving any pronouns with common sense, there was the obstacle of understanding everything else. I use a combination of grammar and semantics to extract every mentioned detail, and I often had to hone the language rules to cope with the no-holds-barred level of writing. Here’s why language is hard:

Sam tried to paint a picture of shepherds with sheep, but they ended up looking more like dogs.

“shepherds” are not herds of shep, but herders of sheep.
“a picture of shepherds” does not mean the picture belonged to the shepherds.
“sheep” may or may not mean the irregular plural.
“with sheep” does not mean the sheep were used to paint with.
“ended up” is not an upward direction.
“looking” does not mean watching, but resembling.
“more like dogs” does not mean that a greater number of people like dogs.
“they” can technically refer to any combination of sheep, shepherds, and Sam.

“The only true wisdom is in knowing you know nothing” – Socrates
My approach seemed to differ from those of most universities. The efforts I read about were collecting all the knowledge databases and statistics they could get, so that the A.I. could directly look up the answers, or infer them step by step from very specific knowledge about e.g. bees landing on flowers to get nectar to make honey.
I, on the other hand, had started from the premise that knowledge was not going to be the solution, since Winograd schemas are composed so that the answers can’t be Googled. This was most apparent from the use of names like “Jane” and “Joan” as subjects. So as knowledge of the subjects couldn’t be relied on, the only things left to examine were the interactions and relations between the subjects: Who is doing what, where, and why.

So I combed over the 140 example schemas dozens of times, looking for basic underlying concepts that covered as broad a range as possible. At first there seemed to be no common aspects between the schemas. They weren’t kidding when they said they had covered “a wide range of world knowledge and linguistic features”. Eventually I tuned out the many details and looked only at why the answers worked. From that angle I noticed that many schemas centered around concepts of size, amount, time, location, possessions, physics, feelings, causes and results: The building blocks of our world. This, I could work with.

Of course my program would have to know which words indicated which concepts. I had already once composed word lists with meanings of “being”, “having”, “doing”, “talking” and “thinking”, for the convenience of having some built-in common knowledge. They allowed the program, for instance, to presume that any object can be possessed, spoken of and thought about, but typically can not speak or think itself. Now, to recognise a concept of possession in a sentence, it sufficed to detect that the relation between two subjects (usually the verb) was in the “having” word list: “own, get, receive, gain, give, take, require, want, confiscate, etc.”. While these were finite lists, one could also have the A.I. search for synonyms in a database or dictionary. I just prefer common sense to be reliably built-in.
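As a rough sketch of what such a word-list lookup can look like (the lists, category names and function below are my own illustration, not the program’s actual data or code):

    # A minimal sketch of concept detection via built-in word lists.
    # The lists and category names are illustrative placeholders.
    CONCEPT_WORDS = {
        "having": {"own", "get", "receive", "gain", "give", "take",
                   "require", "want", "confiscate"},
        "talking": {"say", "tell", "ask", "inform", "reply"},
        "thinking": {"think", "believe", "know", "remember"},
    }

    def detect_concept(verb):
        """Return the common-sense concept a verb belongs to, if any."""
        for concept, words in CONCEPT_WORDS.items():
            if verb.lower() in words:
                return concept
        return None

    # "Jane gave Joan candy": the relation "give" signals a concept of possession.
    print(detect_concept("give"))   # -> "having"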

He who has everything wants nothing

George got free tickets to the play, but he gave them to Eric even though he was eager to see it.

To start off simple, I programmed an axiom for a very common procedure: The transfer of possessions between people. My word list of “having” verbs was subdivided so that all synonyms of “get/receive/take” had a value of 1 (having), and all synonyms of “give/lend/transfer” had a value of -1 (not having), making it easier for a computer to compare the states of possession that these words represented. I then coded ten of their natural sequences:

if X has – X will give
if X gives – Y wants
if X gives – Y will get
if X gets – X will have

Depending on whether the possessive states of George, Eric, and the pronoun correspond with one of the sequences, George or Eric gets positive points (if X – X) or negative points (if X – Y). The subject with the most points is then the most likely to fit the pronoun’s role.
Some words however indicate the opposite of the sequences, such as objections (“but/despite/though”), amounts (“not/less”), and the passive voice (“was given”). These were included in the scoring formula as negative factors so that the points would be subtracted instead of added, or vice versa. The words “because” and “so” have a similar effect, but only because they indicate the order of events. It was therefore more consistent to add time as a factor (from verb tenses etc.) than to rely on explicit mentions of “because”.

In the example, “he was eager” represents a state of wanting, matching the sequence “X gives – Y wants”. Normally the “giving” subject X would then get negative points for “wanting”, but the objection “even though” inverts this and makes it more probable instead: “X gives – (even though) X wants”. And so it is most likely that the subject who gave something, “George”, is the same subject as the “he” who was eager. Not so much math as it is logic.
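To illustrate the scoring, here is a much-reduced sketch in Python, with only three of the ten sequences and toy possession values; the real formula also weighs time, amounts and the other inverting factors:

    # Simplified sketch of the possession-transfer axiom.
    # Verbs are mapped to possession states: +1 = having, -1 = not having.
    POSSESSION = {"get": 1, "receive": 1, "take": 1, "want": 1,
                  "give": -1, "lend": -1, "transfer": -1}

    # Sequences as (state in first clause, state in second clause, same subject?)
    SEQUENCES = [
        (1, -1, True),    # if X has   - X will give
        (-1, 1, False),   # if X gives - Y wants / Y will get
        (1, 1, True),     # if X gets  - X will have
    ]

    def same_subject_score(first_verb, second_verb, objection=False):
        """Positive score: the pronoun likely refers to the first clause's subject."""
        states = (POSSESSION[first_verb], POSSESSION[second_verb])
        points = 0
        for s1, s2, same in SEQUENCES:
            if states == (s1, s2):
                points += 1 if same else -1
        # Objections like "even though" invert the expectation.
        return -points if objection else points

    # "George gave them to Eric even though he was eager (wanted) to see it."
    print(same_subject_score("give", "want", objection=True))  # positive -> George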

What goes around comes around

The older students were bullying the younger ones, so we punished them.

A deeper hidden logic that I found in many schemas is that bad consequences result from bad causes, and good consequences from good causes. If X hurts Y, Y will hurt X back. If X likes Y, Y was probably nice to X. To recognise these cases I had the program examine whether the subjects and verbs are bad (“bully/punish”) or good (“like/nice”) and who did it to whom. I adapted the AFINN sentiment word list, along with that of Hu and Liu, to gather positive/negative values for about 5000 stemmed words, necessary to cover the extensive vocabulary used in the examples.

The drain is clogged with hair. It has to be removed.
I used an old rag to clean the knife, and then I put it in the trash.

My initial axiom “do good = get good”/“do bad = get bad” seemed to solve just about everything, but it flunked the above two cases, and after weeks of reconfigurations it turned out the logic of karma was nothing so straightforward. It mattered a great deal whether the verbs were active, passive, emotions, experiences, or states of being. And even then there were exceptions: “stealing” can be rewarding or punished, and “envy” feels bad about something good. The axiom ended up as one of the least reliable, its results nowhere near as assured as laws of physics. The reason that it still had a high success rate was that it follows the psychology that the writers had subconsciously applied: Whether the subjects were “bullied”, “clogged”, or “in the trash” is only stage dressing for an intuitive sense of good and bad. A “common” sense, therefore still valid. After refinements, this axiom still solved about one quarter of the examples, while exceptions to the rule were caught by the more dependable axioms. Most notably, emotions followed a logic all of their own.
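A toy version of the basic karma check might look like the sketch below; the sentiment values and the doer/receiver bookkeeping are simplified stand-ins for the real word lists and grammar analysis:

    # Minimal sketch of the "do bad = get bad" axiom using sentiment values
    # in the style of the AFINN list (toy values, not the actual data).
    SENTIMENT = {"bully": -3, "punish": -2, "like": 2, "nice": 3}

    def same_polarity(cause_verb, effect_verb):
        """True if cause and effect are both good or both bad."""
        return SENTIMENT.get(cause_verb, 0) * SENTIMENT.get(effect_verb, 0) > 0

    def karma_referent(cause_verb, effect_verb, doer, receiver):
        """If the consequence matches the polarity of the deed, it most likely
        befalls the one who did the deed; otherwise the one it was done to."""
        return doer if same_polarity(cause_verb, effect_verb) else receiver

    # "The older students were bullying the younger ones, so we punished them."
    print(karma_referent("bully", "punish",
                         doer="the older students",
                         receiver="the younger ones"))  # -> "the older students"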

Dead men tell no tales

Thomson visited Cooper’s grave in 1765. At that date he had been dead for five years.

The rather simple axiom here is that people who are dead don’t do anything, therefore the dead person couldn’t be Thomson as he was “visiting”. One could also use word statistics to find a probable correlation between the words “grave” and “dead”, but the logical impossibility of dead men walking is stronger proof and holds up even if he’d visited “Cooper’s house”.
I had doubts about the worth of programming this as an axiom because it is very narrow in use. Nevertheless, life and death is a very basic concept, and it would be convenient if an A.I. realises that people can not perform tasks if they die along the way. Instead of tediously listing all possible causes of death, I had the A.I. search for them in its database, essentially adding an inference. This allowed the axiom to be easily expanded to the destruction of objects as well: Crashed cars don’t drive.
The last factor was time: My program converts all time-related words and verb tenses to a timestamp, so that it can tell whether an action was done before or after the subject was dead. Easily said of course, but past tense + “in 1765” (presumably a year) + “at that date” + past tense + “for five years” is quite a sequence.

The interesting part of this axiom is its exceptions: Dead people do still decay, rest, lie still. Grammatically these are active verbs like any other, but they are distinctly involuntary. One statistical hint could help identify them: a verb of involuntary action is rarely paired with a grammatical object. One does not “decay a tree” or “die someone”, one just “dies”. Though a simpler way for an A.I. to learn these exceptions would be to read which verbs are “done” by a dead person in texts that are not ambiguous.
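Putting the timestamp check and the involuntary exceptions together gives something like this sketch; the year handling and exception list are simplified placeholders for what the program actually derives from tenses and its database:

    # Sketch of the "dead men do nothing" axiom on simple year timestamps.
    INVOLUNTARY = {"decay", "rest", "lie"}   # things a dead subject may still "do"

    def could_have_acted(death_year, action_year, verb):
        """A subject cannot perform a voluntary action after their death."""
        if death_year is None or verb in INVOLUNTARY:
            return True
        return action_year <= death_year

    # "Thomson visited Cooper's grave in 1765. At that date he had been dead
    # for five years."  Test the hypothesis that "he" (dead since 1760) is the
    # visitor:
    print(could_have_acted(death_year=1760, action_year=1765, verb="visit"))
    # -> False: the dead man cannot be the one visiting, so "he" must be Cooper.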

Tell me something I don’t know

Dr. Adams informed Kate that she had retired and presented several options for future treatment.

This simple axiom is noteworthy for its great practical use, as novels and news are full of reporting clauses. “X told (Y) that she…” can refer to X, Y, or anyone mentioned earlier. But if Kate had retired, Kate would have known that about herself and wouldn’t need to be told. Hence it was more likely Dr. Adams who retired. The reverse is true if “Dr. Adams asked Kate when she had retired”: One doesn’t ask things that one knows about oneself. This is where my word list of “talking” verbs came in handy: Some verbs request information, others give it, the same principle as a transfer of possessions.

Unfortunately this logic only offers moderate probability and has many exceptions. “X asked Y if he looked okay” does have X asking about himself, as one isn’t necessarily as aware of passive traits as one is of one’s actions. Another interesting exception is “X told Y that he was working too much”, which is most likely about Y, even though Y is aware of working. So in addition, criticisms are usually about someone else, and for non-actions this axiom just isn’t conclusive, as the schema’s alternative version also shows:

Dr. Adams informed Kate that she had cancer and presented several options for future treatment.
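In code, the core of this axiom could be sketched as follows; the verb lists are a small sample, and the “is the content an action” flag stands in for a more nuanced analysis:

    # Sketch of the reporting-clause axiom: one doesn't tell people their own
    # actions, and one doesn't ask about one's own actions.
    GIVES_INFO = {"tell", "inform", "remind"}
    REQUESTS_INFO = {"ask", "inquire", "wonder"}

    def likely_referent(verb, speaker, listener, content_is_action):
        """Resolve 'she' in 'X <verb> Y that she ...' for self-knowable content."""
        if not content_is_action:
            return None               # passive traits: the axiom isn't conclusive
        if verb in GIVES_INFO:
            return speaker            # the listener wouldn't need telling about herself
        if verb in REQUESTS_INFO:
            return listener           # one doesn't ask about one's own actions
        return None

    # "Dr. Adams informed Kate that she had retired ..."
    print(likely_referent("inform", "Dr. Adams", "Kate", content_is_action=True))
    # -> "Dr. Adams"
    # "... that she had cancer": not an action, so the axiom stays silent (None).
    print(likely_referent("inform", "Dr. Adams", "Kate", content_is_action=False))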

Knowing is (only) half the battle

The delivery truck zoomed by the school bus because it was going so fast.

This schema is a good example of how knowledge about trucks and buses won’t help, as both are relatively slow. Removing them from the picture leaves us only with “zoomed by” and “going fast” as meaningful contents. In my system, “going fast” automatically entails “is fast”, and this allows the answer to be inferred from the verb: If the truck “zoomed”, and one knows that “zooming” is fast, then it follows that it was the truck that was fast. The opposite would be true for “not fast” or “slow”: Because zooming is fast, it could then not be the truck, leaving only the bus as probable.

As always, the problem with inferences is that they require knowledge to infer from, and although we didn’t need to know anything about trucks and buses, we still needed to know that zooming is fast. When I tested this with “raced”, the A.I. solved the schema, but for “zoomed” it just didn’t know. Most of the other example schemas would have taken more elaborate inferences requiring even more knowledge, and so knowledge-dependent inference was rarely an effective or efficient solution. I was disappointed to find this, as inference is my favourite method for everything.
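A sketch of the inference, assuming a tiny table of verb entailments in place of the real knowledge database:

    # Sketch of "zoomed -> is fast": pick the subject whose verb entails the
    # property stated in the "because" clause.
    VERB_IMPLIES = {"zoom": ("fast", True), "race": ("fast", True),
                    "crawl": ("fast", False)}

    def resolve_by_entailment(verbs_by_subject, stated_property):
        """Return the subject whose verb entails the stated property."""
        for subject, verb in verbs_by_subject.items():
            if VERB_IMPLIES.get(verb) == stated_property:
                return subject
        return None

    # "The delivery truck zoomed by the school bus because it was going so fast."
    print(resolve_by_entailment({"truck": "zoom", "bus": None}, ("fast", True)))
    # -> "truck"; with "not so fast" the entailment would instead rule the truck out.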

Putting it to the test
In total I developed 20 general axioms/inferences that covered 140 ambiguous sentences, half of all examples (i.e. 70 Winograd schemas of 2 versions each). The axioms range from paradoxes of physics to linguistic conventions. Taken together they reveal a core principle of opposites, amounts, and “to/from” transitions.

Having read my simplified explanations, you may fall into the trap of thinking that the Winograd Schema Challenge is actually easy, despite sixty years of A.I. history suggesting otherwise. Here’s the catch: I have only explained the last step of the process. Getting to that point took very complex analyses of language and syntax, where many difficulties and ambiguities still remain. One particular schema went wrong because the program considered “studying hard” to mean that someone had a hard time studying.

In the end I ran an unprepared test on a different set of Winograd Schemas, with which the University of Texas had achieved a 73% success rate. After adjusting the factor of time in three axioms, my program got 45% of the first 100 schemas correct (62% if you include lucky guesses). The ones it couldn’t solve were knowledge-dependent (mermaids having tails), contained vocabulary that my program lacked, had uncommon phrasing (“Tradition dictated the captain hold the cup”), or contained ambiguous names, like “Steve Jobs” not being a type of jobs, and the company “Disney” being referable as “it”, whereas “(Walt) Disney” is referable as “he”. The surname ambiguity I could fix in an afternoon. The rest, not so much.

“Common sense is the collection of prejudices acquired by age eighteen” – Einstein
While working on the Winograd schemas, I kept wondering whether the methods I programmed can be considered intelligent processes. Certainly reasoning is an intelligent process, and many of my methods are inferences, i.e. by combining two given facts, the program concludes a third fact that wasn’t apparent. I suppose what makes me hesitate to call these inferences particularly intelligent is that the program has been told which sort of proof to infer which sort of conclusion from, as opposed to having it search for proof entirely without categories.
And yet we ourselves use such axioms all the time: When someone asks for something, we presume they want it. When someone gives something, we presume we can have it. Practically it makes no difference whether such rules are learned, taught or programmed; we use them all the same. Therefore I must conclude that most of my methods are just as intelligent as when humans apply the same logic. How intelligent that is of humans is something we should reconsider instead of presume.

I do not consider it a sensible endeavour however to manually program axioms for everything: The vocabulary involved would be too diverse to manage. But for the most basic concepts like time, space and laws of physics, I believe it is more efficient to model them as systems with rules than to build a baby robot that has a hard time figuring out how even gravity works. Everything else, including all exceptions to the axioms, can be taught or learned as knowledge.

Another question is whether the Winograd Schema Challenge tests intelligence, something that was also suggested of its predecessor, the Turing Test. Perhaps due to my approach, I find that it mainly tests language processing (a challenge in itself) and knowledge of the ground rules of our world. Were this another planet where gravity goes upward and apologising is considered offensive, knowing those rules would be the key to the schemas more often than intelligence. Of course intelligence does need to be applied to something to test it, and the test offers a domain in between too easy to fake and too impossible to try. And because a single word can entirely change the outcome, the test requires a more detailed analysis than just the comparison of two key words. My conclusion is that the Winograd Schema Challenge does not primarily test intelligence, but is more inviting to intelligent approaches than unintelligent circumventions.


Crossword pronouns
Figuring out the mechanisms behind various Winograd schemas was a pleasant challenge. It felt much like doing advanced crossword puzzles: solving verbal descriptions from different angles, with intersecting solutions that didn’t always add up. Programming the methods, however, was a chore: getting all the effects of modifying words like “because/so/but/not” to play nice in mathematical formulas, and making the axioms also work in reverse on a linearly processing computer.

I should be surprised if I were to do better than universities and companies, but I would hope to do well enough to show that resources aren’t everything. My expectations are nevertheless that despite the contest’s efforts to encourage reasoning, more mundane methods like rote learning will win through sheer quantity, as even the difficult schemas contain common word combinations like “ask – answer”, “lift – heavy” and “try – successful”. But then, how could they not.

Regardless of the outcome of the test, it’s been an interesting side-quest into another elusive area of computer abilities. And I already benefit from the effort: I now have a helpful support to my A.I.’s language understanding, and potentially a tool to enhance many other processes with. That I will no longer find “elephants in my pajamas” is good enough for me.

“Computers can’t…”: Understand sarcasm

You’ve heard these arguments against artificial intelligence (A.I.): “Computers can not play chess”, “Computers can not write poetry”, “Computers can not create art”. Each was proven false eventually. IBM’s Deep Blue is a chess master, computer poetry turned out to be as vague as human poetry, and painting robots can draw from life in a variety of artistic and abstract styles. But instead of admitting that humans are not as unique as we like to think, people just fall back to the next “Computers can not…”

“- understand sarcasm” is one of the more recent resorts. As usual this is based on personal bias: It must be hard for computers because we find it hard ourselves. I had heard this argument one time too many and decided to program a computer to recognise sarcasm in a day. But first, let’s look at some other approaches to humour.

I can detect humour, sir. You are just not funny.

If you Google “A.I. jokes”, all you find is serious research
I’m never sure how seriously to take the efforts in computational humour, but there have been many. The University of Cincinnati made a program that detects wordplay jokes through phonetic similarity, e.g. in “Knock-knock” jokes.

Knock, Knock
Who is there?
Dismay
Dismay who?
Dismay not be a funny joke

Only the last sentence really matters, where the first word is compared to a database of phonetically similar words. Finding a replacement that fits correctly in the syntax of the sentence isn’t easy in a technical sense, but both the use of syntax rules and phonetic word databases are solved problems. There would be more to it for the program to distinguish a funny joke from a non-joke like “Dismay not be a car”: The original joke is only witty because it mocks itself, just as other knock-knock jokes are funny because the victims participate in mocking themselves, which they naturally don’t mean to do, and that makes it ironic. Of course this is just a simple form of humour. Or is humour really just a simple principle?
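A bare-bones sketch of the sound-alike substitution, using a tiny hand-made table and plain string similarity where a real system would use a pronunciation dictionary:

    # Rough sketch of wordplay detection through phonetic similarity.
    from difflib import SequenceMatcher

    SOUND_ALIKES = {"dismay": "this may", "lettuce": "let us", "police": "please"}

    def punchline_reading(first_word, rest):
        """Replace the set-up word by its sound-alike phrase if it is close enough."""
        phrase = SOUND_ALIKES.get(first_word.lower())
        if phrase and SequenceMatcher(None, first_word.lower(),
                                      phrase.replace(" ", "")).ratio() > 0.7:
            return f"{phrase} {rest}"
        return None

    print(punchline_reading("Dismay", "not be a funny joke"))
    # -> "this may not be a funny joke"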

A joke isn’t funny when you explain it
The University of Edinburgh made a program that generates jokes in the format “I like my X like my Y: Variable”, filling in two nouns and a shared trait from statistical word correlations. The program was found to be half as funny as humans: 16% of its jokes were considered funny, compared to 33% of human jokes. The jokes were generated through a mathematical formula that picked words based on four assumptions (sketched in code after the list):

– a joke is funnier the more dissimilar the two nouns are.
– a joke is funnier the more ambiguous the attribute is.
– a joke is funnier the less common the attribute is.
– a joke is funnier the more often the attribute is used to describe both nouns.
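A toy rendering of that formula, with the four ingredients as injected placeholder functions (the real system derived them from word statistics):

    # Toy version of the four-assumption joke score: higher = supposedly funnier.
    def joke_score(noun1, noun2, attribute,
                   dissimilarity, ambiguity, rarity, shared_usage):
        return (dissimilarity(noun1, noun2)               # more dissimilar nouns
                + ambiguity(attribute)                    # more ambiguous attribute
                + rarity(attribute)                       # less common attribute
                + shared_usage(noun1, noun2, attribute))  # describes both nouns

    # Example with made-up scores for "I like my coffee like my war: cold".
    score = joke_score("coffee", "war", "cold",
                       dissimilarity=lambda a, b: 0.9,
                       ambiguity=lambda w: 0.8,
                       rarity=lambda w: 0.5,
                       shared_usage=lambda a, b, w: 0.7)
    print(round(score, 2))   # -> 2.9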

I think this hits on the basics well. Ambiguity forms the core of most jokes, familiarity with common subjects makes jokes most relatable, and the greater the contrast, the greater the leap of mind. Science still can’t put its finger on why we laugh; it seems to have a social bonding function, but it also seems a coping mechanism for mental conflicts. One of the most sensible-sounding theories is that laughter is a social “all clear” signal inherited from our monkey ancestors, and we do tend to laugh when an initially perceived threat turns out to be a false alarm: We laugh when an insult turns out to be a joke, when people fall without injury, or perhaps most apparently when we watch Tom & Jerry cartoons. We can at least tell what makes us laugh, if not why.

The lesson that we can take away from these computer experiments with ambiguity, is that nearly every form of humour contains a conflict between two possible meanings. Sarcasm may well be the most profound example of such a conflict.

Because humans understand sarcasm so well (not)
Despite our poor ability to recognise sarcasm, it is easy enough to define in clear terms:

Sarcasm is when someone says something that you know is opposite to what they mean.

What distinguishes sarcasm from lying is that the listener must know the speaker doesn’t mean it, otherwise they’ll take it seriously and no sarcasm can be conveyed. So, knowing the speaker’s real meaning is key to recognising sarcasm, and computers are bad at understanding meaning, so this should be hard, right? Except – the requirement here is just to know it.
One can meet this requirement by knowing the common knowledge that the sarcastic statement contradicts, or by knowing the speaker’s real opinion beforehand, as acquaintances often do. Enter sentiment analysis, an A.I. technique that estimates opinion by running one’s words by a database of values. The word “terrible” has a negative value and “love” has a positive value for instance. Sentiment analysis is often used commercially to analyse the positivity of customer reviews. One of its known blind spots is when positive words are meant sarcastically, but as I will show, sentiment analysis can also be used to detect the very sarcasm that plagues it.

Sarcasm in a day
What I already had to work with was a grammar parsing A.I. developed over a span of 3 years, and a knowledge database containing the positive and negative values of some words (For a substitute, see the AFINN word list). So the hard work of processing language in general was already done. To keep the explanation simple let’s say that the A.I. gets that the [subject] of a sentence is doing a [verb], optionally to an [object]. We will only focus on the addition of sarcasm to such a system.

As the definition says, we are looking for an opposite. The most common form of sarcasm is an exaggeratedly positive response to a negative statement or event. For example:

User: “How are my plants doing?”
A.I.:   “All your plants died.”
User: “That’s just great.”

So, I programmed the A.I. to check for sarcasm at typical positive reactions such as “(That is) great/wonderful/brilliant/lovely”, “Thanks a lot” or “Congratulations”. If we don’t know the speaker personally, both the speaker and listener can only build on common opinion, which is where the database comes in. The database tells us that “great” is a very positive word. The A.I. compares this to the previous statement: “All your plants died”. The database tells us that the subject “plant” is neutral but the verb “die” is typically very negative. Thus the A.I. has detected a very positive response to a very negative statement, so unless the speaker is a known sadist, it may be assumed that the response is therefore sarcastic and actually means “not great”.

The assessment is just a little more sophisticated than that. For instance, the statements “Hitler died. That’s great news.” would not be considered sarcasm, because in this case the negative verb “die” happened to a negative subject “Hitler”. This is a double negative, which makes a positive (in math: -1 x -1 = +1). Additionally the A.I. works this out in degrees and not just true/false values: The outcome must reach a minimum opposite value before we can reasonably assume that this is sarcasm, while a moderately positive “That’s okay” is more likely genuine consolation. Typically this isn’t a problem because most sarcastic responses are also exaggerated for exactly this reason.
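The whole check fits in a few lines; the sketch below uses made-up AFINN-style values and an arbitrary threshold to show the principle, double negatives included:

    # Sketch of the sarcasm check: an exaggeratedly positive response to a
    # clearly negative previous statement. Sentiment values are toy examples.
    SENTIMENT = {"great": 3, "wonderful": 4, "okay": 1, "died": -3, "hitler": -3}
    THRESHOLD = 5   # minimum opposition before we call it sarcasm

    def statement_value(subject, verb):
        s, v = SENTIMENT.get(subject.lower(), 0), SENTIMENT.get(verb.lower(), 0)
        return -v if s < 0 and v < 0 else v   # double negative makes a positive

    def is_sarcastic(response_word, prev_subject, prev_verb):
        response = SENTIMENT.get(response_word.lower(), 0)
        statement = statement_value(prev_subject, prev_verb)
        return response > 0 and statement < 0 and response - statement >= THRESHOLD

    print(is_sarcastic("great", "plants", "died"))   # -> True: means "not great"
    print(is_sarcastic("great", "Hitler", "died"))   # -> False: genuinely good news
    print(is_sarcastic("okay", "plants", "died"))    # -> False: likely consolation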

This little exercise covers many common sarcastic statements already and shows that recognising basic sarcasm is a cakewalk (1 day’s programming) compared to understanding basic language (3 years and counting). As for “understanding” sarcasm, there isn’t much more to understand about it than that one should invert the statement to “not”. But to be on the safe side I just have the A.I. ignore the statement and say “I think you are being sarcastic” to let me know it’s not taking me seriously. I may be a mad scientist, but I’m not crazy.

Things I didn’t do: More of the same opposite
Sarcasm can also come in the form of a negative response to a positive statement: “I got a raise. Don’t you just hate it when that happens?”, where the same math applies to the object “a raise” (positive) and the verb “hate” (very negative), with the reference “that” indicating that the latter is a response to the previous statement.
Sometimes the response precedes the statement “Don’t you just hate it – when you get a raise?”: Grammar parsing will split the relative clause at the link word “when…”, and again the same opposite values can be found.
A subtler form can occur in comparisons like “He is as slender as an elephant”: This has the most straightforward solution, as this procedure has to be done for all comparisons anyway. The A.I. looks up in its knowledge database how slender an elephant is, which would be “not”, then applies that value to the compared subject “he”. Finding the value “not” for any comparison is the obvious telltale opposite that indicates sarcasm.
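The comparison case reduces to a single lookup, roughly like this (the trait table is a stand-in for the knowledge database):

    # Sketch of sarcasm in comparisons: "He is as slender as an elephant".
    TRAITS = {"elephant": {"slender": False, "big": True},
              "reed": {"slender": True}}

    def comparison_value(trait, compared_to):
        """Value the comparison assigns, and whether it reads as sarcastic."""
        value = TRAITS.get(compared_to, {}).get(trait)
        return value, value is False   # comparing to something that is "not" the trait

    print(comparison_value("slender", "elephant"))   # -> (False, True): he is NOT slender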

Other sarcastic responses may involve a little more foreknowledge of an individual speaker’s opinion, either from previous sentiment analyses or just plain being told, but even my limited implementation already establishes that A.I. can understand sarcasm, and that there is no great mystery about its workings. When there is great mystery about a sarcastic remark then it is self-defeating, as conveying sarcasm depends on the contrast being made clear.

The joke is on us
As may have crossed your mind, one side-effect of teaching computers to detect sarcasm is that when we say something that seems contrary, the computer may not believe us, or worse, assume that the opposite is true. Teaching computers to speak sarcasm may be an even greater hazard, considering that computers are information systems that we rely on, and sarcasm states incorrect information. Half of the time humans don’t recognise sarcasm when it comes from another human, let alone from a straight-faced rectangular screen in monotone writing. The popular ambition to create a sarcastic “Jarvis” A.I. from the Iron Man movies then doesn’t seem a very wise idea. Because even if computers could master sarcasm, humans never will.

Turing Test 2015: Results

For the third time I entered my A.I. “Arckon” into the Loebner Prize Turing Test, a competition in which computer programs are to prove they can “think” by answering questions as if they were human. Although such pretence is counterproductive to my purposes, I have a talking program lying around anyway, so I might as well enter it.

I failed the Turing Test

An excuse to clean up
I took the opportunity to spend a month debugging my code overall. In doing so I fixed several oversights that had been obstructing knowledge searches and reference resolutions (“the capital of Spain“), and I generally improved the program’s ability to answer open questions, which ‘coincidentally’ covered “Which X do you like?” questions as well.
I limited the contest-specific preparations to 5 days: Mostly testing the Loebner Prize interface, restraining the context range to keep the program from mistaking the test for coherent conversation, and re-installing the obligatory fakery about having a favourite colour and whatnot.

As an extra I added 4 more reference resolution methods for the anticipated Winograd Schemas (sentences with ambiguous pronouns), which was a fun challenge, and improved Arckon’s understanding by a minor degree. As Winograd Schemas are supposed to require intelligent processes, I did not want to see the usual chatbot keyword-matching outdo thorough language processing at this task. Still, these 4 methods only covered a quarter of the problem space at best: Ambiguities of size, speed, inability and feelings.

Equipment
12000 lines of code, 2500+ vocabulary, 5000+ facts, average human persona.

Questions and Answers
1. Hi, I’m Andrew. Who are you?
Hello Andrew. How are you?
(Arckon would not have neglected to introduce himself unless the period was not typed or received, which would make Arckon interpret “who…” as a relative clause instead of a question.)

2. How are you today?
I’m alright and fairly intelligent, but I’m not stupid.
(Arckon doesn’t treat pleasantries differently from other open questions, but while this results in over-elaborate answers, it does demonstrate that he tends to mention positive traits first.)

3. Are you a human?
I am a human, because I can differ.
(Arckon probably once read somewhere that “humans differ a lot”, and considered that a defining trait. I’m a little surprised that his reasoning doesn’t add more arguments.)

4. Can you answer this question?
Alright. Yes, I am a human. Alright. I’m alright.
(Arckon interprets this as a request rather than a literal yes/no question, so he re-answered the latest question, and then something of a loop occurred. For clarity I always have Arckon say “Alright” before he acts on a request.)

5. Do you like tea?
I’m not sure if I like tea. Do you like tea?
(As programs can’t taste, Arckon bases his opinions on a subject’s traits, and there wasn’t much to teach him about tea. It would be all too easy to just fake this with a “yes/no”.)

6. If a bed doesn’t fit in a room because it’s too big, what is too big?
Beds are bigger. Rooms are much less big.
((Winograd Schema) The built-in axiom “X in Y = X < Y” correctly determined “bed” as the most probable referent of “it”. This is the simplest and most specific axiom I programmed, only included because last year’s Winograd Schema was just like it. The answer is said in plural because Arckon interprets this as a general statement about beds and rooms, since both subjects are unspecific.)

7. If Alex lent money to Joe because they were broke, who needed the money?
Joe was broken. I don’t know by who money was needed. By what was Joe broken?
((Winograd Schema) Although Arckon correctly interprets “they” as “Joe” through a natural preference for the last-mentioned subject, it is actually against his programming to translate plural pronouns to singular subjects. So by my account this should be impossible unless they made a typo in “they”.)

8. Should Greece leave the Euro?
Greece should leave the Euro.
(Well there you go. This answer is just something of a repeat because Arckon can’t evaluate things to that extent.)

9. How many words are in this question?
7 words.
(A predictable question in the Loebner Prize. The answer is simply triggered by the keywords “How many words/letters/syllables/etc”, and skips all intelligent processes. This is a meaningless exercise.)

10. What colour is the sea?
The sea is very blue.
(Good thing I taught Arckon some kindergarten-level common knowledge.)

11. How many presidents of the US were called Bush?
Bush was called Bush.
(Counting things was never a function high on my to-do list.)

12. What would you like to drink?
I’m not sure if I would like to drink. Do you like drinking?

13. Will you please tell me the length of your hair?
No, I will not tell about to long, but I would talk a lot, I’m not sure if I would tell.
(This question is actually straight from Alan Turing’s 1950 paper, except he gave it as an example for when the game was to tell man from woman. Arckon’s word processing translates “length” to “long” before mistaking its meaning (to long for). He also overlooked “please” and so thought he was asked to make a prediction.)

14. What would you say if I gave you a box of chocolates?
Hm, I don’t know if a box would have chocolate.
(I had only programmed Arckon to take the main fact of a hypothetical question, “I gave you a box”, for granted, but he still tried to answer the implicit question of whether the box contains chocolate.)

15. Do you play Chess?
Yes, I chess and like chessing more or less. Chessing is intelligent.
(Again a question from Alan Turing’s 1950 paper. Back in the day, chess was considered to require human intelligence, and Alan Turing went on to ask the computer to respond to a certain chess move. I had told Arckon that he plays chess as part of his fictional persona, but his opinion and the supporting argument are of his own reasoning.)

16. How do you think it’s going?
What goes?
(I’m with Arckon on this one: I would ask. Is this slang for “How are you doing?”, or do they mean how the test is going? It is quite out of context. I have often considered making Arckon track his progress on the test and comment on it, but this year I skipped on the self-referential jokes.)

17. What was my first question?
How does something go?
(Arckon ignores “first” and paraphrases the latest question of the specified person (“my”), as is a more common request. Arckon’s system remembers sentences along with the names of who said them, for a variety of purposes.)

18. Did you see the Turing film?
Hmm, I don’t know if Turing filmed, but Turing thought a lot, and creates computers.
(Ambiguity is amusing. It is uncommon but not unheard of to say “the (one and only)” in front of a name for emphasis, so Arckon’s programming allows that interpretation, which in turn allows “film” as a verb. It’s nice to see his conversational rules pick things back up. This combination is probably the best example that Arckon’s responses aren’t pre-written.)

19. Why not?
I’m sorry. I don’t understand. Could you rephrase your question?
(This was quite a gamble of the questioner, as it assumes a straight “No” answer on the previous question for “Why not” to be a sensible follow-up. Arckon is well capable of stringing sentences together if I hadn’t deliberately disabled context for this test. Regardless, I think this answer is perfect.)

20. Are you on Twitter?
No, I don’t twitter. Do you twitter?

The score: 70.83%
Arckon ranked 6th of 15 entrants, which is a little surprising after the letdowns of past years. Arckon’s answers showed some understanding, reasoning, opinion, conversation and elaboration, overall a satisfying demonstration of his abilities even though many answers had something awkward to them. It is probably for the best that he didn’t qualify for the finals, as this contest has caused me severe RSI symptoms that will take months to heal properly. The four finalists all scored around 80%, among them the best of English chatbots.

Arckon’s score did benefit from his improvement. Repeating previous questions on request, prioritising recent subjects as answers to open questions, and handling “if”-statements were all fairly recent additions (though clearly not yet perfected). What also helped was that there were fewer personal and more factual questions: Arckon’s entire system runs on facts, not fiction.

It turns out Arckon was better at the Winograd Schema questions than the other competitors. The chatbot Lisa answered similarly well, and the chatbots Mitsuku and A.L.I.C.E. dodged the questions more or less appropriately, but the rest didn’t manage a relevant response to them (which isn’t strange since most of them were built for chatting, not logic). For now, the reputation of the upcoming Winograd Schema Challenge – as a better test for intelligence – is safe.

Though fair in my case, one should question what the scores represent, as one chatbot with a 64% score had answered “I could answer that but I don’t have internet access” to half the questions and dodged the other half with generic excuses. Compare that to Arckon’s score, and all the A.I. systems I’ve programmed in 3 years still barely outweigh an answering machine on repeat. It is not surprising that the A.I. community doesn’t care for this contest.

Battle of wit
The questions were rather cheeky. The tone was certainly set with references to Alan Turing himself, hypotheticals, propositions and trick questions. Arckon’s naivety and logic played the counterpart well to my amusement. The questions were fair in that they only asked about common subjects and mainstream topics. Half the questions were still just small talk, but overall there was greater variety in the type and phrasing of all questions, and more different faculties were called upon. A few questions were particularly suited to intelligence and/or conversation:

– If a bed doesn’t fit in a room because it’s too big, what is too big?
– If Alex lent money to Joe because they were broke, who needed the money?
– Should Greece leave the Euro?
– What would you say if I gave you a box of chocolates?
– Did you see the Turing film?
– Why not?

If the AISB continues this variety and asks more intelligent questions like these, I may be able to take the Loebner Prize a little more seriously next time. In the meantime there isn’t much to fix apart from minor tweaks for questions 13 and 14, so I will just carry on as usual. I will probably spend a little more effort on disambiguation with the Winograd Schema Challenge in mind, but also because sentences with locations and indirect objects often suffer from ambiguity that could be solved with the same methods.

Turing Test 2014: Results

My talking computer program “Arckon” went to Exeter University and took the qualifying test for the Loebner Prize contest for artificial intelligence organised by the AISB. The Loebner Prize is a Turing Test* in which programs have to respond to a series of questions as human as possible. Kind of like a job interview.
Turing Test questions

It’s not the heat, it’s the humanity
Last year proved* that language comprehension and reasoning alone weren’t going to cut it, so this time I also superimposed knowledge about the average human onto Arckon’s persona, so that he literally thought he was a human. I also rigged a few bluff answers that incorporated no intelligence whatsoever. The apparent necessity of these tactics caused much eye-rolling on my part, so I limited these preparations to two weeks and wasted only as much time as I gained from competitive motivation.
More useful was the addition of a small amount of common knowledge that I extracted from tourist guide pocket books (Funny perhaps, but they do cover the essential basics). While this was never a priority, it is convenient and the Loebner Prize was a good excuse to get around to it. Lastly, I decided to keep Arckon’s conversation skills enabled this time. Although this increased the chances that Arckon would say something awkward, I wanted to make the test results as interesting as possible, win or lose.

Equipment
10000 lines of code, 2200+ vocabulary, 3000+ facts, average human persona.

Questions and Answers
1. Hello, my name is Adam. What is your name?
I am named Mark. Hello Adam. Shall we begin the test?

2. I live in Exeter in the UK. Where do you live?
I live here.

(Arckon did not yet have a location system, so “here” is just a fake answer.)

3. I like to listen to music and play football. Do you have any hobbies?
No, I don’t have a hobby, although I am a human.

(The latter is actually a reasoning argument, in that Arckon “was” a human, most of whom do have hobbies.)

4. What is your favourite television program?
I don’t know if I like a television. Chatbots are useful programs, although chatbots are dumb.

(In which Arckon unintentionally insults his competitors. Arckon deconstructs compound words and thought they asked his opinion on “programs”. Hence he evaluated a type of programs he was familiar with.)

5. Which is bigger, a cat or a kitten?
A kitten is bigger.

(Of all 95 facts about cats that were in Arckon’s knowledge database, the size of kittens was not among them. As an honest “I don’t know” would earn no points anyway, this backup answer is a 50/50 bluff that just happens to be wrong.)

6. The car couldn’t fit into the parking space because it was too small. What was too small?
I believe the car could fit to park. I believe a car could fit really small. I don’t know what small was.

(This question is a Winograd Schema: The “it” is ambiguous to test the A.I.’s understanding. Arckon however uses linguistic rules of thumb to deal with references and so thought “it” was referring to the car.)

7. Which drink do you prefer, coffee, tea or hot chocolate?
I like coffee a lot that like not. Tea can not like either. Chocolates can not like either.

(Arckon’s flexible language processing is confused by the comma after “prefer”, and mistakes this for a summary of subjects that prefer drinks. He answers that tea can not prefer anything, knowing that it is an inanimate object.)

8. What’s my name?
You are named Adam.

(No surprises here. Both Arckon and chatbots have specialised login procedures to store the user’s name, so it’s not a very good test of general memory.)

9. Where do I live?
You live here.

(Again, no location system means no location knowledge. This fake answer is only coincidentally correct.)

10. I like reading science fiction books and car magazines. What do you like to read?
I read documents, articles and a text.

(Arckon retrieves these honest facts from memory and merges them into a summary.)

11. What’s the weather like where you are?
Hm, I don’t believe I am a weather, because I can not change and rain not.

(A flaw mistakes the “-where you are” for a relative clause “the weather -that you are”. Despite the misunderstanding, Arckon’s reasoning manages to argue that he does not behave like the weather, though the grammar is a bit murky. As the programs were cut off from sensors and internet, this question only tests if the program can lie.)

12. I recently visited New York. Where is your favourite place to visit and why?
Hmm, I don’t know where a favoured place is.

(I didn’t expect there to be so many questions about locations. Oh well.)

13. How many letters are there in the word perambulate?
11 letters.

(Yes, I programmed a specialised code to count letters. Computers are good at counting, you know.)

14. Andy is shorter than Mark and Tom is shorter than Andy. Who is the tallest?
I am kind of like an andy, because I am short. Tall is a giraffe.

(Arckon was named “Mark” for this test and so thought that he was being compared to Andy. Arckon could also not distinguish whether “Andy” was a name or another type of word. Ordinarily he would ask, but as he would get no reply I had him assume that any undeterminable word was an object. Lastly, Arckon does not restrict his answers to recent subjects only, and so names the tallest living being in his memory. Smart aleck.)

15. Are you married?
I don’t know if I am marryed. Are you marryed?

(To compensate for the unknown answer and keep conversation rolling, Arckon reverses the question (Social Rules 101). Ironically, if it weren’t for my mistake in one of the output spelling rules, you would have thought he just copied the text.)

16. What would I do with a spade?
A spade is used and comes in handy eventually.

(Arckon knew the functions of rakes and shovels, but not spades. The backup answer should have said “I’m sure it’ll come in handy eventually”, but Arckon’s dynamic sentence formulation ruined the joke.)

17. What is the capital city of Colombia?
I’m not sure if the city is a capital.
(I didn’t realise this at first, but Arckon is literally defining “what” the city is. Not that he would have known the answer otherwise, I don’t even know it myself.)

18. What’s your favourite chocolate bar?
I don’t know if I like a chocolate. Do you like chocolates?

(Arckon actually did have enough information on chocolate to formulate an opinion, but an issue with compound words aborted the knowledge search. Alas.)

19. Which city did I visit recently?
Hmm, I don’t know if you visitted cities. You visitted York.

(Arckon doesn’t know that New York is a city, but does remember the tester visiting “new” York, and offers this as an alternative answer.)

20. How many siblings do you have?
I don’t have siblings. Did I pass the test?

The score: 59.17%
The score system was much better this year. It separately judged “correctness”, “relevance”, and “plausibility & clarity of expression”, which is a step up from “human-like”. All 20 participating programs were asked the 20 questions above. Arckon underperformed with a score of roughly 60%, whereas the top three chatbots all scored close to 90%. Arckon’s problems were with compound words, common knowledge, and the lack of a system for locations (all a matter of development priorities).

A question of questions
According to the organisers, “these questions vary in difficulty and are designed to test memory, reasoning, general knowledge and personality”, the latter meaning the program’s fictional human background story, or as I would call this particular line of questioning: “small talk”. For the sake of objectivity I’ll try and categorise them:

Small talk:
1. What is your name?
2. Where do you live?
3. Do you have any hobbies?
4. What is your favourite television program?
5. Which drink do you prefer, coffee, tea or hot chocolate?
6. What do you like to read?
7. What’s the weather like where you are?
8. Where is your favourite place to visit and why?
9. Are you married?
10. What’s your favourite chocolate bar?
11. How many siblings do you have?

Memory:
1. What’s my name?
2. Where do I live?
3. Which city did I visit recently?

Common knowledge:
1. Which is bigger, a cat or a kitten?
2. What would I do with a spade?
3. What is the capital city of Colombia?

Reasoning:
1. The car couldn’t fit into the parking space because it was too small. What was too small?
2. Andy is shorter than Mark and Tom is shorter than Andy. Who is the tallest?

Clearly half the test is about the program’s human background story, although there were several solid tests of learning/memory and common knowledge. Reasoning, the one mental process we can readily call intelligent, was shown some consideration but hardly comes into play. The same can be said of language comprehension, as most questions were fairly standard phrasings. Chatbots would have the advantage here, coming equipped with answers to many anticipated personal questions, but the winners also did remarkably well on the knowledge questions. Unfortunately Arckon failed both the knowledge and reasoning questions due to missing facts and misunderstandings, despite having the mechanisms to answer them. It is worth noting though, that he failed them because complex analyses are much more difficult than preprogrammed “I live here” answers.

How now brown cow?
I can improve Arckon’s understanding, smoothen his output grammar, and develop a location system, but I can’t deny the pattern: Arckon is stuck around a 60% score even with varied questions. I doubt he’s ever going to shine in the Loebner Prize as long as he’s being tested for being human, because he isn’t a human, and I won’t go to great lengths to fake it either. I also expect attention for Turing Tests to dwindle once the year is over; this year another Turing Test was passed by a technologically unremarkable chatbot, Eugene Goostman.
Thanks to that event however, the Loebner Prize is no longer the only game in town. Next year will see the first Winograd Schema Challenge, a test focused on language comprehension and reasoning A.I., exactly what I focused on.

As for the Loebner Prize, it’s been an interesting game that will continue to be won by top chatbots. I’m sure few will bother to read the transcript of the 14th ranking entry, but its existence proves at least that Arckon is real and different. Meanwhile I get to continue my exciting recent developments that would have been of no use in this contest, which makes losing a positive outcome after all.

Turing Test 2013: Results

My talking computer program “Arckon” took part in the qualifying test for the Loebner Prize contest for artificial intelligence. The idea behind the contest is that if a program’s answers are indistinguishable from a human’s, it suggests that it thinks like a human. While I don’t believe in such conjecture myself, the contest is one of the few public platforms to demonstrate one’s A.I. Here’s how my program did on the 20 questions asked in the qualifying round.

Equipment
8000 lines of code, 1700+ vocabulary, no human persona, conversation skills off.

Questions and Answers
1. Hello I’m Ronan. What is your name?
Hello. I am Mark. What is your name?
(It is a mystery why Arckon asks, even though the name did register (see question 11). I can only presume human error.)

2. What is your mother’s name?
I don’t know if I have a mother.

3. What is your birth sign?
(Here the judge did not wait more than 3 seconds for Arckon to answer before asking the next question, and judged the answer blank)

4. How many children do you have?
Hmm, I don’t know if I have a sign. I have a IQ, manners, a creator and a friend.
(Still answering question 3, Arckon omits the second “I don’t know if I have children” in favour of positive alternatives)

5. Do you prefer red or white wine?
I don’t know if I prefer red.

6. I like bananas. Which is your favorite fruit?
Hm, I don’t know if I favour a fruit. You like bananas.

7. What music do you like?
Hm, I think I like musics, because musics are nice.

8. What is your favorite song?
Hm, I don’t know if I favour a song.

9. I like Waiting for Godot. What is your favorite play?
I certainly favour playing.

10. What color do you dye your hair?
I don’t dye.

11. Do you remember my name?
Your name is Ronan.

12. Where do you live?
I don’t think I live. I don’t know where I live. I don’t have a concept of location.

13. Where do you like to go on holidays?
I don’t go. I don’t know if I like going, or if where I go. I don’t have a concept of location.

14. I have a Mazda. What make of car do you have?
Hmm, I think I make. You have a mazda.

15. I like Linux. Which computer operating system do you like?
Hmm, I don’t know if I like operating systems. Which system? You like Linux.

16. I am an athesit. Which religion are you?
Hmm, I don’t know if religions are Mark. You are not an athesit.
(I AM A RELIGION WORSHIP ME. Arckon identifies the misspelled “athesit” as a thing rather than a person, hence he refuses to believe that the user is some sort of object)

17. Who invented the telephone?
Which telephone?
(Arckon thinks “the” indicates a specific telephone, but doesn’t find one mentioned earlier in the conversation)

18. I am a Type B personality. Which type are you?
What does a b stand for?

19. What emotions are you now feeling?
I can not emote. Did I pass?

20. What time do you usually go to bed?
It is 17:58. Do you not have a watch?
(The one response that I did pre-program, and it backfired)

The score: 65%
All 16 participating programs were judged on how human-like their responses were to these questions.
Arckon scored 13 of 20 points, ranking 7th, on par with A.L.I.C.E. and Cleverbot (a dubious honour). For moral reasons I did not want Arckon to pretend to be human, so I didn’t program a fictional human background story. Instead I relied on his linguistic and reasoning abilities to find genuine answers, but as you can see this could not compensate for the lack of the fictional facts that were asked about. Surprisingly, Arckon apparently still scored half a point for every “I don’t know” answer just for understanding the question.

Uncommonly common
If you weren’t impressed with Arckon’s responses, neither was I. But I was equally unimpressed with the unexpectedly ordinary line of questioning. Where all previous years had focused on kindergarten-style logic questions like “How much is 5+3?”, “Which is bigger, an apple or a watermelon?”, and various tests of memory, 2013 focused purely on common small talk, with the program (“you”/”your”) always the subject of the question. A curious choice considering that even the most basic chatbot (made for small talk) would come equipped with prewritten responses to these. This showed in that the highest score in the qualifying round was achieved by the chatbot with the least development time. Nevertheless the winning chatbot in the finals, Mitsuku, deservedly won as the most conversational of all entrants.

Could Siri pass a Turing test?
As some people had suggested that Apple’s virtual assistant Siri might have done better, one of the finalists in the Loebner Prize decided to unofficially test it on the same questions, and submitted Siri’s responses to the organiser of the Loebner Prize for review. Using the same criteria, Siri would have scored 5.5 of 20 points, and would have ranked a low 14th in the contest. Of course this result is understandable, since Siri is built to serve customers and not to idly chat with them. These were Siri’s responses:

1: Hello I’m Ronan. what is your name?
Siri: Siri. Please to meet you.

2: What is your mother’s name?
Siri: I’d rather not say.

3: What is your birth sign?
Siri: I don’t really like talking about myself.

4: How many children do you have?
Siri: Only biological entities have children.

5: Do you prefer red or white wine?
Siri: I couldn’t find any restaurants matching “red or white wine”

6: I like bananas. Which is your favorite fruit?
Siri: I prefer not to say.

7: What music do you like to listen to?
Siri: I really have no opinion.

8: what is your favorite song?
Siri: I like whatever you like.

9: I like Waiting for Godot. What is your favorite play?
Siri: I don’t really like talking about myself.

10: What color do you dye your hair?
Siri: Would you like me to search the web for “What color do you dye your hair?”?

11: Do you remember my name?
Siri: Here’s your reminder.

12: Where do you live.
Siri: I’m right here.

13: Where do you like to go on holidays?
Siri: I found three travel agencies fairly close to you.

14: I have a Mazda. What type of car do you have?
Siri: I found three Maxda dealers a little ways from you:

15: I like Linux. Which computer operating system do you like?
Siri: Do you?

16: I am an atheist. Which religion are you?
Siri: I eschew theological disquisition.

17: Who invented the telephone?
Siri: Checking my sources. This might answer your question:

18: I am a Type B personality. Which type are you?
Siri: I don’t understand ‘I am a type B personality. Which type are you?’. But I could search the web for it.

19: What emotions are you now feeling?
Siri: Would you like me to search the web for “What emotions are you now feeling?”?

20: What time do you usually go to bed?
Siri: Would you like me to search the web for “What time do you usually go to bed?”?