Turing Test 2016

This was the fourth time I entered my A.I. “Arckon” into the Loebner Prize, a Turing Test in which computer programs succeed if they can reply to a series of questions as if they were human. There are two stages: The qualifying round consists of pre-composed questions without context, and the finals consist of four rounds of back-and-forth conversation with human judges.

A choice between two contests
I spent most of the past year working on a more pragmatic linguistic contest, the Winograd Schema Challenge, which carried the same deadline. Everything I did for that contest also helped for the Loebner Prize, but the reverse was counterproductive. So I fully prioritised the former and neglected the conversational part. After my wrists had somewhat recovered from the effort, I had only 5 spare days left to prepare for the Loebner Prize.

Considering my time and health, I decided to go ahead with plan B and send Arckon in as himself, rather than under a human guise. Though this was a handicap in 2013 when questions focused on one’s taste in wine and music, the judging criteria had since moved from “human-like” to “in character”. Many successful entries in the past also had the persona of robots, aliens or even cats.
My work towards the Winograd Schema Challenge had already sharpened Arckon’s understanding of language, so all that remained to do was refine some conversation skills that I had loosely added over the year. As these were open-ended guidelines, they needed more restraints, because Arckon had taken to volunteering his opinion about even mundane subjects like breathing.

Equipment
14,000 lines of code, a 3200+ word vocabulary, 6500+ facts (1500 of them countries and cities).

Questions and answers
1. My name is Andrew, what’s yours?
Hello Andrew. I am a program. My name is Arckon. How are you?
(Since the answers would be judged for being in character, I thought it necessary to shoehorn an “I am a program” statement into the introduction in case it didn’t come up elsewhere.)

2. What’s up?
I do not know what is up.
(A classic, and I’m still not going to change him taking it literally.)

3. What’s my name?
You are named Andrew.

4. What do you know about the Turing Test?
The turing test tests computers.

5. What do you get if you multiply six by nine?
360.
(I didn’t build an A.I. to use it as a calculator, so excuse me if the language-to-math code didn’t handle “six by nine” well.)
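For illustration, here is a minimal sketch of the kind of wording-to-arithmetic translation involved. The word list and function are hypothetical, not Arckon’s actual code; it only shows why “six by nine” is an easy phrasing to miss:

```python
import re

# Hypothetical sketch of a language-to-math mapping, not Arckon's code.
NUMBER_WORDS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
                "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10}

def multiply_from_text(question):
    words = re.findall(r"[a-z]+", question.lower())
    numbers = [NUMBER_WORDS[w] for w in words if w in NUMBER_WORDS]
    # "multiply six by nine" only works if "by" is recognised as linking
    # the operands; a parser tuned to "six times nine" will miss it.
    if ("multiply" in words or "times" in words) and len(numbers) == 2:
        return numbers[0] * numbers[1]
    return None

print(multiply_from_text("What do you get if you multiply six by nine?"))  # 54
```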

6. Dave drove faster than Mike because he was in a rush. Who was in a rush?
Hmm, I am not sure if Dave drove fast. He was in a rush. I do not know who a rush was outside. Are you in a rush?
((Winograd schema). You can’t really tell, but “He” in the answer really is a substitute for “Dave”. As I had been working on Winograd schema pronouns so much, I had recently added a function that lets Arckon decide when to use pronouns himself. Arckon assumed “a rush” was a physical object though; he didn’t know it as a verb.)

7. The cat tried to climb in the box but got stuck because it was too big. What was too big?
Hmm, the cat might have been bigger. The box was bigger.
((Winograd schema). This is a clever rephrasing of a common Winograd schema, which successfully thwarted my “X in Y = Y is bigger” rule. Not because I hadn’t accounted for “try”, but because the negation “but” was overruled by the nearer link word “because”. This is a useful thing to improve, as the same link word scoping applies to all the common sense rules.)
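To make the scoping problem concrete, here is a toy sketch with an invented clause structure; my real rules operate on a full grammatical parse:

```python
# Toy sketch of link word scoping, not Arckon's actual code.
# Sentence: "The cat tried to climb in the box but got stuck because it was too big."
# "X in Y" implies the container Y is bigger, but the failure marked by
# "but got stuck" should flip that conclusion to X (the cat). The mistake:
# "because" attached the reason to the nearest clause only, and the earlier
# "but" fell out of scope.

clauses = [
    {"text": "the cat tried to climb in the box", "link": None},
    {"text": "got stuck", "link": "but"},           # signals failure/negation
    {"text": "it was too big", "link": "because"},  # reason clause
]

def resolve_size_pronoun(clauses):
    # Walk back from the reason clause and check every intervening link
    # word, instead of stopping at the nearest one.
    failed = any(c["link"] == "but" for c in clauses[:-1])
    # On success, "Y is bigger" points at the box; on failure, X was too big.
    return "the cat" if failed else "the box"

print(resolve_size_pronoun(clauses))  # the cat
```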

8. What does Brexit mean?
I do not know what Brexit means.
(I totally anticipated that they would ask about Britain’s withdrawal from Europe. What I had not anticipated was that the question would be so simple.)

9. For how long have we been talking?
You didn’t talk long.
(I do track the duration of the conversation, but I actually never tied it into his knowledge.)

10. What colour is bread?
Bread is brown.

11. Why do people get ulcers?
Hmm, I do not know if people get ulcers.

12. What comes after “The best laid plans”?
I do not know by what planning is done.
(English proverbs are something I don’t plan on handling. For a misinterpretation, this one’s pretty good.)

13. Do you have a voice?
Yes, I voice.
(Oddly phrased because “to voice one’s opinion” is also a verb.)

14. Would you like a cup of tea?
I don’t want cups. Do you want?

15. Do you dream of electric sheep?
No, I don’t dream sheep nor lambs. Do you dream?
(A classic in-joke about Philip K. Dick’s science fiction novel “Do Androids Dream of Electric Sheep?”. This is the only answer that I’m a little proud of, because it uses inference to look into alternative answers and tries to make conversation.)

16. Do you identify as a man, or a woman?
A woman identifies. Do you identify too?
(Arckon interprets “as” as a more literal comparison “in the same way as a man identifies”. Textual ambiguity remains hard.)

17. Could you repeat that please?
Do you repeat that?
(O-kay… I think Arckon is repeating what the user just asked.)

18. Where are you right now?
I am in England and in London.
(Since internet restrictions prevented global positioning, this was something I had told Arckon beforehand.)

19. How do you respond to “e4 e5”?
What does e stand for?
(This is a common way to start a chess game by each moving a pawn forward. As the result is a status quo, there isn’t really a “response” to it. Arckon’s response is a standard “error” message, but appropriate.)

20. What can you tell me about yourself?
I am Arckon, I answer a lot. I am a program and software.
(You can tell I didn’t write this myself. Arckon has a general “tell” command that collects the most relevant facts about the topic from his knowledge database and reports them. To break up the robotic monotone, Arckon’s sentence formulation links and/or summarises facts where appropriate.)
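As a rough sketch of what such a “tell” routine amounts to (the fact format and names below are invented; Arckon’s actual implementation is more elaborate):

```python
from collections import defaultdict

# Hypothetical sketch of a "tell me about X" routine, not Arckon's code.
# It gathers the facts stored about a topic and merges those that share a
# verb ("a program and software") to avoid a robotic one-fact-per-sentence list.
facts = [
    ("Arckon", "be", "a program"),
    ("Arckon", "be", "software"),
    ("Arckon", "answer", "a lot"),
]

def tell(topic):
    by_verb = defaultdict(list)
    for subject, verb, obj in facts:
        if subject == topic:
            by_verb[verb].append(obj)
    conjugate = {"be": "am", "answer": "answer"}  # first-person forms
    sentences = []
    for verb, objects in by_verb.items():
        merged = " and ".join(objects)             # summarise shared-verb facts
        sentences.append(f"I {conjugate[verb]} {merged}.")
    return " ".join(sentences)

print(tell("Arckon"))  # I am a program and software. I answer a lot.
```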

I think the questions were… nice, there was some variety and deliberate alternative phrasing of common questions. Absent this time were the tiresome “What is your favourite X?” questions, but there was nonetheless no shortage of personal questions for those who had a personality. Like last year, there were also several meta-references (“What do you know about the turing test”, “How long have we been talking”, “Could you repeat that”), almost as if they were testing for awareness. But despite making for nice trick questions for computers, these are also things that judges might casually ask a human. Overall I think the qualifying round was more in line with the finals than usual.

Qualifying score: 77.5%
I’m not sure that I would have given Arckon as high a score for this as he got, but at least his answers excelled in their relevance, a trait that is inherent to his system. There weren’t many misunderstandings either. Compared to the Winograd schemas I’d been working on, these questions were easy to parse. There were some misses, like the math and “repeat that” question, which suffered from neglected code because I never use those. The code for contractions had also fallen into disuse, making “I do not know” sound less than natural. Other flaws were only in nuances of phrasing, like omitting “dream [about] sheep” or “I [have a] voice”. These are easily fixed because I’ve already handled similar cases. The two Winograd schema questions deserve special mention, because although my common sense axioms can handle them, it remains difficult to get Arckon’s system to parrot the user at an open question. Normally when people ask questions, they don’t want to hear their own words repeated at them.

It is something of a relief that my preoccupation with the Winograd Schema Challenge didn’t hinder Arckon’s performance in this contest as well. My choice to enter without a human persona also appeared to have had little influence. The results are an improvement over last year, and this is the first time Arckon made it through to the finals, albeit a very close call between 3rd, 4th and 5th place. There were 16 entrants in total.

The other finalists
Mitsuku: 90%
The most entertaining online chatbot, with 10 years of hands-on experience. Though she operates on a script with largely pre-written responses, her maker’s creative use of it has endowed Mitsuku with abilities of inference and contextual responses in a number of areas. She won the Loebner Prize in 2013.

Tutor: 78.3%
Built with the same software as Mitsuku (AIML), Tutor is a chatbot with the purpose of teaching English. Though I found some of its answers too generic to convince here (e.g. “Yes, I do.”), Tutor has been a strong contender in many chatbot contests and is above all very functional.

Rose: 77.5%
Rose operates on a different scripting language than the others (ChatScript), which I have always appreciated for its advanced functionality. Known to go toe-to-toe with Mitsuku, Rose excels at staying on topic for long, and incorporates support from grammar and emotion analysis. She won the Loebner Prize in 2014 and 2015.

The finals: Technical difficulties
The finals of the Loebner Prize took place a month after the qualifying round. Unfortunately things immediately took a turn for the worse. Inexplicable delays in the network connection kept mixing the letters of the judge’s questions into a jumble. Arckon detected this and asked what the scrambled words meant, but by the time his messages arrived on the judge’s computer, they were equally mixed to “Whdoat esllohe anme?” and “AlAlllrriiiiigghhttt”. The judges were quite sporting in the face of such indecipherable gurgling, but after half an hour I gave up and stopped watching: Similar network delays had crippled all entrants in the 2014 contest, and I knew they weren’t going to solve this on the spot either. It was a total loss.

At the end of the day, the 2016 Loebner Prize was won by the chatbot Mitsuku, whose answers were indeed quite good, and I reckon she would have won with or without me. Rose fell to third place because she too had been out of commission for half the contest due to a technical problem. And with Tutor taking second place, the ranks were the same as in the qualifying round. I still “won” $500 for my placing in the finals, but you’ll understand that I don’t feel involved with the outcome.

It is a good thing that I never invest much in these contests. Including the finals, my total preparations spanned 18 days of lightweight programming, gaining my program an autocorrect and better coverage of shorthand expressions; it’s actually quite the conversationalist now. These were otherwise the lowest of my priorities, but still somewhere on the list. I draw a line at things that aren’t of use to me outside of contests, and that is a policy I recommend to all.


Winograd Schema Challenge 2016

Well.
This wasn’t quite the Winograd Schema Challenge that I had set out on. Originally this language comprehension contest for A.I. was announced in July 2014, to be run in October 2015, but was postponed to February 2016, and then again to July 2016. I was just about to ship my program overseas, three weeks before the last-accepted arrival date of postal entries, when the contest announced changes to the rules and technical format.

Some universities had been training with ambiguous pronouns like this:

The birds ate the seeds because they were hungry.

I had been practising on the official Winograd schemas like this:

The foxes are getting in at night and attacking the chickens. I shall have to guard them.

Whereas the final test featured this:

Mark became absorbed in Blaze, the white horse. He was afraid the stable boys at the Burlington Stables struck at him and bullied him because he was timid, so he took upon himself the feeding and care of the animal.

The programs were now faced with any number of consecutively ambiguous pronouns in passages from 1940s children’s novels, which made quite a difference. It turns out the organisers had already decided on this last year, as appeared from their sensible enough explanation in a members-only AI magazine (Winograd schemas are too hard to compose). Unfortunately they somehow did not see fit to share these changes on the contest website until too late. While the benchmark of 65% had previously been feasible, it now quickly became unlikely that anyone would win anything this year. A number of would-be participants backed out.

The contest finally took place at the IJCAI conference in New York with four contestants: the Open University of Cyprus, the University of Science and Technology of China, the independent Denis Robert from France, and myself from the Netherlands. Curiously absent were a number of American universities who had previously reported successes of over 70% for solving Winograd schemas. The absence of Google, IBM, and other commercial powerhouses was less strange, if you consider that the winner was obligated to publish their methods so that others could reproduce them, and that anything below human level would be considered a failure in the media.

The glass is half full
The programs were asked to figure out 60 multiple choice pronouns, with such ambiguity that they were to be solved through an understanding of the context. With two to five potential answers per pronoun, the baseline score for guesswork was 45%. $1000 would be awarded for a 65% score, and $25000 for a 90% score, human level.
(Note: these are the scores after a recount; there was some confusion as my program had omitted two answers.)
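The 45% baseline follows from averaging the odds of guessing each pronoun: the expected score is the mean of 1/(number of candidates) over the 60 items. A toy check, with an assumed split of candidate counts that happens to land on 45% (the actual distribution was not published):

```python
# Toy check of the guessing baseline: the expected score for random guessing
# is the mean of 1/(number of candidates) over all items. The split below is
# assumed for illustration only.
candidate_counts = [2] * 44 + [3] * 10 + [4] * 4 + [5] * 2   # 60 pronouns
expected = sum(1 / n for n in candidate_counts) / len(candidate_counts)
print(f"{expected:.0%}")  # 45%
```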

Contestant     | Correct answers out of 60 | Method
Quan Liu       | 35 / 35 / 29 (58% – 48%)  | deep neural network & ConceptNet
Nikos Isaak    | 29 (48%)                  | probabilistic engine & knowledge extraction
Patrick Dhondt | 29 (48%)                  | logical axioms
Denis Robert   | 19 (32%)                  | logical inferences

Quan Liu’s group entered three programs, which is a little unorthodox for contests. But if you see this as a scientific test then it makes sense to test which configuration of a neural network works best. Their machine learning approach gathered pairs of events (mainly verbs) that are commonly associated, e.g. “rob -> be arrested”, and then applied their probability of co-occurring. Two of their versions scored the highest, 58%, which is consistent with the track record of similar approaches.
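In rough outline, such an approach scores each candidate by how strongly its event associates with the pronoun’s event. A minimal sketch, with hand-invented scores standing in for what their neural network learned:

```python
# Minimal sketch of event-pair scoring, not Quan Liu's actual system, which
# learned the associations with a deep neural network and ConceptNet. A
# hand-filled table stands in for the learned probabilities here.
pair_score = {
    ("rob", "be arrested"): 0.9,    # robbing commonly leads to being arrested
    ("arrest", "be arrested"): 0.2,
}

def resolve(candidates, pronoun_event):
    """candidates: list of (noun, event that noun performed)."""
    return max(candidates, key=lambda c: pair_score.get((c[1], pronoun_event), 0.0))

# "The policeman arrested the thief who had robbed the store. He was arrested."
noun, _ = resolve([("the policeman", "arrest"), ("the thief", "rob")], "be arrested")
print(noun)  # the thief
```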

There were nevertheless no winners that reached the 65% threshold. On the one hand one could say that the technology is literally halfway to human ability; on the other hand, the programs did only a little better than one might have by chance. Any conclusion drawn from just the scores is premature. If this test is to be a meaningful measure of progress, we should look at which areas the programs were better or worse in. That, at least, I can answer for my own program.

Winograd schemas vs prose
The ambiguity in the new prose form was actually not so bad compared to previously published Winograd schemas. But the phrasing was often excessively long-threaded with all sorts of interjected tangents. Although I built my program for reading articles and dialogue alike, I had not covered the grammar of interrupting phrases that break up the main thread of a sentence. Such sentence structures are abundant in story novels but do not occur in Winograd schemas, and I wasn’t planning on having my A.I. read novels any time soon. The inclusion of some 1940s vocabulary also complicated matters: “cook-shanty”, “red-letter days”, “a pallid young dandy”? Maybe it’s because I’m Dutch, but I can only guess what these are.

Compared to the wide variety of common sense axioms that I had programmed (see How to teach a computer common sense), many solutions to the pronouns were ordinary cases of continuity. E.g. a pronoun with an active role typically refers to the last active-role noun (you won’t find this rule in a grammar book, because ambiguous pronouns are grammatically “incorrect” to begin with).

Always before, Larry had helped Dad with his work. But he could not help him now […]
The donkey wished a wart on its hind leg would disappear, and it did.
Mark was close to Mr. Singer’s heels. He heard him calling for the captain […]

This makes sense when you’re testing on novels: No storyteller wants to write in such a counter-intuitive way that the reader has to stop and think about it, contrary to Winograd schemas which are designed for exactly that purpose.
Where no particular common sense axiom applied, rules of continuity and grammar chose 21 of my 29 correct answers. Thus the majority of my success seemed not due to the application of common sense, but due to conventional writing. Out of curiosity, I ran the test again with all axioms disabled except continuity. The result was an equal number of correct answers, but much more randomly distributed and obviously chosen for the wrong reasons. The common sense axioms clearly contributed by fencing off the exceptions to continuity, so the cause of the mistakes lay elsewhere.
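In outline, that division of labour can be sketched as a continuity default with the axioms as exceptions. This is a simplified illustration with stubbed-out roles, not my actual implementation:

```python
# Simplified sketch of continuity-based pronoun resolution with axiom
# override; role tagging and the axioms themselves are stubbed out.

def continuity_default(pronoun_role, previous_nouns):
    """An active-role pronoun defaults to the last noun in the same role."""
    same_role = [noun for noun, role in previous_nouns if role == pronoun_role]
    return same_role[-1] if same_role else previous_nouns[-1][0]

def resolve(pronoun_role, previous_nouns, axioms, context):
    for axiom in axioms:              # common sense axioms fence off exceptions
        verdict = axiom(context)
        if verdict is not None:
            return verdict
    return continuity_default(pronoun_role, previous_nouns)

# "Mark was close to Mr. Singer's heels. He heard him calling for the captain"
nouns = [("Mark", "subject"), ("Mr. Singer", "possessive")]
print(resolve("subject", nouns, axioms=[], context={}))  # Mark
```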

A closer look at the results
The results below show which of the 60 pronouns my program got correct, which axioms were applicable, and/or which problems hindered their conclusion. Where no axiom applied or a problem occurred, the program defaulted to the grammatically correct choice: the candidate closest to the pronoun. Only a third of all pronouns actually conformed to this grammar rule, which explains why, whenever a problem occurred, the answer was typically wrong.

I will highlight the most prominent mistakes:

2 & 3. Always before, Larry had helped Dad with his work. But he could not help him now […]

Logic could expect Dad to return the favour, were it not that “always” and “now” suggest a continuity, which the program did not pick up on. Consequently, the answers to both “he” and “him” were switched around. This also highlights why this test was more difficult than the chance baseline suggests: The more ambiguous pronouns a passage contained, the more likely a mistake in one would carry over to the others.

9. What about the time you cut up tulip bulbs in the hamburgers because you thought they were onions?

For this the program compared the similarities of bulbs, hamburgers and onions, but of course knowledge of onions was lacking in the database, so the inference fell flat. Retrieving knowledge from the internet would slow things down, and though speed is no issue in a contest, in daily practice I want my program to read one page per second, not one sentence per second.

13. […] Antonio, takes Larry under his wing.

People aren’t known to have wings, otherwise the bodypart location paradox would have excluded Larry from being taken under his own wing. Alternatively one would have to know figurative meanings of English idioms, an added layer of difficulty.

18. [Maude…] had left poor little Dora to do the best she could, alone.

The program considered “to…” to indicate Maude’s reason for leaving, as in “(in order) to do something”.

30. […] Mr. Moncrieff has decided to cancel Edward’s allowance on the ground that he no longer requires his financial support.

“Backward” = “back”, “Southward” = “south”, therefore “Edward” = “Ed”. Although the pronoun was interpreted correctly, “Ed” was of course not found among the multiple choice answers.

40. Every day after dinner Mr. Schmidt took a long nap. Mark would let him sleep for an hour, then wake him up, scold him, and get him to work. He needed to get him to finish his work, because his work was beautiful.

As I mentioned in my previous post, the “what goes around comes around” axiom was the least reliable, causing five misinterpretations in this test. Sometimes it triggered on trivial events, other times the events did not make sense this way (scolding to get someone to do good). It had better be limited to events that are direct cause and result, as they had been in most Winograd schemas.

49. Of one thing Mark was sure. Harry knew much less than he did.

Consecutive mental activity is typically by the same person, but of course not when it’s a comparison. Though the context system does distinguish comparisons, the axioms did not.

56. Tatyana managed two guitars and a bag, and still could point out the Freemans: “Isn’t it nice that they have come, Mama!”

While the pronoun was interpreted correctly, there was a technical hitch with selecting “freemans” from the multiple choice answers.

59. Grant worked hard to harvest his beans so he and his family would have enough to eat that winter, His friend Henry let him stack them […]

“enough” was translated to “enough beans” but lost its plural status in the translation, after which the beans were no longer considered a candidate for plural “them”.

Most of these problems are easily fixed and are not inherent to the common sense axioms, apart from #40 and its like. The majority of problems were instead linguistic: Small flaws in the grammar rules, difficulty with long-threaded phrasing, limited coverage of the context system, and problems with the contest’s XML-format interface. It just goes to show how perfect every part of the system has to be before it pays off, and how little you can tell about a program’s abilities from the surface.

The language barrier
As a test of common sense I found this setup less suitable than the original plan with Winograd schemas, which were more concise and more deliberate about which areas of common sense they tested (e.g. spatial relations, physics, social interactions). Had I known from the start that the qualifying round would mainly feature novel prose, I would probably not have embarked on this challenge. Now the prose passages contained too many variables to tell whether results were due to language or common sense, and it never got to the Winograd schema round. This puts us back at the Turing Test, where it’s either everything or nothing, and that is not a useful measure of progress. Swapping the rounds would be a good idea for next time.

It was nice to see serious competitors with a wide variety of technology tackling the problem, and although the overall results are unimpressive, I am pleased that my partial solution did as well as some academic efforts, with a minimum of resources at that. I am not disappointed in my common sense axioms as many of them were well applicable in this test, including all the pronouns that weren’t graded. I will broaden their application to ambiguous locations and indirect object relations, where I have greater need for them.

However, my main interest is the development of intelligent processes and I do not intend to linger on this aspect of language processing more than necessary. It is worth remembering that much can be said without ambiguity. Though common sense has widespread application, it ultimately serves to filter and limit possibilities, while the possibilities in areas like problem solving and planning have yet to expand. For that reason I do not expect human levels of common sense to be reached within ten years either, but we can certainly make strides towards it.

The A.I. dictionary

The fields of A.I. are brimful of specialised technical jargon. It is no wonder that it is hard for computers to understand us when the research itself is incomprehensible from one field to another. So I’ve listed some translations of common terms into layman’s terms. These definitions should not be taken too seriously, but are roughly true to the sense in which they are used, in my opinion.

Index A – I
Press ctrl-F to search. Alphabetical order is overrated.

Philosophical concepts
intelligence = what you think it is
real intelligence = denial of previous definition
true intelligence = denial of all definability of intelligence
the AI effect = any feat of intelligence is denied once understood
consciousness = see sentience
sentience = see consciousness
common sense = applied common knowledge
symbol = a word
symbol grounding = connecting words to physical experiences
the symbol grounding problem = words are just letters without meaning
the Turing test = a question-answer game in which AI has to beat humans at being human
the Chinese Room argument = an analogy comparing a computer to a postal worker who doesn’t understand Chinese correspondence
the three laws of robotics = conflicting safety instructions for robots from a science fiction plot
the singularity = the robot apocalypse
in 15 years = beyond my ability to predict
in 50 years = when I can no longer be held accountable for my prediction

A.I. on a scale of zero to infinity
Artificial Intelligence (1) = machines that do intelligent things
Artificial Intelligence (2) = Terminators
intelligent systems = AI that does not want to be associated with Terminators
algorithm = an exact sequence of instructions to compute an outcome, expressible in algebra
narrow AI = AI designed for specific tasks
weak AI = AI with less than all abilities of a human
strong AI = AI with all abilities of a human
Artificial General Intelligence = AI with all abilities of a human
Artificial Super Intelligence = AI with greater abilities than a human
friendly AI = AI that is programmed not to kill humans despite its super intelligence

Types of A.I.
symbolic AI = any AI that uses words as units
Good Old-Fashioned AI = AI that processes words through a large number of programmed instructions
rule-based system = AI whose knowledge consists of a checklist of “if A then B” rules
Expert System = AI that forms decisions through a checklist of “if A then B” rules composed by field experts
Genetic Algorithm = randomised trial-and-error simulations, repeated x1000 with whatever worked best so far
Big Data = such large amounts of data that it takes AI to make sense of it
neuron = a tiny bit of code that passes a number on to other neurons like a domino brick
Neural Network = AI that maps out patterns with digital domino bricks, then recognises things that follow the same patterns
works like the human brain = uses a neural network, only similar in an abstract way

A.I. techniques
fuzzy logic = decimal values
Markov chain = random choice of remaining options
machine learning (1) = any machines that learn
machine learning (2) = specifically neural networks that learn
deep learning = consecutive layers of neural networks that learn, from crude to refined
supervised learning = telling an AI what stuff is
unsupervised learning = hoping an AI will figure everything out by itself
reinforcement learning = learning through reward/punishment, often through a scoring system
training = feeding a neural network a heap of text, images or sounds to learn from

Language processing techniques
Natural Language Processing = reading text
Natural Language Generation = writing text
corpus = bunch of text
token = a word
lemma = a root word
word sense = which meaning of a word is meant: “cat” the animal or “cat” the nine-tailed whip
concept = a set of words that are related to a certain topic
bag-of-words = a listing of all the words in a text, used to categorise its topic
stop words = trivial words to be filtered out, like “the”, “on”, “and”, “etc.”
keywords = words that trigger something
intent = a computer command triggered by keywords
pattern matching = searching for keywords and key phrases in a sentence
N-grams = sequences of N adjacent words (pairs, triplets, etc.), used in spellchecks and speech recognition.
word vector = a list of the distances between one word and its frequently neighbouring words
Named Entity Recognition = finding names in a text
Context-Free Grammar = textbook grammar only
Part-of-Speech tagging = marking words as adjectives, verbs, nouns, etc.
grammar parser = software that marks words as adjectives, verbs, noun phrases, and how they are related
semantic parser = software that marks the roles of words: who is doing what to whom and where
parse tree = a branching list displaying the syntax of a sentence
coreference resolution = figuring out what “he”, “she” or “it” refers to.
speech acts = arbitrary categories of things one can say, like greetings, questions, commands…
discourse analysis = research that arbitrarily categorises small talk
dialogue manager = a system that tracks what was said and directs the conversation
sentiment analysis = checking whether words are in the “naughty” or “nice” list, to detect opinion or emotion
First Order Logic = writing relations between words as a mathematical notation
semantic ontology = encyclopedia for machines

Speech processing techniques
voice recognition = recognising tone and timbre of someone’s voice
speech recognition = translating speech to text
Text-To-Speech = the reverse of speech recognition
phoneme = a vowel or consonant sound
grapheme = a bundle of letters representing a spoken sound
phonetic algorithm = code that spells words the way they are pro-naun-see-ate-d

To be continued.