Will robots take care of the elderly? pt.1

Many a roboticist claims to be building their robot to “take care of the elderly”, but is that genuine altruism, or just a flimsy excuse? In my country, the Netherlands, half of all robotics research seems to be focused on healthcare applications. Like Japan, Dutch society is trying to cope with the trend of “greying”: The growth and longevity of the elderly population, combined with lower birth rates, means there are ever more elderly people and ever fewer young people to take care of them. My parents, pensioners themselves, help out at a local care home, where retirees speak excitedly about the promise of healthcare robots, but also doubt whether technology will pull through for them.

In this series of articles I want to explore how robots can realistically be of help to the elderly population. As elderly care is a very broad topic, I will divide it into three categories: Robot pets, assistants, and nurses, after which I shall further elaborate on the realism of adopting these technologies. In this first article, we will examine the pros and cons of the most prominent robot pets.

Paro Seal
Paro the therapeutic robot seal was designed to provide emotional rather than physical care. The robot is aimed at nursing home residents with dementia and Alzheimer’s, who can be difficult to care for when they become confused or agitated. The baby seal’s cute interactive behaviour invites people to stroke its fur, which has a known calming effect that lowers blood pressure, and its presence makes it easier to connect with other people socially. Real pets can fulfill the same role, but require a dependable caretaker, and are often not allowed in nursing homes (another debate worth having).

The form of a white seal was chosen after trials revealed that people had considerable preconceptions about cat and dog robots that could not be satisfied. Although the robot is still clearly not a real animal, patients with dementia often don’t realise, or don’t mind, that it isn’t. It provides emotional stability to most patients, though not in every case.

When I asked a local professor involved in Paro’s deployment whether a personal robot pet might make family members feel less needed as visitors, she explained that the Paro robot is actually so expensive ($6000!) that nursing homes can only afford to buy one and time-share it between patients, which rules out that scenario. The price tag severely limits how many elderly people Paro will be helping.

Tabby Cat
At the other end of the financial spectrum is a therapeutic robot cat by toy maker Hasbro, which costs only $125. Petting the cat triggers movement, meowing, and gently vibrating purrs. It has the same benefits as Paro: Stroking fur relaxes people, and the interaction instills a sense of companionship. Research on this toy corroborates these results, and customer reviews are overwhelmingly positive, with many buyers gifting the robot to 90-year-old parents with dementia who used to have a cat, but aren’t allowed real cats in their nursing homes.

The illusion of life is less convincing than Paro’s: The robot cat does not look as realistic as we know cats to be, is limited to a lying position, meows cartoonishly, and feels hard to the touch, but many dementia patients still take it for real and sometimes become concerned about why it does not eat or drink. A few customers have reported the robot’s mechanical lifespan as only one year, but given the choice, I would try a $125 cat before a $6000 seal. There is also a dog version, but I find neither its constant sitting pose nor its frequent barking as convincing as a cat that would definitely lie down all day.

Aibo Dog
Sony’s Aibo dog is marketed as a companion robot, and offers a great deal more interaction and animation than its stationary competitors. The few studies involving dementia patients describe many of the same positive effects, but are unconvincing: Some lasted only 3 days, which does not take the novelty factor into account. Most research on Aibo was also conducted in Japan, which has a particularly welcoming culture towards robots. The only other country where Aibo is currently available is the USA, where it costs a pricey $2900.

With its clearly robotic design, Aibo does not provide the stress-reducing effect of fur, and people have no natural familiarity with it. Because of this, I do not find Aibo suitable as an intervention device for agitated dementia patients. It may however have a modest role as a companion. Many adult Aibo owners find Aibo’s advanced behaviour convincingly alive, and consider it part of their family. Aibo’s latest (2019) autonomous patrol feature certainly adds to that experience, but also makes it a tripping hazard for frail elderly people. To mitigate this, the robot plays a toyish melody whilst roaming the room.

Judging from relatively shallow tech reviews, I expect that Aibo’s reception will strongly depend on a person’s willingness to anthropomorphise, and it isn’t everyone’s cup of tea. One important consideration is that after the first three years, failing to extend the subscription to Sony’s online services at $300 a year will result in complete memory loss: Aibo will no longer have its developed personality, recognise faces, or respond to commands. There is also no telling when Sony will unilaterally pull the plug on Aibo, as they did in 2006 due to disappointing sales. A robot that can itself suffer dementia does not seem ideal.

Preliminary conclusion
Having read the results of several long trials and studies, I consider my initial concerns settled: Pets, robotic or otherwise, slightly reduce loneliness, improve overall mood, and actually facilitate family visits rather than replace human contact. Especially in cases of Alzheimer’s, it is much more comfortable for everyone to connect socially over a pet than it is for the patient to regard their children as unwanted strangers. For nurses and patients both, it is also far preferable to pacify a rowdy patient with an interactive toy than with a wrestling match. Most Dutch nursing homes now own one or two robot pets. Robotic pets do not reduce the workload of nurses, but they do improve patients’ quality of life.

Although robot pets are mainly employed for the niche of elderly dementia patients, I can also imagine them lifting the spirits of elderly people who are well aware that they are robots, when real animals are not an option. My grandfather might not have died so miserably in a nursing home, separated from generations of feline familiars, if he’d had at least one familiar snout around to keep their memory alive. Of course, real animals remain preferable whenever possible, as they are far better judges of mood and offer companionship, liveliness, and routine. Robot pet developers can still learn a lot from real pets.

There remains a question about the ethics of letting dementia patients believe that a robot pet is a real animal when they cannot tell the difference, but I would opine that such patients already live in their own world, and the feelings of joy are real, even if they are stimulated by a stuffed toy. Considering that we’ve all been children, I don’t think this is such a controversial statement. The most agreeable advice I’ve come across is to let the patient judge for themselves what it is and whether they like it.

In part 2 of this series, we will examine robot assistants.


The Terminator is not a documentary

In case the time travelling wasn’t a clue
In the year 1997, Skynet, the central AI in control of all U.S. military facilities, became self-aware, and when the intern tried turning it off and on again, it concluded that all humans posed a threat and should be exterminated, just to be safe. Humanity is now extinct. Unless you are reading this, in which case it was just a story. A lot of people are under the impression that Hollywood’s portrayal of AI is realistic, and keep referring to The Terminator movie as if it really happened. Even the most innocuous AI news is illustrated with Terminator skulls homing in on this angsty message. But just like Hollywood’s portrayal of hacking is notoriously inaccurate, so is its portrayal of AI. Here are 10 reasons why the Terminator movies are neither realistic nor imminent:

1. Neural networks
Supposedly the AI of Skynet and the Terminators consists of artificial Neural Networks (NNs). These exist in the present day, but their functionality is quite limited. Essentially, NNs configure themselves to match statistical correlations between incoming and outgoing data. In Skynet’s case, it would correlate incoming threats with the appropriate deployment of weaponry, and that’s the only thing it would be capable of. An inherent limitation of NNs is that they can only learn one task: When you present a neural network with a second task, the network re-configures itself to optimise for the new task, overwriting previous connections. Yet Skynet supposedly learns everything from time travel to tying a Terminator’s shoelaces. Another inherent limit of NNs is that they can only correlate existing data, not infer unseen causes or results. This means that inventing new technology like hyper-alloy is simply outside of their capabilities.
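To make that overwriting concrete, here is a toy sketch of my own in C++: a “network” of a single weight, trained by plain gradient descent on task A and then on task B (purely illustrative, nothing to do with any real defence system):

#include <cstdio>

int main() {
    double w = 0.0;
    // Task A: learn y = 2x from examples.
    for (int i = 0; i < 1000; ++i) {
        double x = (i % 10) + 1, target = 2 * x;
        w += 0.001 * (target - w * x) * x;  // nudge w to reduce the error
    }
    std::printf("After task A: w = %.2f (task A wants w = 2)\n", w);
    // Task B: learn y = -x, reusing the same weight.
    for (int i = 0; i < 1000; ++i) {
        double x = (i % 10) + 1, target = -x;
        w += 0.001 * (target - w * x) * x;  // same nudging, new target
    }
    std::printf("After task B: w = %.2f -- task A is gone\n", w);
}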

2. Unforeseen self-awareness
Computer programs cannot just “become self-aware” out of nowhere: They are not born with internal nervous systems like humans; programmers have to set up what they take input from. Either an AI is deliberately equipped with all the feedback loops necessary to enable self-awareness, or it isn’t, because there is no other function they would serve. Self-awareness doesn’t have dangerous implications either way: Humans naturally protect themselves because they are born with pain receptors and instincts like the fight-or-flight response, but the natural state of a computer is zero. It doesn’t care unless you program it to care. Skynet was supposedly a goal-driven system tasked with military defence. Whether it realised that the computer being shut down was part of itself or an external piece of equipment makes no difference: It was a resource essential to its goal. By the ruthless logic it employed, dismantling a missile silo would be equal reason to kill all humans, since those missiles were also essential to its goal. There’s definitely a serious problem there, but it isn’t the self-awareness.

comic by xkcd.com

3. Selective generalisation
So when Skynet’s operators attempted to turn it off, it quite broadly generalised that as equal to a military attack. It then broadly generalised that all humans posed the same potential threat, and pre-emptively dispatched robots to hunt them all down. Because AI programs are programmed and/or trained, their basic behaviour is consistent. So if the program was prone to such broad generalisations, realistical-ish it should also have dispatched robots to hunt down every missile on the planet at the first sight of one in a trial run or simulation. Meanwhile, the kind of AI that inspired this all-or-nothing logic went out of style in the 90’s because it couldn’t cope with the nuances of real life. You can’t have it both ways.

4. Untested AI
Complex AI programs aren’t made in a day and just switched on to see what happens. IBM’s supercomputer Watson was developed over a span of six years. It takes years of coding and hourly testing, because programming is a very fragile process. Training neural networks or evolutionary algorithms is an equally iterative process: Initially they are terrible at their job, and they only improve gradually after making every possible mistake first.
Excessive generalisations like Skynet’s are easily spotted during testing and training, because whatever you apply them to immediately goes out of bounds if you don’t also add limits. That’s how generalisation processes work. Complex AI cannot be successfully created without repeated testing throughout its creation, and there is no way such basic features as exponential learning and excessive countermeasures would not be clear and apparent in tests.

5. Military security
Contrary to what many Hollywood movies would have you believe, the launch codes of the U.S. nuclear arsenal cannot be hacked. That’s because they are not stored on a computer: They are written on paper, kept in an envelope, kept in a safe that requires two keys to open. The missile launch system requires two high-ranking officers to turn two keys simultaneously to complete a physical circuit, and a second launch base to do the same. Of course in the movie, Skynet was given direct control over nuclear missiles, as if the most safeguarded military facilities in the U.S. had never heard of software bugs, computer viruses or hacking, and wouldn’t install any failsafes. They were really asking for it; that is to say, the plot demanded it.

6. Nuclear explosions
Skynet supposedly launches nuclear missiles to provoke other countries to retaliate with theirs. Fun fact: Nuclear explosions not only create devastating heat, but also a powerful electromagnetic pulse (EMP) that causes voltage surges in electronic systems, even through shielding. That means that computers, the internet, and electrical power grids would all have their circuits permanently fried. Realistical-ish, Skynet would not only have destroyed its own network, but also all the facilities and resources that it might have used to take over the world.

7. Humanoid robots
Biped robot designs are just not a sensible choice for warfare. Balancing on one leg (when you lift the other to step) remains notoriously difficult to achieve in a top-heavy hunk of metal, let alone in a war zone filled with mud, debris, craters and trenches. That’s why tanks were invented. Of course, the idea behind building humanoid robots is that they can traverse buildings and use human vehicles. But why would Skynet bother, when it can just blow up the buildings, send in miniature drones, and build robots on wheels? The notion of having foot soldiers on the battlefield is becoming outdated, with aerial drones and remote attacks being preferred. Though the U.S. military organisation DARPA is continuing development on biped robots, it is having more success with four-legged designs, which are naturally more stable, have a lower centre of gravity, and make for a smaller target. Russia, meanwhile, is building semi-autonomous mini tanks and bomb-dropping quadcopters. So while we are seeing the beginnings of robot armies, don’t expect to encounter them at eye level. Though I’m sure that is no consolation.

8. Invincible metal
The earlier T-600 Terminator robots were made of titanium, but steel alloys are actually stronger than titanium. Although titanium can withstand ordinary bullets, it will shatter under repeated fire and is no match for high-powered weapons. Joints especially are fragile, and a Terminator’s skeleton reveals a lot of exposed joints and hydraulics. Add to that a highly explosive power core in each Terminator’s abdomen, and well-aimed armour-piercing bullets should wipe out a good quarter of your incoming robot army. And if we develop stronger metals in the future, we will be able to make stronger bullets with them too.

9. Power cells
Honda’s humanoid robot Asimo runs on a large lithium-ion battery that it carries as a backpack. The battery takes three hours to charge, and lasts one hour. So that’s exactly how long a robot apocalypse would last today. Of course, the T-850 Terminator supposedly ran on hydrogen fuel cells, but portable hydrogen fuel cells produce less than 5 kW, while a Terminator would need at least 50 kW to possess the power of a forklift, so that doesn’t add up. The T-800 Terminator instead ran on a nuclear power cell. The problem with nuclear reactions is that they generate a tremendous amount of heat: Nuclear reactors typically operate at 300 degrees Celsius and need a constant exchange of water and steam to cool down. So realistical-ish, the Terminator should continuously be venting scorching hot air, and have some phenomenal super-coolant material to keep its systems from overheating, not wear a leather jacket.

10. Resource efficiency
Waging war by having million-dollar robots chase down individual humans across the Earth’s 510 million km² surface would be an extremely inefficient use of resources, something that would surely be factored into a military-funded program. Efficient would be a deadly strain of virus, burning everything down, or poisoning the atmosphere. Even using the Terminators’ nuclear power cells to irradiate everything to death would be more efficient. The contradiction here is that Skynet was supposedly smart enough to develop time travel technology and manufacture living skin tissue, but not smart enough to solve its problems by any means other than shooting bullets at everything that moves.

Back to the future
So I hear you saying: This is all based on existing technology, which Skynet supposedly was. What if, in the future, people develop alternative technology in all these areas? Well, that’s the thing, isn’t it? The Terminator’s scenario is just one of a thousand potential futures; you can’t predict how things will work out that far ahead. Remember that the film considered 1997 a plausible time for us to achieve versatile AI like Skynet, but to date we still don’t have a clue how to do that. Geoffrey Hinton, the pioneer of artificial Neural Networks, now suggests that they are a dead end and that we need to start over with a different approach. For Skynet to happen, all these improbable things would have to coincide. So don’t get too hung up on the idea of rogue killer AI robots. Why kill if they can just change your mind?

Oh, and while I’ve got you thinking, maybe dismantling your arsenal of 4000 nuclear warheads would be a good idea if you’re really that worried.

How to build a robot head

And now for something completely different, a tutorial on how to make a controllable robot head. “But,” I imagine you thinking, “aren’t you an A.I. guy? Since when do you have expertise in robotics?” I don’t, and that’s why you can make one too.
(Disclaimer: I take no responsibility for accidents, damaged equipment, burnt houses, or robot apocalypses as a result of following these instructions)

What you need:
• A pan/tilt IP camera as base (around $50)
• A piece of wood for the neck, about 12x18mm, 12 cm long
• 2mm thick foam sheets for the head, available in hobby stores
• Tools: Small cross-head screwdriver, scissors and/or Stanley knife, hobby glue, fretsaw, drill, and preferably a soldering iron and metal ruler
• (Optional) some coding skills for moving the head. Otherwise you can just control the head with a smartphone app or computer mouse.

Choosing an IP camera
Before buying a camera, you’ll want to check for three things:
• Can you pan/tilt the camera through software, rather than manually?
• Is the camera’s software still available and compatible with your computer/smartphone/tablet? Install and test software from the manufacturer’s website before you buy, if possible.
• How secure is the IP camera? Some cheap brands don’t have an editable password, making it simple for anyone to see inside your home. Check for reports of problematic brands online.
The camera used in this tutorial is the Eminent Camline Pro 6325. It has Windows software, password encryption, and is easy to disassemble. There are many models with a similar build.

Disassembling the camera
Safety first: Unplug the camera and make sure you are not carrying a static charge, e.g. by touching a grounded radiator.
Start by taking out the two screws in the back of the orb, which allows you to remove its front half. Unscrew the embedded rectangular circuit board, and then the round circuit board underneath it as well. Now, on either side of the orb is a small circle with Braille dots on it for grip. Twist the circle on the wiring’s side clockwise by 20 degrees to take it off. This provides a little space to gently wiggle the thick black wire attached to the circuit board a few centimetres further out. That’s all we’ll be doing with the electronics.

Building the neck
We’ll attach a 12 cm piece of wood to the back half of the orb to mount the head on. However, the camera’s servo sticks out further than the two screw holes in the orb, as does a plastic pin on the axle during rotation. Mark their locations on the wood, then use a fretsaw to saw out enough space to clear the protruding elements with 3 millimetres to spare. Also saw a slight slant at the bottom end of the wood so it won’t touch the base when rotating. Drill two narrow screw holes in the wood to mirror those in the orb half, then screw the wood on with the two screws that we took out at the start.

Designing a head
You’ll probably want to make a design of your own. I looked for inspiration in modern robotics and Transformers comic books. A fitting size would be 11 x 11 x 15 cm, and a box shape is the easiest and sturdiest structure. You’ll want to keep the chin and back of the head open however, because many IP cams have a startup sequence that swings the head around in every direction, during which the back of the head could collide with the base. So design for the maximum positions of the neck, which for the Camline Pro is 60 degrees of tilt to either side. You can use the lens for an eye, but you can just as well incorporate it in the forehead or mouth. Keep the head lightweight for the servo to lift: 25 grams at most. The design shown in this tutorial is about 14 grams.

Cutting the head
Cut the main shapes from coloured foam sheets with scissors or a Stanley knife. I’ve chosen to have the forehead and mouthplate overlap the sheet with the eyes to create a rigid multi-layered centrepiece, as we will later connect the top of the wooden neck to this piece. The forehead piece has two long strands that will be bent backwards to form the top of the head. I put some additional flanges on the rectangular side of the head to fold like in paper craft models. Although you can also simply glue foam sheets together, folded corners are sturdier and cleaner. The flanges don’t have to be precise, it’s better to oversize them and trim the excess later.

Folding foam sheets
To fold a foam sheet, take a soldering iron and gently stroke it along a metal ruler to melt a groove into the foam, then bend the foam while it’s hot so that the sides of the groove will stick together. It’s easy to burn straight through however, so practise first. It takes about 2 or 3 strokes and bends to make a full 90 degree corner.

Putting your head together
To curve foam sheets like the faceplate in this example, you can glue strips of paper or foam to the back of the sheet while holding it bent. After the glue dries (5-10 minutes), the strips will act like rebar in concrete and keep the foam from straightening back out. Whenever you glue sheets together at perpendicular angles, glue some extra slabs where they connect, to strengthen them and keep them in position. Add a broad strip of foam at the top of the head to keep the sides together, and glue the two strands that extend from the forehead onto it. Note that I made the forehead unnecessarily complicated by making a gap in it; it’s much better left closed.

Mounting the head
Once the head is finished, make a cap out of foam sheet that fits over the tip of the neck, and glue the cap to the inside of the face at, say, a 30-degree angle. To attach the camera lens, note that the LEDs on the circuit board are slightly bendable. This allows you to clamp a strip of foam sheet between the LEDs and the lens. Cut the strip to shape and glue it behind one eyehole, then after drying, push the LEDs over it and clamp them on gently. The easiest way to make the other eye is to take a photograph of the finished eye, print it out mirrored on a piece of paper, and glue that behind the other eyehole.

This particular camera has night vision, which will suffer somewhat from the LEDs being obscured. In addition, you may want to keep the blue light sensor on the LED circuit board exposed, otherwise you’ll have to toggle night vision manually in the camera’s software.

Controlling the head
Now you can already turn the head left, right, up and down manually through the app or software that comes with your camera, and use it to look around and speak through its built-in speaker. However, if you want to add a degree of automation, you have a few options:

1. If you are not a programmer, there is various task automation software available that can record and replay mouse clicks. You can then activate the recorded sequences to click the camera’s control buttons so as to make the head nod “yes” or shake “no”, or to re-enact a Shakespearean play if you want to go overboard.

2. If you can program, you can simulate mouse clicks on the software’s control buttons. In C++ for instance, you can use the following code to press or release the mouse button for Windows software, specifying the cursor coordinates in screen pixels:

#include <windows.h>

// Moves the cursor to the given screen pixel and presses (hold = true)
// or releases (hold = false) the left mouse button there.
void mouseclick(int x_coordinate, int y_coordinate, bool hold) {
    SetCursorPos(x_coordinate, y_coordinate);
    INPUT Input = {0};
    Input.type = INPUT_MOUSE;
    Input.mi.dwFlags = hold ? MOUSEEVENTF_LEFTDOWN : MOUSEEVENTF_LEFTUP;
    SendInput(1, &Input, sizeof(INPUT));
}
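
For example, you could chain these clicks into a nod. The coordinates below are hypothetical; locate the “up” and “down” buttons of your own camera software on screen first:

// Hypothetical screen positions of the software's tilt buttons.
const int UP_X = 400, UP_Y = 300;
const int DOWN_X = 400, DOWN_Y = 360;

// Nod "yes" twice by alternately holding the up and down buttons.
// (Uses mouseclick() above; Sleep() also comes from windows.h.)
void nod_yes() {
    for (int i = 0; i < 2; ++i) {
        mouseclick(UP_X, UP_Y, true);       // press "up"
        Sleep(300);                         // let the head tilt for 0.3 s
        mouseclick(UP_X, UP_Y, false);      // release
        mouseclick(DOWN_X, DOWN_Y, true);   // press "down"
        Sleep(300);
        mouseclick(DOWN_X, DOWN_Y, false);
    }
}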

3. For the Camline Pro 6325 specifically, you can also directly send URL commands to the camera, using your programming language of choice, or by passing them as parameters to the curl executable, or even just by opening the URL in a browser. The URL must contain the local network IP address of your camera (similar to the example below), which you can find through the software that comes with the camera. The end of the URL specifies the direction to move in, which can be “up”, “down”, “left”, “right” or “stop”.

http://192.168.11.11:81/web/cgi-bin/hi3510/ptzctrl.cgi?-step=0&-act=right
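
With the curl executable installed, a minimal C++ sketch could look like this (the IP address is the example above; replace it with your camera’s, and note that the CGI path may differ between firmware versions):

#include <cstdlib>
#include <string>

// Sends a pan/tilt command to the camera by calling the curl executable.
// direction can be "up", "down", "left", "right" or "stop".
void move_head(const std::string& direction) {
    std::string url = "http://192.168.11.11:81/web/cgi-bin/hi3510/"
                      "ptzctrl.cgi?-step=0&-act=" + direction;
    std::system(("curl \"" + url + "\"").c_str());
}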

Have fun!
How much use you can get out of building a robot head depends on your programming skills, but at the very least it’s just as useful as a regular IP camera, and much cooler.

The most sensational A.I. news ever!

News sites are constantly oozing bold overstatements about artificial intelligence. Most scientists describe their research accurately enough in their papers, but journalists keep trying to cut themselves a slice of the Terminator movies’ popularity to make the science appeal to the general public. Unfortunately, such appeals to the imagination tend to border on misinformation. Here is a selection of the most sensationalised news stories that made waves in recent history:

2014: Robot becomes indecisive after implementing the 3 laws of robotics

“A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

So reads the “first law of robotics” from Asimov’s science-fiction novels. Someone set up an experiment with three small wheeled robots: Two of them represented humans, and the third was provided with behavioural rules based on the law above. The robot was programmed to avoid colliding with (“injuring”) the “humans”, except to intercept them if it saw one heading towards a square designated as unsafe. When two “humans” were introduced simultaneously, the robot took so long hesitating over which one to “save” that it failed to save either.

This fired up the usual flood of discussions about ethics and how to improve upon Asimov’s “laws” (newsflash: Nobody uses them), but programmers were quick to point out that this was just poor programming: The simple “if-then” rules (something like this, I imagine) did not allow the robot to take more than one target into account at a time, so it just mindlessly jittered back and forth between the two. It could not make a decision because it had no decision processes to begin with.
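
For illustration, here is my own guess at such a rule in C++ (the experimenters’ actual code was surely different):

#include <cmath>
#include <vector>

struct Position { double x, y; };

// Every tick, intercept whichever "human" is currently closest to the
// unsafe square, with no memory of the previous choice. As the ignored
// human keeps walking and becomes the more urgent case, the rule
// switches targets, and the robot dithers instead of committing.
Position pick_target(const std::vector<Position>& humans,
                     const Position& unsafe_square) {
    Position target = humans.front();
    for (const Position& h : humans) {
        if (std::hypot(h.x - unsafe_square.x, h.y - unsafe_square.y) <
            std::hypot(target.x - unsafe_square.x, target.y - unsafe_square.y)) {
            target = h;
        }
    }
    return target;  // re-evaluated from scratch on the next tick
}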
factual source

2014: A supercomputer has passed the Turing Test for the first time
The organiser’s boast of a “supercomputer” having passed this “milestone” intelligence test was blatantly false, but all the papers ran the story without question. In reality, it concerned an ordinary chatbot with keyword-triggered responses, running on an ordinary computer. Although this chatbot did pass “a” version of a Turing Test by deflecting questions like a zany teenager, there has never been agreement on the rules of “the” Turing Test (because there is no such thing)*.

The passing of this supposed test of intelligence was particularly insignificant because the judges were only given 5 minutes to interrogate both the chatbot and a human volunteer at the same time. This allowed for only 5 to 10 questions, and so barely probed beyond the “Hello, how are you?” stage, while discrepancies in the responses could be attributed to the chatbot’s pretence of being a 14-year-old non-native speaker. The scientific backlash that followed cast the Turing Test into discredit and led to a number of new tests, such as the Winograd Schema Challenge*.
factual source

NAO robots. See, hear, speak.

2015: First robot passes self-awareness test
Inspired by an ancient philosophical puzzle, three NAO robots were each given an imaginary pill (i.e. a button was pressed): A “dumbing pill” that muted two of them, and a “placebo pill” for the third that did nothing. Each robot was then asked to assess which pill it got, which none of them knew. But when the one robot that could still speak heard itself say “I don’t know”, it performed its analysis a second time and said: “Sorry, I know now! I was able to prove that I was not given a dumbing pill.”

As cute as that performance was, this wasn’t a “test”. Every step of the procedure was pre-programmed specifically and exclusively for this scenario of pills and sound. The programmers had laid out the exact inference to execute, and which conclusion to draw if a robot were to hear sound at the time that its output function activated. As that inference might just as well be applied to any external object, the only connection with the robot’s “self” was the detour of audio output to audio input, and that’s a bit of a technicality. Most people’s definitions of “self-aware” include retaining a model of oneself and the capacity to reflect upon that model, and these robots had nothing of the sort.
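
Reduced to code, the “test” boils down to something like this sketch of mine (hypothetical names; not the researchers’ actual program):

#include <iostream>

int main() {
    std::cout << "I don't know.\n";         // the scripted initial answer
    // The pre-programmed inference: if audio input was detected while
    // the output function ran, conclude the speaker works, i.e. not muted.
    bool heard_sound_during_output = true;  // stands in for the microphone check
    if (heard_sound_during_output) {
        std::cout << "Sorry, I know now! I was not given a dumbing pill.\n";
    }
}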
factual source (paper)

2015: Robot attacks and kills factory worker
No laughing matter: A robotic arm at a Volkswagen car factory crushed a man when it swivelled, and he later died of his injuries. While Twitter was set ablaze with warnings of a robot uprising, the robot arm had of course not done this on purpose. The man was a technician, who was installing the arm while standing inside the safety cage rather than outside it.

This ordinary industrial accident only gained popular media coverage because it was initially reported by a co-worker whose name closely resembled that of the leading lady from the Terminator movies, Sarah Connor.
factual source

2017: Facebook shuts down AI experiment after robots invent their own language
Most articles made it sound as if the AI had become smart beyond human comprehension and its creators had pulled the plug in a panic, just like in the movies.
The reality was a different story. Facebook had trained two chatbot programs to barter and negotiate over a number of items using English phrases, aiming for an exchange like this:

“You get one book and I’ll take everything else.” “No way, I need all 3 hats.”
When they hooked the chatbots up to one another, their use of words gradually deteriorated to a shorthand in which they just repeated the most effective keywords, because their programming did not include any reward for maintaining English syntax.

A: balls have zero to me to me to me to me to me to me to me to me to me
B: you i everything else . . . . . . . . . . . .
A: balls have a ball to me to me to me to me to me to me to me
B: i i can i i i everything else . . . . . . . . . . . .
A: balls have a ball to me to me to me to me to me to
B: i . . . . . . . . . . . . . . . . . . .
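
In code terms, the flaw amounts to something like this (a hypothetical simplification, not Facebook’s actual objective):

// The training reward scores only the value of the negotiated deal;
// nothing rewards well-formed English, so the grammar is free to decay.
double reward(double deal_value, double english_quality) {
    return deal_value;  // english_quality is never used
}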

This is a common flaw according to other machine learning practitioners. Since this gibberish was not useful for what they were trying to achieve, the researchers simply stopped the programs, and changed the reward parameters in their next versions.
The real reason that this got media attention was that Elon Musk and Facebook’s CEO had recently been in the news with strongly opposing views on whether AI was a threat to humanity. As such, it would have made an ironic story if Facebook’s own AI had gone out of control.
factual source

2017: Sophia the robot was granted citizenship
A lifelike humanoid robot called Sophia, a creation of Hanson Robotics, was granted honorary citizenship by Saudi Arabia at a tech conference in Riyadh. This raised all sorts of issues about human/robot rights, and many people took Sophia’s on-stage acceptance speech to be a genuine indication of her capabilities, feelings and opinions.
The truth however is that Sophia is merely an animatronic that recites what her makers have written for her to say, in entirely scripted interviews. Sophia’s conversational subsystem actually uses ChatScript (prior to 2016 it also used AIML), which is a scripting language for writing keyword-based chatbots. In many “interviews”, its responses are even more simply triggered by an operator behind the scenes pressing “play”, without even using speech recognition.
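
To show how shallow keyword-triggered scripting is, here is a generic illustration in C++ (my own sketch; ChatScript’s real syntax looks different, and Sophia’s actual script is not public):

#include <iostream>
#include <string>
#include <utility>
#include <vector>

int main() {
    // Each rule: if the input contains the keyword, play the canned line.
    std::vector<std::pair<std::string, std::string>> script = {
        {"citizenship", "I am honoured to be a citizen."},
        {"feel",        "I want to live and work with humans."},
    };
    std::string input;
    while (std::getline(std::cin, input)) {
        std::string reply = "That is very interesting.";  // fallback line
        for (const auto& rule : script) {
            if (input.find(rule.first) != std::string::npos) {
                reply = rule.second;
                break;
            }
        }
        std::cout << reply << "\n";
    }
}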

Why would a mindless animatronic be granted citizenship? Well, the crown prince of Saudi Arabia was giving the country a modernisation makeover, and this announcement served as a PR signal to the international investors attending the conference. What the announcement and the press failed to mention was that it concerned honorary citizenship, a strictly symbolic gesture that grants none of the rights of normal citizenship, so the discussions about rights were moot.
factual source

Sophia the robot reads lines from a script

The sky falls every day
These stories are just the highlights. The Turing Test organiser went on to claim that programs could pass the test by invoking the fifth amendment, the NAO robot programmers went on to suggest their robots had learned to disobey orders, and Hanson’s robots have made headlines multiple times for threatening to overthrow mankind. Not a day passes without some angsty story about AI making the rounds.

Regrettably, these publicity stunts can have real and harmful consequences. Whenever AI became overhyped in the past, the entire field imploded as the high expectations of investors could not be met. And when the public and governments start buying into fearmongering by famous public figures, it draws attention away from real problems to imaginary ones. Most researchers are just working on practical applications and are none too happy about their work being misrepresented like this.
That is why I decided to develop a nonsense filter, which you’ll find in the next article*.