Márton Münz's Website

The illusion of making friends

Sept 19, 2023 - Tech, Humans | 15 min read

You can watch the moment on YouTube when ASIMO fell off the stairs.

It was supposed to be his triumphant day. Moments later he would have been applauded for climbing up to the top of the staircase, fully autonomously, in front of all those people. Instead, he stumbled, lost his balance, fell off, and hit the floor hard.

My heart went out to him.

(Why am I using the pronoun “him”, and not “her” or “it”? I’m using “him” because I’ve read Simson Garfinkel’s article, “Robot Sex”, in the MIT Technology Review, which clarifies that ASIMO, Honda’s humanoid robot, is a boy.)

The incident happened at a demonstration in Japan, in 2006, in front of an astounded audience. ASIMO took the first two steps without a problem. At that point, everything seemed to be going according to plan. The only thing that worried me was that he was looking at the audience instead of the steps ahead.

Then, as you can see in the YouTube video, ASIMO took a misstep, his foot slipped off the stairs, and he crashed onto the floor.

Watch the video. It is fascinating to see Honda’s technicians immediately rushing to cover ASIMO with a protective screen, shielding him from public view as he lies motionless and helpless.

For me, what’s interesting is not that ASIMO could not make it to the top of the staircase. At the end of the day, “he” is just a piece of metal and plastic affected by the force of gravity. (It is worth noting that he climbed the stairs successfully in various other appearances. Between this mishap in 2006 and his discontinuation in 2018, his abilities improved significantly. Today, of course, we see much more acrobatic performances by robots like Atlas from Boston Dynamics.)

Not even the embarrassing live demo is the most intriguing part. These things happen. (Remember Elon Musk breaking the unbreakable window of a Tesla Cybertruck?)

For me, the interesting part is the way we, humans, react as eyewitnesses to ASIMO’s accident. Some of us find it upsetting, or even heartbreaking. Many of us would agree it was a humiliating moment. But humiliating for whom? For Honda’s engineers? Or for ASIMO himself?

See my selection of comments I found under the YouTube video:

  • “I feel sorry for the little guy. He just fell…and died. Really sad.”
  • “he was just too nervous, so many people watching him”
  • “Hahahaha Robot was so overconfident that it was climbing the stairs without watching them :D Aww next time walk while seeing your path baby :P :P”
  • “Poor robot! Regardless of whether it actually felt anything, it get’s my condolences”
  • “omg… poor asimo.”
  • “poor asimo almost feel sorry for it”
  • “thats so sad :(”
  • “poor guy, all he wanted to do was climb stairs.”
  • “R.I.P Asimo :’(”
  • “once ASIMO evolves into an artificial intelligence he will remember this embarrassing ordeal from his youth, and hunt down everyone who posted mean comments about him on youtube!”
  • “It even happens to the best.”
  • “Please folks, he’s injured, lets give him some privacy.”
  • “I hate the way the audience is laughing at the end.”
  • “Why do i feel sad for a robot? I’m more faulty then the technology they used!”

Compassion from some people, irony from others. Nevertheless, even the commenters who express amusement or laugh at the video do so the way we laugh at a fellow human being.

This is anthropomorphism, a well-studied phenomenon in cognitive psychology.

Seeing humans in nonhumans

We see a robot hitting his head into the ground. Ouch.

Ouch?

Yes, despite being fully aware that ASIMO is just a machine, it’s hard not to feel at least a little bit of empathy. This observation says more about us than about him.

Anthropomorphism is our innate tendency to attribute human-like characteristics, traits, emotions, or intentions to nonhuman beings or objects. As a consequence, we find ourselves relating to them as though they were one of us.

Obviously, when I express sympathy towards ASIMO, I immediately recognize this as an irrational thought. I know he doesn’t have feelings and can’t experience pain or embarrassment, so there is no point in my compassion. However, research shows that anthropomorphizing non-human entities is an automatic process. It’s rooted in our cognitive tendencies to understand, predict, and interact with the world around us. Since it is an automatism of my brain, I can hardly escape it, even if I think it is stupid. It’s an instinctual reaction.

Anthropomorphism is deeply rooted in human nature; it plays an important role in many religions and mythologies. It’s in the stories we tell ourselves every day, from children’s literature and fables to contemporary advertising.

While children are more likely to anthropomorphize (which has been shown to be important in their social development), adults are by no means exempt from this tendency. Disney characters with human-like qualities such as Mickey Mouse and Donald Duck may be more popular among children, but anthropomorphism is also a widespread phenomenon in the world of adults.

After all, it is adults, not children, that give human names to hurricanes (“Katrina claimed more than 1,800 lives”, “Andrew caused an estimated $26 billion in damage”).

What’s more, we all know grownups who think of their car as a beloved member of their family, or adults who can get angry with their malfunctioning mobile phone, as if it intentionally caused them inconvenience. And the examples continue.

Weirdly enough, research shows that humans can anthropomorphize anything: pets, toys, vehicles, computers, furniture, musical instruments, weather phenomena, rocks, clouds, household appliances, and even abstract concepts such as luck, time or money.

Some of us even talk to our plants, and go as far as apologizing when we forget to water them.

Our social brain

So what happens inside our head when we engage in conversations with our evergreen Ficus benjamina?

When anthropomorphizing a non-human being or object, a specific part of our brain, a collection of interconnected regions often referred to as our “Social Brain”, becomes active. This involves brain regions such as the prefrontal cortex, temporoparietal junction, amygdala, anterior cingulate cortex, fusiform face area, and superior temporal sulcus.

These brain regions are key to our social lives, as they enable us to interpret the behaviors of others, predict their future actions, understand their intentions and beliefs, and empathize with their feelings. For example, our prefrontal cortex is critical for understanding the mental states of other people, while the temporoparietal junction and the anterior cingulate cortex play crucial roles in empathy.

In other words, when you talk to your pet or make friends with your car, the same brain regions activate as when you deal with human beings. It is not surprising at all that these connections also feel “real”.

Some of the above brain regions, such as the fusiform face area and superior temporal sulcus, are involved in interpreting faces, facial expressions, and body language - crucial elements of our social cognition. If an object like ASIMO has a body and a face that resembles a human being, such characteristics can further trigger us to anthropomorphize by activating these areas in our brain.

Robots with human faces

Personally, I find it pretty shocking that my empathy, in which I often take so much pride, can be triggered by such a simple trick as equipping a robot with legs, arms, and a cute-looking head.

While ASIMO didn’t have a face in the traditional sense (no nose or mouth), he still had an overly friendly expression. I could not help but feel some strange sort of sympathy towards him. And when I saw his little body lying on the floor, appearing helpless, I instinctively reacted in a similar way as I would if I saw a human in the same situation.

Experts in human-machine interaction can elevate our relationships with machines to a whole new level by designing systems that tap into these instincts. Understanding the factors that most strongly influence how machines and humans respond to each other is an active area of research in social robotics, a field focused on designing robots that can interact with humans in a socially engaging and efficient way.

Although the appearance of the robot seems to be a key factor (especially its facial features), machines don’t need to be fully human-like. As discussed above, we have a weird ability to anthropomorphize almost anything, even things that don’t remotely resemble a human being.

Humanoid robots (ASIMO by Honda, Sophia by Hanson Robotics, Atlas by Boston Dynamics, T-HR3 by Toyota, and Pepper by SoftBank Robotics), of course, showcase all the tricks to trigger our Social Brain.

Sophia’s face, for example, was modeled after the ancient Egyptian Queen Nefertiti, Audrey Hepburn, and her creator David Hanson’s wife, Amanda Hanson. Despite knowing it’s an illusion, you can’t deny her eyes are beautiful. If there were a beauty contest for robots, she would have a good chance of winning first prize.

Of course it also helps that when you talk to her, she doesn’t answer in a robotic, mechanical tone, instead you’re greeted with a lovely, soft woman’s voice with an American English accent, synthesised by a text-to-speech voice generator.

If you also consider the fact that recent AI Chatbots based on large language models now allow us to communicate with computers in our own natural language (human language being one of our most distinguishing features and a crucial milestone in our evolution), you will realize that it will soon be hard for your brain to tell apart illusion and reality.

Perfect humanoids are imperfect

Another thing came to my mind while watching the YouTube video showing ASIMO’s staircase accident:

We are not perfect, and maybe our imperfections should also be copied into these systems if we want them to be truly humanoid. Maybe machines, robots and AI chatbots that make mistakes are more relatable, and can therefore be more successful in getting on well with human beings. Just like we often prefer hanging out with people who do not try to hide their faults.

Unsurprisingly, research in human-AI interaction has found that AI systems which recognize and admit their own mistakes, and apologize, are perceived as more intelligent and likeable by humans. Attributing the blame to themselves and apologizing has proven to be a very effective way to regain and maintain human trust. This is especially true for humanoid robots.

So now I understand why ChatGPT apologizes to me hundreds of times a day.

There is an increasing number of AI programs out there that acknowledge their own errors to improve collaborations with humans. Some systems even commit mistakes on purpose to act more human-like.

Maia, for instance, a unique AI chess program developed at Cornell University, is different from chess bots that are trained to play the best possible moves in any given situation. Unlike DeepMind’s AlphaZero that played chess against itself for hours to achieve superhuman level, Maia was trained to predict the move a human player would make in the given position - which is not necessarily the best move.

Trained on a database of millions of games played by humans, Maia consistently plays the “human move” even when it knows it’s a mistake. Its objective is to mimic the imperfections of human decision-making in chess.

It may not be the most powerful chess program in the world (it certainly would have no chance against AlphaZero), but it’s amazing to think that Maia is an artificial system playing the game in a human way. For this reason, it’s effectively used in chess education, and it serves as an example of AI systems that are easier for humans to relate to and interact with, precisely because they can make mistakes like humans do.
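The difference between the two objectives can be sketched in a few lines. This is only a toy illustration, not Maia’s actual code (the real system is a neural network trained on millions of human games); the position, moves, evaluations, and play counts below are all invented for the sake of the example:

```python
from collections import Counter

# A "best-move" engine picks the move with the highest evaluation,
# while a Maia-style model predicts the move humans most often played
# in this position - even if that move scores worse.

# Hypothetical engine evaluations for one position (higher = better)
engine_eval = {"Qxf7": 3.2, "Nf3": 0.4, "h3": 0.1}

# Hypothetical counts of what human players actually played here
human_moves = Counter({"Nf3": 620, "h3": 240, "Qxf7": 140})

best_move = max(engine_eval, key=engine_eval.get)  # engine's objective
human_move = human_moves.most_common(1)[0][0]      # Maia-style objective

print(best_move)   # Qxf7 - objectively strongest
print(human_move)  # Nf3 - the "human move", not necessarily the best
```

The two objectives disagree exactly when the typical human choice is not the strongest one, and that disagreement is what makes a Maia-style model feel human to play against.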

Interestingly, a Yale-led study involving 153 participants found that robots acknowledging their errors can even help human-to-human interactions. The researchers compared the behaviour of 51 groups, each composed of three humans and one robot collaborating on solving a problem. In the groups where the robots expressed their vulnerabilities in various ways including apologizing for their mistakes, people were found to be talking twice as much to one another, and reported a more enjoyable experience. A little bit like having an honest, open-minded person in the room who helps to break down communication barriers, and create a safe and supportive atmosphere.

It is often emphasised in psychology that embracing vulnerability is key to building strong relationships among people. ASIMO appearing vulnerable in his staircase accident suddenly makes him more relatable in the eyes of a human being like me.

In fact, probably nothing makes ASIMO more human-like than the fact that he fell off the stairs.

Its engineers could certainly perfect the technology and eventually develop a robot that never, not once in a million attempts, fails to reach the top of the staircase, but that wouldn’t make the robot more humanoid. After all, we humans sometimes fall down the stairs.

(Some sad statistics: around 1,000 people die each year from falling down stairs in the United Kingdom, and 12,000 in the United States.)

