Since Karel Čapek coined the term "robot" in his 1920 play "Rossum's Universal Robots," robots have been regular fixtures in science fiction. And nowadays, they are becoming science fact. Robots are used to vacuum floors, build cars, deactivate bombs, assist in surgery and help the disabled, among many other functions. They are more prevalent than many of us might think, and they're poised to become even more ubiquitous in the future.
A robot, at its simplest, is a machine that can perform tasks normally undertaken by people. Some are operator controlled and some move autonomously (at least for as long as their power sources will allow). They range in form from single robotic arms to fully humanoid bodies. One of the major goals of some roboticists is to make robots appear more humanlike, at least in part to facilitate more natural interaction between robots and people. A robot whose appearance and actions mimic those of a human being more closely than its metal-skinned counterparts is often called an android.
There are a host of androids being used in research today, like Repliee Q2, developed by Hiroshi Ishiguro of Osaka University. Repliee Q2 was modeled on a female TV announcer, and at first glance could be mistaken for a person. She can't walk, though, and doesn't incorporate any sort of complex AI, so her interaction capabilities are limited. Ishiguro has also created a remotely controlled android copy of himself named Geminoid HI-1 that he can use to give lectures from afar. And David Hanson has created an android modeled after Philip K. Dick, author of the novel "Do Androids Dream of Electric Sheep?", which incorporates facial recognition and can carry on conversations. Though none have achieved true autonomy, a walking, talking humanlike copy of a person almost seems the inevitable conclusion of such efforts. But, for some reason, when we encounter robots that look too much like us, we usually find them eerie or off-putting.
What is it that causes realistic robots to creep us out? Are we afraid of what a being with human capabilities, but without human conscience, might do? Are we afraid of the challenge they represent to our uniqueness, and that they'll end up replacing us? As likely as these reasons sound, given the human-dominating nature of androids in the bulk of science fiction, the most convincing answer to date seems to have a visceral rather than a philosophical cause. It's called the "uncanny valley" effect. Read on to find out more.
Falling Into the Uncanny Valley
We tend to anthropomorphize objects and animals. That is, we project human characteristics like intelligence and emotion onto nonhuman things, especially if they exhibit some human traits. So, you'd think this would mean we would take to a humanlike android more readily than a metal mechanoid. We do apparently feel comfortable with robots that have increasingly human physical attributes up to a point, but once this point is passed, we are repulsed. This effect is called the uncanny valley.
The uncanny valley is a term coined by Masahiro Mori in 1970. To illustrate the idea, Mori created a graph with familiarity on the y-axis and human likeness on the x-axis, and plotted our feeling of familiarity with, or ability to identify with, various robotic forms and human representations. Industrial robots sit near the origin, neither humanlike nor evoking a feeling of familiarity. Humanoid robots approach a peak, being both more familiar and more humanlike. But just past this peak, the curve drops off suddenly into a valley (where things like corpses, zombies and prosthetic hands lie), then rises again to a second peak as it approaches a living human. In Mori's view, our comfort level increases as robots gain more human physical traits up to an as-yet-undefined point, at which the human traits suddenly render the robot unfamiliar and creepy. Both physical appearance and motion play a role, as unhumanlike motion can also throw something instantly into the valley.
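The shape Mori described can be sketched numerically. The toy model below is an invented illustration, not Mori's actual data or formula (his 1970 graph was qualitative, with no underlying equation): a general upward trend in familiarity as likeness increases, minus a sharp dip just before full human likeness.

```python
# Toy sketch of the shape of Mori's uncanny valley curve.
# The formula is invented for illustration only; the valley is
# modeled as a narrow Gaussian dip centered near (but before)
# full human likeness.
import math

def affinity(likeness):
    """Felt familiarity as a function of human likeness (both 0..1)."""
    rising = likeness  # general upward trend with human likeness
    valley = 1.5 * math.exp(-((likeness - 0.85) ** 2) / 0.003)
    return rising - valley

# A stylized humanoid robot (first peak) is rated more favorably
# than a near-human android sitting in the valley:
assert affinity(0.6) > affinity(0.85)
# A fully humanlike form recovers past the valley (second peak):
assert affinity(1.0) > affinity(0.85)
```

The two assertions capture the curve's defining property: familiarity is not monotonic in likeness, which is why "more humanlike" is not automatically "more appealing."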
Studies have borne out this idea -- and modified it a bit. Researchers Karl MacDorman, Robert Green, Chin-Chang Ho and Clinton Koch at the Indiana University School of Informatics used still images with the facial features and skin textures altered in various ways to gauge participants' responses. They found that faces deviating from normal human proportions were rated as eerier when the skin texture was realistic, but that reducing the skin realism caused eeriness to decrease. These results seem to indicate that a mismatch between proportion and realistic detail might be a culprit.
A study by Ayse Pinar Saygin, Thierry Chaminade, Hiroshi Ishiguro, Jon Driver and Chris Frith used a moving robot (Repliee Q2, actually) to show that the uncanny valley effect might be caused by a disconnect between our expectations and reality in regard to the appearance and movement of an android. The researchers took functional magnetic resonance imaging (fMRI) scans of participants as they viewed a series of videos of Repliee Q2, a mechanical-looking robot (the same android, but "skinned" to reveal its underlying parts), and a live human (in fact, the android's model), all performing the same actions. Participants' brains responded to the human and the mechanical-looking robot very similarly. But when participants viewed the more humanlike android, different areas of the brain showed activity than in the other cases -- areas that connect the visual cortex with parts of the brain involved in producing and interpreting movement. This provides evidence that the uncanny valley effect may be triggered when something that looks mostly human moves in a nonhuman way (i.e., when appearance and movement do not match the way our brains think they should). Robots moving like we expect robots to move, and humans moving like we expect humans to move, of course, do not creep us out.
One possible evolutionary reason for our revulsion to variances in the appearance and movement of an android is that any irregularity in a human form might indicate illness, and we may be hardwired to recoil to prevent the spread of disease. A certain otherness in another person might also trigger our aversion to people who we wouldn't deem as acceptable mating partners. But whatever the underlying biological reason, roboticists are looking for ways to keep their creations out of the valley.
Must We Bridge the Uncanny Valley?
Although it's the goal of some roboticists to make androids that are so humanlike in appearance and motion that they get past the uncanny valley, many are sidestepping the issue by making non-humanlike but very expressive robots. Leonardo is a cute and furry robot made in collaboration between MIT and Stan Winston Studios. He can exhibit various facial expressions, can recognize faces and is being tutored by humans to learn various skills. And researchers such as Heather Knight believe that the social capabilities of the robot may also be key to avoiding the uncanny valley.
There is a school of thought that robots could be made to appear, as well as communicate and interact socially, just enough like people to make us comfortable with them, but not so much that they truly appear human. The idea is to give robots enough features that will make us anthropomorphize them, such as the ability to give and respond to communication cues, to recognize people's emotional states and respond accordingly, and to exhibit personality and emotion (however artificial), among other things. The robots would have their own form, one designed for whatever work they were meant to do, and the disconnect between our expectations and their appearance wouldn't occur. Mori himself even stated in his 1970 article that designers should strive for the first peak in his graph, not the second, to avoid falling into the creepy area. Perhaps this approach would help robots to fit seamlessly into our lives without giving us the heebie-jeebies.
But others continue to strive for total human realism, like Ishiguro, who is among those who believe that androids can bridge the uncanny valley through increasingly humanlike appearance and motion. Aside from their realistic hair and skin texture, his Repliee Q2 and Geminoid HI-1 were both designed to perform common involuntary human micro-movements, like constant body shifting and blinking, as well as breathing, to appear more natural. And they use air actuators, driven by an air compressor, to effect motion without emitting mechanical noises.
Culture may also play a part. In Japan, artificial forms are already more prevalent and accepted than they are in places like the U.S. There have even been a couple of synthetic pop stars (one animated, and one a computer-generated mashup of the features of her real band members). Perhaps the uncanny valley can be traversed in other parts of the world via the increasing prevalence of androids. Maybe we'll all just get used to them.
But this is not a phenomenon that occurs only with robots. It happens with other largely realistic renderings of the human form, such as animations. There were many reports of people finding the animated human characters in the movies "Final Fantasy: The Spirits Within" and "The Polar Express" creepy or off-putting. Both films were touted for their breakthroughs in computer graphics (CG) photorealism. But the characters weren't real enough to transcend the valley.
We can try everything from decreased realism to full-on human mimicry to further experiment with what forms and functions we are most likely to accept from our robot and computer-generated brethren. We need to either traverse or flat out avoid the uncanny valley, because robots and computer graphics are with us for the long haul.
I, for one, will welcome our robot helpers, be they shiny metal machines or silicone-skinned androids. I am a mediocre housekeeper, sometimes do more harm than good when trying to carry out repairs, and would much rather read a book than have to pay attention to the road while driving to work, so I can see a few good immediate uses for advanced robots. Sure, care needs to be taken when it comes to matters of safety and ethics. In literature and film, robots tend to go haywire and either kill or subjugate their makers. But given that the only household robots currently commercially available are toys and vacuum cleaners, I doubt we'll get to fully sentient humanoid artificial intelligences anytime soon. As far as my grout is concerned, the future can't come soon enough.
- Aldred, Jessica. "From Synthespian to Avatar: Re-framing the Digital Human in Final Fantasy and The Polar Express." Mediascape. Winter 2012. (November 11, 2012) http://www.tft.ucla.edu/mediascape/Winter2011_Avatar.pdf
- Anderson, Alun. "Interview: The shape of android robots to come." New Scientist. July 28, 2007, Volume 195, Issue 2614, Pages 46-47. (November 4, 2012)
- Del Rey, Lester. "Helen O'Loy." Astounding. December 1938.
- Duffy, Brian R. "Anthropomorphism and the social robot." Robotics and Autonomous Systems. March 2003, Volume 42, Issue 3/4, Pages 177-190. (November 13, 2012)
- Gaylord, Chris. "Uncanny Valley: Will we ever learn to live with artificial humans?" Christian Science Monitor. September 14, 2011. (November 4, 2012) http://www.csmonitor.com/Innovation/Tech/2011/0914/Uncanny-Valley-Will-we-ever-learn-to-live-with-artificial-humans
- Giles, Jim. "What puts the creepy into robot crawlies?" New Scientist. October 27, 2007, Volume 196, Issue 2627, Page 32. (November 3, 2012)
- Ishiguro, Hiroshi. "Android science: conscious and subconscious recognition." Connection Science. December 2006, Volume 18, Issue 4, Pages 319-332. (November 4, 2012)
- Ishiguro, Hiroshi. "Scientific Issues Concerning Androids." International Journal of Robotics Research. January 2007, Volume 26, Issue 1, Pages 105-117. (November 4, 2012)
- Landau, Elizabeth. "Why zombies, robots, clowns freak us out." CNN. September 27, 2012. (November 4, 2012) http://www.cnn.com/2012/07/11/health/uncanny-valley-robots/index.html
- MacDorman, Karl F., Robert D. Green, Chin-Chang Ho, and Clinton T. Koch. "Too real for comfort? Uncanny responses to computer generated faces." Computers in Human Behavior. May 2009, Volume 25, Issue 3, Pages 695-710. (November 4, 2012).
- Mori, Masahiro. Translated by Karl F. MacDorman and Takashi Minato. "The Uncanny Valley." Energy. 1970, Volume 7, Issue 4, Pages 33-35. (November 9, 2012) http://www.androidscience.com/theuncannyvalley/proceedings2005/uncannyvalley.html
- MIT. "Leonardo" (November 11, 2012) http://robotic.media.mit.edu/projects/robots/leonardo/overview/overview.html
- Murray, Charles J. "'Marilyn Monrobots' Gain Acceptance Through Humor." Design News. August 2011, Volume 66, Issue 8, Pages 30-32. (November 3, 2012)
- "Robots Creepiest When Most Human-Like (VIDEOS)." Huffington Post. January 31, 2012. (November 9, 2012) http://www.huffingtonpost.com/2012/01/30/robots-creepy_n_1242603.html
- Saygin, Ayse Pinar, Thierry Chaminade, Hiroshi Ishiguro, Jon Driver, and Chris Frith. "The thing that should not be: predictive coding and the uncanny valley in perceiving human and humanoid robot actions." Social Cognitive & Affective Neuroscience. April 2012, Volume 7, Issue 4, Pages 413-422. (November 3, 2012)
- Schaub, Ben. "My Android Twin." New Scientist. October 14, 2006, Pages 42-46. (November 13, 2012)
- Schwartz, Terri. "'Prometheus' viral ad for David 8 features Michael Fassbender as a chilling android." April 17, 2012. (November 12, 2012) http://www.ifc.com/fix/2012/04/prometheus-viral-ad-david-8-michael-fassbender
- Villarreal, Javier Jiménez, and Sara Ljungblad. "Experience Centered Design for a Robotic Eating Aid." HRI'11 conference. ACM. March 7, 2011. (November 11, 2012) http://dl.lirec.org/papers/lbr138-jimenez.pdf
- Walters, Michael L., Kheng Lee Koay, Dag Sverre Syrdal, Kerstin Dautenhahn, and René te Boekhorst. "Preferences and Perceptions of Robot Appearance and Embodiment in Human-Robot Interaction Trials." LIREC report: Foundations of Embodied Companions (D.6.1). May 30, 2009. (November 11, 2012) http://dl.lirec.org/deliverables/Lirec-D.6.1-Foundations%20of%20embodied%20companions.pdf
- Wilson, Daniel H. "Resident Roboticist." Popular Mechanics. May 2006, Volume 183, Issue 5, Page 26. (November 3, 2012)