
Robots Like Us

Thanks to new research initiatives, autonomous humanoid robots are inching closer to reality.
Figure. Honda's ASIMO uses an array of sensors to gauge the shape of an object, then responds with a sequence of appropriate actions.

The science fiction robots of our youth always looked reassuringly familiar. From the Iron Giant to C-3PO, they almost invariably sported two arms, a torso, and some type of metallic head.

Real-world robots, alas, have largely failed to live up to those expectations. Roombas, factory assembly arms, and even planetary rovers all seem a far cry from the Asimovian androids we had once imagined.

That may be about to change, thanks to a series of engineering advances and manufacturing economies of scale that could eventually bring autonomous humanoid robots into our everyday lives. Before Rosie the Robot starts clearing away the dinner dishes, however, roboticists still need to overcome several major technical and conceptual hurdles.

In June 2011, U.S. President Obama announced the National Robotics Initiative, a $70 million effort to fund the development of robots “that work beside, or cooperatively with people.” The government sees a wide range of potential applications for human-friendly robots, including manufacturing, space exploration, and scientific research.

While roboticists have already made major strides in mechanical engineering—designing machines capable of walking on two legs, picking up objects, and navigating unfamiliar terrain—humanoid robots still lack the sophistication to work independently in unfamiliar conditions. With the aim of making robots that can function more autonomously, some researchers are honing their strategies to help robots sense and respond to changing environments.

Honda’s well-known ASIMO robot employs a sophisticated array of sensors to detect and respond to external conditions. When ASIMO tries to open a thermos, for example, it uses its sensors to gauge the shape of the object, then chooses a sequence of actions appropriate to that category of object. ASIMO can also sense the contours of the surface under its feet and adjust its gait accordingly.
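
Honda has not published ASIMO's control code, but the pattern described here, classifying an object from sensor data and then dispatching a pre-built action sequence for that category, is a common one in robotics. The following Python sketch is purely illustrative; the shape features, object categories, and action names are hypothetical.

```python
# Illustrative sense -> classify -> act pipeline (not Honda's actual software).
# Shape features, categories, and action sequences are hypothetical examples.

ACTION_SEQUENCES = {
    "cylinder": ["approach", "wrap_fingers", "twist_cap", "lift"],  # e.g., a thermos
    "box":      ["approach", "pinch_edges", "lift"],
    "unknown":  ["pause", "request_help"],
}

def bounding_box(point_cloud):
    """Return (height, width, depth) of a list of (x, y, z) points in meters."""
    xs, ys, zs = zip(*point_cloud)
    return max(zs) - min(zs), max(xs) - min(xs), max(ys) - min(ys)

def classify_shape(point_cloud):
    """Crude stand-in for a shape classifier fed by depth and tactile sensors."""
    height, width, depth = bounding_box(point_cloud)
    if abs(width - depth) < 0.02 and height > width:  # roughly round and tall
        return "cylinder"
    if min(width, depth) > 0.05:
        return "box"
    return "unknown"

def handle_object(point_cloud, execute):
    """Classify the sensed object, then run the canned sequence for its category."""
    for action in ACTION_SEQUENCES[classify_shape(point_cloud)]:
        execute(action)  # hand each motion primitive to the controller

if __name__ == "__main__":
    # A thermos-like shape: tall, with a roughly circular cross-section.
    thermos_points = [(0.0, 0.0, 0.0), (0.06, 0.06, 0.0), (0.03, 0.03, 0.25)]
    handle_object(thermos_points, execute=print)
```

The point of the structure is that the robot never plans from scratch: perception selects among a small library of rehearsed behaviors.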

Negotiating relationships with the physical world may seem difficult enough, but those challenges pale in comparison to the problem of interacting with some of nature’s most unpredictable variables: namely, human beings.

“People and robots don’t seem to mix,” says Tony Belpaeme, reader in Intelligent Systems at the University of Plymouth, whose team is exploring new models for human-robot interaction.

The problem seems to cut both ways. On the one hand, robots still have trouble assessing and responding to human beings’ often unpredictable behavior; on the other hand, human beings often feel uncomfortable around robots.

Roboticists call this latter phenomenon the “uncanny valley”: the sense of deep unease that often sets in when human beings try to negotiate relationships with human-looking machines. The closer a machine comes to looking human without quite getting there, it seems, the more uncomfortable we become.

“Robots occupy this strange category where they are clearly machines, but people relate to them differently,” says Bill Smart, associate professor of computer science and engineering at Washington University in St. Louis. “They don’t have free will or anima, but they seem to.”

Perhaps no robot has ever looked so uncannily human as the Geminoids, originally designed by Osaka University professor Hiroshi Ishiguro in collaboration with the Japanese firm Kokoro. To date the team has developed three generations of successively more life-like Geminoids, each modeled after an actual human being.

The latest Geminoid is a hyper-realistic facsimile of associate professor Henrik Scharfe of Aalborg University, a recent collaborator on the project. With its Madame Tussaud-like attention to facial detail, the machine is a dead ringer for Scharfe. But the Geminoid’s movement and behavior, while meticulously calibrated, remain unmistakably, well, robotic.

In an effort to create more fluid, human-like interactions, some researchers are starting to look beyond the familiar domains of computer science and mechanical engineering, incorporating insights from psychology, sociology, and the arts.

Belpaeme’s CONCEPT project aspires to create robots capable of making realistic facial expressions by embracing what he calls “embodied cognition.” For robots to interact effectively with us, he believes, they must learn to mimic the ways that human intelligence is intimately connected with the shape and movements of our bodies.

To that end, Belpaeme’s team is developing machine learning strategies modeled on the formative experiences of children. “Our humanoid robots can be seen as young children that learn, explore, and that are tutored, trained, and taught,” he says. The robots learn language by interacting directly with people, just as a child would.
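
Belpaeme describes the approach in general terms rather than as released code, but child-inspired language learning is often modeled as cross-situational learning: the robot hears words while perceiving a scene and gradually strengthens the associations that recur. The sketch below assumes that framing and invents its own data; it is not the CONCEPT project's implementation.

```python
# Minimal cross-situational word learning sketch (illustrative, not CONCEPT code).
# The robot counts which objects tend to be in view when a word is heard;
# over many noisy interactions, the correct word-object pairings dominate.
from collections import defaultdict

class WordLearner:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))  # word -> object -> count

    def observe(self, heard_words, visible_objects):
        """One tutoring interaction: a spoken utterance plus the objects in view."""
        for word in heard_words:
            for obj in visible_objects:
                self.counts[word][obj] += 1

    def meaning(self, word):
        """Best current guess for what a word refers to."""
        candidates = self.counts.get(word)
        return max(candidates, key=candidates.get) if candidates else None

if __name__ == "__main__":
    learner = WordLearner()
    # Hypothetical interactions: a tutor names objects in cluttered scenes.
    learner.observe(["look", "ball"], ["ball", "cup"])
    learner.observe(["the", "ball"], ["ball", "spoon"])
    learner.observe(["your", "cup"], ["cup", "ball"])
    print(learner.meaning("ball"))  # -> "ball"
```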

To make those learning exchanges as smooth as possible, the team devised an alternative to a mechanical face: a so-called LightHead that relies on a small projector emitting an image through a wide-angle lens onto a semitransparent mask. Using a combination of open-source Python and Blender software, the team found it could generate realistic 3D facial images extremely quickly.
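
The article does not include the team's rendering code, but Blender's Python API (bpy) makes the general idea easy to sketch: drive a face mesh's shape keys from script and render the result fast enough to project onto the mask. The object name and shape-key names below are hypothetical.

```python
# Sketch of driving a projected face through Blender's Python API (bpy).
# Runs inside Blender; "FaceMesh" and the shape-key names are hypothetical,
# and this illustrates the approach rather than the LightHead source code.
import bpy

def set_expression(obj_name, weights, frame):
    """Keyframe a set of shape-key weights (e.g., {"Blink": 1.0}) at a given frame."""
    key_blocks = bpy.data.objects[obj_name].data.shape_keys.key_blocks
    for name, value in weights.items():
        key = key_blocks[name]
        key.value = value
        key.keyframe_insert(data_path="value", frame=frame)

# A quick blink: eyes closed at frame 3 and open again by frame 6
# (at 25 frames per second, roughly a fifth of a second).
set_expression("FaceMesh", {"Blink": 0.0}, frame=1)
set_expression("FaceMesh", {"Blink": 1.0}, frame=3)
set_expression("FaceMesh", {"Blink": 0.0}, frame=6)
```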

“We are very sensitive to the speed with which people respond and with which the face does things,” Belpaeme says. “A robot needs to bridge the gap between the internal digital world and the analogue of the external world,” he explains. “If a user claps his hands in front of the robot’s face, you expect the robot to blink.”


Using Psychological Methods

Bill Smart’s team at Washington University is taking a different tack to the problem of human-robot interaction, applying psychological methods to explore how human beings react to robots in their midst.

For example, the team has learned that people tend to regard a robot as more intelligent if it seems to pause and look them in the eye. These kinds of subtle gestures can help humans form a reliable mental model of how a robot operates, which in turn can help them grow more comfortable interacting with the machine.
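
Such cues are a finding about human perception rather than a published algorithm, but wrapping an action in a brief pause and a glance is simple to express in code. The robot interface and timings in this sketch are invented for illustration.

```python
# Illustrative "pause and make eye contact" cue; the Robot class is a stand-in,
# not a published API, and the timing is a guess for demonstration purposes.
import time

class Robot:
    def look_at(self, target):
        print(f"gaze -> {target}")

    def execute(self, action):
        print(f"executing: {action}")

def act_with_eye_contact(robot, person, target, action, pause_s=0.8):
    """Signal intent before acting so onlookers can predict what the robot will do."""
    robot.look_at(person)   # acknowledge the person first
    time.sleep(pause_s)     # a short beat of stillness reads as deliberation
    robot.look_at(target)   # telegraph where the motion is headed
    robot.execute(action)

if __name__ == "__main__":
    act_with_eye_contact(Robot(), person="visitor", target="coffee mug",
                         action="pick up the mug")
```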

“If we’re going to have robots and have them interact, we need a model of how the robot ‘thinks,’” says Smart. “We need to design cues to help people predict what will happen, to figure out the internal state of the system.”

Smart’s team has also pursued a collaboration with the university’s drama department, and recently hosted a symposium on human-robot theater. “Theater has a lot to say, but it’s hard to tease it out. We don’t speak the same language.”

Actors provide particularly good role models for robots because they are trained to communicate with their bodies. “If you look at a good actor walking across the stage, you can tell what he’s thinking before he ever opens his mouth,” says Smart. For actors, these are largely intuitive processes, which they sometimes find difficult to articulate. The challenge for robotics engineers is to translate those expressive impulses into workable algorithms.
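
One common way to make that translation concrete is to parameterize motion by an intended attitude, so the same walk played with different speeds, pauses, and head posture reads very differently to an audience. The mapping below is an invented illustration of that idea, not an algorithm from Smart's collaboration.

```python
# Invented mapping from an acted "attitude" to low-level motion parameters;
# the attitudes and numbers are illustrative, not a published algorithm.

MOTION_STYLES = {
    "confident":  {"speed_mps": 0.9, "step_pause_s": 0.0, "head_pitch_deg": 5},
    "hesitant":   {"speed_mps": 0.4, "step_pause_s": 0.6, "head_pitch_deg": -10},
    "distracted": {"speed_mps": 0.6, "step_pause_s": 0.2, "head_pitch_deg": 15},
}

def plan_walk(distance_m, attitude):
    """Turn a distance plus an intended attitude into a rough, legible motion plan."""
    style = MOTION_STYLES[attitude]
    return {
        "duration_s": round(distance_m / style["speed_mps"], 1),
        "step_pause_s": style["step_pause_s"],
        "head_pitch_deg": style["head_pitch_deg"],
    }

if __name__ == "__main__":
    # The same three-meter walk reads very differently depending on the attitude.
    for attitude in MOTION_STYLES:
        print(attitude, plan_walk(3.0, attitude))
```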


In a similar vein, Heather Knight, a doctoral candidate at Carnegie Mellon University’s (CMU’s) Robotics Institute, has pursued a collaboration with the CMU drama department to create socially intelligent robot performances. Lately she has entertained stage audiences with Data, a cartoonish humanoid robot with a stand-up routine that she has honed through observation of working comedians in New York City.

“What’s new about these robots is that they’re embodied,” says Knight. “We immediately judge them in the way we do people. Not just in physical terms, but also in terms of how they move and act.”

To help audiences feel more at ease, Knight has found that accentuating the robot’s physical differences and making it even more “robotic” actually helps people relate to Data.

Another sophisticated cartoonish robot named TokoTokoMaru has recently entertained YouTube audiences with its precise rendition of the famously demanding Japanese Noh dance. Created by robot maker Azusa Amino, the robot relies on aluminum and plastic parts with Kondo servomotors in its joints to create its sinuous dance moves.

Azusa had to make the robot as lightweight as possible while maximizing the number of controlled axes to allow for maximal expression. “Robots have fewer joints than humans, and they can’t move as fast,” he explains, “so to mimic the motions of human dance, I made a conscious effort to move many joints in parallel and move the upper body dynamically.”
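
Azusa does not detail the control software, but “moving many joints in parallel” typically means interpolating every servo channel through the same keyframe timeline rather than sequencing joints one at a time. The sketch below assumes a generic, hypothetical servo interface; it is not Kondo's actual API.

```python
# Illustrative parallel-joint keyframe playback; the servo interface is
# hypothetical, not Kondo's actual API. Every joint is interpolated through
# each keyframe simultaneously, which is what makes the motion read as fluid.
import time

def play_keyframes(set_angles, keyframes, steps_per_segment=20):
    """keyframes: list of (duration_s, {joint_name: angle_deg}) poses, in order."""
    current = dict(keyframes[0][1])  # start from the first pose
    for duration_s, target in keyframes[1:]:
        for step in range(1, steps_per_segment + 1):
            t = step / steps_per_segment
            pose = {j: current[j] + t * (target[j] - current[j]) for j in current}
            set_angles(pose)  # command all servos in a single update
            time.sleep(duration_s / steps_per_segment)
        current = dict(target)

if __name__ == "__main__":
    dance = [
        (0.0, {"shoulder_l": 0,  "shoulder_r": 0,  "waist": 0}),
        (1.0, {"shoulder_l": 60, "shoulder_r": 30, "waist": 15}),
        (1.0, {"shoulder_l": 20, "shoulder_r": 70, "waist": -15}),
    ]
    play_keyframes(print, dance)  # swap print for a real servo-bus write
```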

Beyond the technical challenges, however, Azusa also had to step into the realm of aesthetics to create a “Japanese-style robot” with a strong sense of character. To accentuate its Japaneseness, the robot wields delicate fans in both hands, while its hair blows in a gentle breeze generated by a ducted fan typically used in radio control airplanes.

While cartoonish appearances may help make a certain vaudevillian class of robots more palatable to theater audiences, that approach might wear thin with a robot designed for everyday human interaction. Take away the wig and the comedy routine, after all, and you are still left with a robot struggling to find its way across the uncanny valley.

“There is something about a thing with two arms and a head that triggers certain affective responses in people,” says Smart. “We have to understand how that works to make them less scary.”


Further Reading

Delaunay, F., de Greeff, J., and Belpaeme, T.
A study of a retro-projected robotic face and its effectiveness for gaze reading by humans, Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction, Osaka, Japan, March 2–5, 2010.

Knight, H.
Eight lessons learned about non-verbal interactions through robot theater, International Conference on Social Robotics, Amsterdam, The Netherlands, Nov. 24–25, 2011.

Morse, A., de Greeff, J., Belpaeme, T., and Cangelosi, A.
Epigenetic Robotics Architecture (ERA), IEEE Transactions on Autonomous Mental Development 2, 4, Dec. 2010.

Park, I., Kim, J., Lee, J., and Oh, J.
Mechanical design of the humanoid robot platform, HUBO, Advanced Robotics 21, 11, Nov. 2007.

Smart, W., Pileggi, A., and Takayama, L.
What do Collaborations with the Arts Have to Say About Human-Robot Interaction? Washington University in St. Louis, April 7, 2010.


Figures

UF1 Figure. The newest Geminoid robot is modeled after project collaborator Henrik Scharfe.
