A humanoid robot can predict about a second in advance whether someone is about to smile and mirror the expression on its own face. Its creators hope the technology will make interactions with robots more realistic.
Artificial intelligence can now imitate human language to an impressive degree, but interactions with physical robots often fall into the “uncanny valley”. One reason is that robots cannot reproduce the complex nonverbal cues and mannerisms that are essential to communication.
Now, researchers at Columbia University in New York, led by Hod Lipson, have developed a robot called Emo that uses AI models and high-resolution cameras to predict and attempt to reproduce people's facial expressions. It anticipates whether someone will smile about 0.9 seconds in advance and smiles back in time. “I'm a jaded roboticist, but when I see this robot, I smile back,” Lipson says.
Emo consists of a face with a camera in each eye and flexible plastic skin attached by magnets to 23 individual motors. The robot uses two neural networks: one watches people's faces and predicts their expressions, while the other works out how to produce those expressions on the robot's own face.
The first network was trained on YouTube videos of people making faces, while the second was trained by having the robot watch itself make faces on a live camera feed. “You learn what your face looks like when you pull all your muscles,” Lipson says. “It's like being in front of a mirror. Even if you close your eyes and smile, you know what your face looks like.”
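For readers curious how such a pipeline might fit together, here is a minimal sketch in Python (PyTorch). It is not Emo's actual code: the landmark counts, window length, and network shapes are assumptions for illustration; only the 23 motors and the roughly 0.9-second prediction horizon come from the article.

```python
# A minimal sketch (not Emo's actual code) of the two-network design described
# above: one network anticipates a person's upcoming facial expression from
# recent camera frames, the other (an inverse model, learned by the robot
# watching its own face) maps a target expression to commands for the 23
# face motors. All sizes and architecture choices here are assumptions.
import torch
import torch.nn as nn

N_LANDMARKS = 68 * 2   # assumed: 68 (x, y) facial landmarks per frame
N_FRAMES = 16          # assumed: window of recent frames used to look ~0.9 s ahead
N_MOTORS = 23          # from the article: 23 motors under the skin

class ExpressionPredictor(nn.Module):
    """Predicts the facial-landmark configuration ~0.9 s in the future
    from a short window of observed frames (assumed architecture)."""
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(N_LANDMARKS, 128, batch_first=True)
        self.head = nn.Linear(128, N_LANDMARKS)

    def forward(self, frames):      # frames: (batch, N_FRAMES, N_LANDMARKS)
        _, h = self.gru(frames)     # h: (num_layers, batch, 128)
        return self.head(h[-1])     # predicted future landmarks

class InverseFaceModel(nn.Module):
    """Maps a target expression to motor commands; per the article, this is
    the part the robot learns by watching itself on a live camera feed."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_LANDMARKS, 128), nn.ReLU(),
            nn.Linear(128, N_MOTORS), nn.Tanh(),  # commands scaled to [-1, 1]
        )

    def forward(self, target_landmarks):
        return self.net(target_landmarks)

# Pipeline: observe -> predict the expression before it happens -> actuate.
predictor, inverse_model = ExpressionPredictor(), InverseFaceModel()
observed = torch.randn(1, N_FRAMES, N_LANDMARKS)   # stand-in for camera features
motor_commands = inverse_model(predictor(observed))
print(motor_commands.shape)                        # torch.Size([1, 23])
```

The point of the split is timing: because the predictor fires before the human's smile fully forms, the robot can start driving its motors early enough for the two expressions to land at roughly the same moment.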
Lipson and his team hope Emo's technology will improve human-robot interaction, but first they need to expand the range of expressions the robot can perform. Lipson also wants to train Emo to produce expressions in response to what people say, rather than simply imitating them.
Source: www.newscientist.com