The Creative Machines Lab at Columbia University has developed a humanoid robot head called Emo that can accurately and appropriately simulate human facial expressions.
Emo is equipped with 26 actuators and can anticipate a human smile about 840 milliseconds before it occurs, mirroring the expression almost in real time.
It also has a high-resolution camera in each pupil that tracks the eyes of its conversation partners.
Emo also runs an artificial-intelligence model that predicts and responds to human expressions, learning subtle facial movements frame by frame from sample videos.
Although Emo currently lacks language interpretation skills, the team’s goal is to integrate it with large language model systems to achieve more natural human interactions in the future.
Emo uses two AI models working together to predict and respond to human facial expressions. The first model predicts a person's upcoming expression by observing minute changes in the target's face; the second model then maps that prediction to motor commands for the robot's face, so Emo can produce the matching expression almost simultaneously.
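The two-model division of labor described above can be illustrated with a minimal sketch. The real models are neural networks whose details are not given here; these stubs (all names, thresholds, and presets are hypothetical) only show the data flow from observed face frames to a predicted expression to actuator commands.

```python
# Hypothetical sketch of Emo's two-model pipeline; not the actual
# implementation. Model 1 predicts the target's upcoming expression;
# Model 2 turns that prediction into per-actuator motor commands.

NUM_ACTUATORS = 26  # actuator count reported for Emo


def predict_expression(landmark_frames):
    """Model 1 (stub): infer the upcoming expression from a short
    window of facial-landmark frames."""
    # Toy rule: mouth corners widening across frames -> smile
    first, last = landmark_frames[0], landmark_frames[-1]
    return "smile" if last["mouth_width"] > first["mouth_width"] else "neutral"


def expression_to_motors(expression):
    """Model 2 (stub): map a predicted expression to normalized
    commands (0..1) for each facial actuator."""
    presets = {
        "smile": [0.8] * NUM_ACTUATORS,
        "neutral": [0.0] * NUM_ACTUATORS,
    }
    return presets[expression]


frames = [{"mouth_width": 0.30}, {"mouth_width": 0.42}]
commands = expression_to_motors(predict_expression(frames))
```

In the real system the prediction step is what buys the robot time: by acting on a forecast rather than the current frame, the motor response can land while the human expression is still unfolding.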
Emo also learns to generate facial expressions through a self-supervised process that requires no human annotations. By observing its own face, it learns the relationship between motor commands and the resulting expressions, much as a person learns their own face by watching their reflection in a mirror.
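This kind of self-supervised "motor babbling" can be sketched as follows. The loop below is illustrative only: the camera observation is simulated by a toy function, the expression is reduced to a single number, and all names are assumptions, but the structure (issue random commands, observe the outcome, invert the learned mapping) follows the idea described above.

```python
import random

# Hypothetical sketch of self-supervised self-modeling: the robot
# issues random motor commands, "watches" the resulting expression
# (here a simulated camera), and records command -> outcome pairs it
# can later invert to produce a desired expression.


def observe_expression(command):
    """Simulated camera: collapse a motor command into a crude
    one-dimensional 'expression' value. Stand-in for real vision."""
    return round(sum(command) / len(command), 2)


def babble(num_trials=100, num_actuators=26, seed=0):
    """Collect (command, observed expression) pairs by random trial."""
    rng = random.Random(seed)
    self_model = []
    for _ in range(num_trials):
        cmd = [rng.random() for _ in range(num_actuators)]
        self_model.append((cmd, observe_expression(cmd)))
    return self_model


def command_for(target_expression, self_model):
    """Inverse lookup: reuse the recorded command whose observed
    expression came closest to the target."""
    return min(self_model, key=lambda p: abs(p[1] - target_expression))[0]


model = babble()
cmd = command_for(0.5, model)
```

A real system would fit a parametric inverse model rather than a nearest-neighbor lookup, but the training signal is the same: the robot's own observations of itself, with no human labeling.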
Details: https://engineering.columbia.edu/news/robot-can-you-say-cheese
Video: