Learning realistic lip motions for humanoid face robots.
Lip motion carries outsized importance in human communication, capturing nearly half of our visual attention during conversation. Yet anthropomorphic robots often fail to achieve lip-audio synchronization, resulting in clumsy and lifeless lip behaviors. Two fundamental barriers underlie this challenge. First, robotic lips typically lack the mechanical complexity required to reproduce nuanced human mouth movements; second, existing synchronization methods depend on manually predefined movements and rules, restricting adaptability and realism. Here, we present a humanoid robot face designed to overcome these limitations, featuring soft silicone lips actuated by a 10-degree-of-freedom mechanism. To achieve lip synchronization without predefined movements, we use a self-supervised learning pipeline based on a variational autoencoder (VAE) combined with a facial action transformer, enabling the robot to autonomously infer realistic lip trajectories directly from speech audio. Our experimental results suggest that this method outperforms simple heuristics, such as amplitude-based baselines, in achieving visually coherent lip-audio synchronization. Furthermore, the learned synchronization generalizes across multiple linguistic contexts, enabling robot speech articulation in 10 languages unseen during training.
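The abstract describes the pipeline only at a high level. The sketch below illustrates one plausible arrangement of the two components it names: a VAE over lip-actuator trajectories and a transformer that maps speech-audio features into the VAE's latent space. All module names, dimensions, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the pipeline described in the abstract:
# a VAE that learns a latent space of lip-actuator commands, and a transformer
# that maps audio features to those latents. Dimensions are assumed for illustration.
import torch
import torch.nn as nn

N_ACTUATORS = 10   # 10-degree-of-freedom lip mechanism (per the abstract)
LATENT_DIM = 16    # assumed latent size
AUDIO_DIM = 80     # assumed per-frame audio feature size (e.g., log-mel bins)

class LipVAE(nn.Module):
    """Self-supervised autoencoder over per-frame actuator commands."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(N_ACTUATORS, 64), nn.ReLU())
        self.to_mu = nn.Linear(64, LATENT_DIM)
        self.to_logvar = nn.Linear(64, LATENT_DIM)
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTUATORS))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.decoder(z), mu, logvar

class AudioToLipTransformer(nn.Module):
    """Maps a sequence of audio frames to a sequence of lip latents."""
    def __init__(self):
        super().__init__()
        self.in_proj = nn.Linear(AUDIO_DIM, 64)
        layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.out_proj = nn.Linear(64, LATENT_DIM)

    def forward(self, audio):                 # audio: (batch, frames, AUDIO_DIM)
        return self.out_proj(self.encoder(self.in_proj(audio)))

# Inference sketch: predict lip latents from audio, decode to actuator trajectories.
vae, audio_to_lip = LipVAE(), AudioToLipTransformer()
audio = torch.randn(1, 100, AUDIO_DIM)        # ~100 frames of speech features
lip_commands = vae.decoder(audio_to_lip(audio))  # (1, 100, N_ACTUATORS)
```

In this assumed setup, the VAE would first be trained self-supervised to reconstruct recorded actuator trajectories, after which the transformer is trained to regress the VAE latents from time-aligned speech audio, so no hand-crafted viseme rules are needed.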
Related Subject Headings
- Speech
- Robotics
- Movement
- Motion
- Lip
- Learning
- Language
- Humans
- Face
- Equipment Design