Project Details
Overview
Project description 

The objective of the project is to develop an interactive 3D animated talking head, driven by an audio/video input sequence, in order to improve the quality of audio-visual communication at very low bit rates. A possible final target is the future digital TV set. The system will be able to 'recognize' the facial expressions of a user sitting in front of the camera and use this information to control a remote animation model that mimics, with life-like realism, the user's expressions, movements and speech. The project activities target:

(1) automatic face detection and tracking;

(2) automatic expression analysis, covering facial feature extraction, tracking and representation, as well as expression recognition that automatically discriminates among subtly different facial expressions based on physical modelling of the facial tissue and muscles;

(3) audio-visual speech perception, including efficient analysis of articulatory movements, robust lip tracking, visual feature extraction, noise-robust acoustic feature extraction, and sensor integration (audio, video); our approach emphasizes understanding the stimulus information and treating speech perception as continuous with the perception of other natural events such as facial expressions;

(4) kinematics-based synthesis (animation) of a realistic talking head.
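The point of this architecture is that only a compact set of animation parameters, rather than the video itself, needs to cross the channel. The Python sketch below is purely illustrative and not the project's implementation: the face detector, parameter extraction and renderer are toy stand-ins (hypothetical names such as FrameParams, detect_face and extract_params), intended only to show the analyze-transmit-animate structure under the assumption that each frame is reduced to a handful of floating-point parameters.

```python
from __future__ import annotations
from dataclasses import dataclass
import numpy as np

# Hypothetical per-frame animation parameters: a few floats per frame
# are transmitted instead of the full video, hence the very low bit rate.
@dataclass
class FrameParams:
    head_pose: np.ndarray    # 3 rotation angles (rad)
    expression: np.ndarray   # weights of a few basis expressions
    lip_shape: np.ndarray    # mouth opening parameter(s)

def detect_face(frame: np.ndarray) -> tuple[int, int, int, int]:
    """Stand-in face detector: returns a fixed central bounding box."""
    h, w = frame.shape[:2]
    return (w // 4, h // 4, w // 2, h // 2)

def extract_params(frame: np.ndarray, audio: np.ndarray) -> FrameParams:
    """Stand-in analysis: derives toy parameters from pixel/audio statistics."""
    x, y, bw, bh = detect_face(frame)
    face = frame[y:y + bh, x:x + bw]
    return FrameParams(
        head_pose=np.zeros(3),
        expression=np.array([face.mean() / 255.0, face.std() / 255.0]),
        lip_shape=np.array([float(np.abs(audio).mean())]),  # crude audio-driven mouth
    )

def animate(params: FrameParams) -> None:
    """Stand-in renderer: the receiver would drive a 3D head model here."""
    print(f"pose={params.head_pose}, expr={params.expression}, lips={params.lip_shape}")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for _ in range(3):                      # a few synthetic frames
        frame = rng.integers(0, 256, (240, 320), dtype=np.uint8)
        audio = rng.standard_normal(640)    # ~40 ms of audio at 16 kHz
        animate(extract_params(frame, audio))
```

In a real system each stand-in would be replaced by the corresponding project component (face tracker, expression analyzer, audio-visual front end, kinematic head model); the per-frame payload here is six floats, versus tens of kilobytes for a raw video frame.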

Runtime: 1999–2002