Realistic 3D facial animation parameters from mirror-reflected multi-view video

I-Chen Lin*, Jeng Sheng Yeh, Ming Ouhyoung

*Corresponding author for this work

Research output: Contribution to conference, paper, peer-reviewed

11 Scopus citations

Abstract

In this paper, a robust, accurate, and inexpensive approach to estimating 3D facial motion from multi-view video is proposed, in which two mirrors placed near the subject's cheeks reflect the side views of markers on the face. Properties of the mirrored images are exploited to simplify the proposed tracking algorithm significantly, while a Kalman filter is employed to reduce noise and to predict the positions of occluded markers. More than 50 markers on the face are tracked continuously at 30 frames per second. The estimated 3D facial motion data has been applied in practice to our facial animation system. In addition, the facial motion dataset can also be applied to the analysis of co-articulation effects and facial expressions, and to audio-visual hybrid recognition systems.
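The abstract mentions using a Kalman filter to smooth marker trajectories and to predict the positions of markers during occlusion. The following is a minimal sketch of that idea for a single 3D marker, not the authors' implementation: the constant-velocity state model, the noise covariances, and the 30 fps time step are all assumptions made for illustration.

```python
import numpy as np

class MarkerKalman:
    """Constant-velocity Kalman filter for one 3D facial marker (illustrative)."""

    def __init__(self, p0, dt=1.0 / 30.0):
        # State: [x, y, z, vx, vy, vz]; initialized at the first observed position.
        self.x = np.array([*p0, 0.0, 0.0, 0.0], dtype=float)
        self.P = np.eye(6)                                  # state covariance
        self.F = np.eye(6)                                  # constant-velocity transition
        self.F[:3, 3:] = dt * np.eye(3)
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])   # observe position only
        self.Q = 1e-4 * np.eye(6)                           # process noise (assumed)
        self.R = 1e-2 * np.eye(3)                           # measurement noise (assumed)

    def predict(self):
        # Time update: extrapolate the state; used alone when the marker is occluded.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]

    def update(self, z):
        # Measurement update with an observed 3D marker position z.
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, dtype=float) - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]

# A marker moving steadily along x; after the filter converges, a predict()
# with no measurement extrapolates the motion, standing in for an occluded frame.
kf = MarkerKalman([0.0, 0.0, 0.0])
for t in range(1, 31):
    kf.predict()
    kf.update([0.01 * t, 0.0, 0.0])
occluded = kf.predict()  # occluded frame: prediction only, no correction
```

The same predict/update cycle would run independently for each of the 50+ tracked markers, with the update step skipped whenever a marker is hidden from all views.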

Original language: English
Pages: 2-11
Number of pages: 10
State: Published - 1 Dec 2001
Event: 14th Conference on Computer Animation - Seoul, Japan
Duration: 7 Nov 2001 - 8 Nov 2001

Conference

Conference: 14th Conference on Computer Animation
Country: Japan
City: Seoul
Period: 7/11/01 - 8/11/01
