Abstract
Producing a life-like 3D facial expression is usually a labor-intensive process. In the movie and game industries, motion capture and 3D scanning techniques, which acquire motion data from real persons, are used to speed up production. However, acquiring dynamic and subtle details on a face, such as wrinkles, is still difficult or expensive. In this paper, we propose a feature-point-driven approach to synthesize novel expressions with details. Our work can be divided into two main parts: acquisition of 3D facial details and expression synthesis. 3D facial details are estimated from sample images by a shape-from-shading technique. By exploiting the relation between specific feature points and facial surfaces in prototype images, our system provides an intuitive editing tool to synthesize the 3D geometry and corresponding 2D textures or 3D detail normals of novel expressions. Besides expression editing, the proposed method can also be extended to enhance existing motion capture data with facial details.
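A minimal sketch of the feature-point-driven synthesis idea is given below: blending weights over a set of prototype expressions are solved so that the weighted combination of prototype feature points matches user-edited feature points, and the same weights then blend the full geometry and the detail normals. The least-squares formulation, function names, and array shapes are illustrative assumptions for this sketch, not the paper's exact algorithm.

```python
import numpy as np

def solve_blend_weights(proto_feats, target_feats):
    """Solve for prototype weights w so that sum_i w_i * proto_feats[i]
    approximates target_feats in the least-squares sense.

    proto_feats : (K, F, 3) feature-point positions of K prototype expressions
    target_feats: (F, 3)    feature-point positions edited by the user

    Note: an unconstrained least-squares fit is an assumption made for this
    sketch; the paper's actual mapping between feature points and surfaces
    may differ.
    """
    K = proto_feats.shape[0]
    A = proto_feats.reshape(K, -1).T            # (3F, K) design matrix
    b = target_feats.reshape(-1)                # (3F,) target vector
    w, *_ = np.linalg.lstsq(A, b, rcond=None)   # (K,) blending weights
    return w

def blend(proto_data, weights):
    """Apply the same weights to any per-prototype data: full vertex
    positions, 2D textures, or 3D detail normal maps."""
    return np.tensordot(weights, proto_data, axes=1)

# Usage with hypothetical sizes: 4 prototypes, 30 feature points, 5000 vertices.
K, F, V = 4, 30, 5000
proto_feats = np.random.rand(K, F, 3)
proto_verts = np.random.rand(K, V, 3)
target_feats = proto_feats.mean(axis=0)         # stand-in for user-edited feature points

w = solve_blend_weights(proto_feats, target_feats)
new_verts = blend(proto_verts, w)               # synthesized 3D geometry

# Detail normal maps can be blended with the same weights and renormalized.
proto_normals = np.random.rand(K, 256, 256, 3)
new_normals = blend(proto_normals, w)
new_normals /= np.linalg.norm(new_normals, axis=-1, keepdims=True)
```

In practice the weights might additionally be constrained (for example, non-negative or summing to one) to keep the synthesized expression inside the span of the prototypes; that choice is left open here.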
Original language | English |
---|---|
Pages | 165-170 |
Number of pages | 6 |
State | Published - 1 Dec 2007 |
Event | 2nd International Conference on Computer Graphics Theory and Applications, GRAPP 2007, Barcelona, Spain, 8 Mar 2007 → 11 Mar 2007 |
Conference
Conference | 2nd International Conference on Computer Graphics Theory and Applications, GRAPP 2007 |
---|---|
Country | Spain |
City | Barcelona |
Period | 8/03/07 → 11/03/07 |
Keywords
- Facial animation
- Facial expression
- Graphical interfaces
- Surface reconstruction