3D Active Appearance Model alignment using intensity and range data

Andreas Dopfer, Hao Hsueh Wang, Chieh-Chih Wang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

6 Scopus citations

Abstract

Active Appearance Models (AAMs) are widely used to match a combined shape and appearance model to an image. This paper extends the commonly used 2D shape model to 3D and introduces an effective method for jointly aligning the model to RGB and 3D range images. The use of a three-dimensional model allows accurate estimation of head orientation, shape and position. Existing approaches that combine range and intensity data rely on a manually tuned weighting function to balance the 2D and 3D alignment terms. We develop a method that guides the alignment based on the observed image properties and the sensor characteristics. Our approach is experimentally validated using two different sets of depth and RGB cameras. In our experiments we achieve stable alignment under wide head rotations of up to 80°, with a maximum improvement of 26% over the 3D AAM using only the intensity image and a 30% improvement over state-of-the-art 3DMM methods in terms of 3D head pose estimation.
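
The abstract does not give the exact fusion objective; as a minimal sketch of the underlying idea, the snippet below weights the 2D intensity and 3D range alignment residuals by the inverse variance of each sensor's noise, so that the balance between the two terms follows the sensor characteristics rather than a hand-tuned constant. The function name, arguments and noise model are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def fused_alignment_cost(intensity_residual, range_residual,
                             sigma_intensity, sigma_range):
        # Weight each alignment term by the inverse variance of its
        # sensor noise, so noisier measurements contribute less to the fit.
        # (Hypothetical sketch; not the weighting scheme from the paper.)
        w_intensity = 1.0 / sigma_intensity ** 2
        w_range = 1.0 / sigma_range ** 2
        return (w_intensity * np.sum(intensity_residual ** 2)
                + w_range * np.sum(range_residual ** 2))

With this kind of weighting, a noisy depth sensor (large sigma_range) automatically reduces the influence of the range term relative to the intensity term, removing the need to tune the balance by hand.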

Original language: English
Pages (from-to): 168-176
Number of pages: 9
Journal: Robotics and Autonomous Systems
Volume: 62
Issue number: 2
DOIs
State: Published - 1 Feb 2014

Keywords

  • 3DAAM
  • Active Appearance Model
  • Face alignment
  • Head pose estimation
  • Human-robot interaction
  • RGBD
