A nonparametric regression model for virtual humans generation

Yun Feng Chou, Zen-Chung Shih*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review



In this paper, we propose a novel nonparametric regression model to generate virtual humans from still images for applications in next-generation (NG) environments. This model automatically synthesizes deformed character shapes by combining kernel regression with elliptic radial basis functions (ERBFs) and locally weighted regression (LOESS). Kernel regression with ERBFs represents the deformed character shapes and creates lively animated talking faces. To preserve patterns within the shapes, LOESS fits the details with local control. The results show that our method effectively simulates plausible movements for character animation, including body movement simulation, novel view synthesis, and expressive facial animation synchronized with input speech. The proposed model is therefore especially suitable for intelligent multimedia applications in virtual human generation.
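The deformation model described above can be illustrated with a minimal Nadaraya-Watson kernel regression using an elliptic (anisotropic) Gaussian kernel. This is a hedged sketch, not the paper's implementation: the function names, the toy control points, and the per-axis scale vector are all illustrative assumptions; the paper's actual ERBF formulation and its coupling with LOESS are more involved.

```python
import numpy as np

def erbf(x, c, inv_scales):
    # Elliptic RBF: a Gaussian with a separate width per axis, so its
    # level sets are ellipses rather than circles (illustrative form).
    d = (x - c) * inv_scales
    return np.exp(-np.dot(d, d))

def kernel_regression(x, centers, values, inv_scales):
    # Nadaraya-Watson estimate: an ERBF-weighted average of the sample
    # values at the query point x (stand-in for the deformation model).
    w = np.array([erbf(x, c, inv_scales) for c in centers])
    return np.dot(w, values) / np.sum(w)

# Toy 2D example (hypothetical data): control points on a shape and a
# scalar displacement attached to each one.
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
values = np.array([0.0, 1.0, 2.0])   # per-point displacement
inv_scales = np.array([1.0, 0.5])    # elliptic: kernel is wider along y

print(kernel_regression(np.array([0.0, 0.0]), centers, values, inv_scales))
```

Because the weights never vanish, the estimate at a control point blends in neighboring values rather than interpolating exactly; in the paper, the LOESS stage supplies the local control that restores fine detail.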

Original language: English
Pages (from-to): 163-187
Number of pages: 25
Journal: Multimedia Tools and Applications
Issue number: 1
State: Published - 1 Mar 2010


Keywords:
  • Elliptic radial basis functions
  • Functional approximation
  • Image deformation
  • Locally weighted regression
  • Nonparametric regression

