Online learning design of an image-based facial expression recognition system

Kai-Tai Song*, Meng Ju Han, Jung Wei Hong

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

In order to serve people and support them in daily life, a domestic or service robot needs to accommodate itself to various individuals. Emotional and intelligent human-robot interaction plays an important role in a robot's gaining the attention of its users. Facial expression recognition is a key factor in interactive robotic applications. In this paper, an image-based facial expression recognition system that adapts online to a new face is proposed. The main idea of the proposed learning algorithm is to adjust the parameters of the support vector machine (SVM) hyperplane in order to learn the facial expressions of a new face. After mapping the input space to a Gaussian-kernel space, support vector pursuit learning (SVPL) is employed to retrain the hyperplane in the new feature space. To expedite the retraining step, we propose to retrain a new SVM classifier using only the samples misclassified in the previous iteration, in combination with critical historical sets. After adjusting the hyperplane parameters, the new classifier recognizes previously unrecognizable facial datasets more effectively. Experiments using an embedded imaging system show that the proposed system recognizes new facial datasets with a recognition rate of 92.7%. Furthermore, it maintains a satisfactory recognition rate of 82.6% on old facial samples.
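
The retraining scheme described in the abstract (refit the Gaussian-kernel hyperplane using only the misclassified new-face samples plus a critical historical set) can be illustrated with a minimal sketch. The sketch below uses scikit-learn's SVC as a stand-in for the paper's SVPL update, and treats the previous model's support vectors as the critical historical set; the function retrain_on_new_face and all parameter choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

def retrain_on_new_face(clf, X_old, y_old, X_new, y_new):
    """Adapt a trained RBF-kernel SVM to a new user's face (illustrative).

    clf          -- previously fitted sklearn.svm.SVC (kernel='rbf')
    X_old, y_old -- training set the previous classifier was fitted on
    X_new, y_new -- labelled expression samples from the new face
    """
    # New-face samples that the current hyperplane misclassifies
    wrong = clf.predict(X_new) != y_new

    # "Critical historical set" (assumption): the support vectors of the
    # old model, i.e. the old samples that defined the previous hyperplane
    sv = clf.support_  # indices of support vectors within X_old
    X_train = np.vstack([X_new[wrong], X_old[sv]])
    y_train = np.concatenate([y_new[wrong], y_old[sv]])

    # Retrain the hyperplane in the Gaussian-kernel feature space on this
    # reduced set, which is what makes the update step fast
    new_clf = SVC(kernel="rbf", gamma="scale", C=1.0)
    new_clf.fit(X_train, y_train)
    return new_clf
```

Retraining on this reduced set rather than the full history is what keeps the online update cheap, while retaining the old support vectors is intended to preserve accuracy on previously learned faces.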

Original language: English
Pages (from-to): 151-162
Number of pages: 12
Journal: Intelligent Service Robotics
Volume: 3
Issue number: 3
DOIs
State: Published - 1 Jul 2010

Keywords

  • Facial expression recognition
  • Human-robot interaction
  • Incremental learning
  • Support vector pursuit learning
