A language modeling approach to atomic human action recognition

Yu Ming Liang*, Sheng Wen Shih, Arthur Chun Chieh Shih, Hong Yuan Mark Liao, Cheng-Chung Lin

*Corresponding author for this work

Research output: Conference contribution (Chapter in Book/Report/Conference proceeding), peer-reviewed

4 Scopus citations

Abstract

Visual analysis of human behavior has generated considerable interest in the field of computer vision because it has a wide spectrum of potential applications. Atomic human action recognition is an important part of a human behavior analysis system. In this paper, we propose a language modeling framework for this task. The framework comprises two modules: a posture labeling module, and an atomic action learning and recognition module. A posture template selection algorithm is developed based on a modified shape context matching technique. The posture templates form a codebook that is used to convert input posture sequences into training symbol sequences or recognition symbol sequences. Finally, a variable-length Markov model technique is applied to learn and recognize the input symbol sequences of atomic actions. Experiments on real data demonstrate the efficacy of the proposed system.
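The recognition stage described above can be illustrated with a minimal sketch: one variable-length Markov model (VLMM) is trained per atomic action on symbol sequences produced by the posture codebook, and a query sequence is assigned to the action whose model gives it the highest likelihood. All class, method, and symbol names below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the final stage of the pipeline: a variable-length
# Markov model scoring posture-codebook symbol sequences. Names and the
# back-off/smoothing choices are assumptions, not taken from the paper.
from collections import defaultdict
import math

class VLMM:
    def __init__(self, max_order=3):
        self.max_order = max_order
        # context tuple -> next symbol -> count
        self.counts = defaultdict(lambda: defaultdict(int))
        self.alphabet = set()

    def train(self, sequence):
        self.alphabet.update(sequence)
        for i, sym in enumerate(sequence):
            # Record the symbol under every context length up to max_order.
            for order in range(0, self.max_order + 1):
                if i - order < 0:
                    break
                context = tuple(sequence[i - order:i])
                self.counts[context][sym] += 1

    def _prob(self, context, sym):
        # Back off to shorter contexts until an observed one is found;
        # Laplace smoothing over the alphabet avoids zero probabilities.
        for k in range(len(context), -1, -1):
            ctx = tuple(context[-k:]) if k else ()
            if ctx in self.counts:
                total = sum(self.counts[ctx].values())
                return (self.counts[ctx][sym] + 1) / (total + len(self.alphabet))
        return 1.0 / max(len(self.alphabet), 1)

    def log_likelihood(self, sequence):
        ll = 0.0
        for i, sym in enumerate(sequence):
            context = tuple(sequence[max(0, i - self.max_order):i])
            ll += math.log(self._prob(context, sym))
        return ll

# One model per atomic action; recognition picks the highest-scoring model.
walk = VLMM(max_order=2)
walk.train(list("ABABABAB"))   # hypothetical codebook symbols for "walk"
sit = VLMM(max_order=2)
sit.train(list("CCDDCCDD"))    # hypothetical codebook symbols for "sit"

query = list("ABAB")
label = max([("walk", walk), ("sit", sit)],
            key=lambda m: m[1].log_likelihood(query))[0]
```

Training one generative model per action keeps the classifier extensible: adding a new atomic action only requires training one more VLMM, without retraining the others.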

Original language: English
Title of host publication: 2007 IEEE 9th International Workshop on Multimedia Signal Processing, MMSP 2007 - Proceedings
Pages: 288-291
Number of pages: 4
DOIs
State: Published - 1 Dec 2007
Event: 2007 IEEE 9th International Workshop on Multimedia Signal Processing, MMSP 2007 - Chania, Crete, Greece
Duration: 1 Oct 2007 - 3 Oct 2007

Publication series

Name: 2007 IEEE 9th International Workshop on Multimedia Signal Processing, MMSP 2007 - Proceedings

Conference

Conference: 2007 IEEE 9th International Workshop on Multimedia Signal Processing, MMSP 2007
Country: Greece
City: Chania, Crete
Period: 1/10/07 - 3/10/07

Keywords

  • Human behavior analysis
  • Language modeling
  • Posture template selection
  • Variable-length Markov model

