Intention learning from human demonstration

Hoa Yu Chan*, Kuu-Young Young, Hsin Chia Fu

*Corresponding author for this work

Research output: Contribution to journal › Article

4 Scopus citations

Abstract

Equipped with better sensing and learning capabilities, robots today are expected to perform versatile tasks. To relieve the engineer of detailed analysis and programming, it has been proposed that the robot learn how to execute a task from human demonstration by itself. Following this idea, in this paper we propose an approach for the robot to learn the intention of the demonstrator from the resultant trajectory during task execution. The proposed approach identifies the portions of the trajectory that correspond to delicate and skillful maneuvering. Those portions, referred to as motion features, may reveal the intention of the demonstrator. Because the trajectory may result from many possible intentions, finding the correct ones poses a severe challenge. We first formulate the problem in a realizable mathematical form and then employ dynamic programming for the search. Experiments on pouring and fruit-jam tasks demonstrate the proposed approach, in which the derived intention is used to execute the same task under different experimental settings.
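The abstract describes searching a demonstrated trajectory for motion-feature segments via dynamic programming. As a minimal sketch (not the authors' formulation), the idea can be illustrated with a classic DP segmentation of a 1-D trajectory: a segment's cost is taken to be its squared deviation from the segment mean, and a per-segment `penalty` term (an assumed parameter) discourages over-segmentation.

```python
# Hypothetical illustration of DP-based trajectory segmentation; the
# cost model and `penalty` parameter are assumptions, not the paper's.

def segment_cost(traj, i, j):
    """Squared error of samples traj[i:j] around their mean."""
    seg = traj[i:j]
    mean = sum(seg) / len(seg)
    return sum((x - mean) ** 2 for x in seg)

def dp_segment(traj, penalty):
    """Minimize the sum of segment costs plus `penalty` per segment.

    Returns (total cost, sorted list of segment end indices).
    """
    n = len(traj)
    best = [float("inf")] * (n + 1)  # best[j]: min cost of traj[:j]
    best[0] = 0.0
    back = [0] * (n + 1)             # back[j]: start of the last segment
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + segment_cost(traj, i, j) + penalty
            if c < best[j]:
                best[j], back[j] = c, i
    # Recover segment boundaries by backtracking through `back`.
    cuts, j = [], n
    while j > 0:
        cuts.append(j)
        j = back[j]
    return best[n], sorted(cuts)

# A flat stretch followed by a jump splits cleanly into two segments.
cost, cuts = dp_segment([0.0, 0.0, 0.0, 5.0, 5.0, 5.0], penalty=0.1)
print(cuts)  # → [3, 6]
```

In the paper's setting, the segments so found would be the candidate motion features; the cost function would instead score how well a segment matches a hypothesized intention.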

Original language: English
Pages (from-to): 1123-1136
Number of pages: 14
Journal: Journal of Information Science and Engineering
Volume: 27
Issue number: 3
DOIs
State: Published - 1 May 2011

Keywords

  • Human demonstration
  • Intention learning
  • Motion feature
  • Robot imitation
  • Skill transfer

