Embedded design of an emotion-aware music player

Carlos A. Cervantes, Kai-Tai Song

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

3 Scopus citations

Abstract

In this paper, a novel human-robot interaction (HRI) design is proposed in which emotion recognition from the speech signal is used to create an emotion-aware music player that can be implemented on an embedded platform. The proposed system maps a short input speech utterance to a two-dimensional emotional plane of valence and arousal. This strategy allows the system to automatically select a piece of music from a database of songs whose emotional content is also expressed using arousal and valence values. Furthermore, a cheer-up strategy is proposed in which songs with varying emotional content are played in order to move the user toward a more neutral/happy state. The proposed system has been implemented on a BeagleBoard. Online tests verified the feasibility of the system, and a questionnaire survey shows that 80% of subjects agree with the songs selected by the proposed cheer-up strategy based on the emotional model.
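
The abstract does not include implementation details, but the valence-arousal selection idea can be illustrated with a small sketch. The Python snippet below is a hypothetical illustration, not the authors' code: the song list, the neutral/happy target point, and the straight-line interpolation are all assumptions made here. It estimates a playlist by stepping from the user's detected (valence, arousal) point toward a target state and choosing the nearest annotated song at each intermediate point, which mirrors the described idea of playing songs with gradually varying emotional content.

```python
import math

# Hypothetical song database: (title, valence, arousal) on a [-1, 1] scale.
# These entries are illustrative placeholders, not the paper's database.
SONGS = [
    ("calm_piano",   0.3, -0.6),
    ("upbeat_pop",   0.8,  0.7),
    ("sad_ballad",  -0.7, -0.4),
    ("angry_rock",  -0.6,  0.8),
    ("neutral_jazz", 0.1,  0.0),
]

def nearest_song(valence, arousal):
    """Return the song whose (valence, arousal) point is closest to the query."""
    return min(SONGS, key=lambda s: math.hypot(s[1] - valence, s[2] - arousal))

def cheer_up_playlist(user_valence, user_arousal, steps=4, target=(0.7, 0.3)):
    """Sketch of a cheer-up strategy: interpolate from the user's estimated
    emotional state toward a neutral/happy target and pick the nearest song
    at each intermediate point, so the emotional content changes gradually."""
    playlist = []
    for i in range(1, steps + 1):
        t = i / steps
        v = user_valence + t * (target[0] - user_valence)
        a = user_arousal + t * (target[1] - user_arousal)
        song = nearest_song(v, a)
        if not playlist or playlist[-1] != song:
            playlist.append(song)
    return playlist

# Example: a user classified as sad (low valence, low arousal).
print(cheer_up_playlist(-0.8, -0.5))
```

In the paper, the query point would come from speech-based emotion recognition and the song annotations from the authors' labeled database; both are replaced here with placeholder values.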

Original language: English
Title of host publication: Proceedings - 2013 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2013
Pages: 2528-2533
Number of pages: 6
DOIs
State: Published - 1 Dec 2013
Event: 2013 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2013 - Manchester, United Kingdom
Duration: 13 Oct 2013 – 16 Oct 2013

Publication series

Name: Proceedings - 2013 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2013

Conference

Conference: 2013 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2013
Country: United Kingdom
City: Manchester
Period: 13/10/13 – 16/10/13

Keywords

  • Emotion recognition
  • Emotional model
  • Human-robot interaction
