Classification comparison of music emotions by multiple training data sets for studying soundscape mood perception

Cheng Kai Hsu, Stone Cheng

Research output: Contribution to conferencePaper

Abstract

This paper presents a multiple-training-model approach to analyzing the inherent affective ingredients in music signals and applies it to the study of soundscape emotion. Two sets of training data, popular and classical music clips, are collected. Features extracted from the training data characterize the emotional ingredients of the music signals. An emotion-score counting process simulates how human emotions are evoked during music listening, and a Gaussian mixture model (GMM) demarcates the margins between four emotion states on a two-dimensional emotion plane. A graphical interface traces the trajectory of music-induced emotion. Different training sets lead to variations in the boundaries of the emotion recognition models. Preliminary evaluations on Tchaikovsky's "1812 Overture" indicate that the emotional ingredients of the piece consist of 19% "Pleasant", 72% "Solemn", 7% "Agitated", and 2% "Exuberant" under the pop-song-based model, versus 22% "Pleasant", 56% "Solemn", 19% "Agitated", and 3% "Exuberant" under the classical-music-based model. The mood locus of selected urban soundscapes is then altered by blending them with music signals. Simulation results demonstrate the effectiveness of the proposed soundscape emotion control method, showing that soundscape emotion can be altered by musical factors to control the ambient atmosphere.
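The GMM-based demarcation of four emotion states on a two-dimensional (valence-arousal) plane can be sketched as a class-conditional classifier: fit one small GMM per emotion class and assign a clip's feature point to the class with the highest log-likelihood. The sketch below uses synthetic placeholder points and hypothetical quadrant centres, not the paper's training data or features; the four labels are taken from the abstract.

```python
# Sketch of class-conditional GMM classification on a 2D valence-arousal
# emotion plane, using the four emotion labels from the paper.
# Training points and quadrant centres are synthetic placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical quadrant centres as (valence, arousal) pairs.
centres = {
    "Exuberant": (0.6, 0.6),    # positive valence, high arousal
    "Agitated":  (-0.6, 0.6),   # negative valence, high arousal
    "Solemn":    (-0.6, -0.6),  # negative valence, low arousal
    "Pleasant":  (0.6, -0.6),   # positive valence, low arousal
}

# Fit one small GMM per emotion class on synthetic feature points.
models = {}
for label, c in centres.items():
    X = rng.normal(loc=c, scale=0.2, size=(200, 2))
    models[label] = GaussianMixture(n_components=2, random_state=0).fit(X)

def classify(point):
    """Assign the emotion class whose GMM gives the highest log-likelihood."""
    p = np.asarray(point, dtype=float).reshape(1, -1)
    return max(models, key=lambda lbl: models[lbl].score_samples(p)[0])

print(classify((0.5, 0.7)))    # a high-valence, high-arousal feature point
```

Scoring many short frames of a piece this way and tallying the winning labels would yield a percentage breakdown analogous to the "1812 Overture" figures quoted in the abstract, with the boundaries depending on which training set the per-class GMMs were fitted on.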

Original language: English
State: Published - 1 Jan 2017
Event: 46th International Congress and Exposition on Noise Control Engineering: Taming Noise and Moving Quiet, INTER-NOISE 2017 - Hong Kong, China
Duration: 27 Aug 2017 – 30 Aug 2017

Conference

Conference: 46th International Congress and Exposition on Noise Control Engineering: Taming Noise and Moving Quiet, INTER-NOISE 2017
Country: China
City: Hong Kong
Period: 27/08/17 – 30/08/17

Keywords

  • Gaussian mixture model (GMM)
  • Music emotion recognition
  • Soundscape

I-INCE Classification of Subjects Number(s): 79


  • Cite this

    Hsu, C. K., & Cheng, S. (2017). Classification comparison of music emotions by multiple training data sets for studying soundscape mood perception. Paper presented at 46th International Congress and Exposition on Noise Control Engineering: Taming Noise and Moving Quiet, INTER-NOISE 2017, Hong Kong, China.