Tracking the real-time emotional locus of music signals

Stone Cheng*, Jun Jie Fu, Liwei Lin

*Corresponding author for this work

Research output: Conference contribution › Peer-reviewed

Abstract

This paper proposes a sequential framework that progressively extracts the features of music and characterizes music-induced emotions on a predetermined emotion plane to trace the real-time emotion locus of music. To build up the emotion plane, 192 clips of emotion-predefined music are used to train the system. Five feature sets, including onset intensity, timbre, sound volume, mode, and dissonance, are extracted from WAV files to represent the characteristics of a music clip. Feature-weighted scoring algorithms continuously mark the feature-related emotion locus on the emotion plane. A Gaussian mixture model (GMM) is used to demarcate the boundaries of "Exuberance", "Contentment", "Anxious", and "Depression" on the emotion plane for the trained music data. A graphical interface showing the emotion-arousal locus on a two-dimensional model of mood is established to represent the tracking of dynamic emotional transitions caused by music. A preliminary evaluation of the system with test music draws the locus of emotions evoked by music audio signals.
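
As a rough illustration of the GMM demarcation step described in the abstract, the sketch below fits one Gaussian component per mood to hypothetical (valence, arousal) points, one per trained clip, and classifies a new point into one of the four regions. It assumes scikit-learn's GaussianMixture; the coordinates, cluster centers, and the mood_of helper are illustrative placeholders standing in for the paper's feature-weighted scoring output, not the authors' actual values.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# The four mood regions named in the abstract (Thayer-style emotion plane).
MOODS = ["Exuberance", "Contentment", "Anxious", "Depression"]

# Hypothetical training data: one (valence, arousal) point per trained clip,
# 48 per mood for 192 total, matching the clip count in the abstract.
rng = np.random.default_rng(0)
centers = np.array([[0.6, 0.6], [0.6, -0.6], [-0.6, 0.6], [-0.6, -0.6]])
train_points = np.vstack(
    [c + 0.15 * rng.standard_normal((48, 2)) for c in centers]
)

# One Gaussian component per mood; the fitted components demarcate
# soft boundaries between the four regions of the emotion plane.
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(train_points)

# Label each fitted component by the quadrant its mean falls in:
# positive valence / high arousal -> Exuberance, and so on.
component_mood = [MOODS[int(m[0] < 0) * 2 + int(m[1] < 0)] for m in gmm.means_]

def mood_of(point):
    """Classify one (valence, arousal) point into one of the four moods."""
    comp = gmm.predict(np.asarray(point, dtype=float).reshape(1, -1))[0]
    return component_mood[comp]

# Tracking a real-time locus amounts to applying this frame by frame.
print(mood_of([0.5, 0.7]))  # expected: "Exuberance"
```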

Original language: English
Title of host publication: 40th International Congress and Exposition on Noise Control Engineering 2011, INTER-NOISE 2011
Pages: 3172-3177
Number of pages: 6
Publication status: Published - 1 December 2011
Event: 40th International Congress and Exposition on Noise Control Engineering 2011, INTER-NOISE 2011 - Osaka, Japan
Duration: 4 September 2011 → 7 September 2011

Publication series

Name: 40th International Congress and Exposition on Noise Control Engineering 2011, INTER-NOISE 2011
Volume: 4

Conference

Conference: 40th International Congress and Exposition on Noise Control Engineering 2011, INTER-NOISE 2011
Country: Japan
City: Osaka
Period: 4/09/11 → 7/09/11
