Research on music emotion recognition usually assigns a single emotion classification to an entire music signal and infers the emotion the music arouses in listeners. This study presents an approach for analyzing the ingredients of the emotions aroused by music signals. The proposed system integrates several emotion models, including a two-dimensional emotion space and a categorical model. The emotion plane consists of four quadrants: Contentment, Depression, Anxious, and Exuberance. An emotion recognition model is trained on a variety of features extracted from 192 music clips, building classification models between pairs of emotions in order to construct a two-dimensional analysis of emotive states. Eleven features are extracted, grouped into music and audio categories, and each feature is analyzed with a different frame length. The study demarcates the boundaries of the four emotions in the emotion plane using a support vector machine (SVM) as the classification algorithm, and traces the variation of emotion ingredients evoked by musical signals. Furthermore, a questionnaire survey is conducted to compare the results of the proposed system's analysis with actual listener experience. Preliminary evaluations indicate that the results produced by the proposed algorithms agree approximately with listener reports.
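The four-quadrant SVM classification described above can be sketched as follows. This is a minimal illustration, not the study's implementation: the synthetic two-dimensional points stand in for the eleven extracted features, the cluster centers and noise level are invented, and only the quadrant labels and the count of 192 clips come from the abstract.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Four quadrants of the emotion plane, with hypothetical (valence, arousal)
# cluster centers: Contentment = positive/calm, Depression = negative/calm,
# Anxious = negative/energetic, Exuberance = positive/energetic.
labels = ["Contentment", "Depression", "Anxious", "Exuberance"]
centers = np.array([[0.7, 0.3], [0.3, 0.3], [0.3, 0.7], [0.7, 0.7]])

# Synthetic feature vectors standing in for the features of 192 clips
# (4 quadrants x 48 clips); the paper's real features are not reproduced here.
X = np.vstack([c + 0.05 * rng.standard_normal((48, 2)) for c in centers])
y = np.repeat(labels, 48)

# An RBF-kernel SVM demarcates the boundaries between the four emotions.
clf = SVC(kernel="rbf").fit(X, y)

# A clip with high valence and high arousal should fall in Exuberance.
print(clf.predict([[0.72, 0.68]])[0])
```

In a real system, `X` would hold the eleven music and audio features computed per frame, and the decision boundaries learned by the SVM would partition the emotion plane into the four quadrants.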