In this paper, a novel human-robot interaction (HRI) design is proposed in which emotion recognition from the speech signal is used to create an emotion-aware music player suitable for an embedded platform. The proposed system maps a short input speech utterance onto a two-dimensional emotional plane of valence and arousal. This strategy allows the system to automatically select a piece of music from a database of songs whose emotional content is likewise expressed as valence and arousal values. Furthermore, a cheer-up strategy is proposed in which songs of gradually varying emotional content are played in order to guide the user toward a more neutral/happy state. The proposed system has been implemented on a BeagleBoard. Online tests verified the feasibility of the system, and a questionnaire survey shows that 80% of subjects agree with the songs selected by the proposed cheer-up strategy based on the emotional model.
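The selection mechanism described above can be sketched as nearest-neighbor matching in the valence-arousal plane, with the cheer-up strategy stepping along a line from the detected emotion toward a happier target. This is a minimal illustration, not the paper's implementation; the song database, coordinate ranges, and target point are all hypothetical.

```python
import math

# Hypothetical song database: title -> (valence, arousal), both assumed in [-1, 1].
SONGS = {
    "calm_piano": (0.2, -0.6),
    "sad_ballad": (-0.7, -0.4),
    "upbeat_pop": (0.8, 0.6),
    "angry_rock": (-0.5, 0.8),
    "neutral_ambient": (0.1, 0.0),
}

def nearest_song(valence, arousal, songs=SONGS):
    """Return the song whose (valence, arousal) point is closest (Euclidean)."""
    return min(songs, key=lambda s: math.dist((valence, arousal), songs[s]))

def cheer_up_playlist(valence, arousal, target=(0.6, 0.2), steps=3, songs=SONGS):
    """Pick one nearest song per intermediate point on a straight line
    from the detected emotion toward a happier/neutral target state."""
    playlist = []
    for i in range(1, steps + 1):
        t = i / steps  # fraction of the way toward the target
        v = valence + t * (target[0] - valence)
        a = arousal + t * (target[1] - arousal)
        playlist.append(nearest_song(v, a, songs))
    return playlist
```

For example, a sad utterance mapped near (-0.7, -0.4) would first match a low-valence song and the playlist would end on a high-valence one, realizing the gradual transition the abstract describes.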