M-motion: a digital music player for mobile platforms that considers users' emotions
Assunção, Willian Garcias de
In recent years, technological advances have digitized the music listening experience, making large collections of music available on the internet and giving rise to new automated techniques for music selection. Current music playback systems retrieve songs through metadata such as title, author, band, and genre. Since music conveys information related to emotion, studies show that it is an effective means of emotional induction and can change the user's emotional behavior. Current methods of emotion-based music recommendation and playback require manual user interaction, and suggestions are often drawn from a set of pre-sorted songs. This becomes a burden for the user, who typically faces the intensive task of selecting a song that matches the desired emotion from a large collection of diverse songs and varied styles. This work therefore proposes m-Motion, a music playback tool for smartphones that helps the user move from their current emotional state toward a desired one. The user's current and desired emotions are obtained through the expression of subjective feelings, informed by spoken input (transcribed to text) and by a user interface element inspired by Scherer's semantic space. The emotions of the user's songs are mapped onto a dimensional space of arousal and valence, whose values are predicted by Support Vector Regression (SVR). Scherer's semantic space was thus adopted to classify the emotions of both songs and users in terms of arousal and valence. The m-Motion algorithm relies on two mathematical formulas: the Euclidean distance and the equation of the line between the current and desired emotions. With these, the algorithm returns a set of songs lying between the user's current and desired emotion.
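The abstract describes the selection step only at a high level. A minimal sketch of how the Euclidean distance and the line between the two emotion points could be combined follows; the function names, the tolerance threshold, and the [-1, 1] valence-arousal coordinate convention are illustrative assumptions, not the dissertation's actual implementation:

```python
import math

def distance_to_line(p, a, b):
    # Perpendicular (Euclidean) distance from point p to the line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    den = math.hypot(bx - ax, by - ay)
    if den == 0:  # current and desired emotion coincide
        return math.hypot(px - ax, py - ay)
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    return num / den

def suggest_playlist(songs, current, desired, tolerance=0.15):
    # songs: list of (title, (valence, arousal)) pairs, coordinates in [-1, 1].
    # Keep songs whose predicted emotion lies close to the line between the
    # current and desired emotion, then order them by progress toward the goal.
    def progress(point):
        # Scalar projection of (point - current) onto (desired - current).
        dx, dy = desired[0] - current[0], desired[1] - current[1]
        return (point[0] - current[0]) * dx + (point[1] - current[1]) * dy

    near = [s for s in songs
            if distance_to_line(s[1], current, desired) <= tolerance]
    return [title for title, point in sorted(near, key=lambda s: progress(s[1]))]
```

Ordering the surviving songs by their projection onto the current-to-desired segment yields a playlist that moves gradually from the listener's present state toward the target, which is consistent with the gradual-transition goal stated in the abstract.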
An experiment was conducted with three distinct groups of users to assess the emotion users reached after playing a set of suggested songs. The first two groups had 20 users each; for each user, the current and desired emotional states were collected, along with facial expressions recorded during playback. The first group received song suggestions returned by the proposed algorithm, while in the second group the songs were manually selected by the user. The third group had 8 users, and its experiment was carried out outside the controlled environment: these users used the m-Motion application for 5 days and received music suggestions based on their desired emotional state. The results suggest that, when listening to the songs selected by the proposed application, users approached the desired emotional state as closely as with manual selection by the users themselves.