Distributed text representations applied to sentiment analysis of short and noisy messages
Abstract
The evolution of the Internet and the Web has given rise to a vast amount of text messages containing opinions. Although the importance of sentiment analysis has grown accordingly, the traditional bag-of-words representation of these messages imposes serious limitations: the number of dimensions per sample may be very high; information about the relative position of words in the text is lost; synonymy is not captured; and no distinction is made between the different meanings of ambiguous words. Short messages, such as those posted on social media and instant messaging applications, often contain slang, abbreviations, phonetic spelling, and emoticons, which aggravates the problem of computational representation. Lexical normalization and semantic indexing techniques, traditionally used to deal with these problems, depend on dictionaries whose maintenance is impractical given the speed at which language evolves. Distributed text representations, which represent each word by a low-dimensional vector, have the potential to bypass some of these shortcomings by capturing similarity relationships among words and storing information about the contexts in which they occur. Recent techniques make it possible to obtain these vectors from the weights of an artificial neural network, optimized to maximize the probability of the contexts in which each word is observed. Later optimizations allowed these models to be trained on much larger corpora, renewing interest in the technique. This work investigated and confirmed the hypothesis that distributed text models overcome the problems and disadvantages of bag of words for sentiment analysis of short and noisy messages, dispensing with the need for traditional lexical normalization and semantic indexing techniques while maintaining predictive power and reducing computational effort.
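The training procedure summarized above (neural network weights adjusted to maximize the probability of a word's observed contexts) can be sketched as a minimal skip-gram model with negative sampling, in the spirit of word2vec. The toy corpus, embedding dimension, and hyperparameters below are illustrative assumptions for the sketch, not the setup used in this work.

```python
# Minimal skip-gram with negative sampling (illustrative sketch only).
# Each word is mapped to a low-dimensional dense vector whose weights are
# adjusted so that observed (word, context) pairs get high probability.
import numpy as np

# Toy corpus and vocabulary (assumption: whitespace tokenization suffices here).
corpus = "good movie great movie bad film awful film good film".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

rng = np.random.default_rng(0)
dim = 8                                        # tiny embedding dimension for the demo
W_in = rng.normal(0, 0.1, (len(vocab), dim))   # input (word) vectors
W_out = rng.normal(0, 0.1, (len(vocab), dim))  # output (context) vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, window, epochs, negatives = 0.05, 1, 200, 3
for _ in range(epochs):
    for pos, word in enumerate(corpus):
        for off in range(-window, window + 1):
            ctx_pos = pos + off
            if off == 0 or ctx_pos < 0 or ctx_pos >= len(corpus):
                continue
            w, c = idx[word], idx[corpus[ctx_pos]]
            # One positive pair plus k random negative samples.
            targets = [(c, 1.0)] + [
                (int(rng.integers(len(vocab))), 0.0) for _ in range(negatives)
            ]
            for t, label in targets:
                score = sigmoid(W_in[w] @ W_out[t])
                grad = score - label          # gradient of the logistic loss
                g_in = grad * W_out[t]        # save before updating W_out
                W_out[t] -= lr * grad * W_in[w]
                W_in[w] -= lr * g_in

# After training, words that share contexts tend to drift closer together,
# which is the similarity relation a bag of words cannot capture.
def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cos(W_in[idx["movie"]], W_in[idx["film"]]))
```

In practice, production embeddings are trained with optimized libraries over corpora of billions of tokens; the sketch only makes the objective concrete: dense vectors replace the high-dimensional sparse counts of a bag of words.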