Multilingual and multimodal learning for Brazilian Portuguese
Abstract
This work explores multimodal machine translation, a task that combines information from modalities such as text, images, and audio to translate between languages. In particular, it analyzes the impact of various types of information associated with text and images. Building upon the Visual Translation Language Modelling framework (CAGLAYAN et al., 2021), we extended its capabilities to handle other language pairs and more complex scenarios involving the image-text relationship (SATO; CASELI; SPECIA, 2022). To evaluate the model's generalization ability, we used the multimodal and multilingual How2 corpus (SANABRIA et al., 2018), which comprises videos with English subtitles and crowdsourced Portuguese translations. Furthermore, since masking (i.e., hiding visual or linguistic tokens during training) can improve model understanding by forcing the model to predict the hidden tokens from the surrounding context, we proposed new masking strategies based on specific linguistic patterns and different semantic categories (SATO; CASELI; SPECIA, 2023). Extensive experiments on the Portuguese-English multimodal machine translation task demonstrate the effectiveness of these more informed masking techniques. In particular, we found that selective masking of the 'person' category significantly improves performance, indicating its crucial role in interpreting visual information. These findings offer insights into the model's behavior and contribute to the development of more effective masking approaches for multimodal machine translation. Finally, the approach proposed in this work achieved state-of-the-art results on the How2 dataset (53.1 BLEU) and provided valuable information about the interaction between images and texts in translation systems.
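To make the selective-masking idea concrete, here is a minimal sketch, not the dissertation's implementation: it assumes tokens come paired with semantic-category tags (e.g., from an entity tagger), and the function name `selective_mask`, the `[MASK]` symbol, and the example sentence are illustrative assumptions.

```python
import random

MASK_TOKEN = "[MASK]"  # placeholder symbol; the actual vocabulary token may differ

def selective_mask(tokens, category_tags, target_category="person",
                   mask_prob=1.0, seed=0):
    """Mask tokens whose semantic tag matches the target category.

    tokens:        list of word/subword tokens
    category_tags: parallel list of semantic labels (None for untagged tokens)
    mask_prob:     probability of masking each matching token
    """
    rng = random.Random(seed)
    masked = []
    for tok, tag in zip(tokens, category_tags):
        if tag == target_category and rng.random() < mask_prob:
            # Hide the token; during training the model must recover it
            # from the remaining text and the accompanying image features.
            masked.append(MASK_TOKEN)
        else:
            masked.append(tok)
    return masked

# Hypothetical example: masking a 'person' mention forces the model
# to rely on the visual modality to predict it.
tokens = ["the", "woman", "slices", "an", "apple"]
tags   = [None, "person", None, None, "food"]
print(selective_mask(tokens, tags))
# ['the', '[MASK]', 'slices', 'an', 'apple']
```

The sketch only prepares the masked input; in a VTLM-style setup the loss would then be computed on the model's predictions at the masked positions, conditioned on both the text and the image regions.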