Abstract

Multimodal data analysis has drawn increasing attention with the explosive growth of multimedia data. Although traditional unimodal analysis tasks have accumulated abundant labeled datasets, labeled multimodal datasets remain scarce because multimodal annotation is difficult and complex, and unimodal knowledge cannot be directly transferred to multimodal data. Moreover, little data augmentation work exists in the multimodal domain, especially for image–text data. In this article, to address the scarcity of labeled multimodal data, we propose a Multimodal Data Augmentation framework for boosting performance on the multimodal image–text classification task. Our framework learns a cross-modality matching network to select image–text pairs from existing unimodal datasets as a synthetic multimodal dataset, which is then used to enhance classifier performance. We take multimodal sentiment analysis and multimodal emotion analysis as the experimental tasks, and the results show the effectiveness of our framework in boosting performance on the multimodal classification task.
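To make the pair-selection idea concrete, the sketch below shows one hypothetical way a cross-modality matching network could score image–text pairs drawn from two unimodal datasets and keep the highest-scoring ones as a synthetic multimodal dataset. The module, the function names, and the threshold-based selection rule are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of matching-based pair selection, assuming image and text
# inputs are already encoded as fixed-size feature vectors. All names here
# (MatchingNetwork, select_pairs, threshold) are hypothetical.
import torch
import torch.nn as nn


class MatchingNetwork(nn.Module):
    """Scores how well an image embedding matches a text embedding."""

    def __init__(self, img_dim: int, txt_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden_dim)
        self.txt_proj = nn.Linear(txt_dim, hidden_dim)

    def forward(self, img_feat: torch.Tensor, txt_feat: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between projected modalities as the matching score.
        img = nn.functional.normalize(self.img_proj(img_feat), dim=-1)
        txt = nn.functional.normalize(self.txt_proj(txt_feat), dim=-1)
        return (img * txt).sum(dim=-1)


def select_pairs(matcher: MatchingNetwork,
                 img_feats: torch.Tensor,   # (N_img, img_dim) from a labeled image dataset
                 txt_feats: torch.Tensor,   # (N_txt, txt_dim) from a labeled text dataset
                 threshold: float = 0.7):
    """Pair each text with its best-matching image if the score exceeds a threshold."""
    pairs = []
    with torch.no_grad():
        for t_idx in range(txt_feats.size(0)):
            txt = txt_feats[t_idx].unsqueeze(0).expand(img_feats.size(0), -1)
            scores = matcher(img_feats, txt)          # score this text against every image
            best_score, best_img = scores.max(dim=0)
            if best_score.item() >= threshold:
                pairs.append((best_img.item(), t_idx, best_score.item()))
    # The returned image–text pairs form the synthetic dataset used to
    # augment training of the downstream multimodal classifier.
    return pairs
```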
