Abstract

Sentiment classification on spoken-language transcriptions has received comparatively little attention. A practical system employing the spoken-language modality must rely on transcriptions produced by an Automatic Speech Recognition (ASR) engine, which are inherently prone to errors. The main interest of this paper lies in improving sentiment classification on erroneous ASR transcriptions. Our aim is to improve the representation of the ASR transcripts using the manual transcripts and other modalities, such as audio and video, that are available during training but not necessarily at test time. We adopt an approach based on Deep Canonical Correlation Analysis (DCCA) and propose two new extensions of DCCA to enhance the ASR view using multiple modalities. We present a detailed evaluation of the performance of our approach on datasets of opinion videos (CMU-MOSI and CMU-MOSEI) collected from YouTube.
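To make the correlation objective concrete, the following is a minimal sketch of classical linear CCA, the objective that DCCA generalizes by replacing linear projections with deep encoders for each view (e.g., the ASR-transcript view and the manual-transcript or audio view). The function names and the regularization constant here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def canonical_correlations(X, Y, reg=1e-4):
    """Compute canonical correlations between two views X (n x dx) and Y (n x dy).

    This is plain linear CCA; DCCA would first map each view through a
    neural network and maximize these correlations end to end.
    The small ridge term `reg` is an illustrative choice for numerical stability.
    """
    n = X.shape[0]
    X = X - X.mean(axis=0)          # center each view
    Y = Y - Y.mean(axis=0)
    Sxx = X.T @ X / (n - 1) + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / (n - 1) + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / (n - 1)

    def inv_sqrt(S):
        # inverse matrix square root via eigendecomposition (S is symmetric PD)
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T

    # singular values of the whitened cross-covariance = canonical correlations
    T = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    corrs = np.linalg.svd(T, compute_uv=False)
    return np.clip(corrs, 0.0, 1.0)
```

In a DCCA setting, the sum of the top canonical correlations serves as the (negated) training loss, pulling the learned representation of the noisy ASR view toward the clean manual-transcript view.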
