Abstract

Users of numerous social networks, such as Instagram, YouTube, Facebook, and Twitter, share their feelings and ideas as videos, posts, and pictures. Coping with such data and mining valuable information from it will be an increasingly challenging task for future research. This paper proposes a novel audio–video–textual multimodal sentiment analysis approach. The proposed approach analyzes sentiments collected from web videos, using the audio, video, and textual modalities for feature extraction. A feature-level fusion technique is employed to fuse the features extracted from the different modalities. The fused features are then optimally selected using a novel oppositional grass bee optimization (OGBEE) algorithm to obtain the best feature set. Twelve benchmark functions are used to validate the numerical efficiency and effectiveness of the OGBEE algorithm from various aspects. Moreover, the proposed approach uses a multilayer perceptron neural network (MLP-NN) for sentiment classification. The experimental analysis reveals that the proposed approach achieves a classification accuracy of about 95.2% with less computational time.
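For orientation, the following is a minimal sketch of the fusion and classification stages described above, using NumPy and scikit-learn. The per-modality feature matrices, their dimensions, and the binary selection mask standing in for the OGBEE step are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of feature-level fusion followed by MLP classification.
    # The feature arrays, their sizes, and the selection mask are placeholders;
    # the actual OGBEE feature-selection algorithm is not reproduced here.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n_samples = 200

    # Hypothetical feature matrices extracted from each modality.
    audio_feats = rng.normal(size=(n_samples, 40))    # e.g. acoustic descriptors
    video_feats = rng.normal(size=(n_samples, 60))    # e.g. facial-expression features
    text_feats  = rng.normal(size=(n_samples, 100))   # e.g. transcript embeddings
    labels      = rng.integers(0, 2, size=n_samples)  # binary sentiment labels

    # Feature-level fusion: concatenate the modality features per sample.
    fused = np.hstack([audio_feats, video_feats, text_feats])

    # Stand-in for OGBEE: a binary mask selecting a subset of fused features.
    # In the paper this mask would be produced by the optimization algorithm.
    selection_mask = rng.random(fused.shape[1]) > 0.5
    selected = fused[:, selection_mask]

    X_train, X_test, y_train, y_test = train_test_split(
        selected, labels, test_size=0.2, random_state=0)

    # MLP-based sentiment classifier trained on the selected fused features.
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
    clf.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

The sketch only shows the data flow (extract per-modality features, concatenate, select, classify); the reported 95.2% accuracy comes from the authors' full pipeline, not from this toy setup.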
