This work proposes a novel multimodal sarcasm detection model comprising four stages: pre-processing, feature extraction, feature-level fusion, and classification. The input is multimodal data consisting of text, video, and audio. Text is pre-processed via tokenization and stemming, video via face detection, and audio via filtering. During feature extraction, text features such as TF-IDF, improved bag of visual words, n-grams, and emojis are extracted, while video features are obtained using an improved SLBT and a constrained local model (CLM). Similarly, audio features such as MFCC, chroma, spectral features, and jitter are extracted. The extracted features are then passed to the feature-level fusion stage, where an improved multilevel canonical correlation analysis (CCA) fusion technique is applied. Classification is performed using a hybrid classifier (HC) combining a bidirectional gated recurrent unit (Bi-GRU) and an LSTM, whose outputs are averaged to obtain the final prediction. To make the detection results more accurate, the weights of the LSTM are optimally tuned by the proposed opposition learning-based Aquila optimization (OLAO) model. Experiments are conducted on the MUStARD dataset, a multimodal video corpus used for automated sarcasm discovery studies. Finally, the effectiveness of the proposed approach is demonstrated on various metrics.
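As a toy illustration of the text branch, a minimal TF-IDF computation might look as follows; the corpus, tokenization, and choice of natural logarithm here are illustrative assumptions, not the paper's exact formulation:

```python
import math

def tf_idf(corpus, doc_index, term):
    """Toy TF-IDF: normalized term frequency times inverse document frequency."""
    doc = corpus[doc_index]
    tf = doc.count(term) / len(doc)           # fraction of the document that is this term
    df = sum(1 for d in corpus if term in d)  # number of documents containing the term
    idf = math.log(len(corpus) / df)          # natural-log inverse document frequency
    return tf * idf

# Two tiny pre-tokenized "documents" (illustrative only)
corpus = [["great", "movie"], ["great", "joke", "great"]]
print(round(tf_idf(corpus, 0, "movie"), 4))  # term unique to doc 0 gets a positive score
print(tf_idf(corpus, 0, "great"))            # term present in every doc gets idf 0
```

In practice, off-the-shelf implementations with smoothing and sublinear scaling would replace this sketch.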
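The score-level combination of the two classifier heads can be sketched as a weighted average; the fixed `w_lstm` value here merely stands in for the weight that OLAO would tune, and the class probabilities are invented for illustration:

```python
def hybrid_score(p_bigru, p_lstm, w_lstm=1.0):
    """Fuse per-class probabilities from the Bi-GRU and LSTM heads.

    w_lstm is the LSTM weight; in the paper it is optimally tuned by the
    OLAO model, here a fixed value is assumed. With w_lstm == 1.0 this
    reduces to a plain average of the two outputs.
    """
    return [(b + w_lstm * l) / (1.0 + w_lstm)
            for b, l in zip(p_bigru, p_lstm)]

# Illustrative two-class outputs (sarcastic vs. non-sarcastic)
p_bigru = [0.70, 0.30]
p_lstm = [0.60, 0.40]
print(hybrid_score(p_bigru, p_lstm))
```

Dividing by `1.0 + w_lstm` keeps the fused scores a valid probability distribution whenever both inputs are.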