Abstract

Multimodal learning has gained significant attention in recent years for combining information from different modalities using Deep Neural Networks (DNNs). However, existing approaches often overlook the varying importance of modalities and neglect uncertainty estimation, leading to limited generalization and unreliable predictions. In this paper, we propose a novel algorithm, Dual-level Deep Evidential Fusion (DDEF), which addresses these challenges by integrating multimodal information at both the Basic Belief Assignment (BBA) level and the multimodal level to enhance accuracy, robustness, and reliability. DDEF uses the Dirichlet framework and BBA methods to connect neural network outputs with the parameters of Dirichlet distributions, enabling effective uncertainty estimation, and applies Dempster-Shafer Theory (DST) for dual-level fusion, combining evidence from the two BBA methods and from multiple modalities. The approach is validated in two experiments, on synthetic digit classification and on real-world medical prognosis after brain–computer interface (BCI) treatment, where it demonstrates superior performance compared to existing methods. Our findings emphasize the importance of considering both multimodal integration and uncertainty estimation for reliable decision-making in deep learning.

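For orientation only, the sketch below illustrates the two standard evidential building blocks the abstract refers to: mapping network outputs to Dirichlet parameters and belief masses for uncertainty estimation, and a reduced Dempster-Shafer combination of two belief assignments. This is a minimal sketch of the generic evidential-fusion recipe, not the authors' DDEF implementation; the softplus evidence mapping, the function names, and the particular reduced combination rule are assumptions drawn from common evidential deep learning practice.

```python
# Minimal illustrative sketch (not the authors' code): evidence -> Dirichlet
# parameters -> belief masses and uncertainty, plus a reduced Dempster-Shafer
# combination of two Basic Belief Assignments (BBAs).
import torch
import torch.nn.functional as F


def dirichlet_from_logits(logits: torch.Tensor):
    """Map raw network outputs (batch, K) to Dirichlet parameters,
    per-class belief masses, and an uncertainty mass."""
    evidence = F.softplus(logits)               # non-negative evidence per class (assumed mapping)
    alpha = evidence + 1.0                      # Dirichlet concentration, alpha_k = e_k + 1
    strength = alpha.sum(dim=-1, keepdim=True)  # Dirichlet strength S
    belief = evidence / strength                # belief mass b_k = e_k / S
    uncertainty = logits.shape[-1] / strength   # u = K / S; large when evidence is scarce
    return alpha, belief, uncertainty


def dempster_combine(b1, u1, b2, u2):
    """Reduced Dempster-Shafer combination of two BBAs given as
    belief masses (batch, K) and uncertainty masses (batch, 1)."""
    # Conflict: mass assigned to incompatible class pairs.
    conflict = b1.sum(dim=1) * b2.sum(dim=1) - (b1 * b2).sum(dim=1)
    scale = (1.0 / (1.0 - conflict + 1e-8)).unsqueeze(1)
    b = scale * (b1 * b2 + b1 * u2 + b2 * u1)   # combined belief masses
    u = scale * (u1 * u2)                       # combined uncertainty mass
    return b, u


# Toy usage: fuse the outputs of two BBA methods for one modality; fusing across
# modalities would reuse the same combination rule (the dual-level idea).
logits_a = torch.randn(4, 10)                   # e.g. outputs processed by BBA method 1
logits_b = torch.randn(4, 10)                   # e.g. outputs processed by BBA method 2
_, b_a, u_a = dirichlet_from_logits(logits_a)
_, b_b, u_b = dirichlet_from_logits(logits_b)
b_fused, u_fused = dempster_combine(b_a, u_a, b_b, u_b)
```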