Abstract

The multi-modal nature of medical images calls for the application of information fusion theory in computer-aided diagnosis (CAD) algorithm design. Recent research on uncertainty estimation in deep neural networks provides a new perspective on information fusion in deep learning algorithms. For medical image classification tasks, the difficulty of collecting large-scale datasets makes building deep learning models for multi-modality medical image classification challenging. In this paper, we investigate a fusion method based on the belief/uncertainty estimation framework of evidential deep learning (EDL) and Dempster's rule of combination. We further propose a deep evidential fusion method that exploits belief assignment and uncertainty estimation to combine information from multi-modality medical images when only small-scale, and even incomplete, multi-modality datasets are available. The proposed method is evaluated on two real-world medical image classification tasks. To maximize the use of available medical imaging resources, we extend our model to handle the missing-modality problem in multi-modality learning. Experiments show that, with the proposed weighted mass calibration method, our fusion model can handle the missing-modality problem in real-world applications, making it possible to incorporate more incomplete data for learning.
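The abstract does not spell out the fusion formulas; the sketch below is only a rough illustration, assuming the standard EDL Dirichlet parameterization of belief and uncertainty and the reduced Dempster's rule over singleton-plus-uncertainty masses commonly used in evidential multi-view fusion. The function names (`edl_belief`, `dempster_combine`) and the toy evidence values are hypothetical and not taken from the paper.

```python
import numpy as np

def edl_belief(evidence):
    """Convert non-negative per-class evidence (an EDL head output) into
    per-class belief masses and a scalar uncertainty mass (assumed standard
    Dirichlet parameterization: alpha = evidence + 1)."""
    K = evidence.shape[-1]
    alpha = evidence + 1.0          # Dirichlet parameters
    S = alpha.sum()                 # Dirichlet strength
    belief = evidence / S           # per-class belief masses
    u = K / S                       # uncertainty mass; belief.sum() + u == 1
    return belief, u

def dempster_combine(b1, u1, b2, u2):
    """Fuse two modalities' (belief, uncertainty) pairs with the reduced
    Dempster's rule; conflicting singleton mass is discarded and the
    remainder is renormalized."""
    conflict = np.outer(b1, b2).sum() - (b1 * b2).sum()  # mass on i != j
    norm = 1.0 - conflict
    b = (b1 * b2 + b1 * u2 + b2 * u1) / norm
    u = (u1 * u2) / norm
    return b, u

# Toy example: two modalities, three classes (values are illustrative only).
b1, u1 = edl_belief(np.array([4.0, 1.0, 0.5]))
b2, u2 = edl_belief(np.array([2.0, 2.0, 0.1]))
b, u = dempster_combine(b1, u1, b2, u2)
print(b, u, b.sum() + u)  # fused masses still sum to 1
```

Under these assumptions, a missing modality could simply be skipped in the pairwise combination, which is one plausible reading of why a calibration of the per-modality masses becomes necessary; the paper's weighted mass calibration itself is not reproduced here.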
