Abstract

Esophageal cancer (ESCA) is one of the most common cancer types worldwide. In clinical practice, computed tomography (CT) analysis and Whole Slide Image (WSI) analysis are the two standard approaches for classifying ESCA sub-types. Deep learning methods have been proposed for ESCA analysis, but they rely on single-modality features and can yield poor classification results. Deep multi-modality learning is therefore becoming a critical alternative to single-modality deep learning in medical tasks. Inspired by clinical practice, we propose a deep multi-modality convolutional neural network architecture that uses dynamic CT and WSI to classify ESCA sub-types. The proposed approach achieves a classification accuracy of 0.9732, which is 0.0188 and 0.0519 higher than methods using only one of the two imaging modalities. Experimental results show that the proposed model can effectively classify ESCA sub-types from WSI and CT with high computational efficiency. The proposed paradigm of combining dynamic CT and WSI can also potentially be applied to other multi-modality medical tasks.
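To illustrate the general multi-modality fusion paradigm described above (not the authors' exact architecture, which is detailed in the full text), the following is a minimal sketch of a two-branch CNN that extracts features from a CT slice and a WSI patch separately, concatenates them, and classifies ESCA sub-types. The class name, layer sizes, input resolutions, and number of classes are illustrative assumptions.

```python
import torch
import torch.nn as nn


class MultiModalESCAClassifier(nn.Module):
    """Illustrative two-branch CNN: one branch for CT slices, one for WSI
    patches; features are fused by concatenation before classification."""

    def __init__(self, num_classes: int = 2):  # num_classes is an assumption
        super().__init__()
        # CT branch: small convolutional feature extractor for grayscale CT.
        self.ct_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # WSI branch: same structure for RGB histology patches.
        self.wsi_branch = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Late fusion: concatenate both feature vectors, then classify.
        self.classifier = nn.Linear(32 + 32, num_classes)

    def forward(self, ct: torch.Tensor, wsi: torch.Tensor) -> torch.Tensor:
        ct_feat = self.ct_branch(ct).flatten(1)     # (B, 32)
        wsi_feat = self.wsi_branch(wsi).flatten(1)  # (B, 32)
        fused = torch.cat([ct_feat, wsi_feat], dim=1)
        return self.classifier(fused)


# Usage with dummy tensors: a batch of 4 CT slices and 4 WSI patches.
model = MultiModalESCAClassifier(num_classes=2)
logits = model(torch.randn(4, 1, 224, 224), torch.randn(4, 3, 224, 224))
print(logits.shape)  # torch.Size([4, 2])
```

Concatenation-based late fusion is only one possible design choice; attention-based or intermediate fusion schemes are common alternatives in multi-modality medical imaging.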
