Abstract

Automatic tumor segmentation is a critical component of clinical diagnosis and treatment. Although single-modal imaging provides useful information, multi-modal imaging offers a more comprehensive understanding of the tumor, and multi-modal tumor segmentation has become an essential topic in medical image processing. With the remarkable performance of deep learning (DL) methods in medical image analysis, DL-based multi-modal tumor segmentation has attracted significant attention. This study aimed to provide an overview of recent DL-based multi-modal tumor segmentation methods. The PubMed and Google Scholar databases were systematically searched for English articles published in the past five years using the keywords "multi-modal", "deep learning", and "tumor segmentation"; the date range was 1 January 2018 to 1 June 2023. A total of 78 English articles were reviewed. We introduce public datasets, evaluation methods, and multi-modal data processing, and summarize common DL network structures, techniques, and multi-modal image fusion methods used in different tumor segmentation tasks. Finally, we conclude by presenting perspectives for future research. DL is a powerful technique for multi-modal tumor segmentation: with appropriate fusion of the different modalities, a DL framework can effectively exploit the complementary characteristics of each modality to improve segmentation accuracy.
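
The fusion strategies surveyed range from input-level (early) fusion to feature- and decision-level fusion. As an illustration of the simplest case, the sketch below concatenates co-registered modalities along the channel axis before passing them to a small segmentation network; the network architecture, modality names, and tensor shapes are illustrative assumptions rather than a method taken from the reviewed articles.

```python
# Minimal sketch of input-level (early) fusion for multi-modal tumor
# segmentation, assuming co-registered modalities of equal spatial size.
# Shapes and modality names are illustrative, not from the surveyed papers.
import torch
import torch.nn as nn


class EarlyFusionSegNet(nn.Module):
    """Tiny encoder + head that concatenates modalities as input channels."""

    def __init__(self, num_modalities: int, num_classes: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(num_modalities, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(64, num_classes, 1)  # per-pixel class logits

    def forward(self, modalities: list[torch.Tensor]) -> torch.Tensor:
        # Early fusion: stack the modalities along the channel axis.
        x = torch.cat(modalities, dim=1)
        return self.head(self.encoder(x))


if __name__ == "__main__":
    # Four hypothetical MRI sequences (e.g., T1, T1ce, T2, FLAIR),
    # each a (batch, 1, H, W) tensor, segmented into 4 classes.
    mods = [torch.randn(2, 1, 128, 128) for _ in range(4)]
    net = EarlyFusionSegNet(num_modalities=4, num_classes=4)
    logits = net(mods)
    print(logits.shape)  # torch.Size([2, 4, 128, 128])
```

Feature- and decision-level fusion follow the same idea but combine the modalities later, either by merging per-modality feature maps inside the network or by aggregating the predictions of modality-specific networks.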
