Abstract
The use of multimodal medical imaging is growing in both academic and clinical settings. Multimodal imaging analysis (MIA) has expanded rapidly with the addition of ensemble learning techniques, which offer particular advantages in the medical field. Drawing inspiration from recent successes of deep learning in medical imaging, we provide an algorithmic framework for supervised MIA with cross-modality fusion at the preprocessing, classification, and decision-making levels. We present an image segmentation method that uses deep convolutional neural networks to identify tumor lesions in soft tissue. To do this, MRI and PET scans are combined to produce multimodal images. Networks trained with multimodal images outperform their single-modal counterparts. For tumor segmentation, fusing images within the neural network (i.e., in the convolutional or fully connected layers) yields better results than fusing the networks' outputs. The proposed approach employs four pre-trained models: VGG 19, ResNet 50, SqueezeNet, and DenseNet 121. The pre-trained models are fine-tuned on a dataset of ISL images, and an ensemble learning technique based on weighted voting then combines the predictions of the four models. The proposed ensemble method achieves impressive results: 98.1% accuracy, a 97.5% F1 score, and a 90.8% Kappa score. It outperforms individual models and existing approaches for multimodal medical fusion and classification, with a Jaccard score of 93.8% and a recall of 98.2% demonstrating its effectiveness.
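The abstract does not specify how the weighted vote is computed. A minimal sketch of one common interpretation, weighted soft voting over per-model class probabilities, is shown below; the model weights and probability values here are hypothetical placeholders, not the paper's actual numbers:

```python
import numpy as np

# Hypothetical class-probability outputs for one image
# (rows: the four fine-tuned models; columns: classes).
probs = np.array([
    [0.7, 0.2, 0.1],   # VGG 19
    [0.6, 0.3, 0.1],   # ResNet 50
    [0.4, 0.4, 0.2],   # SqueezeNet
    [0.8, 0.1, 0.1],   # DenseNet 121
])

# Illustrative per-model weights (e.g., proportional to validation
# accuracy); the paper's actual weighting scheme is not given here.
weights = np.array([0.25, 0.25, 0.2, 0.3])

def weighted_vote(probs: np.ndarray, weights: np.ndarray) -> int:
    """Return the class index chosen by weighted soft voting."""
    combined = weights @ probs  # weighted sum of each class's probabilities
    return int(np.argmax(combined))

predicted_class = weighted_vote(probs, weights)
```

In this sketch, each model contributes its full probability vector scaled by its weight, so a confident, highly weighted model can outvote several uncertain ones; a hard-voting variant would instead tally weighted argmax votes per model.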