Abstract

Due to the wide range of diseases and imaging modalities, retrieving the corresponding clinical cases from a large medical repository in a timely manner is a challenging task. Several computer-aided diagnosis (CADx) systems have been developed to recognize medical imaging modalities (MIM) using standard machine learning (SML) and advanced deep learning (DL) algorithms. Pre-trained convolutional neural network (CNN) models have been used in the past as transfer learning (TL) architectures. However, applying these pre-trained models to unseen datasets with a different domain of features remains difficult. Classifying different medical images requires relevant features together with a robust classifier, and this is still an unsolved task for MIM-based features. In this paper, a hybrid MIM-based classification system (MIM-DTL) is developed by integrating the pre-trained VGG-19 and ResNet34 models into an original CNN model. The MIM-DTL model is then fine-tuned by updating the weights of the new layers as well as the weights of the original CNN layers. The performance of MIM-DTL is compared with state-of-the-art systems on The Cancer Imaging Archive (TCIA), Kvasir, and lower extremity radiographs (LERA) datasets in terms of statistical measures such as accuracy (ACC), sensitivity (SE), and specificity (SP). On average, the MIM-DTL model achieved an ACC of 99%, an SE of 97.5%, and an SP of 98%, while requiring fewer training epochs than other TL models. The experimental results show that the MIM-DTL model outperforms existing approaches in recognizing medical imaging modalities and helps healthcare experts identify relevant diseases.
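The hybrid idea described above, fusing features from two frozen pre-trained backbones and training a new classification head on top, can be sketched in miniature. This is a hedged NumPy illustration only: the actual paper uses VGG-19 and ResNet34 on medical images, whereas here the two backbones are stood in for by fixed random feature extractors, and the fine-tuned head is a simple softmax regression on synthetic data. All names and dimensions below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for two frozen pre-trained backbones (the paper integrates
# VGG-19 and ResNet34); here each is a fixed random feature extractor.
W_vgg = rng.normal(size=(64, 32)) / 8.0   # hypothetical "VGG-19" features
W_res = rng.normal(size=(64, 32)) / 8.0   # hypothetical "ResNet34" features

def extract(x):
    """Concatenate features from both frozen backbones (hybrid fusion)."""
    return np.concatenate([np.tanh(x @ W_vgg), np.tanh(x @ W_res)], axis=1)

# Toy 3-class "modality" labels generated by a hidden linear rule.
X = rng.normal(size=(300, 64))
y = np.argmax(X @ rng.normal(size=(64, 3)), axis=1)

Phi = extract(X)                          # fused 64-d features, kept frozen
W_head = np.zeros((64, 3))                # trainable classification head
for _ in range(300):                      # softmax-regression "fine-tuning"
    logits = Phi @ W_head
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0        # gradient of cross-entropy loss
    W_head -= 0.5 * Phi.T @ p / len(y)

train_acc = float((np.argmax(Phi @ W_head, axis=1) == y).mean())
```

In the paper's full setting, fine-tuning also updates the weights of the original CNN layers rather than keeping the backbones frozen as done here; this sketch only shows the feature-fusion step.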
