Abstract

Analyzing and comparing multi-spectral images in the healthcare domain is a complex task, as it involves features in the visible spectrum and beyond. Content-based image retrieval (CBIR) with deep learning makes it easier to identify similar images in the healthcare domain, which in turn aids analysis. CBIR with deep learning removes the need to define features explicitly and determines similarity autonomously from multiple attributes such as shape, color, and texture. The proposed method fine-tunes multiple deep learning architectures by applying overlapping max-pooling in all networks and placing the same number of decision layers, with identical parameters, on top of each network's feature extraction layers. State-of-the-art neural network models VGG-16, VGG-19, Xception, InceptionResNetV2, DenseNet201, MobileNetV2, and NASNetLarge were evaluated to identify the optimal model. First, a classification task is used to compare the performance of these networks; then, CBIR is performed using the feature extraction layers. The experiment was conducted on a chest X-ray dataset of 21,165 images covering COVID-19, pneumonia, and normal cases, with and without rotational-invariance tests. The VGG-16 model proved to be the optimal choice for image retrieval, achieving the highest precision of 99% and a mAP of 94.34% compared with recent CBIR methods that also used chest X-ray datasets. Rotationally transformed cases were also tested and achieved a mean precision of 86%.
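
The sketch below illustrates the general idea described in the abstract: attaching an identical decision head to several pretrained backbones for classification, then reusing the feature-extraction part for retrieval. It is a minimal illustration, assuming a Keras/TensorFlow implementation (the abstract does not name a framework); the decision-layer sizes and the cosine-similarity retrieval step are hypothetical choices, and the overlapping max-pooling modification inside the networks is not reproduced here.

```python
# Minimal sketch (assumed Keras/TensorFlow; layer sizes are hypothetical).
# Shows a shared decision head on different backbones, then feature-based retrieval.
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16, VGG19

NUM_CLASSES = 3  # COVID-19, pneumonia, normal


def build_classifier(backbone_fn, input_shape=(224, 224, 3)):
    """Attach the same decision layers to any pretrained backbone."""
    backbone = backbone_fn(include_top=False, weights="imagenet",
                           input_shape=input_shape, pooling="max")
    x = layers.Dense(512, activation="relu")(backbone.output)      # decision layer 1
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)   # decision layer 2
    return models.Model(backbone.input, outputs), backbone


# Comparable classifiers for two of the evaluated backbones.
vgg16_clf, vgg16_features = build_classifier(VGG16)
vgg19_clf, vgg19_features = build_classifier(VGG19)


def retrieve_similar(query_img, gallery_imgs, feature_model, top_k=5):
    """After fine-tuning, rank gallery images by cosine similarity of backbone features."""
    q = feature_model.predict(query_img[None, ...], verbose=0)
    g = feature_model.predict(gallery_imgs, verbose=0)
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    g = g / np.linalg.norm(g, axis=1, keepdims=True)
    sims = (g @ q.T).ravel()
    return np.argsort(-sims)[:top_k]
```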
