Abstract

Medical image annotation has significant potential for detecting multiple tags. When detecting specific tags and labels, most conventional learning algorithms struggle to match each tag with its corresponding region in a medical image, so most medical image annotation methods fail to discriminate reliably between classes. To address this discrimination problem, we propose a pyramidal feature-specific Lightweight Deep Convolutional Neural Network (LDCNN) that classifies local visual regions and annotates the classified regions. By employing pyramidal learning, the proposed LDCNN aligns each medical image annotation with its region. A colour-conversion step keeps the computational complexity low, since medical images are degraded by colour absorption; as a result, both interpretability and classification effectiveness improve. To evaluate effectiveness and classification accuracy, we compare the proposed LDCNN with AlexNet and EfficientNet on benchmark datasets including MS-COCO, LC25000 and the multiclass Kather dataset. The empirical performance indices obtained by the proposed LDCNN outperform the baseline convolutional neural network architectures: the LDCNN achieves 99.6% accuracy, 98.4% sensitivity, 97.9% specificity and a 99.1% F1 score. Hence, our lightweight feature-specific learning network yields a consistent improvement in medical image annotation and classification.
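
The abstract only outlines the pipeline (colour conversion, lightweight pyramidal feature extraction, region classification), so the following PyTorch sketch is an illustrative assumption of what such a network could look like, not the authors' published architecture; the layer sizes, the grayscale colour-conversion step, and the class count are all placeholders.

```python
# Minimal sketch of a lightweight pyramidal CNN of the kind described above.
# All hyperparameters here are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LightweightPyramidalCNN(nn.Module):
    def __init__(self, num_classes: int = 9):
        super().__init__()
        # Depthwise-separable blocks keep the parameter count low ("lightweight").
        self.block1 = self._ds_block(1, 32)   # single channel after colour conversion
        self.block2 = self._ds_block(32, 64)
        self.block3 = self._ds_block(64, 128)
        # Classifier head fed by pooled features from every pyramid level.
        self.head = nn.Linear(32 + 64 + 128, num_classes)

    @staticmethod
    def _ds_block(in_ch: int, out_ch: int) -> nn.Sequential:
        return nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False),  # depthwise
            nn.Conv2d(in_ch, out_ch, 1, bias=False),                          # pointwise
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Colour conversion: collapse RGB to one luminance channel, which
        # lowers the computation of every subsequent convolution.
        if x.shape[1] == 3:
            x = x.mean(dim=1, keepdim=True)
        f1 = self.block1(x)
        f2 = self.block2(f1)
        f3 = self.block3(f2)
        # Pyramidal feature pooling: global descriptors from each scale are
        # concatenated so coarse and fine regions both contribute to the label.
        pooled = torch.cat(
            [F.adaptive_avg_pool2d(f, 1).flatten(1) for f in (f1, f2, f3)], dim=1
        )
        return self.head(pooled)


if __name__ == "__main__":
    model = LightweightPyramidalCNN(num_classes=9)  # e.g. 9 Kather tissue classes
    logits = model(torch.randn(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 9])
```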
