Abstract

Medical image classification plays a key role in computer-aided detection and diagnosis systems. Conventional machine learning and neural network-based approaches depend strongly on the type of features extracted from the medical image, i.e., texture, edge, shape, and/or blob, along with their combinations. Since most feature extraction algorithms are specific to a particular modality and problem, models built on such features lack generalization and high-level problem representation ability. Deep learning (DL) methods based on deep features provide an effective way to build an end-to-end model that can predict classification labels directly from the raw pixels of medical images. However, owing to the unavailability of massive medical data, high noise rates, and the resolution of medical images, DL approaches suffer in terms of classification rate. To improve the classification rate, we propose to fuse the contemporary features (high-level features) extracted from a deep convolutional neural network (DCNN) with conventional texture features. Building the proposed model comprises the following steps. First, DCNNs (MobileNet, GoogLeNet, and ResNet) are trained as feature extractors, and the feature vectors are extracted. Second, we extract the traditional texture features of the medical images. Finally, we fuse the deep features and texture features, which together represent robust high-level features for medical image classification. We evaluate the proposed method on a standard medical image benchmark dataset, MedMNIST, and attain a classification accuracy of 99.1%; the experimental results confirm that fusing texture features with deep learning features increases the accuracy of medical image classification.
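The fusion step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: a fixed random projection stands in for a trained DCNN backbone, and a simple horizontal gray-level co-occurrence descriptor stands in for the texture features; the two vectors are fused by concatenation.

```python
import numpy as np

def deep_features(image, dim=64):
    # Placeholder for a DCNN feature extractor (e.g. MobileNet's
    # penultimate layer); a fixed random projection stands in here.
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((image.size, dim))
    return image.ravel() @ proj

def texture_features(image, levels=8):
    # Simple GLCM-style texture descriptor: horizontal co-occurrence
    # counts over quantized gray levels, flattened and normalized.
    q = np.minimum((image * levels).astype(int), levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    return (glcm / glcm.sum()).ravel()

def fused_features(image):
    # Late fusion: concatenate the deep and texture feature vectors
    # into one representation for the downstream classifier.
    return np.concatenate([deep_features(image), texture_features(image)])

img = np.random.default_rng(1).random((28, 28))  # a MedMNIST-sized image
vec = fused_features(img)
print(vec.shape)  # 64 deep + 64 texture dimensions -> (128,)
```

The fused vector would then be fed to a classifier; in practice the placeholder extractors would be replaced by the trained networks and texture descriptors named in the abstract.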
