Abstract

In medical imaging, the detection and classification of stomach diseases are challenging because different diseases present similar symptoms, image contrast varies, and backgrounds are complex. Computer-aided diagnosis (CAD) plays a vital role in medical imaging, producing accurate results in minimal time. This article proposes a new hybrid method to detect and classify stomach diseases from endoscopy videos. The proposed methodology comprises seven steps: data acquisition, data preprocessing, transfer learning of deep models, feature extraction, feature selection, hybridization (feature fusion), and classification. We selected two CNN models (VGG19 and AlexNet) as feature extractors, applying transfer learning to both before extraction. For feature selection we used a genetic algorithm (GA), chosen for its adaptive search behavior. We fused the selected features of both models using a serial-based approach. Finally, the best features were passed to multiple machine learning classifiers for detection and classification. The approach was evaluated on a personally collected dataset of five classes: gastritis, ulcer, esophagitis, bleeding, and healthy. The proposed technique performed best with a Cubic SVM, reaching 99.8% accuracy. To validate the technique, we report classification accuracy, recall, precision, False Negative Rate (FNR), Area Under the Curve (AUC), and computation time. In addition, we provide a comparison with existing state-of-the-art techniques that demonstrates its effectiveness.
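The serial-based fusion mentioned above can be sketched as a per-image concatenation of the two selected feature sets. This is a minimal illustration with synthetic data; the array sizes (4096-dimensional features, 10 images) are assumptions for the sketch, not values from the paper.

```python
import numpy as np

# Hypothetical per-image deep features; in the paper these would come from the
# fine-tuned AlexNet and VGG19 after GA-based selection (sizes are assumptions).
rng = np.random.default_rng(0)
feats_alexnet = rng.standard_normal((10, 4096))  # 10 images x 4096-d AlexNet features
feats_vgg19 = rng.standard_normal((10, 4096))    # 10 images x 4096-d VGG19 features

# Serial-based fusion: concatenate the two feature sets for each image,
# producing one longer descriptor per sample for the downstream classifier.
fused = np.concatenate([feats_alexnet, feats_vgg19], axis=1)
print(fused.shape)  # (10, 8192)
```

The fused matrix keeps one row per image, so it can be fed directly to any classifier that accepts fixed-length feature vectors.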

Highlights

  • The stomach is a muscular organ that helps to digest food

  • We selected two convolutional neural network (CNN) models (VGG19 and AlexNet) as feature extractors and applied transfer learning to both before extracting features

  • The proposed methodology consists of seven steps: data acquisition, preprocessing, transfer learning, feature extraction using VGG19 and AlexNet, feature selection using a genetic algorithm, feature fusion, and classification
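The genetic-algorithm feature selection in the highlights above can be sketched as a binary-mask GA with one-point crossover and bit-flip mutation. The fitness function here is a toy stand-in (it rewards five "informative" features and penalizes noise features); in the paper, fitness would instead be a classifier's validation accuracy on the selected features. All dimensions and GA hyperparameters below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy fitness: features 0-4 are "informative", the remaining 15 are noise.
# (Assumption for this sketch; the real fitness is classification accuracy.)
N_FEATURES, INFORMATIVE = 20, set(range(5))

def fitness(mask):
    chosen = {i for i in range(N_FEATURES) if mask[i]}
    return len(chosen & INFORMATIVE) - 0.2 * len(chosen - INFORMATIVE)

def genetic_select(pop_size=30, generations=40, p_mut=0.05):
    # Each individual is a binary mask over the feature vector.
    pop = rng.integers(0, 2, size=(pop_size, N_FEATURES))
    best, best_fit = pop[0].copy(), fitness(pop[0])
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        top = int(scores.argmax())
        if scores[top] > best_fit:                   # track the global best
            best, best_fit = pop[top].copy(), scores[top]
        # Truncation selection: keep the fitter half as parents.
        parents = pop[np.argsort(scores)[pop_size // 2:]]
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = int(rng.integers(1, N_FEATURES))   # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child[rng.random(N_FEATURES) < p_mut] ^= 1  # bit-flip mutation
            children.append(child)
        pop = np.array(children)
    return best, best_fit

selected_mask, selected_fit = genetic_select()
```

The returned mask indicates which feature columns survive selection; applying it to each model's feature matrix before fusion reduces the fused vector's dimensionality.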


Summary

Methods

The proposed methodology consists of seven steps: data acquisition, preprocessing, transfer learning, feature extraction using VGG19 and AlexNet, feature selection using a genetic algorithm, feature fusion, and classification. We collected an image dataset from a medical specialist and applied different preprocessing filters to it. We selected two convolutional neural networks, AlexNet and VGG19, to extract features. A key advantage of these models is that there was no need to train a system from scratch: both were already trained on the large-scale ImageNet dataset and can classify images into 1000 classes. We then performed transfer learning, modifying the final layers of both models so that they were ready to extract features from our dataset.
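The transfer-learning step described above can be sketched as follows: keep the pretrained layers frozen as a fixed feature extractor, discard the original 1000-way ImageNet head, and train only a new 5-way head on the target data. To stay self-contained, this sketch uses a random frozen projection as a stand-in for the AlexNet/VGG19 backbone, and synthetic data in place of the endoscopy images; all dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for a pretrained backbone: a frozen random projection + ReLU.
# In the paper this role is played by AlexNet/VGG19 pretrained on ImageNet.
W_frozen = rng.standard_normal((64, 256))   # "pretrained" weights, never updated

def extract_features(x):
    return np.maximum(x @ W_frozen, 0.0)    # forward pass through frozen layers

# Transfer learning: train only a new 5-way head on the target-domain data.
n_classes = 5
X = rng.standard_normal((40, 64))           # 40 toy "images", 64-d inputs
y = rng.integers(0, n_classes, size=40)     # toy disease labels

F = extract_features(X)                     # deep features for the new task
W_head = np.zeros((256, n_classes))
onehot = np.eye(n_classes)[y]
for _ in range(200):                        # plain softmax-regression updates
    logits = F @ W_head
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W_head -= 0.01 * F.T @ (p - onehot) / len(X)
```

Because the backbone stays frozen, only the small head is optimized on the new dataset; after training, the penultimate activations `F` serve as the extracted features for selection and fusion.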


