Abstract

Automated analysis of gastric lesions in endoscopy videos is a challenging task, and the dynamics of the gastrointestinal environment make it even more difficult. In computer-aided diagnosis, gastric images are analyzed by means of visual descriptors, and various Deep Convolutional Neural Network (DCNN) models are available for representation learning and classification. In this paper, a computer-aided diagnosis system is presented for the classification of abnormalities in Video Endoscopy (VE) images based on Deep Gray-Level Co-occurrence Matrix (DeepGLCM) texture features. In our scheme, the convolutional layers of a pre-trained model are employed to acquire statistical features from the filter responses, which estimate the texture representation of VE frames. A learning model is then trained on these features for gastric frame classification. The performance of the proposed method is evaluated on public datasets of endoscopy images, as well as on a private endoscopy dataset acquired from the University of Aveiro. DeepGLCM achieves an average accuracy of ≈92% and 0.96 area under the curve (AUC) on the chromoendoscopy (CH) dataset, and ≈85% accuracy on the Confocal Laser Endomicroscopy (CLE) and white-light video endoscopy datasets. These results indicate that DeepGLCM texture features provide a better representation than traditional texture extraction methods by efficiently dealing with image variance arising from different imaging technologies.
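The core idea, extracting gray-level co-occurrence statistics from quantized convolutional filter responses, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the offset, number of gray levels, and the random "filter response" map stand in for choices the paper does not specify in the abstract.

```python
import numpy as np

def glcm(quantized, levels=8, offset=(0, 1)):
    """Gray-Level Co-occurrence Matrix of one 2-D map.
    In the DeepGLCM scheme this would be applied to a quantized
    CNN filter-response map rather than a raw image."""
    dr, dc = offset
    h, w = quantized.shape
    m = np.zeros((levels, levels), dtype=np.float64)
    # Count co-occurring gray-level pairs at the given spatial offset.
    for r in range(max(0, -dr), min(h, h - dr)):
        for c in range(max(0, -dc), min(w, w - dc)):
            m[quantized[r, c], quantized[r + dr, c + dc]] += 1
    m /= max(m.sum(), 1)  # normalize to a joint probability matrix
    return m

def glcm_stats(m):
    """Haralick-style texture statistics: contrast, energy, homogeneity."""
    i, j = np.indices(m.shape)
    contrast = float(((i - j) ** 2 * m).sum())
    energy = float((m ** 2).sum())
    homogeneity = float((m / (1.0 + np.abs(i - j))).sum())
    return contrast, energy, homogeneity

# A hypothetical filter-response map, quantized to 8 gray levels;
# in practice this would come from a pre-trained DCNN layer.
rng = np.random.default_rng(0)
fmap = rng.random((16, 16))
q = np.minimum((fmap * 8).astype(int), 7)
features = glcm_stats(glcm(q))
```

A feature vector for the classifier would then concatenate such statistics over several filter responses and offsets.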
