Abstract

Oral cancer, a deadly disease, is among the most common malignant tumors worldwide and has become an increasingly important public health problem in developing and low-to-middle-income countries. This study aims to use convolutional neural network (CNN) deep learning algorithms to develop an automated classification and detection model for oral cancer screening. The study included 700 clinical oral photographs, collected retrospectively from an oral and maxillofacial center, comprising 350 images of oral squamous cell carcinoma and 350 images of normal oral mucosa. The classification and detection models were built using DenseNet121 and Faster R-CNN, respectively. Four hundred and ninety images were randomly selected as training data, while 70 and 140 images were assigned as validation and testing data, respectively. The DenseNet121 classification model achieved a precision of 99%, a recall of 100%, an F1 score of 99%, a sensitivity of 98.75%, a specificity of 100%, and an area under the receiver operating characteristic curve of 99%. The Faster R-CNN detection model achieved a precision of 76.67%, a recall of 82.14%, an F1 score of 79.31%, and an area under the precision-recall curve of 0.79. DenseNet121 and Faster R-CNN thus showed acceptable potential for the classification and detection of cancerous lesions in oral photographic images.
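The abstract gives no implementation details beyond the model names, but the classification pipeline can be illustrated with a short sketch. Below is a minimal, hypothetical PyTorch/torchvision setup for fine-tuning DenseNet121 as a two-class classifier (oral squamous cell carcinoma vs. normal mucosa) with the reported 490/70/140 split; the data directory, image size, batch size, and optimizer settings are assumptions, not details from the paper.

```python
# Hypothetical sketch (PyTorch/torchvision): DenseNet121 fine-tuned for
# two-class oral-photograph classification (OSCC vs. normal mucosa).
# Paths, image size, and hyperparameters are assumptions, not study details.
import torch
import torch.nn as nn
from torch.utils.data import random_split, DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),               # DenseNet121's usual input size
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406],  # ImageNet statistics
                         [0.229, 0.224, 0.225]),
])

# 700 photographs in two class folders, e.g. data/oscc and data/normal
full_set = datasets.ImageFolder("data", transform=transform)
# The reported split: 490 training, 70 validation, 140 testing images
train_set, val_set, test_set = random_split(full_set, [490, 70, 140])

train_loader = DataLoader(train_set, batch_size=16, shuffle=True)

# ImageNet-pretrained DenseNet121 with its classifier replaced by a
# two-class head (the paper reports DenseNet121 but not this exact setup).
model = models.densenet121(weights="DEFAULT")
model.classifier = nn.Linear(model.classifier.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for images, labels in train_loader:              # one epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```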
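The detection side can be sketched in the same hedged spirit. torchvision's Faster R-CNN reference implementation with a ResNet-50 FPN backbone is used here purely as an illustration; the paper specifies Faster R-CNN but not this backbone, the class count shown, or any of the tensor values below.

```python
# Hypothetical sketch: torchvision Faster R-CNN with a ResNet-50 FPN
# backbone, re-headed for one lesion class plus background. The paper
# reports Faster R-CNN but not this backbone or these settings.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 2  # background + cancerous lesion
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Training step: targets carry ground-truth boxes and labels per image.
model.train()
images = [torch.rand(3, 480, 640)]                     # dummy photograph
targets = [{"boxes": torch.tensor([[100., 120., 300., 340.]]),
            "labels": torch.tensor([1])}]
loss_dict = model(images, targets)                     # dict of loss terms
loss = sum(loss_dict.values())
loss.backward()

# Inference: returns predicted boxes, labels, and confidence scores.
model.eval()
with torch.no_grad():
    detections = model([torch.rand(3, 480, 640)])[0]
print(detections["boxes"].shape, detections["scores"])
```

The reported detection precision and recall would come from comparing such predicted boxes against ground-truth annotations at a chosen IoU and confidence threshold, neither of which is stated in the abstract.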
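The reported figures follow the standard metric definitions. As a reference, the sketch below computes precision, recall, F1, sensitivity, specificity, and AUROC for the classifier, and the area under the precision-recall curve used for the detector, with scikit-learn; the y_true, y_pred, and y_score arrays are placeholder values, not the study's data.

```python
# Hypothetical sketch: the evaluation metrics reported in the abstract,
# computed with scikit-learn from test-set labels and model outputs.
import numpy as np
from sklearn.metrics import (precision_score, recall_score, f1_score,
                             confusion_matrix, roc_auc_score,
                             precision_recall_curve, auc)

# y_true: ground-truth labels, y_pred: thresholded predictions,
# y_score: predicted probability of the positive (OSCC) class.
y_true = np.array([0, 0, 1, 1, 1, 0])       # placeholder values only
y_pred = np.array([0, 0, 1, 1, 1, 0])
y_score = np.array([0.1, 0.2, 0.9, 0.8, 0.7, 0.3])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("precision  ", precision_score(y_true, y_pred))
print("recall     ", recall_score(y_true, y_pred))
print("F1 score   ", f1_score(y_true, y_pred))
print("sensitivity", tp / (tp + fn))
print("specificity", tn / (tn + fp))
print("AUROC      ", roc_auc_score(y_true, y_score))

# Area under the precision-recall curve (used for the detection model).
p, r, _ = precision_recall_curve(y_true, y_score)
print("AUPRC      ", auc(r, p))
```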
