Oral cancer is becoming increasingly common in low- and middle-income countries, and a lack of resources often delays its detection in rural areas. Because early identification before the disease spreads is essential, this study focuses on primary screening. Deep neural network-based automated methods were used to learn the complex patterns involved in assessing oral cancer infection. The goal of this work is to develop an Android application that uses a deep neural network to classify oral photographs into four groups: erythroplakia, leukoplakia, ulcer, and normal mouth. The methodology combines convolutional neural networks with K-fold cross-validation to create a customized Deep Oral Augmented Model (DOAM). Data augmentation techniques, including shearing, scaling, rotation, and flipping, are used to pre-process the images, and a convolutional neural network then extracts features from them. Optimal configurations of max pooling layers, dropout, and activation functions yielded the highest accuracies: using the ELU activation function with the RMSProp optimizer, the model achieves 96% validation accuracy, 96% precision, 96% F1 score, and 68% testing accuracy. The model is then deployed in an Android application using TensorFlow Lite.
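The pipeline described above (augmentation, a small CNN with max pooling, dropout, and ELU activations, RMSProp optimization, and TensorFlow Lite conversion) can be sketched in Keras as follows. This is a minimal illustration, not the authors' exact DOAM architecture: the layer sizes, 224x224 input, and augmentation parameters are assumptions.

```python
# Hypothetical sketch of a 4-class oral-image classifier along the lines of
# the abstract. Layer counts/sizes are illustrative assumptions, not DOAM.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # erythroplakia, leukoplakia, ulcer, normal mouth

# Augmentation applied during training (e.g., via dataset.map). Shearing,
# also mentioned in the abstract, could be added with
# ImageDataGenerator(shear_range=...) or an affine-transform layer.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),   # flipping
    layers.RandomRotation(0.1),        # rotation
    layers.RandomZoom(0.1),            # scaling
])

def build_model(input_shape=(224, 224, 3)):
    """A small CNN with max pooling, dropout, and ELU activations."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Rescaling(1.0 / 255),
        layers.Conv2D(32, 3, activation="elu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="elu"),
        layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.5),
        layers.Dense(128, activation="elu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    # RMSProp optimizer, as reported in the abstract.
    model.compile(optimizer=tf.keras.optimizers.RMSprop(),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()

# After training, the model can be converted for on-device Android inference:
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()
```

The resulting `tflite_bytes` buffer is what an Android app would bundle and run through the TensorFlow Lite interpreter.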