Abstract

The diagnosis of oral squamous cell carcinoma or oral leukoplakia, and of the presence or absence of oral epithelial dysplasia, is carried out by pathologists. In recent years, deep learning has been applied to the automated detection of various pathologies from digital images. One of the main limitations to applying deep learning to histopathological images is the lack of public datasets. To fill this gap, a joint effort was made and a new dataset of histopathological images of oral cancer, named P-NDB-UFES, was collected, annotated, and analyzed by oral pathologists, generating the gold standard for classification. The dataset comprises 3763 patch images of histopathological slides with oral squamous cell carcinoma (29%), with dysplasia (51.29%), and without dysplasia (18.79%). Convolutional neural networks (CNNs), transformer networks, and few-shot learning approaches (i.e., Siamese, Triplet, and ProtoNet) were then investigated to classify oral squamous cell carcinoma and the presence or absence of oral dysplasia. Experimental results indicate that the CNN and transformer models show, in general, no statistically significant difference, with only DenseNet-121 outperforming the transformers, at a balanced accuracy (BCC) of 91.91% and recall and precision of 91.93%. Few-shot learning methods were inferior to the other methods, with different configurations showing statistical differences among themselves. For the ProtoNet architectures, the use of hyperbolic space behaved similarly to Euclidean distance; however, these results were heavily influenced by the optimizer used.
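As a minimal sketch of the ProtoNet-style classification mentioned above (with Euclidean distance; the dataset, embedding network, and shapes here are hypothetical, not those of the paper): each class prototype is the mean embedding of its support examples, and a query patch is assigned to the class with the nearest prototype.

```python
import numpy as np

def prototypes(support_emb, support_labels, n_classes):
    """Mean embedding per class over the support set."""
    return np.stack([
        support_emb[support_labels == c].mean(axis=0)
        for c in range(n_classes)
    ])

def classify(query_emb, protos):
    """Assign each query to the class whose prototype is closest
    in squared Euclidean distance."""
    # d has shape (n_queries, n_classes)
    d = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

# Toy 3-way, 5-shot episode (e.g., carcinoma / with dysplasia / without
# dysplasia patches); embeddings are synthetic 8-d vectors, well separated.
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(3), 5)
support = rng.normal(size=(15, 8)) + labels[:, None] * 10.0
protos = prototypes(support, labels, 3)

query_labels = np.repeat(np.arange(3), 2)
queries = rng.normal(size=(6, 8)) + query_labels[:, None] * 10.0
print(classify(queries, protos))  # each query recovers its true class
```

Swapping the squared Euclidean distance for a hyperbolic (Poincaré) distance, as compared in the abstract, only changes the distance function inside `classify`; the prototype-and-nearest-class structure stays the same.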
