Abstract

Statement of the problem
Recently, Artificial Intelligence (AI) has been applied to a variety of fields in medicine and dentistry, particularly radiographic image analysis.1 One subset of AI that has proven especially adept at image analysis and classification is the convolutional neural network (CNN). CNNs require vast amounts of data or images to be trained to recognize or classify radiographs into disease categories, but traditional types of augmentation do not contribute a great degree of variance to the dataset. Generative adversarial networks (GANs), a recently introduced subset of CNNs, have shown the ability to generate entirely new dataset images from pre-existing images.2 Thus, the researchers set out to test the hypothesis that a CNN-based binary classifier trained with a dataset containing GAN-generated images would distinguish radicular cyst panoramic radiographs from normal panoramic radiographs with greater accuracy than one trained without GAN-generated images.

Materials and methods
A dataset of 34 panoramic radiographs of radicular cysts and 34 of normal anatomy was obtained after receiving IRB approval (IRB ID: STUDY00004578). The images were resized to 256 × 256 JPEGs and randomly split into training and testing images at a 0.75:0.25 ratio. The images were then augmented in the following manner for both the radicular cyst and normal panoramic classes: rotations (clockwise and counterclockwise), horizontal flips, and brightness variations. Additionally, a GAN was used to further augment the dataset. In total, after augmentation, there were 408 images for the CNN trained without a GAN and 476 images for the CNN trained with a GAN. The GAN employed was the open-source DCGAN framework. Finally, the images were fed into the open-source pre-trained Inception-ResNet-v2 framework.

Methods of data analysis
An ROC curve was generated for the binary classification of an input image into either a "normal" category or a "radicular cyst" category. The CNN trained without GAN-generated images had a mean accuracy of 89.3%, and the CNN trained with GAN-generated images had a mean accuracy of 95.1%. Figure 1 shows the ROC curves for both CNNs, with an AUC of 0.893 (standard error 0.016), sensitivity of 87.4%, and specificity of 87.6% for the CNN trained without a GAN, and an AUC of 0.951 (standard error 0.014), sensitivity of 96.1%, and specificity of 88.3% for the CNN trained with a GAN.

Results
The AUC difference between the 2 CNNs was 0.058, with a P value of .0063. Because α was set at 0.05, the AUC difference was statistically significant.

Conclusion
The researchers successfully developed a neural network-based classifier that automatically detects whether a panoramic radiograph contains a radicular cyst, with 95.1% accuracy. Further, a CNN-based binary classifier trained with a dataset containing GAN-generated images distinguished radicular cyst panoramic radiographs from normal panoramic radiographs with greater accuracy than one trained without GAN-generated images. Future work will involve expanding the scope of the classifier to other lesion types such as dentigerous cysts, ameloblastomas, odontomas, and malignancies of the jaws.
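As context for the Materials and methods above, the following is a minimal sketch of how the 256 × 256 resizing, the 0.75:0.25 train/test split, and the conventional augmentation (rotations, horizontal flips, brightness variation) could be implemented in Python with Pillow and scikit-learn. The folder layout, rotation angles, and brightness factors are illustrative assumptions, not values reported by the authors.

# Minimal sketch of the preprocessing and conventional augmentation described
# in the abstract. Paths, angles, and brightness factors are assumptions.
from pathlib import Path
from PIL import Image, ImageEnhance
from sklearn.model_selection import train_test_split

def load_and_resize(path, size=(256, 256)):
    """Open a panoramic radiograph and resize it to 256 x 256 RGB."""
    return Image.open(path).convert("RGB").resize(size)

def augment(img):
    """Return conventionally augmented variants of one image."""
    variants = []
    for angle in (-10, 10):                    # clockwise / counterclockwise rotations (assumed angles)
        variants.append(img.rotate(angle))
    variants.append(img.transpose(Image.FLIP_LEFT_RIGHT))   # horizontal flip
    for factor in (0.8, 1.2):                  # darker / brighter variants (assumed factors)
        variants.append(ImageEnhance.Brightness(img).enhance(factor))
    return variants

# Hypothetical directory layout: one folder per class.
paths  = sorted(Path("data/radicular_cyst").glob("*.jpg")) + sorted(Path("data/normal").glob("*.jpg"))
labels = [1] * 34 + [0] * 34                   # 1 = radicular cyst, 0 = normal

# 0.75 : 0.25 train/test split, stratified by class.
train_paths, test_paths, train_y, test_y = train_test_split(
    paths, labels, test_size=0.25, stratify=labels, random_state=42)

train_images, train_labels = [], []
for p, y in zip(train_paths, train_y):
    img = load_and_resize(p)
    for variant in [img] + augment(img):
        train_images.append(variant)
        train_labels.append(y)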
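The abstract names the open-source DCGAN framework but does not give its configuration. The sketch below shows a DCGAN-style generator, written with Keras, that could produce synthetic 256 × 256 radiograph-like images for augmentation; the latent size, layer widths, and output resolution are assumptions, and the adversarial training loop and discriminator are omitted.

# DCGAN-style generator sketch for the GAN augmentation step. All
# hyperparameters below are illustrative assumptions, not the authors' setup.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM = 100  # assumed latent-vector size (a common DCGAN default)

def build_generator():
    """Upsample a latent vector to a 256 x 256 RGB image, DCGAN style."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(LATENT_DIM,)),
        layers.Dense(4 * 4 * 512),
        layers.Reshape((4, 4, 512)),
    ])
    for filters in (512, 256, 128, 64, 32):    # 4 -> 8 -> 16 -> 32 -> 64 -> 128
        model.add(layers.Conv2DTranspose(filters, 4, strides=2, padding="same"))
        model.add(layers.BatchNormalization())
        model.add(layers.LeakyReLU(0.2))
    model.add(layers.Conv2DTranspose(3, 4, strides=2, padding="same",
                                     activation="tanh"))  # 128 -> 256, pixels in [-1, 1]
    return model

# After adversarial training (omitted here), synthetic radiographs are sampled
# and appended to the real, conventionally augmented training images.
generator = build_generator()
noise = np.random.normal(size=(34, LATENT_DIM)).astype("float32")
synthetic = (generator.predict(noise) + 1.0) / 2.0  # rescale to [0, 1]

Sampling 34 synthetic images per class would account for the 68-image difference between the two training sets reported in the abstract (476 vs. 408), although the per-class breakdown is not stated.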
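The classifier itself is described only as the open-source pre-trained Inception-ResNet-v2 framework. One plausible setup, sketched below with Keras, is a frozen ImageNet backbone with a small sigmoid head for the binary "radicular cyst" versus "normal" decision; the optimizer, dropout, and training schedule are assumptions not given in the abstract.

# Binary classifier built on the pre-trained Inception-ResNet-v2 backbone.
# Frozen-backbone transfer learning and all training settings are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

backbone = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(256, 256, 3))
backbone.trainable = False                     # assumed: train only the new head

model = tf.keras.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),     # 1 = radicular cyst, 0 = normal
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy"])

# x_train: (N, 256, 256, 3) float array of augmented training images,
# y_train: 0/1 labels (hypothetical variable names).
# model.fit(x_train, y_train, epochs=30, batch_size=16, validation_split=0.1)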
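Finally, the ROC analysis reported under Methods of data analysis could be reproduced with scikit-learn as shown below. The variable names are hypothetical, and because the abstract does not state how the AUC difference was tested (a DeLong-type comparison is one common choice), only the per-model AUC, sensitivity, and specificity are computed.

# Sketch of the ROC/AUC evaluation for each trained classifier.
import numpy as np
from sklearn.metrics import roc_auc_score

def summarize(y_true, y_score, threshold=0.5):
    """Report AUC plus sensitivity/specificity at a fixed decision threshold."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    auc = roc_auc_score(y_true, y_score)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return auc, sensitivity, specificity

# scores_no_gan = model_no_gan.predict(x_test); scores_gan = model_gan.predict(x_test)
# print(summarize(y_test, scores_no_gan))   # the study reports AUC 0.893 for this model
# print(summarize(y_test, scores_gan))      # the study reports AUC 0.951 for this model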
