Abstract

The purpose of this study was to evaluate how the performance of deep-learning (DL) models varies with the image classes and the amount of training data, in order to create an effective DL model for detecting both unilateral cleft alveoli (UCAs) and bilateral cleft alveoli (BCAs) on panoramic radiographs. Model U was created using UCA and normal images, and Model B was created using BCA and normal images. Models C1 and C2 were created using the combined data of UCA, BCA, and normal images. The same number of CAs was used to train Models U, B, and C1, whereas Model C2 was trained on a larger amount of data. All four models were evaluated on the same test data and compared with two human observers. The recall values were 0.60, 0.73, 0.80, and 0.88 for Models U, B, C1, and C2, respectively. Model C2 achieved the highest precision and F-measure (0.98 and 0.92), almost the same as those of the human observers. Significant differences were found in the ratios of detected to undetected CAs between Models U and C1 (p = 0.01), Models U and C2 (p < 0.001), and Models B and C2 (p = 0.036). The DL models trained using both UCA and BCA data (Models C1 and C2) achieved high detection performance. Moreover, the performance of a DL model may depend on the amount of training data.
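For context, the F-measure reported above is the harmonic mean of precision and recall. The sketch below (illustrative only, not code from the study) shows how Model C2's reported precision (0.98) and recall (0.88) combine into its F-measure; the exact published value of 0.92 was presumably computed from unrounded precision and recall.

```python
def f_measure(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (F1 score)."""
    return 2 * precision * recall / (precision + recall)

# Model C2's reported precision and recall, as an example input.
print(f_measure(0.98, 0.88))
```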

Highlights

  • Deep learning (DL) techniques with convolutional neural networks (CNNs) have often been used for the automatic detection and classification of various oral and maxillofacial diseases on panoramic radiographs, such as radiolucent lesions in the mandible [1], root fractures [2], maxillary sinus lesions [3], and impacted supernumerary teeth [4]. Cleft lip and palate is one of the most common congenital anomalies in the maxillofacial region in the Japanese population [5] and is frequently associated with unilateral or bilateral cleft alveolus (CA).

  • Panoramic radiography plays an essential role in evaluating the status of the CA because of its low level of radiation exposure to patients and low cost compared with computed tomography (CT) or cone-beam CT for dental use (CBCT) [7]

  • The total detection sensitivity was quite low for Model U when compared with Model B and the human observers

Introduction

Deep learning (DL) techniques with convolutional neural networks (CNNs) have often been used for the automatic detection and classification of various oral and maxillofacial diseases on panoramic radiographs, such as radiolucent lesions in the mandible [1], root fractures [2], maxillary sinus lesions [3], and impacted supernumerary teeth [4]. Cleft lip and palate is one of the most common congenital anomalies in the maxillofacial region in the Japanese population [5] and is frequently associated with unilateral or bilateral cleft alveolus (CA). Although CA status can be recognized during a physical examination, oral and maxillofacial radiologists, who routinely interpret many panoramic radiographs, cannot always perform such examinations and are forced to diagnose the presence of clefts from the panoramic appearance alone. In such cases, a computer-aided diagnosis/detection system created using DL with a CNN would help radiologists, especially those who are inexperienced, avoid overlooking clefts. If this is not the case, another model should be created based on data that include BCA radiographs

