The aim of this study was to develop and validate a convolutional neural network (CNN) algorithm for the detection of impacted mandibular third molars in panoramic radiographs and the classification of the surgical extraction difficulty level. A dataset of 1730 panoramic radiographs was collected; 1300 images were allocated to training and 430 to testing. The performance of the model was evaluated using the confusion matrix for multiclass classification, and the model's scores were compared with those of two human experts. The area under the precision–recall curve of the YOLOv5 model ranged from 72% to 89% across the variables of the surgical difficulty index (level of occlusion, available retromolar space, impaction angle, number of roots, root curvature, and relationship to the second molar). The area under the receiver operating characteristic curve showed promising performance of the YOLOv5 model in classifying third molars into three surgical difficulty levels (micro-average AUC 87%). Furthermore, the algorithm's scores demonstrated good agreement with those of the human experts. In conclusion, the YOLOv5 model has the potential to accurately detect and classify the position of mandibular third molars in radiographic images, with high performance on every criterion. The proposed model could serve as an aid in improving clinician performance and could be integrated into a screening system.
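The micro-average AUC reported above pools all one-vs-rest class decisions into a single binary problem before computing the ROC AUC. The sketch below illustrates that recipe with synthetic labels and scores (the three difficulty classes, example values, and function names are placeholders for illustration, not the study's data or code); the AUC itself is computed via the standard Mann–Whitney rank formulation.

```python
def roc_auc(labels, scores):
    """Binary ROC AUC via the Mann-Whitney U statistic, with
    average ranks for tied scores. labels are 0/1."""
    pairs = sorted(zip(scores, labels))
    n = len(pairs)
    rank_sum_pos = 0.0
    i = 0
    while i < n:
        j = i
        while j < n and pairs[j][0] == pairs[i][0]:
            j += 1  # group tied scores
        avg_rank = (i + 1 + j) / 2  # average of ranks i+1 .. j
        rank_sum_pos += avg_rank * sum(lbl for _, lbl in pairs[i:j])
        i = j
    n_pos = sum(labels)
    n_neg = n - n_pos
    return (rank_sum_pos - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Synthetic 3-class example: 0/1/2 stand in for three surgical
# difficulty levels; each row is a per-class probability vector
# such as a classifier head might emit.
classes = [0, 1, 2]
y_true = [0, 1, 2, 2, 1, 0, 2, 1]
y_score = [
    [0.8, 0.1, 0.1], [0.2, 0.7, 0.1], [0.1, 0.2, 0.7], [0.2, 0.3, 0.5],
    [0.3, 0.5, 0.2], [0.6, 0.3, 0.1], [0.1, 0.1, 0.8], [0.4, 0.4, 0.2],
]

# Micro-averaging: binarize labels one-vs-rest, then pool every
# (sample, class) decision into one flat binary problem.
pooled_labels, pooled_scores = [], []
for yt, row in zip(y_true, y_score):
    for c, s in zip(classes, row):
        pooled_labels.append(1 if yt == c else 0)
        pooled_scores.append(s)

micro_auc = roc_auc(pooled_labels, pooled_scores)
print(f"micro-average AUC: {micro_auc:.3f}")
```

Micro-averaging weights every individual decision equally, so frequent classes dominate the score; macro-averaging (mean of per-class AUCs) is the usual alternative when class balance matters.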