Abstract

Background/purpose: In lower third molar (LM3) surgery, panoramic radiography (PAN) is important for the initial assessment of the anatomical association between the LM3 and the inferior alveolar nerve (IAN). This study aimed to develop a deep learning model for the automated evaluation of the LM3–IAN association on PAN and to compare its performance with that of oral surgeons on original and external datasets.

Materials and methods: In total, 579 panoramic images of LM3s from 384 patients formed the original dataset. These were divided into 483 training images and 96 testing images, a ratio of roughly 83:17. An external dataset of 58 images from an independent institution was used for testing only. The LM3–IAN associations on PAN were categorized as direct or indirect contact based on cone-beam computed tomography (CBCT). The You Only Look Once (YOLO) version 3 algorithm, a fast object detection system, was applied. To increase the amount of training data for deep learning, the PAN images were augmented with rotation and flip techniques.

Results: The final YOLO model achieved high accuracy (0.894 on the original dataset and 0.927 on the external dataset), recall (0.925, 0.919), precision (0.891, 0.971), and F1 score (0.908, 0.944). In contrast, the oral surgeons had lower accuracy (0.628, 0.615), recall (0.821, 0.497), precision (0.607, 0.876), and F1 score (0.698, 0.634).

Conclusion: The YOLO-driven deep learning model can help oral surgeons decide whether additional CBCT is needed to confirm the LM3–IAN association seen on PAN images.
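
To illustrate the augmentation and evaluation steps summarized in the Materials and methods and Results, the Python sketch below pairs Pillow for rotation/flip augmentation with scikit-learn for the four reported metrics. It is a minimal sketch under stated assumptions: the rotation angles, helper names, and the binary label encoding (1 = direct contact, 0 = indirect contact) are illustrative choices, not details taken from the study.

    from PIL import Image
    from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

    def augment(image):
        """Expand one panoramic crop into rotated and flipped variants."""
        variants = [image]
        for angle in (-10, 10):  # hypothetical angles; the study does not report them
            variants.append(image.rotate(angle, expand=True))
        variants.append(image.transpose(Image.Transpose.FLIP_LEFT_RIGHT))
        return variants

    def report(y_true, y_pred):
        """Compute the four metrics reported in the Results section."""
        return {
            "accuracy": accuracy_score(y_true, y_pred),
            "recall": recall_score(y_true, y_pred),
            "precision": precision_score(y_true, y_pred),
            "f1_score": f1_score(y_true, y_pred),
        }

    # Example: report([1, 0, 1, 1], [1, 0, 0, 1])
    # -> accuracy 0.75, recall ~0.667, precision 1.0, f1_score 0.8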
