Abstract

The aim of this work was to assemble a large annotated dataset of bitewing radiographs and to use convolutional neural networks (CNNs) to automate the detection of dental caries in bitewing radiographs with human-level performance. A dataset of 3989 bitewing radiographs was created, and 7257 carious lesions were annotated using minimal bounding boxes. The dataset was then divided into three parts for the training (70%), validation (15%), and testing (15%) of multiple object detection CNNs. The tested architectures included YOLOv5, Faster R-CNN, RetinaNet, and EfficientDet. To further improve detection performance, model ensembling was used, and nested predictions were removed during post-processing. The models were compared in terms of the F1 score and average precision (AP) at various intersection over union (IoU) thresholds. The twelve tested architectures achieved F1 scores of 0.72-0.76. Their performance was improved by ensembling, which increased the F1 score to 0.79-0.80. The best-performing ensemble detected caries with a precision of 0.83, recall of 0.77, F1 score of 0.80, and AP of 0.86 at IoU = 0.5. Small carious lesions were predicted with slightly lower accuracy (AP 0.82) than medium or large lesions (AP 0.88). The trained ensemble of object detection CNNs detected caries with satisfactory accuracy and performed at least as well as experienced dentists (see the companion paper, Part II). The performance on small lesions was likely limited by inconsistencies in the training dataset. Caries can be automatically detected using convolutional neural networks; however, detecting incipient carious lesions remains challenging.
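
As a sanity check on the reported metrics, F1 = 2 × precision × recall / (precision + recall) = 2 × 0.83 × 0.77 / (0.83 + 0.77) ≈ 0.80, which matches the value given for the best ensemble. The sketch below illustrates one plausible form of the nested-prediction removal mentioned as a post-processing step: a detection is dropped when most of its box area lies inside a higher-scoring box. The function names and the containment threshold are assumptions for illustration only; the abstract does not specify how nesting was measured.

    import numpy as np

    def intersection_area(a, b):
        # Overlap area of two boxes given as (x1, y1, x2, y2)
        w = min(a[2], b[2]) - max(a[0], b[0])
        h = min(a[3], b[3]) - max(a[1], b[1])
        return max(0.0, w) * max(0.0, h)

    def box_area(a):
        return (a[2] - a[0]) * (a[3] - a[1])

    def remove_nested(boxes, scores, containment_thr=0.9):
        # Drop a box when most of its area lies inside a higher-scoring box.
        # The threshold of 0.9 is an assumed value, not taken from the paper.
        order = np.argsort(scores)[::-1]  # highest score first
        keep = []
        for idx in order:
            nested = any(
                intersection_area(boxes[idx], boxes[k]) / box_area(boxes[idx]) > containment_thr
                for k in keep
            )
            if not nested:
                keep.append(idx)
        return sorted(int(k) for k in keep)

    # Toy usage: the second box lies entirely inside the first and is removed.
    boxes = np.array([[10, 10, 60, 60], [20, 20, 40, 40], [80, 15, 120, 55]], dtype=float)
    scores = np.array([0.9, 0.6, 0.8])
    print(remove_nested(boxes, scores))  # -> [0, 2]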
