Abstract

The aim of this study was to investigate automated feature detection, segmentation, and quantification of common findings in periapical radiographs (PRs) using deep learning (DL)-based computer vision techniques. Caries, alveolar bone recession, and interradicular radiolucencies were labeled on 206 digital PRs by 3 specialists (2 oral pathologists and 1 endodontist). The PRs were divided into "Training and Validation" and "Test" data sets of 176 and 30 PRs, respectively. Multiple transformations of the image data were used as input to deep neural networks during training. Outcomes of existing and purpose-built DL architectures were compared to identify the most suitable architecture for automated analysis. The U-Net architecture and its variants significantly outperformed XNet and SegNet on all metrics. The overall best-performing architecture on the validation data set was "U-Net+DenseNet121" (mean intersection over union [mIoU] = 0.501; Dice coefficient = 0.569). Performance of all architectures degraded on the "Test" data set, on which plain "U-Net" delivered the best performance (mIoU = 0.402; Dice coefficient = 0.453). Interradicular radiolucencies were the most difficult findings to segment. DL has potential for automated analysis of PRs but warrants further research. Among existing off-the-shelf architectures, U-Net and its variants delivered the best performance; further gains may be obtainable with purpose-built architectures and a larger multicentric cohort.
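For readers unfamiliar with the two reported metrics, the sketch below shows how intersection over union (IoU) and the Dice coefficient are computed on binary segmentation masks; mIoU is simply the per-class IoU averaged over the finding classes. The function names and toy masks are illustrative assumptions, not taken from the study's code.

```python
# Minimal sketch of the reported metrics on binary masks (illustrative,
# not the study's implementation).
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union (Jaccard index) for binary masks."""
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / union if union > 0 else 1.0  # both masks empty

def dice(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice coefficient: 2*|A & B| / (|A| + |B|)."""
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy 4x4 prediction vs. ground truth for one hypothetical finding class.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]], dtype=bool)
target = np.array([[0, 1, 1, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]], dtype=bool)

print(f"IoU  = {iou(pred, target):.3f}")   # 3/4 = 0.750
print(f"Dice = {dice(pred, target):.3f}")  # 6/7 = 0.857
```

Note that Dice is always at least as large as IoU for the same masks, which is consistent with the reported Dice values exceeding the corresponding mIoU values.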
