Abstract

Determining the peri-implant marginal bone level on radiographs is challenging because the boundaries of the bones around implants are often unclear or the heights of the buccal and lingual bone levels are different. Therefore, a deep convolutional neural network (CNN) was evaluated for detecting the marginal bone level, top, and apex of implants on dental periapical radiographs. An automated assistant system was proposed for calculating the bone loss percentage and classifying the bone resorption severity. A modified region-based CNN (R-CNN) was trained using transfer learning based on Microsoft Common Objects in Context dataset. Overall, 708 periapical radiographic images were divided into training (n = 508), validation (n = 100), and test (n = 100) datasets. The training dataset was randomly enriched by data augmentation. For evaluation, average precision, average recall, and mean object keypoint similarity (OKS) were calculated, and the mean OKS values of the model and a dental clinician were compared. Using detected keypoints, radiographic bone loss was measured and classified. No statistically significant difference was found between the modified R-CNN model and dental clinician for detecting landmarks around dental implants. The modified R-CNN model can be utilized to measure the radiographic peri-implant bone loss ratio to assess the severity of peri-implantitis.
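The measurement step described above can be sketched as follows: given the detected keypoints (implant top, apex, and marginal bone level), the radiographic bone loss ratio is the top-to-bone-level distance divided by the top-to-apex distance. The function names and severity thresholds below are illustrative assumptions, not the study's reported implementation or cutoffs.

```python
def bone_loss_percent(top_y, bone_y, apex_y):
    """Radiographic bone loss as a percentage of implant length.

    top_y, bone_y, apex_y: vertical pixel coordinates (y grows downward)
    of the implant top, marginal bone level, and implant apex keypoints.
    """
    return 100.0 * (bone_y - top_y) / (apex_y - top_y)

def severity(pct, thresholds=(25.0, 50.0)):
    # Illustrative cutoffs only; the study's exact classification is assumed here.
    if pct < thresholds[0]:
        return "mild"
    if pct < thresholds[1]:
        return "moderate"
    return "severe"
```

For example, a bone level 20 pixels below the implant top on an 80-pixel-long implant gives a 25% bone loss ratio.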

Highlights

  • The determination of the peri-implant marginal bone level on conventional radiographs is difficult because the three-dimensional bone shape must be inferred from a two-dimensional image [1,2]

  • A large number of studies have used convolutional neural networks (CNNs) for diagnosis via binary classification of radiographic images, for example of pulmonary tuberculosis [5], osteoporosis [6], or periodontal bone loss [7]

  • The OKS value tends toward 1 as the model’s prediction approaches the ground truth. This metric plays the same role for keypoint detection that Intersection over Union (IoU) plays for evaluating object detection tasks
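The COCO-style object keypoint similarity referenced above can be sketched as below; the function name and the per-keypoint constants `kappas` are illustrative, not the paper's exact implementation.

```python
import numpy as np

def object_keypoint_similarity(pred, gt, scale, kappas):
    """COCO-style OKS between predicted and ground-truth keypoints.

    pred, gt : (N, 2) arrays of (x, y) keypoint coordinates
    scale    : object scale s (square root of the object area)
    kappas   : (N,) per-keypoint constants controlling tolerance
    """
    d2 = np.sum((np.asarray(pred, float) - np.asarray(gt, float)) ** 2, axis=1)
    # Each keypoint contributes exp(-d^2 / (2 s^2 k^2)); average over keypoints.
    return float(np.mean(np.exp(-d2 / (2.0 * scale**2 * np.asarray(kappas)**2))))
```

When every predicted keypoint coincides with its ground truth, every distance is zero and the OKS is exactly 1; larger localization errors decay it toward 0.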


Introduction

The determination of the peri-implant marginal bone level on conventional radiographs is difficult because the three-dimensional bone shape must be inferred from a two-dimensional image [1,2]. Deep learning methods for image recognition, such as convolutional neural networks (CNNs), have improved remarkably since AlexNet [3] won the ImageNet Large Scale Visual Recognition Challenge [4] in 2012. Following the success of deep neural networks in classifying general images, numerous methods have been developed and applied to medical images. A large number of studies have used CNNs for diagnosis via binary classification of radiographic images, for example of pulmonary tuberculosis [5], osteoporosis [6], or periodontal bone loss [7]. Mask R-CNN, which is a modified architecture of the region-based convolutional neural network
