Abstract

Detection of cephalometric landmarks contributes to the analysis of malocclusion during orthodontic diagnosis. Many recent deep learning studies have focused on head-to-head comparisons of landmark identification accuracy between artificial intelligence (AI) and humans. However, human–AI collaboration for the identification of cephalometric landmarks has not been evaluated. We selected 1193 cephalograms and used them to train the deep anatomical context feature learning (DACFL) model; the number of target landmarks was 41. To evaluate the effect of human–AI collaboration on landmark detection, 10 images were randomly extracted from the 100 test images. The experiment included 20 dental students as beginners in landmark localization. The outcomes were measured as the mean radial error (MRE), successful detection rate (SDR), and successful classification rate (SCR). On this dataset, the DACFL model exhibited an average MRE of 1.87 ± 2.04 mm and an average SDR of 73.17% within a 2 mm threshold. Compared with the beginner group alone, beginner–AI collaboration improved the SDR within the 2 mm threshold by 5.33% and the SCR by 8.38%. Thus, beginner–AI collaboration was effective in the detection of cephalometric landmarks. Further studies should be performed to demonstrate the benefits of an orthodontist–AI collaboration.
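For readers unfamiliar with these outcome measures, the following is a minimal sketch of how MRE and SDR are typically computed from predicted and ground-truth landmark coordinates. The variable names and the synthetic data are illustrative assumptions, not the study's actual evaluation code:

```python
import numpy as np

def mean_radial_error(pred, gt):
    """MRE: Euclidean distance (mm) between predicted and ground-truth
    landmark positions, averaged over all landmarks and images;
    reported here as mean and standard deviation."""
    radial = np.linalg.norm(pred - gt, axis=-1)  # (n_images, n_landmarks)
    return radial.mean(), radial.std()

def successful_detection_rate(pred, gt, threshold_mm=2.0):
    """SDR: percentage of landmark predictions whose radial error
    falls within the threshold (2 mm is the usual clinical cut-off)."""
    radial = np.linalg.norm(pred - gt, axis=-1)
    return 100.0 * (radial <= threshold_mm).mean()

# Illustrative run on synthetic data: 100 test images, 41 landmarks.
rng = np.random.default_rng(0)
gt = rng.uniform(0.0, 200.0, size=(100, 41, 2))  # coordinates in mm
pred = gt + rng.normal(0.0, 1.2, size=gt.shape)  # simulated model error
mre, sd = mean_radial_error(pred, gt)
sdr = successful_detection_rate(pred, gt)
print(f"MRE = {mre:.2f} ± {sd:.2f} mm, SDR@2mm = {sdr:.2f}%")
```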

Highlights

  • In orthodontics, detection of cephalometric landmarks refers to the localization of anatomical landmarks of the skull and surrounding soft tissues on lateral cephalograms.

  • Since the introduction of lateral cephalograms by Broadbent and Hofrath in 1931, this approach has contributed to the analysis of malocclusion and has become a standardized diagnostic method in orthodontic practice and research [1].

  • The deep anatomical context feature learning (DACFL) model showed an average mean radial error (MRE) of 1.87 ± 2.04 mm (Table 2)



Introduction

Detection of cephalometric landmarks refers to the localization of anatomical landmarks of the skull and surrounding soft tissues on lateral cephalograms. Deep learning-based approaches using convolutional neural networks (CNNs) have achieved remarkable results [2–5]. Despite the limited number of annotated cephalograms, many CNN-based approaches have been proposed to solve the problem of detecting anatomical landmarks. To address the restricted availability of medical imaging data for network training in anatomical landmark localization, Zhang et al. proposed a two-stage task-oriented deep neural network method [7]. Oh et al. proposed the deep anatomical context feature learning (DACFL) model, which employs a Laplace heatmap regression method based on a fully convolutional network together with local feature perturbation (LFP). LFP can be considered a data augmentation method based on prior anatomical knowledge: it perturbs the local pattern of the cephalogram, forcing the network to seek relevant features globally. Among previous CNN models, DACFL outperformed other state-of-the-art methods and achieved high performance in landmark identification on the IEEE ISBI 2015 dataset [2]. We used a private dataset to evaluate the performance of the DACFL model in clinical applications.
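As a rough illustration of the two ideas above, the sketch below builds a Laplace-shaped heatmap target for a single landmark and applies an LFP-style patch perturbation to an image. The isotropic exp(-d/scale) kernel, the patch size, and all names are assumptions chosen for illustration, not the exact DACFL implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_heatmap(h, w, cx, cy, scale=5.0):
    """One Laplace-shaped target map per landmark: the network regresses
    such a map, and the landmark is decoded at its peak. The sharp
    Laplace peak (vs. a Gaussian) concentrates the response tightly
    around the true position."""
    ys, xs = np.mgrid[0:h, 0:w]
    d = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)  # radial distance (px)
    return np.exp(-d / scale)

def local_feature_perturbation(img, patch=32):
    """Illustrative stand-in for LFP: overwrite a random local patch
    with noise so the network cannot rely on local texture alone and
    must locate the landmark from the global anatomical context."""
    out = img.copy()
    h, w = out.shape
    y0 = int(rng.integers(0, h - patch))
    x0 = int(rng.integers(0, w - patch))
    out[y0:y0 + patch, x0:x0 + patch] = rng.random((patch, patch))
    return out

# 41 landmarks -> 41 target maps; decode each landmark as the argmax.
target = laplace_heatmap(256, 256, cx=120, cy=90)
y, x = np.unravel_index(target.argmax(), target.shape)
print((x, y))  # -> (120, 90)
```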

Data Preparation
Manual Identification of Cephalometric Landmarks
Network Architecture and Implementation Details
Evaluation Metrics
Statistical Analysis
Mean Radial Error
Successful Detection Rate
Mean Radial Error and Successful Detection Rate
Benefit of beginner–AI collaboration in the detection of cephalometric landmarks
Successful Classification Rate
Comparison
Performance
Impact of AI-Based Assistance on the Performance of Beginners in Cephalometric Landmark Detection
Conclusions
