Background
The severity of furcation involvement (FI) directly affects tooth prognosis and influences treatment approaches. However, assessing, diagnosing, and treating molars with FI is complicated by anatomical and morphological variations. Cone-beam computed tomography (CBCT) enhances diagnostic accuracy for detecting FI and measuring furcation defects, but the high cost and radiation dose associated with CBCT equipment limit its widespread use. The aim of this study was to evaluate the performance of the Vision Transformer (ViT) against several commonly used traditional deep learning (DL) models for classifying molars with or without FI on panoramic radiographs.

Methods
A total of 1,568 tooth images obtained from 506 panoramic radiographs were used to construct the database and evaluate the models. This study developed and assessed a ViT model for classifying FI from panoramic radiographs and compared its performance with traditional models, including the Multi-Layer Perceptron (MLP), the Visual Geometry Group network (VGGNet), and GoogLeNet.

Results
Among the evaluated models, the ViT model outperformed all others, achieving the highest precision (0.98), recall (0.92), and F1 score (0.95), along with the lowest cross-entropy loss (0.27) and the highest accuracy (92%). ViT also recorded the highest area under the curve (AUC, 98%), outperforming the other models with statistically significant differences (p < 0.05), confirming its superior classification capability. Gradient-weighted class activation mapping (Grad-CAM) analysis of the ViT model revealed the key image regions the model focused on during prediction.

Conclusion
DL algorithms can automatically classify FI from readily accessible panoramic images. These findings demonstrate that ViT outperforms the tested traditional models, highlighting the potential of transformer-based approaches to advance image classification.
This approach is also expected to reduce both the radiation dose and the financial burden on patients while simultaneously improving diagnostic precision.
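The reported precision, recall, and F1 score are all derived from the same confusion-matrix counts. A minimal sketch of that relationship is shown below; the counts used here are illustrative values chosen for consistency with the reported metrics, not the study's actual test-set counts.

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Compute precision, recall, and F1 from confusion-matrix counts
    (tp = true positives, fp = false positives, fn = false negatives)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts (hypothetical, not from the study) that reproduce
# the reported values: precision 0.98, recall 0.92, F1 0.95.
p, r, f1 = precision_recall_f1(tp=92, fp=2, fn=8)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.98 0.92 0.95
```

Note that F1 is the harmonic mean of precision and recall, so the reported F1 of 0.95 follows directly from the reported precision (0.98) and recall (0.92).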