Abstract

This study aimed to estimate human age and gender from panoramic radiographs using various deep learning techniques, and to apply explainability methods so that a novel hybrid unsupervised model could reveal its decision-making process. The classification task involved training neural networks and vision transformers on 706 panoramic radiographs with different loss functions and backbone architectures: ArcFace, a triplet network named TriplePENViT, and a subsequently developed model called PENViT. Pseudo-labeling techniques were applied to train the models on unlabeled data, and FullGrad explainable AI was used to gain insight into the decision-making process of the developed PENViT model. The ViT Large 32 model achieved a validation accuracy of 68.21% without ArcFace, demonstrating its effectiveness in the classification task. The PENViT model outperformed the other backbones, matching that validation accuracy without ArcFace and improving to 70.54% with ArcFace. The TriplePENViT model achieved a validation accuracy of 67.44% using hard triplet mining, while pseudo-labeling yielded poor performance, with a validation accuracy of 64.34%. Without ArcFace, validation accuracy was 67.44% for age and 84.49% for gender. For deciduous and mixed dentitions, the unsupervised model considered developing tooth buds, tooth proximity, and mandibular shape when estimating age; for ages 20–29, it factored in permanent dentition, alveolar bone density, root apices, and third molars; above 30, it relied on occlusal deformity resulting from missing dentition and the temporomandibular joint complex as predictors for age estimation from panoramic radiographs.
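As context for the loss functions named above, the following is a minimal NumPy sketch of the ArcFace additive angular-margin logit computation that the study attaches to its backbones. The function name, array shapes, and the default scale `s` and margin `m` are illustrative assumptions following the original ArcFace formulation, not the paper's exact configuration.

```python
import numpy as np

def arcface_logits(embeddings, weights, labels, s=64.0, m=0.5):
    """ArcFace: add an angular margin m to each sample's true-class angle.

    embeddings: (N, D) feature vectors from the backbone.
    weights:    (C, D) learnable class centers.
    labels:     (N,) integer class ids.
    Returns (N, C) scaled logits for a softmax cross-entropy loss.
    """
    # L2-normalize features and class centers so dot products are cosines
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = e @ w.T                                    # (N, C) cosines
    theta = np.arccos(np.clip(cos, -1.0, 1.0))       # angles in [0, pi]
    # margin is applied only at each sample's ground-truth class
    margin = np.zeros_like(cos)
    margin[np.arange(len(labels)), labels] = m
    return s * np.cos(theta + margin)                # scaled logits
```

Penalizing the true-class angle this way forces embeddings of the same class (here, the same age or gender group) to cluster more tightly than plain softmax would, which is why the abstract reports a gain for PENViT with ArcFace over without it.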
