Abstract
Advancements in technology have led to the adoption of digital workflows in dentistry, which require segmenting regions of interest from cone-beam computed tomography (CBCT) scans. These segmentations assist in diagnosis, treatment planning, and research, but manual segmentation is expensive and labor-intensive. Automated methods such as convolutional neural networks (CNNs) therefore offer a more efficient way to generate segmentations from CBCT scans. A three-dimensional UNet-based CNN model, utilizing the Medical Image Segmentation CNN framework, was used for training and for generating predictions from CBCT scans. A dataset of 351 CBCT scans was prepared, with ground-truth labels created by manual segmentation performed in AI-assisted segmentation software. After data preprocessing, augmentation, and model training, the performance of the proposed CNN model was analyzed. The model segmented maxillary and mandibular teeth from CBCT scans with high accuracy, achieving average Dice Similarity Coefficient values of 91.83% and 91.35%, respectively. Additional performance metrics, including Intersection over Union, precision, and recall, further confirmed the model's effectiveness. The study demonstrates the efficacy of the three-dimensional UNet-based CNN model within the Medical Image Segmentation CNN framework for automated segmentation of maxillary and mandibular dentition from CBCT scans. Automated segmentation using CNNs can deliver accurate and efficient results, offering a significant advantage over traditional segmentation methods.
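The evaluation metrics named above are standard for segmentation tasks. As a minimal illustrative sketch (not the authors' code), the Dice Similarity Coefficient and Intersection over Union for a pair of binary segmentation masks can be computed as:

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice Similarity Coefficient: 2|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    # Convention: two empty masks count as a perfect match.
    return 2.0 * intersection / total if total > 0 else 1.0

def iou(pred, gt):
    """Intersection over Union: |A ∩ B| / |A ∪ B|."""
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union > 0 else 1.0

# Toy example on 1-D masks; real CBCT masks are 3-D volumes,
# but the formulas are identical.
pred = np.array([1, 1, 0])
gt = np.array([1, 0, 0])
print(dice_coefficient(pred, gt))  # 2*1 / (2+1) ≈ 0.667
print(iou(pred, gt))               # 1 / 2 = 0.5
```

Both metrics range from 0 (no overlap) to 1 (perfect overlap); the reported Dice values of ~91–92% indicate a close voxel-level match between predicted and ground-truth tooth masks.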