Periodontal disease is a widespread global health concern that requires accurate diagnosis for effective treatment. Traditional diagnostic methods based on panoramic radiographs are often limited by subjective evaluation and low-resolution imaging, leading to suboptimal precision. This study presents an approach that integrates Super-Resolution Generative Adversarial Networks (SRGANs) with deep learning-based segmentation models to enhance the segmentation of periodontal bone loss (PBL) areas on panoramic radiographs. By transforming low-resolution images into high-resolution versions, the proposed method reveals critical anatomical details that are essential for precise diagnosis. The effectiveness of this approach was validated using datasets from Chungbuk National University Hospital and the Kaggle data portal, demonstrating significant improvements in both image resolution and segmentation accuracy. The SRGAN model, evaluated using the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) metrics, achieved a PSNR of 30.10 dB and an SSIM of 0.878, indicating high fidelity in image reconstruction. When the enhanced images were used for semantic segmentation with a U-Net architecture, they yielded a Dice similarity coefficient (DSC) of 0.91 and an intersection over union (IoU) of 84.9%, compared with a DSC of 0.72 and an IoU of 65.4% for the native low-resolution images. These results underscore the potential of SRGAN-enhanced imaging to improve PBL area segmentation and suggest broader applications in medical imaging, where enhanced image clarity is crucial for diagnostic accuracy. This study also highlights the need for further research to expand dataset diversity and to incorporate clinical validation in order to fully realize the benefits of super-resolution techniques in medical diagnostics.
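For readers unfamiliar with the reported metrics, the sketch below shows how PSNR, DSC, and IoU are conventionally defined; it is a minimal NumPy illustration, not the authors' evaluation code, and the array names and example values are hypothetical. SSIM is omitted because it is usually computed with a library routine such as scikit-image's `structural_similarity`.

```python
import numpy as np

def psnr(reference: np.ndarray, reconstructed: np.ndarray, max_val: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom else 1.0  # both masks empty -> perfect agreement

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union: |A ∩ B| / |A ∪ B| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    return np.logical_and(pred, target).sum() / union if union else 1.0

# Illustrative 4x4 binary masks (hypothetical values, not study data):
pred = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
target = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(dice_coefficient(pred, target))  # 0.857...
print(iou(pred, target))               # 0.75
```

Note that DSC is always at least as large as IoU for the same pair of masks, which is consistent with the reported pairs (0.91 DSC vs. 84.9% IoU, and 0.72 DSC vs. 65.4% IoU).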