Abstract

Background: Although invasive coronary angiography (ICA) is the gold standard for diagnosing coronary artery disease and guiding percutaneous coronary interventions (PCI), image restoration of ICA images has rarely been explored. Generative adversarial networks (GANs), a class of deep learning models, have demonstrated effectiveness in image restoration. We hypothesized that ICA images damaged or deleted to varying degrees could be restored using an inpainting GAN, and we tested this hypothesis.

Methods: We extracted 22,659 ICA patch images (128 × 128 pixels) from 60 patients, focusing on sparse vessels for model training. To assess the inpainting GAN model, we degraded the extracted images to varying degrees: minimal, mild, moderate, severe, and completely blocked. Performance was quantitatively analyzed using the peak signal-to-noise ratio (PSNR), the structural similarity index measure (SSIM), and the mean squared error (MSE).

Results: The inpainting GAN method significantly improved image quality for all degraded images. For the blocked images, PSNR and SSIM improved by 161.9% and 14.3%, respectively, with a 98% reduction in MSE. In severely, moderately, mildly, and minimally damaged images, PSNR and SSIM increased by 14.9% and 14.8%, 15.7% and 14.7%, 19.3% and 15.6%, and 12.7% and 5.3%, respectively, with MSE reductions of 61.5%, 64.1%, 72.7%, and 33.9%, respectively.

Conclusions: The inpainting GAN can restore ICA images with varying damage levels, including deletions. Our deep learning model can instantaneously improve ICA image quality, potentially aiding in emergency or chronic total occlusion PCI.
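A minimal sketch, not the authors' pipeline, of how the three reported metrics (PSNR, SSIM, MSE) can be computed between an original ICA patch and a degraded or restored version using scikit-image. The synthetic 128 × 128 patch and the central mask used to mimic a "completely blocked" region are illustrative assumptions; the study used real angiography patches and several degradation levels.

```python
import numpy as np
from skimage.metrics import (
    peak_signal_noise_ratio,
    structural_similarity,
    mean_squared_error,
)

rng = np.random.default_rng(0)

# Stand-in for a 128 x 128 grayscale ICA patch, values in [0, 1].
original = rng.random((128, 128)).astype(np.float32)

# Simulate a "completely blocked" region by zeroing a central square;
# milder degradation levels would use smaller or sparser masks.
degraded = original.copy()
degraded[32:96, 32:96] = 0.0

# Quality metrics comparing the degraded (or GAN-restored) patch
# against the original reference.
psnr = peak_signal_noise_ratio(original, degraded, data_range=1.0)
ssim = structural_similarity(original, degraded, data_range=1.0)
mse = mean_squared_error(original, degraded)

print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}, MSE: {mse:.6f}")
```

Running the same comparison before and after restoration would yield the percentage changes in PSNR, SSIM, and MSE reported in the abstract.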