Objective
Subtraction angiographies are computed from a native and a contrast-enhanced 3D angiography image. This minimizes both bone and metal artifacts and yields a pure image of the vessels. However, performing the examination twice doubles the radiation dose for the patient. With the help of generative AI, it may be possible to simulate subtraction angiographies from contrast-enhanced 3D angiographies alone and thus avoid the second radiation dose without a loss of quality. We implemented this concept using conditional generative adversarial networks.

Methods
We selected all 3D subtraction angiographies from our PACS system that had been performed between 01/01/2018 and 12/31/2022 and randomly divided them into training, validation, and test sets (66%:17%:17%). We adapted the pix2pix framework to work on 3D data and trained a conditional generative adversarial network with 621 data sets, using a further 158 data sets for validation and 164 for testing. We evaluated two test subsets, one with artifacts (n = 72) and one without (n = 92). Five blinded neuroradiologists compared these data sets with the original subtraction data sets, assessing similarity, subjective image quality, and severity of artifacts.

Results
Image quality and subjective diagnostic accuracy of the virtual subtraction angiographies showed no significant differences compared to the original 3D angiographies. While bone and motion artifact levels were reduced, the level of artifacts caused by metal implants varied from case to case between the two angiographies, with neither group being significantly superior.

Conclusion
Conditional generative adversarial networks can be used to simulate subtraction angiographies in clinical practice; however, new artifacts can also appear as a result of this technology.
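The core adaptation described in the Methods, porting the 2D pix2pix image-to-image framework to volumetric angiography data, essentially amounts to replacing 2D convolutional layers with their 3D counterparts in the encoder-decoder generator. The following is a minimal sketch in PyTorch (an assumed framework); all class names, channel counts, and layer choices are illustrative and not taken from the authors' implementation:

```python
import torch
import torch.nn as nn

class Down3d(nn.Module):
    """Encoder block: strided 3D convolution halving each spatial dimension
    (the 3D analogue of the pix2pix encoder step)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
            nn.InstanceNorm3d(out_ch),
            nn.LeakyReLU(0.2),
        )

    def forward(self, x):
        return self.block(x)

class Up3d(nn.Module):
    """Decoder block: transposed 3D convolution doubling each spatial
    dimension, followed by a U-Net skip connection."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.ConvTranspose3d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
            nn.InstanceNorm3d(out_ch),
            nn.ReLU(),
        )

    def forward(self, x, skip):
        # Concatenate the upsampled features with the matching encoder output.
        return torch.cat([self.block(x), skip], dim=1)

class Generator3d(nn.Module):
    """Tiny 3D U-Net generator: contrast-enhanced volume in,
    virtual subtraction volume out (hypothetical, illustrative depth)."""
    def __init__(self):
        super().__init__()
        self.d1 = Down3d(1, 16)
        self.d2 = Down3d(16, 32)
        self.u1 = Up3d(32, 16)
        self.out = nn.Sequential(
            nn.ConvTranspose3d(32, 1, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),  # intensities scaled to [-1, 1], as in pix2pix
        )

    def forward(self, x):
        s1 = self.d1(x)          # e.g. 32^3 -> 16^3
        s2 = self.d2(s1)         # 16^3 -> 8^3
        u = self.u1(s2, s1)      # back to 16^3, with skip connection
        return self.out(u)       # back to 32^3

g = Generator3d()
vol = torch.randn(1, 1, 32, 32, 32)  # batch, channel, depth, height, width
out = g(vol)
print(out.shape)  # torch.Size([1, 1, 32, 32, 32])
```

In a full conditional GAN, a PatchGAN-style discriminator (likewise built from `Conv3d` layers) would score pairs of contrast-enhanced and (real or generated) subtraction volumes; the sketch above shows only the generator side of the adaptation.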