Abstract

In vivo application of intravascular photoacoustic (IVPA) imaging of coronary arteries is hampered by motion artifacts associated with the cardiac cycle. Gating is a common strategy for mitigating these artifacts, but because it retains only one frame per cardiac cycle, a large amount of diagnostically valuable information may be lost. In this work, we present a deep learning-based method for directly correcting motion artifacts in non-gated IVPA pullback sequences. The raw signal frames are first classified into dynamic and static frames by clustering. A neural network, named Motion Artifact Correction (MAC)-Net, is then designed to correct the motion in dynamic frames. Given the lack of ground-truth information on the underlying dynamics of coronary arteries, we trained and tested the network on a computer-generated dataset. The results show that the trained network can directly correct motion in successive frames while preserving the original structures and without discarding any frames. Quantitative evaluation of inter-frame dissimilarity demonstrates the improved visual quality of the longitudinal view. Comparison experiments show that the motion-suppression ability of our method is comparable to that of gating and image registration-based non-learning methods, while the integrity of the pullback is maintained without image preprocessing. Results from in vivo intravascular ultrasound and optical coherence tomography pullbacks validate the feasibility of our method in the in vivo intracoronary imaging scenario.
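The abstract's first stage, separating dynamic from static frames by clustering, can be illustrated with a minimal sketch. The features and clustering algorithm used in the paper are not specified here, so this example makes simplifying assumptions: each frame is a 2D array, the dissimilarity between consecutive frames is one minus their normalized cross-correlation, and a simple 1-D two-means clustering splits frames into a low-dissimilarity (static) and a high-dissimilarity (dynamic) group.

```python
import numpy as np

def interframe_dissimilarity(frames):
    """frames: (N, H, W) array. Returns per-frame dissimilarity scores,
    defined as 1 - normalized cross-correlation with the previous frame."""
    d = np.zeros(len(frames))
    for i in range(1, len(frames)):
        a = frames[i - 1].ravel().astype(float)
        b = frames[i].ravel().astype(float)
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        d[i] = 1.0 - float(np.mean(a * b))
    d[0] = d[1]  # the first frame has no predecessor; reuse its successor's score
    return d

def cluster_dynamic_static(d, iters=50):
    """1-D two-means clustering of dissimilarity scores.
    Returns a boolean mask: True = dynamic frame, False = static frame."""
    lo, hi = d.min(), d.max()  # initial centroids
    for _ in range(iters):
        labels = np.abs(d - lo) > np.abs(d - hi)  # True -> closer to hi centroid
        new_lo = d[~labels].mean() if (~labels).any() else lo
        new_hi = d[labels].mean() if labels.any() else hi
        if np.isclose(new_lo, lo) and np.isclose(new_hi, hi):
            break
        lo, hi = new_lo, new_hi
    return labels
```

Frames flagged as dynamic by such a step would then be passed to the motion-correction network, while static frames are left untouched.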
