Abstract

In this paper, we propose to apply generative adversarial networks trained with a cycle-consistency loss (CycleGANs) to improve the realism of ultrasound (US) simulation from computed tomography (CT) scans. A ray-casting US simulation approach is used to generate intermediate synthetic images from abdominal CT scans. An unpaired set of these synthetic and real US images is then used to train CycleGANs with two alternative generator architectures, a U-Net and a ResNet. These networks are finally used to translate ray-casting-based simulations into more realistic synthetic US images. Our approach was evaluated both qualitatively and quantitatively. A user study performed by 21 experts in US imaging shows that both networks significantly improve realism with respect to the original ray-casting algorithm ([Formula: see text]), with the ResNet model performing better than the U-Net ([Formula: see text]). Applying CycleGANs thus yields better synthetic US images of the abdomen. These results can help reduce the gap between artificially generated and real US scans, which could positively impact applications such as semi-supervised training of machine learning algorithms and low-cost training of medical doctors and radiologists in US image interpretation.
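To make the unpaired translation setup concrete, the following is a minimal PyTorch sketch of one CycleGAN training step for simulation-to-real US translation. The tiny convolutional generators and discriminators, the least-squares adversarial loss, and the cycle weight lambda_cyc = 10 are illustrative assumptions and do not reproduce the U-Net/ResNet generators or the exact losses and hyperparameters used in the paper.

```python
# Sketch of one CycleGAN update on unpaired batches of ray-cast simulations
# and real US frames (hypothetical placeholder networks, not the paper's).
import torch
import torch.nn as nn

def tiny_generator():
    # Placeholder generator: a few conv layers; the paper uses U-Net or ResNet.
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
    )

def tiny_discriminator():
    # Placeholder PatchGAN-style discriminator.
    return nn.Sequential(
        nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(16, 1, 4, stride=2, padding=1),
    )

G_sim2real = tiny_generator()   # maps ray-cast simulations to realistic US
G_real2sim = tiny_generator()   # maps real US back to the simulation domain
D_real = tiny_discriminator()   # real vs. translated US
D_sim = tiny_discriminator()    # simulated vs. back-translated images

adv_loss = nn.MSELoss()         # least-squares GAN loss (assumed)
cyc_loss = nn.L1Loss()          # cycle-consistency loss
lambda_cyc = 10.0               # assumed weight, not taken from the paper

opt_G = torch.optim.Adam(
    list(G_sim2real.parameters()) + list(G_real2sim.parameters()), lr=2e-4)
opt_D = torch.optim.Adam(
    list(D_real.parameters()) + list(D_sim.parameters()), lr=2e-4)

def train_step(sim_batch, real_batch):
    """One generator/discriminator update on an unpaired batch."""
    # --- generator update: adversarial + cycle-consistency terms ---
    opt_G.zero_grad()
    fake_real = G_sim2real(sim_batch)          # simulation -> realistic US
    fake_sim = G_real2sim(real_batch)          # real US -> simulation style
    pred_fr, pred_fs = D_real(fake_real), D_sim(fake_sim)
    loss_adv = (adv_loss(pred_fr, torch.ones_like(pred_fr))
                + adv_loss(pred_fs, torch.ones_like(pred_fs)))
    loss_cyc = (cyc_loss(G_real2sim(fake_real), sim_batch)
                + cyc_loss(G_sim2real(fake_sim), real_batch))
    (loss_adv + lambda_cyc * loss_cyc).backward()
    opt_G.step()

    # --- discriminator update: real vs. translated samples ---
    opt_D.zero_grad()
    loss_D = 0.0
    for D, real, fake in ((D_real, real_batch, fake_real),
                          (D_sim, sim_batch, fake_sim)):
        pred_real, pred_fake = D(real), D(fake.detach())
        loss_D = loss_D + (adv_loss(pred_real, torch.ones_like(pred_real))
                           + adv_loss(pred_fake, torch.zeros_like(pred_fake)))
    loss_D.backward()
    opt_D.step()

# Example: one step on random 64x64 single-channel batches standing in for
# ray-cast simulations and real US frames.
train_step(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64))
```

Because the two domains are unpaired, the cycle-consistency term is what keeps the translated image anatomically faithful to the input simulation while the adversarial term pushes it toward the appearance of real US.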
