Abstract

Ultrasound is an essential imaging modality in clinical screening and diagnosis, helping to reduce morbidity and improve quality of life. Successfully performing ultrasound imaging, however, requires extensive training and expertise in navigating a hand-held probe to the correct anatomical location as well as in subsequently interpreting the acquired image. Computer-generated simulations can offer a safe, flexible, and standardized environment to train such skills. Data-based simulations display interpolated slices from pre-acquired real ultrasound volumes, whereas generative simulations aim to reproduce the complex ultrasound interactions with comprehensive geometric anatomical models, e.g., by using ray tracing to mimic acoustic propagation. Although sonographers typically focus on relatively small structures of interest in ultrasound images, the fidelity of the background anatomy may still contribute to the realism of a generated ultrasound image, e.g., when imaging a relatively small fetus within a large abdominal background. It was proposed earlier to composite ray-traced images with acquired volumes in a preprocessing step. Despite its simplicity, such preprocessing precludes view-dependent artifacts and interactive model changes, such as those induced by animations that can, for instance, model fetal motion. To fully leverage the flexibility of the model-based generative approach, we propose herein an on-the-fly image fusion of the two techniques, moving the interpolation stage into the ray tracer so that the pre-acquired image data is referenced in the background while the acoustic interactions with the model are resolved in the foreground. This allows for animated anatomical models, which we realize during simulation runtime via scene-hierarchy subtree switching between precomputed acceleration structure graphs. We demonstrate our proposed techniques on ultrasound sequences of fetal and heart motion, where only animated models can meet the realism requirements entailed by the temporal domain.
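To illustrate the fusion idea described above, the following is a minimal sketch (not the authors' implementation; all names, interfaces, and the ray-marching structure are hypothetical assumptions) of how the interpolation of a pre-acquired volume might be moved into the ray tracer: each sample along a scanline is resolved generatively when it falls inside the foreground model, and otherwise falls back to on-the-fly interpolation of the background volume.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Pre-acquired ultrasound volume on a regular grid (assumed x-fastest layout).
struct UltrasoundVolume {
    std::size_t nx = 0, ny = 0, nz = 0;
    std::vector<float> voxels;  // scalar intensities

    float at(std::size_t i, std::size_t j, std::size_t k) const {
        return voxels[(k * ny + j) * nx + i];
    }

    // Clamped trilinear interpolation at a continuous voxel coordinate p.
    float sample(Vec3 p) const {
        auto clampf = [](float v, float lo, float hi) {
            return std::min(std::max(v, lo), hi);
        };
        p.x = clampf(p.x, 0.f, float(nx - 1));
        p.y = clampf(p.y, 0.f, float(ny - 1));
        p.z = clampf(p.z, 0.f, float(nz - 1));
        std::size_t i0 = std::size_t(p.x), j0 = std::size_t(p.y), k0 = std::size_t(p.z);
        std::size_t i1 = std::min(i0 + 1, nx - 1);
        std::size_t j1 = std::min(j0 + 1, ny - 1);
        std::size_t k1 = std::min(k0 + 1, nz - 1);
        float fx = p.x - i0, fy = p.y - j0, fz = p.z - k0;
        auto lerp = [](float a, float b, float t) { return a + t * (b - a); };
        float c00 = lerp(at(i0, j0, k0), at(i1, j0, k0), fx);
        float c10 = lerp(at(i0, j1, k0), at(i1, j1, k0), fx);
        float c01 = lerp(at(i0, j0, k1), at(i1, j0, k1), fx);
        float c11 = lerp(at(i0, j1, k1), at(i1, j1, k1), fx);
        return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz);
    }
};

// Stand-ins for the generative foreground: a membership query (e.g. an
// acceleration-structure lookup) and the ray-traced acoustic response.
// Both interfaces are placeholders for illustration only.
struct ForegroundModel {
    bool contains(const Vec3&) const { return false; }
    float acousticResponse(const Vec3&) const { return 0.f; }
};

// One scanline: march along the ray; inside the animated foreground model,
// resolve the acoustic interaction generatively; elsewhere, interpolate the
// pre-acquired background volume on the fly instead of in a preprocess.
std::vector<float> fuseScanline(const ForegroundModel& model,
                                const UltrasoundVolume& background,
                                Vec3 origin, Vec3 dir,
                                std::size_t numSamples, float step) {
    std::vector<float> intensities(numSamples);
    for (std::size_t s = 0; s < numSamples; ++s) {
        Vec3 p{origin.x + s * step * dir.x,
               origin.y + s * step * dir.y,
               origin.z + s * step * dir.z};
        intensities[s] = model.contains(p) ? model.acousticResponse(p)
                                           : background.sample(p);
    }
    return intensities;
}
```

Under these assumptions, the animated models mentioned in the abstract would be supported by letting the foreground query redirect to a different precomputed acceleration-structure subtree per animation frame, i.e., swapping a subtree pointer in the scene hierarchy rather than rebuilding the structure at runtime.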
