Abstract

Purpose
Given the high level of expertise required for the navigation and interpretation of ultrasound images, computational simulations can facilitate the training of such skills in virtual reality. With ray-tracing based simulations, realistic ultrasound images can be generated. However, due to computational constraints for interactivity, image quality typically needs to be compromised.

Methods
We propose herein to bypass any rendering and simulation process at interactive time, by conducting such simulations during a non-time-critical offline stage and then learning the image translation from cross-sectional model slices to such simulated frames. We use a generative adversarial framework with a dedicated generator architecture and input feeding scheme, which both substantially improve image quality without increasing the number of network parameters. Integral attenuation maps derived from cross-sectional model slices, texture-friendly strided convolutions, and the provision of stochastic noise and input maps to intermediate layers to preserve locality are all shown herein to greatly facilitate this translation task.

Results
Given several quality metrics, the proposed method with only tissue maps as input is shown to provide comparable or superior results to a state-of-the-art method that additionally uses low-quality ultrasound renderings as input. An extensive ablation study demonstrates the need for and the benefit of the individual contributions in this work, based on qualitative examples and quantitative ultrasound similarity metrics. To that end, an error metric based on local histogram statistics is proposed and demonstrated for visualizing local dissimilarities between ultrasound images.

Conclusion
A deep-learning based direct transformation from interactive tissue slices to the likeness of high-quality renderings obviates any complex rendering process in real time. This could enable highly realistic ultrasound simulations on consumer hardware by moving the time-intensive processes to a one-time, offline preprocessing stage that can be performed on dedicated high-end hardware.
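
The paper itself contains no code; as a rough illustration of the feeding scheme described above, the following minimal PyTorch sketch shows a toy encoder-decoder generator whose intermediate layers are additionally fed resampled input maps and fresh stochastic noise, with strided convolutions used for resampling. All layer names, channel counts, and sizes here are illustrative assumptions, not the architecture used in the paper.

```python
# Minimal sketch of an encoder-decoder generator whose intermediate decoder layers
# also receive (a) the tissue/attenuation input maps resampled to the layer
# resolution and (b) per-pixel Gaussian noise. Sizes and names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyGenerator(nn.Module):
    def __init__(self, in_ch=2, base=32):
        super().__init__()
        # strided convolutions perform the down/upsampling
        self.enc1 = nn.Conv2d(in_ch, base, 4, stride=2, padding=1)
        self.enc2 = nn.Conv2d(base, base * 2, 4, stride=2, padding=1)
        # decoder layers take the previous features plus re-injected inputs and noise
        self.dec1 = nn.ConvTranspose2d(base * 2 + in_ch + 1, base, 4, stride=2, padding=1)
        self.dec2 = nn.ConvTranspose2d(base + in_ch + 1, 1, 4, stride=2, padding=1)

    @staticmethod
    def _inject(feat, maps):
        """Concatenate resampled input maps and fresh stochastic noise to features."""
        maps_rs = F.interpolate(maps, size=feat.shape[-2:], mode="nearest")
        noise = torch.randn(feat.shape[0], 1, *feat.shape[-2:], device=feat.device)
        return torch.cat([feat, maps_rs, noise], dim=1)

    def forward(self, maps):              # maps: (B, 2, H, W) tissue + attenuation
        x = F.leaky_relu(self.enc1(maps), 0.2)
        x = F.leaky_relu(self.enc2(x), 0.2)
        x = F.leaky_relu(self.dec1(self._inject(x, maps)), 0.2)
        return torch.tanh(self.dec2(self._inject(x, maps)))

# g = ToyGenerator(); fake_us = g(torch.rand(1, 2, 256, 256))
```

Re-injecting the input maps at every scale is one simple way to preserve locality, and the per-layer noise gives the network a stochastic source for speckle-like texture; the paper's actual generator and feeding scheme may differ in detail.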

Highlights

  • Ultrasound (US) imaging is a real-time, non-invasive and radiation-free imaging modality, making it ideal for computer-assisted interventions

  • We propose to learn the rendering of ultrasound images given only a cross-sectional model slice / segmentation and an integral attenuation map, the latter of which can be derived from the former on-the-fly and helps distill global acoustic energy information locally (see the sketch after this list)

  • We propose a generative adversarial network (GAN) for generating ultrasound images from segmentation and integral attenuation maps
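
The integral attenuation maps referenced in the second highlight can be illustrated with the short sketch below: each tissue label is assigned a per-pixel attenuation coefficient, which is then accumulated along the beam/depth axis so that every pixel encodes how much acoustic energy was lost before reaching it. The coefficient values, units, and axis convention are hypothetical, chosen only to show the idea.

```python
# Illustrative derivation of an integral attenuation map from a label slice:
# look up a (made-up) attenuation coefficient per tissue label and integrate it
# along the depth/beam direction with a cumulative sum.
import numpy as np

def integral_attenuation_map(label_slice, coeff_per_label, depth_axis=0):
    """label_slice: integer tissue-label image; returns cumulative attenuation."""
    local_att = np.take(coeff_per_label, label_slice)   # per-pixel attenuation
    return np.cumsum(local_att, axis=depth_axis)        # integrate along the beam

# Example with 3 tissue classes and hypothetical coefficients:
labels = np.random.randint(0, 3, size=(256, 256))
att_map = integral_attenuation_map(labels, np.array([0.1, 0.5, 1.5]))
```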

Introduction

Ultrasound (US) imaging is a real-time, non-invasive and radiation-free imaging modality, making it ideal for computer-assisted interventions. In contrast to interpolative US simulation approaches, advanced generative methods [3,15,20,24] allow generating a variety of images with plausible view-dependent artifacts, e.g. for rare pathological cases. These methods model ultrasonic wave propagation with ray tracing on anatomical models. Given a 3D anatomical model, ray-based techniques using the state-of-the-art Monte Carlo ray-tracing framework manage to simulate US images with surprisingly high realism at interactive frame rates, as shown in [15,24] for fetal ultrasound imaging. We propose to mimic this simulation with a deep learned model, such that the interactive image simulation only requires a quick inference of such a model.
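
To make the interactive stage concrete, the following minimal sketch shows a possible run-time loop: extract a cross-sectional slice of the labelled 3D model, derive its attenuation map on-the-fly, and run a single forward pass of the trained generator. It reuses the ToyGenerator and integral_attenuation_map sketches from the earlier blocks, and the simple axial-slice extraction is an assumption; a real simulator would slice along an arbitrary probe plane.

```python
# Hypothetical interactive rendering loop: one network inference per frame.
import numpy as np
import torch

@torch.no_grad()
def render_frame(generator, label_volume, slice_index, coeffs):
    labels = label_volume[slice_index]                       # cross-sectional model slice
    atten = integral_attenuation_map(labels, coeffs)         # derived on-the-fly
    maps = np.stack([labels.astype(np.float32), atten])      # (2, H, W) network input
    maps_t = torch.from_numpy(maps).unsqueeze(0).float()     # (1, 2, H, W)
    return generator(maps_t)[0, 0]                           # simulated US frame

# volume = np.random.randint(0, 3, size=(64, 256, 256))
# frame = render_frame(ToyGenerator(), volume, 32, np.array([0.1, 0.5, 1.5]))
```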
