Abstract

Ultrasound (US) imaging is widely used for image-guided needle/seed placement in prostate brachytherapy. For 3D US images with large slice thickness, high-frequency information in the slice direction is missing and cannot be recovered through interpolation. Because high-resolution image generation is an ill-posed problem, most methods rely on external training atlases to learn the transformation from low-resolution to high-resolution images; anatomical variations among individuals introduce unavoidable uncertainty into this atlas-based learning mechanism. In this study, we propose a self-supervised learning method that uses no external high-resolution atlas images, yet can recover high-resolution images from 3D images with large slice thickness.

In 3D US imaging, in-plane spatial resolution is generally much higher than through-plane resolution. Considering the natural isotropy of US imaging of biological tissue, for a given patient the mapping from low-resolution to high-resolution images can be established by learning the mapping from down-sampled in-plane images (low resolution) to the original in-plane US images (high resolution), which enables the generation of high-resolution through-plane images. Two independent cycle-consistent generative adversarial networks (CycleGANs) are trained with paired original in-plane US images and two sets of in-plane down-sampled US images. Finally, high-resolution 3D US images are reconstructed by combining the 2D images generated by feeding through-plane images into the two CycleGAN models. A validation study was conducted to quantitatively assess the proposed method using 3D breast US images with high in-plane and through-plane resolution from 50 breast cancer patients. The feasibility of the proposed method for obtaining high-resolution US images for prostate brachytherapy was tested using 3D US images with 2-mm slice thickness from 45 prostate cancer patients.

In the breast US validation test with a three-fold spatial resolution enhancement, the proposed method achieved a mean absolute error of 0.90 (range 0.57-1.35), a peak signal-to-noise ratio of 37.88 dB (range 35.84-40.48 dB), and a visual information fidelity of 0.69 (range 0.68-0.72), significantly outperforming bicubic interpolation. For the prostate cases, the proposed method outperformed bicubic interpolation at spatial resolution enhancement factors of 5 and 10.

A novel deep learning-based algorithm for reconstructing high-resolution 3D US images from sparsely acquired 2D images was developed and validated. A significant improvement in through-plane resolution was achieved using only the acquired 2D images, without any external atlas images. This self-supervision capability could accelerate high-resolution US imaging for prostate brachytherapy.
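To make the self-supervised training scheme concrete, the sketch below builds paired low-/high-resolution in-plane slices from a single 3D US volume by down-sampling each in-plane slice along one direction, mimicking the coarse through-plane sampling. This is a minimal illustration, not the paper's implementation: the function and variable names are hypothetical, and the linear down-sampling followed by bicubic up-sampling back to the original grid is an assumed degradation model.

```python
import numpy as np
from scipy.ndimage import zoom

def make_training_pairs(volume, factor, down_axis):
    """Build (low-res, high-res) in-plane slice pairs from one 3D US volume.

    volume    : 3D array (slice, row, col) with high in-plane resolution.
    factor    : resolution-enhancement factor (e.g. 3, 5, or 10 as in the study).
    down_axis : in-plane axis (0 = row, 1 = col) to down-sample so the
                low-res input mimics the sparse through-plane sampling.
    """
    pairs = []
    for hr in volume:  # each original in-plane slice is a high-res target
        scale = [1.0, 1.0]
        scale[down_axis] = 1.0 / factor
        lr_small = zoom(hr, scale, order=1)        # simulate coarse sampling
        back = [hr.shape[0] / lr_small.shape[0],
                hr.shape[1] / lr_small.shape[1]]
        lr = zoom(lr_small, back, order=3)         # bicubic back to the HR grid
        pairs.append((lr.astype(np.float32), hr.astype(np.float32)))
    return pairs

# One pair set per in-plane direction; each set would train one CycleGAN.
vol = np.random.rand(40, 256, 256)                 # stand-in for a 3D US volume
pairs_rows = make_training_pairs(vol, factor=3, down_axis=0)
pairs_cols = make_training_pairs(vol, factor=3, down_axis=1)
```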

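The final reconstruction step, combining the two generators' outputs on through-plane slices, might look like the following sketch. Everything here is an assumption for illustration: `gen_a` and `gen_b` stand in for the two trained CycleGAN generators, `upsample` is a placeholder interpolation onto the target grid, and the voxel-wise average is one plausible fusion rule, not necessarily the one used in the study.

```python
import numpy as np
from scipy.ndimage import zoom

def reconstruct_hr_volume(lr_volume, gen_a, gen_b, upsample):
    """Assemble a high-res volume from restored through-plane slices.

    lr_volume : 3D array (slice, row, col); axis 0 is the coarse direction.
    gen_a/b   : hypothetical callables (2D array -> 2D array) standing in
                for the two trained generators, one per through-plane
                orientation.
    upsample  : interpolates a through-plane slice onto the target HR grid
                before it enters a generator.
    """
    rows, cols = lr_volume.shape[1], lr_volume.shape[2]
    vol_a = np.stack([gen_a(upsample(lr_volume[:, r, :]))
                      for r in range(rows)], axis=1)
    vol_b = np.stack([gen_b(upsample(lr_volume[:, :, c]))
                      for c in range(cols)], axis=2)
    return 0.5 * (vol_a + vol_b)  # assumed fusion: voxel-wise average

# Usage with stand-in generators (identity) and 3x bicubic up-sampling.
factor = 3
up = lambda s: zoom(s, (factor, 1.0), order=3)  # refine only the coarse axis
hr = reconstruct_hr_volume(np.random.rand(40, 64, 64),
                           lambda s: s, lambda s: s, up)
print(hr.shape)  # (120, 64, 64)
```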