Abstract

Portable ultrasound devices with fewer transducer channels are likely to become widespread. Accordingly, this work explores how to bring the quality of ultrasonic B-mode images formed from 32 channels close to that of images formed from 128 channels by using Deep Neural Networks (DNNs). Building on the concepts of the auto-encoder, the context encoder, semi-supervised latent-class learning, and the context-conditional Generative Adversarial Network (GAN), we develop the Context Encoder Reconstruction GAN (CER-GAN), which adopts a contour-learning mechanism to establish the relationship between 32-channel and 128-channel Point Spread Functions (PSFs) via mini-batch training. In our experiments, 20 pairs of 32-channel and 128-channel ultrasound images were used, with leave-one-person-out cross-validation. CER-GAN automatically encodes 32-channel PSF blocks and generates pseudo 128-channel PSF blocks, which are matched against real 128-channel blocks so that the adversarial mechanism can discriminate and refine them. Finally, these pseudo blocks are used to synthesize an improved image. The experimental results show that the proposed CER-GAN outperforms conventional DNNs. In Full Width at Half Maximum (FWHM), the minimum distinguishable pixel lengths of the 32-channel, CER-GAN, and 128-channel images are 13.34, 11.15, and 8.62, respectively. In Contrast-to-Noise Ratio (CNR)/PICMUS CNR, the scores of the 32-channel, CER-GAN, and 128-channel images are 0.939/2.381, 1.508/6.502, and 1.422/6.002, respectively. CER-GAN's favorable scores arise partly because it suppresses background speckle noise to some extent. Therefore, CER-GAN can effectively improve the quality of 32-channel ultrasound imaging, bringing it close to that of 128-channel imaging.
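The two evaluation metrics named above can be illustrated with a minimal sketch. The snippet below is not the paper's evaluation code: it computes FWHM from a 1-D intensity profile by interpolating the half-maximum crossings, and a common CNR definition (|mean difference| over background standard deviation); the PICMUS benchmark uses its own log-compressed variant, which is not reproduced here. The synthetic Gaussian profile is an assumption for demonstration only.

```python
import numpy as np

def fwhm(profile):
    """Full Width at Half Maximum of a 1-D intensity profile, in pixels.
    Linearly interpolates the two half-maximum crossings for sub-pixel accuracy."""
    profile = np.asarray(profile, dtype=float)
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    left, right = above[0], above[-1]

    def crossing(i, j):
        # interpolate between samples i (below half) and j (at/above half)
        return i + (half - profile[i]) / (profile[j] - profile[i]) * (j - i)

    l = crossing(left - 1, left) if left > 0 else float(left)
    r = crossing(right + 1, right) if right < len(profile) - 1 else float(right)
    return r - l

def cnr(target, background):
    """Contrast-to-Noise Ratio: |mean(target) - mean(background)| / std(background).
    One common definition; not the PICMUS log-compressed variant."""
    return abs(np.mean(target) - np.mean(background)) / np.std(background)

# Synthetic example: Gaussian PSF profile with sigma = 4 px.
x = np.arange(64)
sigma = 4.0
profile = np.exp(-0.5 * ((x - 32.0) / sigma) ** 2)
print(fwhm(profile))  # close to the analytic value 2*sqrt(2*ln 2)*sigma ~ 9.42 px
```

A narrower FWHM means two nearby scatterers remain distinguishable at a smaller pixel separation, which is why the CER-GAN value (11.15) lying between the 32-channel (13.34) and 128-channel (8.62) values indicates improved lateral resolution.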
