Abstract

This paper proposes a deep learning approach for high-frame-rate synthetic transmit aperture ultrasound imaging. The complete dataset of synthetic transmit aperture imaging improves image quality in terms of lateral resolution and contrast, at the expense of a low frame rate. To achieve high-frame-rate synthetic transmit aperture imaging, we propose a self-supervised network, ApodNet, that completes two tasks. (i) The encoder of ApodNet guides the high-frame-rate plane wave transmissions to acquire channel data with a set of optimized binary apodization coefficients. (ii) The decoder of ApodNet recovers the complete dataset from the acquired channel data to enable two-way dynamic focusing. The image is finally reconstructed from the recovered dataset with a conventional beamforming approach. We train the network with data from a standard tissue-mimicking phantom and validate it with data from simulations and in-vivo experiments. We evaluate different loss functions to determine the optimal ApodNet configuration. The results of both the simulations and the in-vivo experiments demonstrate that, at a four-times higher frame rate, the proposed ApodNet configuration achieves higher image contrast than other high-frame-rate methods. Furthermore, ApodNet requires much less computation time for dataset recovery than the compared methods.
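To make the encoder-decoder idea concrete, the sketch below illustrates one plausible reading of the abstract: an encoder whose learnable weights are binarized to ±1 apodization coefficients that compress the complete dataset into fewer apodized transmissions, and a decoder that recovers the complete dataset, trained self-supervisedly with the input as its own target. The layer sizes, the straight-through binarization, and all variable names are our assumptions for illustration, not the authors' exact architecture.

```python
# Minimal PyTorch sketch of the ApodNet idea described in the abstract.
# Shapes, layer choices, and the straight-through binarization are
# illustrative assumptions, not the published architecture.
import torch
import torch.nn as nn

N_TX = 128   # assumed: transmit elements in the complete dataset
N_PW = 32    # assumed: apodized plane-wave transmissions (4x fewer)

class BinarySTE(torch.autograd.Function):
    """Binarize to +/-1 in the forward pass; pass gradients straight through."""
    @staticmethod
    def forward(ctx, w):
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output

class ApodNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: real-valued weights binarized to +/-1 apodization
        # coefficients that mix N_TX single-element transmissions into
        # N_PW apodized plane-wave transmissions.
        self.apod_weights = nn.Parameter(torch.randn(N_PW, N_TX))
        # Decoder: recovers the complete dataset from the compressed data.
        self.decoder = nn.Sequential(
            nn.Linear(N_PW, 256),
            nn.ReLU(),
            nn.Linear(256, N_TX),
        )

    def forward(self, complete_dataset):
        # complete_dataset: (batch, N_TX) channel samples across transmits.
        apod = BinarySTE.apply(self.apod_weights)  # (N_PW, N_TX), +/-1
        encoded = complete_dataset @ apod.t()      # simulated acquisitions
        return self.decoder(encoded)               # recovered (batch, N_TX)

# Self-supervised training step: the recovery target is the input itself.
model = ApodNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, N_TX)  # stand-in for phantom channel data
loss = nn.functional.mse_loss(model(x), x)
loss.backward()
opt.step()
```

The straight-through estimator is one common way to train binary weights by gradient descent; the abstract does not specify how the binary constraint is handled, so this choice is a guess.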
