Abstract

High-frame-rate ultrasound (HiFRUS) has seen recent interest for resolving highly dynamic spatiotemporal events, but transferring the large amount of data generated from the probe remains a hurdle for real-time imaging. One method to lessen the data traffic while preserving the field of view is to reduce the channel count, but this can lead to image quality degradation and the appearance of grating lobe artifacts. In this work, we present a convolutional neural network (CNN)-based framework that uses a sparse array (half the channel count) and infers the remaining channels to mimic a fully populated array. On unfocused transmissions, our results show that on a beamformed image of a multiple point target phantom, grating lobe artifacts are reduced from over 8 dB (sparse array) to less than 1 dB (CNN interpolated) when compared to an image beamformed using the full array. Additionally, reconstructions from CNN-generated data demonstrated improvement (10 dB) in carotid echolucent flow regions in vivo. Our work demonstrates that, using a deep learning approach to channel-domain radiofrequency data interpolation, the required physical channel count on an array and the corresponding data transfer bandwidth can both be reduced without significant image quality trade-off.
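The channel-domain setup described above can be sketched in a few lines: the receive aperture is subsampled to half its channels, and the missing channels are then inferred before beamforming. The sketch below is illustrative only; the function names are hypothetical, and simple linear interpolation across the channel axis stands in for the CNN described in the abstract.

```python
import numpy as np

def subsample_channels(rf, keep_every=2):
    """Simulate a sparse array by keeping every `keep_every`-th receive channel."""
    return rf[::keep_every, :]

def interpolate_channels(sparse_rf, full_count):
    """Infer the missing channels of the full aperture.

    Linear interpolation across the channel axis is a stand-in here
    for the CNN inference step described in the abstract.
    """
    n_sparse, n_samples = sparse_rf.shape
    sparse_idx = np.arange(0, full_count, full_count // n_sparse)
    full_idx = np.arange(full_count)
    out = np.empty((full_count, n_samples))
    for t in range(n_samples):
        out[:, t] = np.interp(full_idx, sparse_idx, sparse_rf[:, t])
    return out

# Hypothetical example: 128-element array, 2048 time samples of RF data
rf_full = np.random.randn(128, 2048)
rf_sparse = subsample_channels(rf_full)        # shape (64, 2048)
rf_est = interpolate_channels(rf_sparse, 128)  # shape (128, 2048)
```

In the paper's framework, `interpolate_channels` would be replaced by the trained CNN, so that only half the physical channels (and half the data bandwidth) are needed while beamforming still operates on a full-aperture data set.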
