Abstract

Deep fully connected networks are often considered “universal approximators” that are capable of learning any function. In this article, we utilize this particular property of deep neural networks (DNNs) to estimate normalized cross correlation as a function of spatial lag (i.e., spatial coherence functions) for applications in coherence-based beamforming, specifically short-lag spatial coherence (SLSC) beamforming. We detail the composition, assess the performance, and evaluate the computational efficiency of CohereNet, our custom fully connected DNN, which was trained to estimate the spatial coherence functions of in vivo breast data from 18 unique patients. CohereNet performance was evaluated on in vivo breast data from three additional patients who were not included during training, as well as data from in vivo liver and tissue-mimicking phantoms scanned with a variety of ultrasound transducer array geometries and two different ultrasound systems. The mean correlation between the SLSC images computed on a central processing unit (CPU) and the corresponding DNN SLSC images created with CohereNet was 0.93 across the entire test set. The DNN SLSC approach was up to 3.4 times faster than the CPU SLSC approach, with similar computational speed, less variability in computational times, and improved image quality compared with a graphics processing unit (GPU)-based SLSC approach. These results are promising for the application of deep learning to estimate correlation functions derived from ultrasound data in multiple areas of ultrasound imaging and beamforming (e.g., speckle tracking, elastography, and blood flow estimation), possibly replacing GPU-based approaches in low-power, remote, and synchronization-dependent applications.
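To make concrete the quantity CohereNet is trained to estimate, the following sketch shows a conventional (CPU-style) computation of the normalized spatial coherence function and the resulting SLSC pixel value: normalized cross correlation between receive elements as a function of element lag, summed over the short-lag region. The function names, array shapes, and axial-kernel handling are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def coherence_function(channel_data, max_lag):
    """Normalized spatial coherence R(m) for one pixel.

    channel_data: (n_elements, n_samples) array of focused (time-delayed)
    RF data within an axial correlation kernel.
    Returns R(1)..R(max_lag): mean normalized cross correlation between
    all element pairs separated by lag m.
    """
    n_elem, _ = channel_data.shape
    energies = np.sum(channel_data ** 2, axis=1)  # per-channel signal energy
    R = np.zeros(max_lag)
    for m in range(1, max_lag + 1):
        # Zero-lag cross correlation between channels i and i+m
        num = np.sum(channel_data[:n_elem - m] * channel_data[m:], axis=1)
        den = np.sqrt(energies[:n_elem - m] * energies[m:])
        valid = den > 0
        R[m - 1] = np.mean(num[valid] / den[valid]) if np.any(valid) else 0.0
    return R

def slsc_pixel(channel_data, short_lag):
    """SLSC pixel value: coherence summed over the short-lag region."""
    return float(np.sum(coherence_function(channel_data, short_lag)))
```

For fully coherent (identical) channel signals, R(m) = 1 at every lag, so the SLSC pixel value equals the short-lag cutoff; decorrelated noise drives R(m), and hence the pixel value, toward zero.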

Highlights

  • Deep learning has achieved state-of-the-art performance for many imaging tasks, including object detection, image segmentation, and image formation.

  • Within each triplet, the deep neural network (DNN) short-lag spatial coherence (SLSC) image looks more similar to the central processing unit (CPU) SLSC image than to the graphics processing unit (GPU) SLSC image.

  • While the current study focused on learning the coherence function only, DNN architectures like CohereNet could be extended to learn more complex operations found in other advanced beamforming algorithms, such as R-SLSC [20] or LW-SLSC [38], which may otherwise be challenging or not feasible to implement in parallel on a GPU.


Introduction

Deep learning has achieved state-of-the-art performance for many imaging tasks, including object detection, image segmentation, and image formation. As an alternative to applying physics-based models that assume specific values of the critical speed-of-sound property, recent approaches [1]–[5] use simulated data that incorporate these basic physical principles during training in order to replace the mathematical component of image formation with deep neural networks (DNNs) that learn parameters governing speed-of-sound changes, aberration correction, and other information needed for standard amplitude-based beamforming algorithms (e.g., delay-and-sum (DAS) beamforming). When studying the beamforming process from a robotic tracking perspective, Nair et al. [1] used plane wave images to produce segmentation maps directly from the RF channel data, bypassing the beamforming step altogether. Additional ultrasound-related deep learning approaches were summarized by van Sloun et al. [7].
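For reference, the "mathematical component of image formation" that these approaches target reduces, for standard DAS beamforming, to delaying each receive channel by its geometric focusing delay and summing across the aperture. A minimal per-pixel sketch, assuming hypothetical precomputed integer-sample delays (real systems use fractional-delay interpolation):

```python
import numpy as np

def das_pixel(channel_data, delays):
    """Delay-and-sum for one pixel: take the focused (delayed) sample
    from each receive element, then sum across the aperture.

    channel_data: (n_elements, n_samples) RF data for one receive event.
    delays: per-element focusing delays in integer samples (illustrative).
    """
    aligned = channel_data[np.arange(len(delays)), delays]
    return float(np.sum(aligned))
```

Coherence-based beamformers such as SLSC operate on the same delayed channel data but replace this final sum with a correlation across the aperture.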

