Abstract

Deep learning is rapidly garnering research support and attention as the latest frontier in ultrasound image formation, with much promise to balance both image quality and display speed. Despite this promise, one challenge in identifying optimal solutions is the absence of unified evaluation methods and datasets that are not specific to a single research group. This article introduces the largest known international database of ultrasound channel data and describes the associated evaluation methods that were initially developed for the challenge on ultrasound beamforming with deep learning (CUBDL), which was offered as a component of the 2020 IEEE International Ultrasonics Symposium. We summarize the challenge results and present qualitative and quantitative assessments using both the initially closed CUBDL evaluation test dataset (which was crowd-sourced from multiple groups around the world) and additional in vivo breast ultrasound data contributed after the challenge was completed. As an example quantitative assessment, single plane wave images from the CUBDL Task 1 dataset produced a mean generalized contrast-to-noise ratio (gCNR) of 0.67 and a mean lateral resolution of 0.42 mm when formed with delay-and-sum (DAS) beamforming, compared with a mean gCNR as high as 0.81 and a mean lateral resolution as low as 0.32 mm when formed with networks submitted by the challenge winners. We also describe contributed CUBDL data that may be used for training of future networks. The compiled database includes a total of 576 image acquisition sequences. We additionally introduce a neural-network-based global sound speed estimator implementation that was necessary to fairly evaluate the results obtained with this international database. The CUBDL evaluation methods, evaluation code, network weights from the challenge winners, and all datasets described herein are publicly available (visit https://cubdl.jhu.edu for details).
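
For reference, the gCNR metric quantifies the separability of the pixel-amplitude distributions inside a target region and in the surrounding background: it equals one minus the overlap of the two distributions, so a value of 1 indicates perfectly separable regions and 0 indicates complete overlap. Below is a minimal NumPy sketch of this definition; it is not the official CUBDL evaluation code, and the region arrays and bin count are illustrative assumptions:

```python
import numpy as np

def gcnr(inside, outside, bins=256):
    """Generalized contrast-to-noise ratio: 1 minus the overlap between
    the pixel-value histograms of a target region (`inside`) and a
    background region (`outside`), both given as 1-D arrays."""
    lo = min(inside.min(), outside.min())
    hi = max(inside.max(), outside.max())
    # Estimate each region's probability density on a shared support.
    p_in, _ = np.histogram(inside, bins=bins, range=(lo, hi), density=True)
    p_out, _ = np.histogram(outside, bins=bins, range=(lo, hi), density=True)
    bin_width = (hi - lo) / bins
    # Overlap of the two densities, approximated by a Riemann sum.
    overlap = np.minimum(p_in, p_out).sum() * bin_width
    return 1.0 - overlap
```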

Highlights

  • Our major challenge outcomes include the largest known international database of raw ultrasound channel data; network descriptions and trained network weights from the challenge on ultrasound beamforming with deep learning (CUBDL) winners; a PyTorch DAS beamformer containing multiple components that can be converted to trainable parameters (see the sketch after this list); a data sheet of phantom sound speeds identified as optimal by the CUBDL organizers using the PyTorch DAS beamformer; and evaluation code that integrates these major outcomes

  • This article summarizes the results of the CUBDL challenge, as well as the detailed evaluation process implemented by the CUBDL organizers and associated insights gained from the evaluation process and challenge results

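To make the trainable-DAS idea concrete, here is a minimal PyTorch sketch; it is not the released CUBDL beamformer, and the tensor shapes, nearest-sample delay lookup, and the choice of apodization as the trainable component are simplifying assumptions:

```python
import torch

class TrainableDAS(torch.nn.Module):
    """Toy delay-and-sum beamformer with trainable apodization weights.

    Assumed shapes: `channel_data` is (n_elements, n_samples) RF data and
    `delays` is (n_elements, n_pixels) focusing delays in samples,
    precomputed from the array geometry and an assumed sound speed.
    """

    def __init__(self, n_elements):
        super().__init__()
        # Wrapping the apodization in nn.Parameter exposes it to the
        # optimizer; other beamformer components could be wrapped the
        # same way to become trainable.
        self.apodization = torch.nn.Parameter(torch.ones(n_elements))

    def forward(self, channel_data, delays):
        # Nearest-sample delay lookup (a real beamformer would interpolate).
        idx = delays.round().long().clamp(0, channel_data.shape[1] - 1)
        delayed = torch.gather(channel_data, 1, idx)  # (n_elements, n_pixels)
        # Apodize and sum across the aperture.
        return (self.apodization[:, None] * delayed).sum(dim=0)
```

Gradients from any image-domain loss can then flow back to the apodization weights, and the same nn.Parameter wrapping applies to other components, such as the focusing delays.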

Introduction

Significant research has been dedicated recently to developing methods for deep learning in ultrasound imaging, as summarized in several recent review articles and special issue editorials [1]–[4]. The merger of deep learning and ultrasound image formation is promising because it has the potential to shed light on features that are not considered by the algorithmic approaches underlying the mathematical, model-based component of image formation, with multiple input-output and training options [5]–[7]. These data-driven deep learning approaches have the potential to be more robust than traditional model-based beamforming methods: they do not require parameter changes when switching to different scanners, they are able to generalize across different datasets, and they can infer from advanced beamforming methods in less time than that required to perform the otherwise computationally intensive calculations associated with advanced beamformers [8]–[11]. Open frameworks such as the one introduced here are useful for benchmarking and comparing methods against each other, as demonstrated in the fields of visual recognition [12] and computed tomography [13].
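
As an illustration of the last point, a common setup trains a network to map a cheap single-plane-wave DAS image to a high-quality target (e.g., a multi-angle compounded image), so that fast inference replaces the expensive reconstruction. The toy training step below sketches this generic input-output pairing; the architecture, shapes, and random placeholder data are assumptions, not any specific challenge submission:

```python
import torch
import torch.nn as nn

class BeamformingCNN(nn.Module):
    """Toy image-to-image network standing in for a learned beamformer."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = BeamformingCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

# Placeholder tensors: (batch, channels, height, width) log-compressed images.
single_pw_das = torch.randn(4, 1, 128, 128)      # low-quality network input
compounded_target = torch.randn(4, 1, 128, 128)  # high-quality training target

# One training step: penalize the difference from the high-quality target.
optimizer.zero_grad()
loss = loss_fn(model(single_pw_das), compounded_target)
loss.backward()
optimizer.step()
```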

