Abstract

The success of diagnostic and interventional medical procedures is deeply rooted in the ability of modern imaging systems to deliver clear and interpretable information. After raw sensor data is received by ultrasound and photoacoustic imaging systems in particular, beamforming is often the first line of software defense against poor-quality images. Yet, even with today’s state-of-the-art beamformers, ultrasound and photoacoustic images remain challenged by channel noise, reflection artifacts, and acoustic clutter, which combine to complicate segmentation tasks and confuse overall image interpretation. These challenges persist because traditional beamforming and image formation steps rest on assumptions that break down in the presence of significant inter- and intrapatient variations.

In this talk, I will introduce the PULSE Lab’s novel alternative to beamforming, which improves ultrasound and photoacoustic image quality by learning from the physics of sound wave propagation. We replace traditional beamforming steps with deep neural networks that display only the segmented details, structures, and physical properties of interest. I will then describe a new resource for the entire community to standardize and accelerate research at the intersection of ultrasound beamforming and deep learning: the first internationally crowd-sourced database of raw ultrasound channel data, with integrated beamforming and evaluation code (see https://cubdl.jhu.edu/ and https://pulselab.jhu.edu/ for more details).
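For readers unfamiliar with the traditional pipeline the talk contrasts against, the sketch below illustrates conventional delay-and-sum (DAS) beamforming of raw channel data for a linear array. This is a generic textbook illustration, not the PULSE Lab's method or the CUBDL codebase; the function name, array geometry, and parameters are all illustrative assumptions.

```python
import numpy as np

def das_beamform(channel_data, elem_x, fs, c, pixels_x, pixels_z):
    """Minimal delay-and-sum beamformer sketch (illustrative, not CUBDL code).

    channel_data : (n_elements, n_samples) raw RF channel data
    elem_x       : (n_elements,) lateral positions of array elements [m]
    fs           : sampling rate [Hz];  c : speed of sound [m/s]
    pixels_x, pixels_z : lateral and axial image grid coordinates [m]
    Returns a (len(pixels_z), len(pixels_x)) beamformed image.
    """
    n_elem, n_samp = channel_data.shape
    image = np.zeros((len(pixels_z), len(pixels_x)))
    for iz, z in enumerate(pixels_z):
        for ix, x in enumerate(pixels_x):
            # Round-trip time of flight: transmit depth z plus the
            # receive path from the pixel back to each element.
            rx_dist = np.sqrt((elem_x - x) ** 2 + z ** 2)
            delays = (z + rx_dist) / c                # seconds
            idx = np.round(delays * fs).astype(int)   # sample indices
            valid = idx < n_samp
            # Coherently sum the delayed samples across the aperture.
            image[iz, ix] = channel_data[np.arange(n_elem)[valid],
                                         idx[valid]].sum()
    return image
```

The flawed assumptions mentioned above enter through this model: DAS presumes a single, known speed of sound `c` and a homogeneous medium, which is exactly where patient-to-patient variation degrades image quality.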
