Abstract

We are in the middle of a Cambrian explosion. Software beamforming has redefined what can be done with the signal. As a consequence, our field has become flooded with adaptive beamforming (AB) algorithms, methods that, by clever manipulation of channel data, have exceeded our wildest expectations for the maximum achievable contrast and resolution. Or have they? If we define image quality in terms of the contrast ratio (CR) and the full-width half-maximum (FWHM), there is another way of getting unprecedented image quality. Dynamic range stretching, the kind of stretching one gets from squaring the beamformed signal amplitude, will also produce a higher CR and a smaller FWHM. If AB alters the output dynamic range, then the reported CR and FWHM are invalid. No tools are yet available for researchers and reviewers to check this. Here we address this problem. We propose a phantom to measure the dynamic range of AB. The phantom includes a speckle gradient band similar to those used in the calibration of monitors. The phantom allows us to confirm that AB algorithms can alter the dynamic range of the signal and produce incorrect CR and FWHM values. But it also makes it possible to compensate for that alteration and calibrate the algorithms. After calibration, AB still results in higher image quality than delay-and-sum, but the metrics are more reasonable. A debate must be opened on the significance of AB algorithms. The metrics used to assess image quality must be revised. Otherwise, we risk walking in circles, tricked by an illusion.
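To illustrate the central claim, the sketch below (not from the paper; all names, region definitions, and numbers are illustrative assumptions) shows on synthetic data how simply squaring the envelope amplitude inflates a naively computed CR and shrinks the FWHM of a point spread function, assuming CR = 20 log10(mean cyst amplitude / mean background amplitude) and FWHM measured at half of the peak amplitude.

```python
# Illustrative sketch: dynamic range stretching by squaring the envelope
# "improves" CR and FWHM without any real gain in image quality.
# All values and region definitions here are assumptions, not the paper's phantom.
import numpy as np

rng = np.random.default_rng(0)

# --- Contrast ratio (CR) on synthetic speckle ---------------------------------
# Rayleigh-distributed envelopes: a background region and a cyst region whose
# mean amplitude is 20 dB below the background.
background = rng.rayleigh(scale=1.0, size=100_000)
cyst = rng.rayleigh(scale=10 ** (-20 / 20), size=100_000)

def contrast_ratio_db(env_cyst, env_bg):
    """Naive CR in dB from mean envelope amplitudes."""
    return 20 * np.log10(np.mean(env_cyst) / np.mean(env_bg))

cr_amplitude = contrast_ratio_db(cyst, background)            # ~ -20 dB (true contrast)
cr_squared = contrast_ratio_db(cyst**2, background**2)        # ~ -40 dB ("better" on paper)
print(f"CR (envelope):     {cr_amplitude:6.1f} dB")
print(f"CR (envelope^2):   {cr_squared:6.1f} dB")

# --- FWHM of a lateral point spread function ----------------------------------
# Gaussian-shaped PSF; squaring the amplitude narrows the half-maximum width
# by a factor of sqrt(2) without adding any real resolution.
x = np.linspace(-2.0, 2.0, 4001)          # lateral position in mm (illustrative)
psf = np.exp(-x**2 / (2 * 0.25**2))       # sigma = 0.25 mm (illustrative)

def fwhm(profile, axis):
    """Width of the region where the profile exceeds half its maximum."""
    above = axis[profile >= 0.5 * profile.max()]
    return above[-1] - above[0]

print(f"FWHM (amplitude):   {fwhm(psf, x):.3f} mm")
print(f"FWHM (amplitude^2): {fwhm(psf**2, x):.3f} mm")
```

Running this prints a CR that doubles in dB and an FWHM that shrinks by roughly sqrt(2) after squaring, which is exactly the kind of metric inflation the proposed phantom is designed to detect and calibrate away.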
