Abstract

Ultrasound localization microscopy (ULM) overcomes the acoustic diffraction limit by localizing microbubbles (MBs), thus enabling the microvasculature to be rendered at sub-wavelength resolution. Nevertheless, obtaining such superior spatial resolution requires tens of seconds of data acquisition to gather the numerous ultrasound (US) frames needed to accumulate sufficient MB events, so ULM imaging still suffers from trade-offs among imaging quality, data acquisition time, and data processing speed. In this paper, we present a new deep learning (DL) framework combining a multi-branch CNN and a recursive Transformer, termed ULM-MbCNRT, that is capable of reconstructing a super-resolution image directly from a temporal mean low-resolution image generated by averaging far fewer raw US frames, i.e., implementing ultrafast ULM imaging. To evaluate the performance of ULM-MbCNRT, a series of numerical simulations and in vivo experiments were carried out. Numerical simulation results indicate that ULM-MbCNRT achieves high-quality ULM imaging with a ~10-fold reduction in data acquisition time and a ~130-fold reduction in computation time compared to a previous DL method (the modified sub-pixel convolutional neural network, ULM-mSPCN). For the in vivo experiments, compared to ULM-mSPCN, ULM-MbCNRT allows a ~37-fold reduction in data acquisition time (~0.8 s) and a ~2134-fold reduction in computation time (~0.87 s) without sacrificing spatial resolution. These results imply that ultrafast ULM imaging holds promise for observing rapid biological activity in vivo, potentially improving the diagnosis and monitoring of clinical conditions.
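The network input described above, a temporal mean low-resolution image, can be sketched as a pixel-wise average over a short stack of beamformed US frames. The snippet below is an illustrative sketch only (not the authors' implementation); the function name and the (T, H, W) stack layout are assumptions for demonstration.

```python
import numpy as np

def temporal_mean_image(frames: np.ndarray) -> np.ndarray:
    """Average a (T, H, W) stack of US frames over time.

    Illustrative only: ULM-MbCNRT is described as taking such a
    temporal mean low-resolution image as its network input.
    """
    if frames.ndim != 3:
        raise ValueError("expected a (T, H, W) stack of frames")
    return frames.mean(axis=0)

# Example: a short stack of 20 simulated 64x64 frames yields one
# 64x64 mean image, replacing the long accumulation of MB events.
stack = np.random.rand(20, 64, 64)
mean_img = temporal_mean_image(stack)
print(mean_img.shape)  # (64, 64)
```

Using a short-stack average as input is what shifts the acquisition-time burden from the imaging protocol onto the learned reconstruction.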
