Abstract
Super-resolution ultrasound (SR-US) imaging improves ultrasound (US) resolution by up to ten-fold. However, translation to the clinical setting has been hindered by long computation times. Conventional algorithms used to detect and localize a microbubble (MB) contrast agent during SR-US image construction suffer from high complexity and computational intensity. Deep learning methods have been applied to these two key SR-US image processing steps, enabling frame processing on the millisecond time scale. The goal of this study was to develop a single deep network that both detects and localizes MBs for use during SR-US imaging. We propose SRUSnet, a fully convolutional network architecture based on MobileNetV3 with enhancements for (2+1)D input data, fast convergence, and high-resolution output. The architecture features both a classification head and a regression head to provide a flexible level of increased resolution in the output SR-US image. Training was performed with in silico data synthesized as sequences of images of MBs flowing at different rates against a tissue background. In vitro imaging of a flow phantom perfused with MBs was performed using a programmable US scanner (Vantage 256, Verasonics Inc.) equipped with an L11-4v linear array transducer. On in silico data, the network exceeded 99% detection accuracy and averaged a localization error of less than one pixel (i.e., λ/8). Processing time for a 128 × 128-pixel image averaged 25.9 ms on an Nvidia GeForce 2080 Ti GPU. Overall, these preliminary results represent a promising advance toward a real-time implementation of SR-US imaging.
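The dual-head design described above can be illustrated with a minimal sketch. The PyTorch code below is a hypothetical illustration, not the authors' SRUSnet: the class name DualHeadMBNet, the factorized (2+1)D stem, the shallow stand-in backbone, and all layer widths are assumptions chosen for brevity; the actual network builds on MobileNetV3 features with enhancements the abstract does not detail.

```python
# Hypothetical sketch of a dual-head MB detection/localization network in the
# spirit of the abstract -- NOT the authors' SRUSnet. All layer sizes and the
# stand-in backbone are illustrative assumptions.
import torch
import torch.nn as nn


class DualHeadMBNet(nn.Module):
    """Toy (2+1)D fully convolutional net with a classification head
    (per-pixel MB presence) and a regression head (sub-pixel offsets)."""

    def __init__(self, width: int = 32):
        super().__init__()
        # (2+1)D stem: factorize a 3D conv into a 2D spatial conv followed
        # by a 1D temporal conv over the stacked US frames.
        self.stem = nn.Sequential(
            nn.Conv3d(1, width, kernel_size=(1, 3, 3), padding=(0, 1, 1)),
            nn.ReLU(inplace=True),
            nn.Conv3d(width, width, kernel_size=(3, 1, 1), padding=(1, 0, 0)),
            nn.ReLU(inplace=True),
        )
        # Shallow 2D backbone standing in for the MobileNetV3-style
        # feature extractor named in the abstract.
        self.backbone = nn.Sequential(
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Classification head: per-pixel MB presence logits.
        self.cls_head = nn.Conv2d(width, 1, kernel_size=1)
        # Regression head: per-pixel (dx, dz) sub-pixel offsets.
        self.reg_head = nn.Conv2d(width, 2, kernel_size=1)

    def forward(self, x: torch.Tensor):
        # x: (batch, 1, frames, H, W) stack of consecutive US frames.
        feat = self.stem(x).mean(dim=2)  # collapse time -> (B, C, H, W)
        feat = self.backbone(feat)
        logits = self.cls_head(feat)
        # Bound offsets to [-0.5, 0.5] pixel so each detection is refined
        # within its own grid cell.
        offsets = torch.tanh(self.reg_head(feat)) * 0.5
        return logits, offsets


if __name__ == "__main__":
    net = DualHeadMBNet()
    frames = torch.randn(1, 1, 8, 128, 128)  # 8-frame, 128 x 128 input
    logits, offsets = net(frames)
    print(logits.shape, offsets.shape)  # (1, 1, 128, 128), (1, 2, 128, 128)
```

In this sketch, the classification head marks which pixels contain an MB while the regression head refines each detection to sub-pixel coordinates; pairing a coarse detection grid with bounded offset regression is one common way to obtain a flexible level of increased resolution, consistent with the sub-pixel (λ/8) localization accuracy reported above.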