Abstract

For clinical diagnosis and treatment, image super-resolution (SR) technology can improve ultrasonic imaging quality and thereby enhance the accuracy of disease diagnosis. However, owing to differences in sensing devices and transmission media, the resolution degradation process of ultrasound imaging in real scenes is uncontrollable, and the blur kernel is usually unknown. As a result, current end-to-end SR networks perform poorly when applied to ultrasonic images. Aiming to achieve effective SR in real ultrasound medical scenes, we propose a blind deep SR method based on progressive residual learning and memory upgrading. Specifically, we estimate an accurate blur kernel from the spatial attention map block of the low-resolution (LR) ultrasound image through a multi-label classification network, and then construct three modules for ultrasound image blind SR: an up-sampling (US) module, a residual learning (RL) module, and a memory upgrading (MU) module. The US module upscales the input information, and the up-sampled residual result is used for SR reconstruction. The RL module approximates the original LR image and continuously generates an updated residual, which is fed to the next US module. The MU module stores all progressively learned residuals, providing increased interaction between the US and RL modules and augmenting the recovery of details. Extensive experiments and evaluations on the benchmark CCA-US and US-CASE datasets demonstrate that the proposed approach achieves better performance than state-of-the-art methods.
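The progressive loop described above can be pictured with a minimal PyTorch sketch. Everything below is an illustrative assumption rather than the authors' implementation: the module classes (`UpSampler`, `ResidualLearner`, `MemoryBank`), the layer choices, and the driver function `progressive_sr` are hypothetical stand-ins that only mirror the alternation of up-sampling, residual learning, and memory upgrading; blur-kernel estimation is omitted.

```python
# Hypothetical sketch of the US / RL / MU alternation, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class UpSampler(nn.Module):
    """US module (assumed form): upscales the current estimate."""
    def __init__(self, channels=1, scale=2):
        super().__init__()
        self.scale = scale
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        x = F.interpolate(x, scale_factor=self.scale, mode="bicubic",
                          align_corners=False)
        return self.conv(x)


class ResidualLearner(nn.Module):
    """RL module (assumed form): maps the up-sampled estimate back to the
    LR domain and produces an updated residual against the observed LR."""
    def __init__(self, channels=1):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, sr, lr):
        lr_est = F.interpolate(sr, size=lr.shape[-2:], mode="bicubic",
                               align_corners=False)
        return self.conv(lr - lr_est)   # residual fed to the next US stage


class MemoryBank(nn.Module):
    """MU module (assumed form): keeps every progressively learned
    residual and fuses them into the final SR reconstruction."""
    def __init__(self, channels=1, steps=3):
        super().__init__()
        self.fuse = nn.Conv2d(channels * steps, channels, 1)

    def forward(self, residuals):
        return self.fuse(torch.cat(residuals, dim=1))


def progressive_sr(lr, us, rl, mu, steps=3):
    """Alternate US and RL, store each up-sampled result, then fuse."""
    residual = lr
    stored = []
    for _ in range(steps):
        up = us(residual)       # US: upscale the current residual
        stored.append(up)       # MU: remember this stage's output
        residual = rl(up, lr)   # RL: new residual in the LR domain
    return mu(stored)


if __name__ == "__main__":
    lr_image = torch.rand(1, 1, 64, 64)   # toy single-channel LR input
    sr_image = progressive_sr(lr_image, UpSampler(), ResidualLearner(),
                              MemoryBank(steps=3))
    print(sr_image.shape)                 # torch.Size([1, 1, 128, 128])
```

In this toy setup the memory bank simply concatenates the stored stages and fuses them with a 1x1 convolution; the actual interaction between the US, RL, and MU modules in the paper may be considerably richer.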
