Abstract

Image post-processing is used in clinical-grade ultrasound scanners to improve image quality (e.g., to reduce speckle noise and enhance contrast). These post-processing techniques vary across manufacturers and are generally kept proprietary, which presents a challenge for researchers looking to match current clinical-grade workflows. We introduce a deep learning framework, MimickNet, that transforms conventional delay-and-sum (DAS) beamformed images into approximations of the Dynamic Tissue Contrast Enhanced (DTCE™) post-processed images found on Siemens clinical-grade scanners. Training MimickNet requires only post-processed image samples from a scanner of interest, without explicit pairing to DAS data. In principle, this flexibility allows MimickNet to approximate any manufacturer's post-processing without access to the pre-processed data. MimickNet post-processing achieves a structural similarity index measure (SSIM) of 0.940 ± 0.018 relative to clinical-grade post-processing on a 400 cine-loop test set, 0.937 ± 0.025 SSIM on a prospectively acquired dataset, and 0.928 ± 0.003 SSIM on an out-of-distribution cardiac cine-loop after gain adjustment. To our knowledge, this is the first work to establish deep learning models that closely approximate the ultrasound post-processing found in current medical practice. MimickNet provides a clinical post-processing baseline against which future work in ultrasound image formation can be compared, and it can serve as a pretrained model for fine-tuning toward other post-processing techniques. To this end, we have made the MimickNet software, phantom data, and permitted in vivo data open-source at https://github.com/ouwen/MimickNet.
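
The figures of merit above are mean SSIM values between MimickNet output and the scanner's clinical-grade post-processed frames. Below is a minimal sketch of how such a score could be computed with TensorFlow's built-in SSIM; the tensor shapes, intensity scaling, and variable names are illustrative assumptions, not details taken from the MimickNet repository.

# A minimal sketch (not from the MimickNet repository) of the SSIM
# evaluation described above, assuming grayscale frames scaled to [0, 1]
# with shape [batch, height, width, 1].
import tensorflow as tf

def mean_ssim(mimicked, clinical):
    # tf.image.ssim returns one SSIM value per image in the batch;
    # max_val=1.0 matches the assumed [0, 1] intensity scaling.
    return tf.reduce_mean(tf.image.ssim(mimicked, clinical, max_val=1.0))

# Hypothetical usage with random stand-in frames:
mimicked = tf.random.uniform([4, 512, 512, 1])
clinical = tf.random.uniform([4, 512, 512, 1])
print(float(mean_ssim(mimicked, clinical)))

In a real evaluation, mimicked and clinical would instead be loaded as matched frames from the test cine-loops, and the per-frame SSIM values would be aggregated into the mean ± standard deviation figures reported above.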
