Abstract

The purpose of this study was to detect two-dimensional, sub-pixel displacement with high spatial resolution using an ultrasonic diagnostic apparatus. Conventional displacement detection methods assume uniform motion within a local neighborhood and therefore cannot achieve both high spatial resolution and sub-pixel displacement detection. A deep-learning network that takes ultrasound images as input and outputs a displacement distribution was developed. The network structure was constructed by modifying FlowNet2, a widely used network for optical flow estimation, and a training dataset was built using ultrasound image simulation. Detection accuracy and spatial resolution were evaluated on simulated ultrasound images, and clinical usefulness was evaluated on ultrasound images of liver tissue exposed to high-intensity focused ultrasound (HIFU). The results were compared with those of the Lucas-Kanade method, a conventional sub-pixel displacement detection method. For displacements within ±40 µm (±0.6 pixels), a pixel size of 67 µm, and a signal noise of 1%, the accuracy was better than 0.5 µm and 0.2 µm, the precision was better than 0.4 µm and 0.3 µm, and the spatial resolution was 1.1 mm and 0.8 mm for the lateral and axial displacements, respectively. These improvements were also observed in the experimental data. Visualization of the lateral displacement distribution, which delineates the edge of the lesion treated with HIFU, was also realized. Two-dimensional, sub-pixel displacement detection with high spatial resolution was thus realized using a deep-learning methodology. The proposed method enables the monitoring of small, localized tissue deformations induced by HIFU exposure.
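
As a rough illustration of the conventional baseline mentioned above, the sketch below estimates a dense sub-pixel displacement field between two ultrasound frames with the Lucas-Kanade method. It is not the authors' implementation; the function name, window size, and use of NumPy/SciPy are assumptions for illustration only. The fixed analysis window makes the neighborhood-uniformity assumption explicit, which is the property that limits spatial resolution in the conventional approach.

```python
# Illustrative sketch (not the paper's code): dense Lucas-Kanade
# sub-pixel displacement between a pre- and post-deformation frame.
import numpy as np
from scipy.signal import convolve2d
from scipy.ndimage import gaussian_filter

def lucas_kanade_displacement(frame0, frame1, window=7):
    """Return (dy, dx): per-pixel displacement in pixels (sub-pixel valued)."""
    f0 = frame0.astype(np.float64)
    f1 = frame1.astype(np.float64)

    # Spatial gradients of the reference frame and the temporal difference.
    gy, gx = np.gradient(f0)
    gt = f1 - f0

    # Sum gradient products over the local window (box filter); this window
    # is the neighborhood assumed to move uniformly.
    k = np.ones((window, window))
    def box(a):
        return convolve2d(a, k, mode="same", boundary="symm")
    sxx, syy, sxy = box(gx * gx), box(gy * gy), box(gx * gy)
    sxt, syt = box(gx * gt), box(gy * gt)

    # Solve the 2x2 normal equations at every pixel:
    #   [sxx sxy] [dx]   [-sxt]
    #   [sxy syy] [dy] = [-syt]
    det = sxx * syy - sxy * sxy
    det = np.where(np.abs(det) < 1e-9, np.inf, det)  # flat regions -> zero motion
    dx = (-syy * sxt + sxy * syt) / det
    dy = ( sxy * sxt - sxx * syt) / det
    return dy, dx

# Usage on purely synthetic data: a smoothed speckle-like frame and a copy
# shifted laterally by 0.3 pixels (approximated by linear interpolation).
rng = np.random.default_rng(0)
img = gaussian_filter(rng.random((64, 64)), sigma=1.5)
shift = 0.3
shifted = (1 - shift) * img + shift * np.roll(img, 1, axis=1)
dy, dx = lucas_kanade_displacement(img, shifted)
print(dx[20:40, 20:40].mean())  # roughly 0.3 in the interior
```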
