Abstract

Segmentation of needles in ultrasound images remains a challenging problem. In this paper, we introduce a machine learning-based method for needle segmentation in 2D beam-steered ultrasound images. We used a statistical boosting approach to train a pixel-wise classifier for needle segmentation. The Radon transform was then used to find the needle position and orientation from the segmented image. We validated our method with data from ex vivo specimens and clinical nerve block procedures, and compared the results to those obtained using previously reported needle segmentation methods. Results show improved localization success and accuracy using the proposed method. For the ex vivo datasets, assuming that the needle orientation was known a priori, the needle was successfully localized in 86.2% of the images, with a mean targeting error of 0.48 mm. The robustness of the proposed method to a lack of a priori knowledge of needle orientation was also demonstrated. For the clinical datasets, assuming that the needle orientation was closely aligned with the beam steering angle selected by the physician, the needle was successfully localized in 99.8% of the images, with a mean targeting error of 0.19 mm. These results indicate that the learning-based segmentation method may allow for increased targeting accuracy and enhanced visualization during ultrasound-guided needle procedures.
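
The sketch below illustrates the Radon-transform line-localization step described in the abstract, assuming a binary needle-segmentation mask from the pixel-wise classifier is already available (the boosting-based classifier itself is not reproduced here). Function and parameter names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: locate a needle as the dominant line in a binary
# segmentation mask using the Radon transform (scikit-image).
import numpy as np
from skimage.transform import radon

def localize_needle(mask, angle_range=None):
    """Estimate needle orientation and offset from a binary segmentation mask.

    mask        : 2D array of 0/1 pixel-wise classifier output.
    angle_range : optional (min_deg, max_deg) tuple restricting the search,
                  e.g. to angles near a known beam-steering angle.
    """
    if angle_range is None:
        thetas = np.arange(0.0, 180.0, 0.5)          # full orientation search
    else:
        thetas = np.arange(angle_range[0], angle_range[1], 0.5)

    # The Radon transform integrates the mask along lines; a needle appears
    # as a peak in (projection offset, angle) space.
    sinogram = radon(mask.astype(float), theta=thetas, circle=False)

    offset_idx, theta_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
    needle_angle = thetas[theta_idx]                        # orientation, degrees
    needle_offset = offset_idx - sinogram.shape[0] // 2     # signed offset from image center
    return needle_angle, needle_offset
```

When the beam-steering angle chosen by the physician is taken as an approximate prior on needle orientation, restricting `angle_range` to a narrow window around it mirrors the "closely aligned" assumption used for the clinical datasets.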
