Abstract

Purpose
During needle interventions, successful automated detection of the needle immediately after insertion is necessary to allow the physician to identify and correct any misalignment of the needle and the target at early stages, which reduces needle passes and improves health outcomes.

Methods
We present a novel approach to localize partially inserted needles in a 3D ultrasound volume with high precision using convolutional neural networks. We propose two methods based on patch classification and semantic segmentation of the needle from orthogonal 2D cross-sections extracted from the volume. For patch classification, each voxel is classified from locally extracted raw data of three orthogonal planes centered on it. We propose a bootstrap resampling approach to enhance training on our highly imbalanced data. For semantic segmentation, parts of a needle are detected in cross-sections perpendicular to the lateral and elevational axes. We propose to exploit the structural information in the data with a novel thick-slice processing approach for efficient modeling of the context.

Results
Our introduced methods successfully detect 17G and 22G needles with a single trained network, showing a robust, generalized approach. Extensive ex-vivo evaluations on datasets of chicken breast and porcine leg show F1-scores of 80% and 84%, respectively. Furthermore, very short needles are detected with tip localization errors of less than 0.7 mm for lengths of only 5 and 10 mm at 0.2 and 0.36 mm voxel sizes, respectively.

Conclusion
Our method is able to accurately detect even very short needles, ensuring that the needle and its tip are maximally visible in the visualized plane during the entire intervention, thereby eliminating the need for advanced bi-manual coordination of the needle and transducer.
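The two data-preparation ideas in the abstract, tri-planar patch extraction around a candidate voxel and bootstrap resampling of the scarce needle class, can be sketched as below. This is a minimal illustration, not the authors' implementation; the function names, the patch half-width, and the 1:1 class-balancing ratio are assumptions for the example.

```python
import numpy as np

def triplanar_patches(volume, center, half=16):
    """Extract three orthogonal 2D patches centered on a voxel of a
    3D ultrasound volume, one per anatomical plane.
    Assumes `center` = (z, y, x) lies at least `half` voxels from
    every volume border."""
    z, y, x = center
    axial       = volume[z, y - half:y + half, x - half:x + half]
    lateral     = volume[z - half:z + half, y, x - half:x + half]
    elevational = volume[z - half:z + half, y - half:y + half, x]
    # Stack as channels for a patch-classification CNN: (3, 2*half, 2*half)
    return np.stack([axial, lateral, elevational])

def bootstrap_balance(features, labels, seed=None):
    """Oversample the minority (needle) class with replacement so both
    classes contribute equally to a training epoch, one way to realize
    the bootstrap-resampling idea for highly imbalanced voxel data."""
    rng = np.random.default_rng(seed)
    pos = np.flatnonzero(labels == 1)   # needle voxels (rare)
    neg = np.flatnonzero(labels == 0)   # background voxels (abundant)
    resampled_pos = rng.choice(pos, size=len(neg), replace=True)
    idx = np.concatenate([neg, resampled_pos])
    rng.shuffle(idx)
    return features[idx], labels[idx]
```

In use, `triplanar_patches` would be called for each candidate voxel to build the CNN input, and `bootstrap_balance` applied once per epoch so the classifier does not collapse to always predicting background.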

Highlights

  • Ultrasound (US) imaging is broadly used to visualize and guide interventions that involve percutaneously advancing a needle to a target inside the patient’s body

  • A major benefit of dense segmentation using ShareFCN is observed on data from the lower-frequency X6-1 phased-array transducer

  • More computationally efficient networks such as our proposed ShareFCN are preferred over patch classification methods


Introduction

Ultrasound (US) imaging is broadly used to visualize and guide interventions that involve percutaneously advancing a needle to a target inside the patient’s body. Conventional 2D US guidance, however, demands skilled bi-manual coordination to keep the needle and the transducer aligned within a narrow imaging plane. 3D US transducers with an image-based needle-tracking system can overcome these limitations and minimize the manual coordination, while preserving the use of a conventional needle, signal generation and transducers [12]. In such a system, the needle is conveniently placed in the larger 3D US field of view and the processing unit automatically localizes and visualizes the entire needle. The required manual skills are significantly simplified when the entire needle remains visible in the visualized plane after the needle is advanced or the transducer is moved.

