Abstract

Ultrasound (US) probes are scanned over the surface of the human body to acquire US images in clinical vascular diagnosis. However, because human surfaces deform and differ from person to person, the scan trajectory on the skin does not correlate fully with the underlying tissue, which poses a challenge for autonomous robotic US imaging in a dynamic, external-vision-free environment. Here, we propose a decoupled control strategy for an autonomous robotic US system (RUS) that performs vascular imaging without external vision. The proposed system is divided into outer-loop posture control and inner-loop orientation control, determined by a reinforcement learning (RL) agent and a deep learning (DL) agent, respectively. In the inner loop, a weakly supervised US vessel segmentation network estimates the probe orientation. In the outer loop, a force-guided RL agent maintains a specific angle between the US probe and the skin during dynamic imaging. Finally, the orientation and posture commands are integrated to complete the imaging process. Evaluation experiments on several volunteers showed that our RUS could autonomously perform vascular imaging on arms of different stiffness, curvature, and size without additional system adjustments. Furthermore, our system achieved reproducible imaging and reconstruction of dynamic targets without relying on vision-based surface information. Our system and control strategy provide a novel framework for applying US robots in complex, external-vision-free environments.
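
To make the decoupled structure concrete, the short Python sketch below separates an inner-loop orientation correction (standing in for the weakly supervised vessel segmentation network) from an outer-loop, force-guided posture correction (standing in for the RL agent), and merges the two at each control cycle. Every function, interface, gain, and sensor model here is a hypothetical placeholder chosen for illustration; it is a minimal sketch of the two-loop idea, not the authors' implementation.

    # Hypothetical sketch of a decoupled two-loop controller: an inner loop that
    # estimates probe orientation from a segmented vessel image, and an outer
    # loop that adjusts probe posture from contact-force feedback. All names,
    # gains, and sensor models are illustrative placeholders.
    import random


    def segment_vessel(image):
        """Stand-in for the weakly supervised vessel segmentation network:
        returns the vessel centroid offset (pixels) from the image centre."""
        return image["vessel_centre_px"] - image["width_px"] / 2.0


    def inner_loop_orientation(image, gain=0.002):
        """Inner loop: map the lateral vessel offset to an in-plane
        orientation correction (radians) that recentres the vessel."""
        return -gain * segment_vessel(image)


    def outer_loop_posture(force_normal, force_target=5.0, gain=0.01):
        """Outer loop: force-guided posture correction (stand-in for the RL
        agent) that holds a set probe-skin contact force."""
        return gain * (force_target - force_normal)


    def acquire_image():
        # Synthetic US frame descriptor; a real system would grab a B-mode frame.
        return {"width_px": 256, "vessel_centre_px": 128 + random.uniform(-20, 20)}


    def read_force_sensor():
        # Synthetic normal contact force in newtons.
        return 5.0 + random.uniform(-1.0, 1.0)


    def control_step(pose):
        """One control cycle: integrate the two decoupled corrections into the
        commanded probe pose (tilt about the scan axis, depth along it)."""
        d_orientation = inner_loop_orientation(acquire_image())
        d_posture = outer_loop_posture(read_force_sensor())
        return {"tilt_rad": pose["tilt_rad"] + d_orientation,
                "depth_mm": pose["depth_mm"] + d_posture}


    if __name__ == "__main__":
        pose = {"tilt_rad": 0.0, "depth_mm": 0.0}
        for step in range(5):
            pose = control_step(pose)
            print(f"step {step}: tilt={pose['tilt_rad']:+.4f} rad, "
                  f"depth={pose['depth_mm']:+.3f} mm")

Running the sketch simply prints the commanded tilt and depth over a few cycles; its only purpose is to show how the orientation and posture corrections are computed independently and then integrated, as the abstract describes.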
