Abstract

Image processing has significantly extended the practical value of the eye-in-hand camera, enabling and promoting its applications for quantitative measurement. However, fully vision-based pose estimation methods sometimes encounter difficulties in handling cases with deficient features. In this article, we fuse visual information with sparse strain data collected from a single-core fiber inscribed with fiber Bragg gratings (FBGs) to facilitate continuum robot pose estimation. An improved extreme learning machine algorithm with selective training data updates is implemented to establish and refine the FBG-empowered (F-emp) pose estimator online. Integrating F-emp pose estimation improves sensing robustness by reducing the frequency of visual-tracking losses under moving visual obstacles and varying lighting. In particular, this integration resolves pose estimation failures under full occlusion of the tracked features or in complete darkness. Utilizing the fused pose feedback, a hybrid controller incorporating kinematics and data-driven algorithms is proposed to achieve fast convergence with high accuracy. The online-learning error compensator improves target tracking performance, yielding a 52.3%–90.1% error reduction compared with constant-curvature model-based control, without requiring fine model-parameter tuning or prior data acquisition.
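The abstract's online F-emp estimator is built on an extreme learning machine that is refined sequentially as new samples arrive. As a rough illustration of this class of method, the sketch below implements an online-sequential ELM (a fixed random hidden layer with recursive least-squares updates of the output weights) together with a simple error-gated rule for selecting which training samples to admit. The gating criterion, network sizes, and input/output dimensions here are assumptions for illustration, not the paper's exact algorithm.

import numpy as np

class OnlineELM:
    """Online-sequential extreme learning machine with error-gated updates.

    The hidden layer is a fixed random feature map; only the output
    weights (beta) are refined online via recursive least squares.
    """

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.uniform(-1.0, 1.0, (n_hidden, n_in))  # fixed input weights
        self.b = rng.uniform(-1.0, 1.0, n_hidden)          # fixed hidden biases
        self.beta = np.zeros((n_hidden, n_out))            # learned output weights
        self.P = None                                      # RLS inverse-covariance

    def _hidden(self, X):
        # Sigmoid random-feature layer: H = g(X W^T + b)
        return 1.0 / (1.0 + np.exp(-(np.atleast_2d(X) @ self.W.T + self.b)))

    def init_fit(self, X0, T0, reg=1e-3):
        # Batch initialization: beta = (H^T H + reg*I)^{-1} H^T T
        H = self._hidden(X0)
        self.P = np.linalg.inv(H.T @ H + reg * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ T0

    def update(self, x, t, gate=1e-2):
        # Selective update: skip samples the current model already fits well,
        # so the estimator only retrains on informative data.
        H, t = self._hidden(x), np.atleast_2d(t)
        err = t - H @ self.beta
        if np.linalg.norm(err) < gate:
            return False
        # OS-ELM recursive least-squares update of P and beta
        S = np.linalg.inv(np.eye(H.shape[0]) + H @ self.P @ H.T)
        self.P -= self.P @ H.T @ S @ H @ self.P
        self.beta += self.P @ H.T @ err
        return True

    def predict(self, X):
        return self._hidden(X) @ self.beta

In the paper's setting, the inputs would presumably be FBG wavelength-shift (strain) readings and the targets vision-derived poses, with update calls issued only while visual tracking is valid; this pairing is inferred from the abstract rather than specified by it.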
