Abstract

A vision-based guidance methodology is proposed for precise positioning of the tool center point (TCP) of heavy-duty, long-reach (HDLR) manipulators. HDLR manipulators are non-rigid structures with many nonlinearities, so conventional rigid-body modeling and control methods face challenges in achieving accurate TCP positioning. To compensate for the resulting positioning errors, we compute the pose error between the TCP and an object of interest (OOI) directly in the camera frame, using motion-based local calibration to establish the extrinsic sensor-to-robot correspondence. The proposed local calibration pipeline is twofold: first, the detected tool is oriented perpendicular to the OOI; second, range adjustment is performed in the local plane by exploiting the visual measurements. Two range-adjustment methods were examined: a line equation-based method and a trajectory matching-based method. Real-time experiments were conducted on an HDLR manipulator with a 5 m reach, with visual fiducial markers serving as the detectable objects for the visual sensor. The experimental results demonstrate that the proposed methodology achieves sub-centimeter positioning accuracy, which is very challenging for HDLR manipulators due to their characteristic uncertainties.
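For illustration, the core step of expressing the TCP-to-OOI pose error directly in the camera frame can be sketched as below. This is a minimal sketch, not the authors' implementation: it assumes both the tool and the OOI carry fiducial markers whose 4x4 homogeneous poses in the camera frame are available from a detector, and the function name and use of NumPy are choices made only for this example.

```python
import numpy as np

def relative_pose_error(T_cam_tool: np.ndarray, T_cam_ooi: np.ndarray):
    """Pose error between the TCP marker and the OOI marker, computed
    directly in the camera frame from two 4x4 homogeneous marker poses
    (as typically returned by a fiducial-marker detector)."""
    # Pose of the OOI expressed in the tool (TCP) frame.
    T_tool_ooi = np.linalg.inv(T_cam_tool) @ T_cam_ooi

    # Translational error: offset of the OOI origin from the TCP.
    t_err = T_tool_ooi[:3, 3]

    # Rotational error: magnitude of the relative rotation (axis-angle angle),
    # useful for checking the perpendicular-orientation step.
    R = T_tool_ooi[:3, :3]
    ang_err = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

    return t_err, ang_err
```

Because the error is formed between two measurements taken by the same camera, it does not depend on an accurate global robot model; only the local sensor-to-robot correspondence (obtained here via motion-based calibration) is needed to convert the error into a corrective motion.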
