Abstract

This paper presents a novel method for a visual-haptic aid teleoperation system (VHATS). The human operator sends commands to the remote manipulator through a haptic device while observing a virtual environment at the local site. The virtual environment also generates an aiding force, which helps the operator feel the real contact force and drive the remote manipulator around obstacles. In our system, high-resolution point cloud data of the remote environment are collected by a Kinect sensor, and three-dimensional (3-D) graphic models are reconstructed at the master site. The environment information is transmitted to the local site to create and update the virtual models after the time delay. The feedback force consists of two parts: a guiding force and a virtual contact force. The guiding force is derived from the Artificial Potential Field Method (APFM) for obstacle avoidance. The virtual contact force is computed from the parameters of the geometric and dynamic models, and an Adaptive Window-based Sliding Least-Squares Method (AW-SLSM) is adopted to update the dynamic-model parameters online. Finally, an experimental platform is established, and a task involving motion, obstacle avoidance, and target picking is carried out and verified in the presence of a round-trip communication delay of 2 s.
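
The abstract does not give the paper's potential functions, so the following is only a minimal sketch of how an APFM-style guiding force can be computed: an attractive term toward the goal plus a repulsive term that activates inside an assumed obstacle influence radius. The gains `K_ATT`, `K_REP`, the radius `RHO_0`, and the single point obstacle are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative sketch of an artificial-potential-field guiding force.
# Gains, influence radius, and the single point obstacle are assumptions;
# the paper's actual potentials are not specified in the abstract.
K_ATT = 1.0   # attractive gain (assumed)
K_REP = 0.5   # repulsive gain (assumed)
RHO_0 = 0.3   # obstacle influence radius in metres (assumed)

def guiding_force(x, x_goal, x_obs):
    """Guiding force at end-effector position x (3-vector)."""
    # Attractive term: pulls the operator's hand toward the goal.
    f_att = -K_ATT * (x - x_goal)

    # Repulsive term: nonzero only inside the obstacle's influence radius.
    d = x - x_obs
    rho = np.linalg.norm(d)
    if 1e-6 < rho < RHO_0:
        f_rep = K_REP * (1.0 / rho - 1.0 / RHO_0) * (d / rho**3)
    else:
        f_rep = np.zeros(3)

    return f_att + f_rep
```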
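Likewise, the adaptive rule that sets the window length in AW-SLSM is not described in the abstract. The sketch below shows only the underlying idea of window-based sliding least squares, assuming a fixed window and a simple spring-damper contact model f = k·x + b·ẋ for the dynamic parameters; both assumptions are placeholders for the paper's method.

```python
import numpy as np
from collections import deque

class SlidingLeastSquares:
    """Sliding-window least-squares estimate of contact stiffness and damping.

    Assumes a spring-damper contact model f = k*x + b*xdot and a fixed
    window length; the paper's adaptive window rule is not reproduced here.
    """

    def __init__(self, window=50):
        self.phi = deque(maxlen=window)  # regressor rows [x, xdot]
        self.y = deque(maxlen=window)    # measured contact forces

    def update(self, x, xdot, f_measured):
        self.phi.append([x, xdot])
        self.y.append(f_measured)
        Phi = np.asarray(self.phi)
        Y = np.asarray(self.y)
        # Ordinary least squares over the current window only, so old
        # samples drop out and the estimate tracks changing contacts.
        theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
        return theta  # [k_hat, b_hat]
```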
