Abstract
The feasibility of realistic autonomous space manipulation tasks using multisensory information is demonstrated through two experiments involving a fluid interchange system and a module interchange system. In both cases, autonomous location of the mating element, autonomous location of the guiding light target, and mating and demating of the system are performed. Specifically, vision-driven techniques were implemented that determine the arbitrary two-dimensional position and orientation of the mating elements as well as the arbitrary three-dimensional position and orientation of the light targets. The robotic system is also equipped with a force/torque sensor that continuously monitors the six components of force and torque exerted on the end effector. Using vision, force, torque, proximity, and touch sensors, both the fluid interchange and module interchange experiments were accomplished autonomously and successfully.
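The abstract does not detail how the two-dimensional position and orientation of a mating element are recovered from vision. A standard approach for this kind of planar pose estimation is to segment the element into a binary mask and compute its centroid and principal axis from image moments; the sketch below illustrates that idea and is not the paper's actual implementation. The function name and the synthetic mask are illustrative assumptions.

```python
import numpy as np

def planar_pose_from_mask(mask):
    """Illustrative sketch (not the paper's method): estimate the 2-D
    position (centroid) and orientation (principal axis) of a single
    object in a binary image mask using image moments."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()              # centroid in pixel coordinates
    # Central second moments of the object region
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    # Orientation of the principal axis, in radians from the x-axis
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    return cx, cy, theta

# Synthetic example: a horizontal bar standing in for a mating element
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 20:80] = True
cx, cy, theta = planar_pose_from_mask(mask)
```

For this axis-aligned bar the centroid lands at the bar's center and the estimated orientation is zero; a rotated element would yield a correspondingly rotated principal axis that the manipulator could align with before mating.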