Abstract

Aggregation of self-reconfigurable robotic modules can potentially offer many advantages for robotic locomotion and manipulation. The resulting system could be more reliable and fault-tolerant and provide the flexibility needed for new tasks and environments. However, self-aggregation of modules is a challenging task, especially when the alignment of the docking parties in a 3D environment involves both position and orientation (6D): the docking bases may be non-stationary (e.g., floating in space, underwater, or moving along the ground), and the end-effectors may accumulate uncertainties from the many dynamically established connections between modules. This paper presents a new framework for docking in such a context and describes a solution for sensor-guided self-reconfiguration and manipulation with non-fixed bases. The main contributions of the paper include a realistic experimental setting for 6D docking in which a modular manipulator, floating or rotating in space with a reaction wheel, searches for and docks with a target module using vision. The movement of the docking parties is a combination of floating and manipulation, and the precision of the docking is guided by a sensor located at the tip of the docking interface. The docking itself is planned and executed by a real-time algorithm with a theoretical convergence boundary. This new framework has been tested in a high-fidelity physics-based simulator, as well as on real robotic modules based on SuperBot. Experimental results have shown an average success rate of more than 86.7 percent in a variety of 6D docking scenarios.
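To make the 6D alignment problem concrete, the sketch below shows a generic, proportional pose-correction loop of the kind used in sensor-guided docking. It is illustrative only and is not the paper's real-time algorithm: the sensor interface, tolerances, and gain are assumptions, and the sketch ignores the floating-base dynamics and convergence analysis that the paper addresses.

```python
# Minimal sketch of a sensor-guided 6D alignment loop (illustrative; not the
# paper's algorithm). Assumes the tip sensor reports the target docking frame
# as a position vector and rotation matrix in the end-effector frame each cycle.
import numpy as np
from scipy.spatial.transform import Rotation as R

POS_TOL = 0.005          # 5 mm position tolerance (assumed)
ANG_TOL = np.deg2rad(2)  # 2 deg orientation tolerance (assumed)
GAIN = 0.5               # proportional gain (assumed)

def pose_error(p_target, R_target):
    """6D error of the target w.r.t. the end-effector: translation + axis-angle rotation."""
    rot_err = R.from_matrix(R_target).as_rotvec()   # orientation error as axis-angle
    return p_target, rot_err

def docking_step(p_target, R_target):
    """One control cycle: returns (converged, commanded 6D correction twist)."""
    p_err, rot_err = pose_error(p_target, R_target)
    converged = (np.linalg.norm(p_err) < POS_TOL
                 and np.linalg.norm(rot_err) < ANG_TOL)
    twist = GAIN * np.concatenate([p_err, rot_err])  # simple proportional correction
    return converged, twist

# Example cycle with a synthetic sensor reading: target 2 cm ahead, yawed 10 degrees.
p = np.array([0.0, 0.0, 0.02])
Rm = R.from_euler("z", 10, degrees=True).as_matrix()
done, cmd = docking_step(p, Rm)
print("converged:", done, "twist:", np.round(cmd, 4))
```

In the paper's setting, such a correction would be realized jointly by the floating base and the manipulator joints rather than by a fixed-base arm, which is precisely what makes the 6D docking problem difficult.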
