Abstract
Expanding robot autonomy can deliver functional flexibility and enable fast deployment of robots in challenging and unstructured environments. In this direction, significant advances have recently been made in visual-perception-driven autonomy, mainly thanks to the availability of rich sensory datasets. However, the physical interaction autonomy of current robots still remains basic. Towards providing a systematic approach to this problem, this paper presents a new context-aware and adaptive method that allows a robotic platform to interact with unknown environments. In particular, a multi-axes self-tuning impedance controller is introduced to regulate the quasi-static parameters of the robot based on previous experience in interacting with similar environments and on real-time sensory data. The proposed method is also capable of differentiating between internal and external disruptions and responding to each appropriately. An agricultural experiment with different deformable materials is presented to validate the improvements in robot interaction autonomy and the capability of the proposed methodology to detect and respond to unexpected events (e.g., faults).
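The core mechanism described above is a Cartesian impedance law whose quasi-static parameters (stiffness, and the damping derived from it) are retuned online during interaction. The following is a minimal per-axis sketch, assuming a simple heuristic in which stiffness grows where position tracking error persists and is clipped to safety bounds; the class and parameter names (`SelfTuningImpedance`, `alpha`, etc.) and the exact update rule are illustrative assumptions, not the paper's published law.

```python
import numpy as np

class SelfTuningImpedance:
    """Per-axis Cartesian impedance law with online stiffness adaptation.

    Illustrative sketch only: the update rule (stiffness increases with
    persistent tracking error, within fixed bounds) is an assumption
    standing in for the paper's actual self-tuning strategy.
    """

    def __init__(self, k_init, k_min, k_max, alpha, zeta=1.0):
        self.k = np.asarray(k_init, dtype=float)  # stiffness per axis [N/m]
        self.k_min, self.k_max = k_min, k_max     # safety bounds on stiffness
        self.alpha = alpha                        # adaptation gain
        self.zeta = zeta                          # desired damping ratio

    def update(self, x_des, x, xd_des, xd, dt):
        e = x_des - x                             # Cartesian position error
        # Self-tuning step: stiffen axes where the error persists,
        # clipped so the adapted parameters stay in a safe range.
        self.k = np.clip(self.k + self.alpha * np.abs(e) * dt,
                         self.k_min, self.k_max)
        # Damping derived from stiffness (unit apparent mass assumed).
        d = 2.0 * self.zeta * np.sqrt(self.k)
        # Quasi-static impedance law: commanded Cartesian wrench.
        return self.k * e + d * (xd_des - xd)
```

Setting `alpha = 0` recovers a fixed-gain impedance controller; the clipping bounds reflect the need, emphasized in the abstract, to keep the regulated quasi-static parameters within a safe range.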
Highlights
To respond to the rapidly increasing demand for high levels of flexibility in manufacturing and service applications, recent research efforts have targeted both visual perception and physical interaction capabilities. While these two directions have seen significant advancements over the past decade, the bridging action, i.e., associating perception with interaction in an autonomous way, still remains at an unsatisfactory level.
The required theoretical and technological components to build such a framework are integrated into five main modules, illustrated in Fig. 2: (1) a Cartesian impedance controller whose parameters can be tuned online, (2) a multi-axes self-tuning impedance unit that tunes the aforementioned parameters when an interaction with the environment is predicted, (3) a trajectory planner that computes the spatial points to be reached by the controller, (4) a visual perception module that locates the materials' positions in the robot workspace, and (5) a Finite State Machine (FSM) that, based on the data provided by (4), triggers units (2) and (3) and is responsible for detecting system faults (a minimal sketch of such a state machine follows below).
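To make the interplay between the five modules concrete, the skeleton below sketches how an FSM of the kind described might gate the perception, planning, and self-tuning units. The state names, the `Percept` fields, and the transition conditions are illustrative assumptions; the paper's actual states and fault tests are not specified in this summary.

```python
from dataclasses import dataclass
from enum import Enum, auto

class State(Enum):
    LOCATE = auto()     # module (4): vision searches for the target material
    APPROACH = auto()   # module (3): planner drives the arm toward it
    INTERACT = auto()   # module (2): self-tuning impedance engaged on contact
    FAULT = auto()      # unexpected event detected; stop and recover

@dataclass
class Percept:
    """Hypothetical per-tick inputs standing in for the paper's sensor data."""
    target_found: bool    # from the visual perception module (4)
    contact: bool         # from force/torque sensing
    wrench_anomaly: bool  # interaction wrench inconsistent with expectations

def step(state: State, p: Percept) -> State:
    """One FSM tick: return the next state given the current percept."""
    if p.wrench_anomaly:                      # fault detection has priority
        return State.FAULT
    if state is State.LOCATE and p.target_found:
        return State.APPROACH                 # triggers trajectory planner (3)
    if state is State.APPROACH and p.contact:
        return State.INTERACT                 # triggers self-tuning unit (2)
    return state
```

The priority given to the anomaly check mirrors the summary's claim that the FSM is responsible for detecting system faults; routing every transition through the FSM keeps the impedance unit and the planner from acting on stale perception data.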
This paper presented a novel framework to enhance robot adaptability in unknown and unstructured environments.
Summary
While these two directions, visual perception and physical interaction, have seen significant advancements over the past decade, the bridging action, i.e., associating perception with interaction in an autonomous way, still remains at an unsatisfactory level. This fundamental shortcoming has limited the application of robots in out-of-the-cage scenarios, making a framework to enhance their physical interaction autonomy a critical requirement. While performing complex manipulation tasks, accurate sensory measurements related to physical interactions (e.g., forces and torques) may not be obtainable through wearable sensory systems, which is why most learning-by-demonstration techniques function at a kinematic level.