Abstract
The peg-in-hole task with uncertain object features is a typical case of robotic operation in real-world unstructured environments. Achieving autonomous object perception and operational decision-making is nontrivial under the visual occlusion and real-time constraints common to such tasks. This paper presents a Bayesian-network-based strategy that seamlessly combines multiple heterogeneous sensory data, as humans do. In the proposed strategy, an interactive exploration method, implemented with hybrid Monte Carlo sampling and particle filtering, is designed to obtain initial estimates of the object's features, and a memory-adjustment method and an inertial-thinking method are introduced to correct the object's target position and shape features, respectively. Based on Dempster–Shafer evidence theory (D-S theory), a fusion decision strategy is designed using probabilistic models of forces and positions; it guides the robot's motion after each update of the estimated object features and enables the robot to judge whether the desired operation target has been achieved or the feature estimates need to be updated. Meanwhile, a pliability model is introduced into the repeated exploration, planning, and execution steps to reduce interaction forces and the number of explorations. The effectiveness of the strategy is validated in simulations and in a physical robot task.
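The fusion decision strategy rests on Dempster's rule of combination, which merges the force-based and position-based evidence while tracking their conflict. A minimal sketch of that rule, assuming a hypothetical two-hypothesis frame ("aligned" vs. "misaligned") and illustrative mass values not taken from the paper:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (mass functions)
    with Dempster's rule. Masses map frozensets of hypotheses to
    belief mass; mass falling on empty intersections is the conflict
    K, and the surviving masses are renormalized by 1 - K."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence sources are incompatible")
    norm = 1.0 - conflict
    return {s: w / norm for s, w in combined.items()}

# Hypothetical evidence from the force and position models
# (mass values are illustrative only).
force_mass = {frozenset({"aligned"}): 0.7,
              frozenset({"aligned", "misaligned"}): 0.3}
pos_mass = {frozenset({"aligned"}): 0.6,
            frozenset({"misaligned"}): 0.1,
            frozenset({"aligned", "misaligned"}): 0.3}
fused = dempster_combine(force_mass, pos_mass)
```

Here the two sources reinforce each other: the fused belief in "aligned" rises above either individual mass, which is the behavior the decision strategy exploits when both force and position evidence agree.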
Highlights
Several recent studies have demonstrated that robotic operations need no longer be restricted to specific objects and structured tasks
We focus on autonomous robotic operation in real-world unstructured environments
An interactive exploration method based on Bayesian networks is proposed to integrate multimodal information and accurately estimate the features of an uncertain object, enabling comprehensive perception even under visual occlusion
Summary
Several recent studies have demonstrated that robotic operations need no longer be restricted to specific objects and structured tasks. The interactive exploration (IE) method is first presented to obtain the features of an uncertain object without a priori knowledge. It integrates Bayesian posterior probabilities obtained from multimodal information (vision, contact force, and position) into initial estimates of the features, i.e., the target position and shape of the uncertain object. Unlike existing studies that use only position accuracy as an indicator, the designed fusion decision strategy fuses both position and force evidence within the D-S theoretical framework, including their uncertainties, to guide subsequent operations. Command information, such as the estimated object features and the evaluation results of the operation, is sent from the MSP to the robot to guide the operation.
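The particle-filtering side of the interactive exploration can be illustrated with a bootstrap filter estimating a static 2-D hole position from noisy contact measurements. This is a simplified sketch under assumed Gaussian measurement noise; the paper's hybrid Monte Carlo sampler and feature set are richer, and all numbers here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, measurement, meas_std=0.5):
    """One bootstrap particle-filter update for a static 2-D target:
    reweight each particle by the Gaussian likelihood of the new
    measurement, resample to avoid weight degeneracy, and add a small
    jitter so resampled particles do not collapse to duplicates."""
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std**2)
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx] + rng.normal(0.0, 0.05, particles.shape)
    weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Hypothetical scenario: true hole at (2.0, -1.0), contact readings
# corrupted by 0.5-sigma noise; particles start spread over the workspace.
true_pos = np.array([2.0, -1.0])
particles = rng.uniform(-5.0, 5.0, size=(1000, 2))
weights = np.full(1000, 1.0 / 1000)
for _ in range(20):
    z = true_pos + rng.normal(0.0, 0.5, 2)
    particles, weights = particle_filter_step(particles, weights, z)
estimate = particles.mean(axis=0)
```

After a few exploratory contacts the particle cloud concentrates near the true hole position, which is the mechanism by which the IE method turns repeated touches into a converging feature estimate.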