Abstract
In robotic manipulation, object grasping is a basic yet challenging task. Dexterous grasping requires intelligent visual observation of the target objects, in which spatial equivariance plays a key role in learning the grasping policy. This paper addresses two significant challenges associated with robotic grasping in both clutter and occlusion scenarios. The first challenge is the coordination of push and grasp actions: in a well-ordered object scenario, the robot may occasionally fail to disrupt the arrangement of the objects, whereas in a randomly cluttered object scenario, the pushing behavior may be less efficient, as many objects are likely to be pushed out of the workspace. The second challenge is the avoidance of occlusion, which occurs when the camera is entirely or partially occluded during a grasping action. This paper proposes a multi-view change observation-based approach (MV-COBA) to overcome these two problems. The proposed approach is divided into two parts: 1) using multiple cameras to set up multiple views to address the occlusion issue; and 2) using visual change observation based on the pixel depth difference to address the challenge of coordinating push and grasp actions. According to simulation experiments, the proposed approach achieved average grasp success rates of 83.6%, 86.3%, and 97.8% in the cluttered, well-ordered object, and occlusion scenarios, respectively.
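As a rough illustration of the second part, the following is a minimal sketch (not the authors' code) of change observation based on the per-pixel depth difference between heightmaps captured before and after an action; the threshold values and function name are assumptions made for illustration.

```python
import numpy as np

# Minimal sketch of pixel-depth-difference change observation (assumed thresholds,
# not the paper's exact values): compare depth heightmaps captured before and
# after an action and report whether the object arrangement changed.
DEPTH_DIFF_THRESHOLD = 0.01   # assumed: per-pixel depth change (meters) that counts as a change
MIN_CHANGED_PIXELS = 300      # assumed: minimum number of changed pixels to call it a scene change

def scene_changed(depth_before: np.ndarray, depth_after: np.ndarray) -> bool:
    """Return True if the workspace arrangement changed between two depth heightmaps."""
    diff = np.abs(depth_after - depth_before)
    changed_pixels = np.count_nonzero(diff > DEPTH_DIFF_THRESHOLD)
    return changed_pixels > MIN_CHANGED_PIXELS
```

Such a check can, for example, be used to decide whether a push actually disturbed a well-ordered arrangement before a grasp is attempted.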
Highlights
Object grasping is an important step in a variety of robotic tasks, yet it remains a challenging task in robotic manipulation [1]
We propose adopting deep Q-learning, a deep reinforcement learning method that combines fully convolutional networks (FCNs) with Q-learning, in order to learn the grasping policy (a minimal sketch of such a pixel-wise Q network follows these highlights)
The outcomes of the baseline models and the multi-view change observation-based approach (MV-COBA) are presented for the training session as graphs of grasp success rate and action efficiency, which indicate how each baseline behaved throughout the training phase and how quickly and efficiently it learned
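As referenced above, the following is a minimal sketch of a pixel-wise Q-learning setup in which a fully convolutional network maps an input heightmap to a dense Q map; the architecture, layer sizes, and 4-channel RGB-D input are illustrative assumptions, not the paper's exact network.

```python
import torch
import torch.nn as nn

# Illustrative pixel-wise Q network (assumed architecture): a small fully
# convolutional network that maps an RGB-D heightmap to one Q value per pixel,
# so the arg-max over the output selects the pixel at which to act.
class PixelWiseQNet(nn.Module):
    def __init__(self, in_channels: int = 4):  # assumed: 4-channel RGB-D heightmap
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, kernel_size=1),  # one Q value per pixel
        )

    def forward(self, heightmap: torch.Tensor) -> torch.Tensor:
        return self.backbone(heightmap)  # shape: (batch, 1, H, W)

# Greedy action selection: choose the pixel with the highest predicted Q value.
q_map = PixelWiseQNet()(torch.zeros(1, 4, 224, 224))
best_pixel_index = torch.argmax(q_map.view(-1))
```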
Summary
Object grasping is an important step in a variety of robotic tasks, yet it remains a challenging task in robotic manipulation [1]. This paper proposes a multi-view change observation-based approach (MV-COBA) that uses two cameras to obtain multiple views of state changes in the workspace and coordinates grasp and push execution effectively. The approach aims to prevent a lack of visual data caused by relying on a single view and to perform effective robotic grasping in various working scenarios. Its contributions are: using multiple views to maximize grasp efficiency in both cluttered and occluded environments; establishing a robust change observation for coordinating the execution of primitive grasp and push actions in a fully self-supervised manner; and incorporating a multi-view and change observation-based approach to perform push and grasp actions in a wide range of scenarios. The learning of MV-COBA is entirely self-supervised, and its performance is validated via simulation.
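To make the coordination and self-supervision concrete, here is a hypothetical sketch of how per-view Q maps for the push and grasp primitives could be combined and how action outcomes could be turned into rewards; the max-over-views selection rule and the specific reward values are assumptions, not the authors' exact formulation.

```python
import numpy as np

# Hypothetical coordination sketch (assumptions, not the authors' exact policy):
# one Q map per primitive (push, grasp) per camera view; the primitive and pixel
# with the highest Q value across all views is executed, and the reward is
# assigned in a self-supervised way from the outcome of the action.
def select_action(q_maps_per_view: dict) -> tuple:
    """q_maps_per_view: {'push': [q_view1, q_view2], 'grasp': [q_view1, q_view2]}"""
    best = None
    for primitive, views in q_maps_per_view.items():
        for view_idx, q_map in enumerate(views):
            pixel = np.unravel_index(np.argmax(q_map), q_map.shape)
            value = q_map[pixel]
            if best is None or value > best[3]:
                best = (primitive, view_idx, pixel, value)
    return best  # (primitive, view index, pixel location, Q value)

def self_supervised_reward(primitive: str, grasp_succeeded: bool, scene_changed: bool) -> float:
    # Assumed reward shaping: a successful grasp gets full reward, a push that
    # visibly changes the scene gets a smaller reward, everything else gets zero.
    if primitive == "grasp" and grasp_succeeded:
        return 1.0
    if primitive == "push" and scene_changed:
        return 0.5
    return 0.0
```

Because the reward comes only from grasp outcomes and observed scene changes, no manual labels are required, which is consistent with the fully self-supervised training described above.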