Abstract
Pose estimation is a critical technology for industrial robots. Many machine vision-based approaches have applied this technology and achieved excellent results. However, rapid pose estimation in complex multiscene environments remains a challenge due to interference from multiangle lighting and varied backgrounds. To address these issues, this article proposes a practical robotic grasping method using 6-D pose estimation with protective correction. In this method, a self-produced synthetic dataset is used to train an improved deep object pose estimation network, and the standard perspective-n-point algorithm is then used to estimate the 6-DoF pose of each object instance. Meanwhile, to prevent grasp collisions caused by misrecognition, we propose a corrected grasping pose algorithm that performs protective correction using the measured translation and the predicted translation. Finally, the proposed grasping method achieves an average grasping success rate of 83.3% for the three objects under normal light, and the network reaches a single-image detection speed of 1.490 frames/s. The code is available at https://github.com/aimiplus/Practical_Robotic_Grasping_Method.
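The abstract describes two steps that can be illustrated concretely: recovering a 6-DoF pose from network-predicted keypoints with the standard perspective-n-point algorithm, and the protective correction that compares the predicted translation against an independently measured one. The sketch below is not the authors' implementation; it uses OpenCV's generic `cv2.solvePnP`, and all keypoint coordinates, camera intrinsics, the measured translation, and the correction threshold are placeholder assumptions for illustration only.

```python
import cv2
import numpy as np

# Placeholder 2-D keypoints (pixels) as a pose network might predict them, and
# the matching 3-D points on the object model (metres). In the paper these come
# from the improved deep object pose estimation network; here they are made up.
image_points = np.array([
    [320.0, 240.0], [350.0, 238.0], [352.0, 270.0], [318.0, 272.0],
    [322.0, 200.0], [348.0, 198.0], [350.0, 230.0], [316.0, 232.0],
], dtype=np.float64)
object_points = np.array([
    [-0.05, -0.05,  0.05], [0.05, -0.05,  0.05], [0.05, 0.05,  0.05], [-0.05, 0.05,  0.05],
    [-0.05, -0.05, -0.05], [0.05, -0.05, -0.05], [0.05, 0.05, -0.05], [-0.05, 0.05, -0.05],
], dtype=np.float64)

# Assumed pinhole camera intrinsics; a real system would use calibrated values.
camera_matrix = np.array([[615.0, 0.0, 320.0],
                          [0.0, 615.0, 240.0],
                          [0.0,   0.0,   1.0]], dtype=np.float64)
dist_coeffs = np.zeros(5)  # assume negligible lens distortion

# Standard perspective-n-point: recover the object's rotation (Rodrigues vector)
# and translation with respect to the camera frame from the 2-D/3-D matches.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs,
                              flags=cv2.SOLVEPNP_ITERATIVE)
rotation_matrix, _ = cv2.Rodrigues(rvec)
predicted_translation = tvec.ravel()

# Protective-correction sketch: if the PnP-predicted translation deviates too far
# from an independently measured translation (e.g. from a depth sensor), fall back
# to the measured value so a misrecognition cannot drive the gripper into a collision.
measured_translation = np.array([0.02, -0.01, 0.45])  # placeholder measurement
threshold = 0.03                                       # assumed tolerance in metres
if np.linalg.norm(predicted_translation - measured_translation) > threshold:
    grasp_translation = measured_translation           # protectively corrected grasp pose
else:
    grasp_translation = predicted_translation
```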