Abstract

Robotic arms are increasingly used in applications such as manufacturing and medicine. Physically impaired individuals often have difficulty completing everyday tasks, such as taking an object off a shelf or retrieving items from a refrigerator, and must rely on caregivers and others for help. To address this issue, ongoing research explores how robotic arms and other technologies can improve the lives of such persons. This work builds on existing research that used object recognition and grasp detection components to identify a bottle and obtain its real-world coordinates but did not fully integrate the solution with a robotic arm [1]. We fully integrate the object recognition and grasp detection components with a Dobot Magician robotic arm. Using an eye-to-hand calibration approach, we determine the camera-to-robot translation matrix from experimental results. An Intel RealSense D455 camera generates the images used for object detection and grasp point detection. The grasp point coordinates are passed to the robotic arm, which applies the translation before moving to grasp the bottle. Our tests with the fully integrated robotic arm show that the solution is feasible: given the derived translation and the camera's depth accuracy, the robotic arm can pick up a bottle placed randomly within a given area.
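The core coordinate-mapping step described above, taking a grasp point detected in the camera frame and translating it into the robot's base frame, can be sketched as follows. This is a minimal illustration using a homogeneous transform in NumPy; the matrix values here are placeholders, not the experimentally calibrated matrix from the paper, and the function name `camera_to_robot` is introduced only for this example.

```python
import numpy as np

# Hypothetical 4x4 homogeneous transform from the camera frame to the robot
# base frame. In an eye-to-hand setup this matrix would be determined
# experimentally; the values below are illustrative placeholders (mm).
T_cam_to_robot = np.array([
    [1.0, 0.0, 0.0, 120.0],   # offset along robot x
    [0.0, 1.0, 0.0, -45.0],   # offset along robot y
    [0.0, 0.0, 1.0,  30.0],   # offset along robot z
    [0.0, 0.0, 0.0,   1.0],
])

def camera_to_robot(point_cam):
    """Map a 3D grasp point from camera coordinates to robot coordinates."""
    p = np.append(np.asarray(point_cam, dtype=float), 1.0)  # homogeneous form
    return (T_cam_to_robot @ p)[:3]

# A grasp point reported by the depth camera (x, y, z in mm):
grasp_cam = [50.0, 20.0, 400.0]
grasp_robot = camera_to_robot(grasp_cam)  # → [170., -25., 430.]
```

In practice the transform may also include a rotation component when the camera and robot axes are not aligned; the same homogeneous-matrix multiplication handles that case unchanged.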
