Abstract

In recent years, the use of robot manipulators has attracted increasing attention across various industries. Accordingly, researchers have proposed novel approaches to collaborative-robot control using vision sensors. In this study, a you-only-look-once (YOLO) detector based on a convolutional neural network (CNN) and a grasping-center-point position-error-minimization algorithm were proposed to reduce object misrecognition and improve grasping performance. In addition, a gripping algorithm was designed for a six-degree-of-freedom (6-DOF) robot manipulator, and machine vision algorithms, including grayscale conversion, Gaussian filtering, Canny edge detection, and contouring, were implemented to detect object features such as centroids and orientation. Furthermore, the coordinate system of the vision sensor was converted into that of the robot manipulator using a transformation matrix so that the end effector of the robot arm could move accurately to the center point of the object. The logic implemented in this study not only detected the trained objects on the workstation but also minimized the positional error of the transformation matrix. Experiments were performed on the 6-DOF robot manipulator; the results revealed that the end effector successfully moved to the center of each detected object, and all eight objects were gripped normally.
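The camera-to-robot coordinate conversion described above can be sketched with a 4×4 homogeneous transformation matrix applied to the detected grasping center point. The rotation and translation values below are illustrative placeholders, not calibration results from the study:

```python
# Minimal sketch: mapping a detected object's center from the vision-sensor
# (camera) frame to the robot base frame with a homogeneous transform.
# All numeric values are hypothetical, for illustration only.

def transform_point(T, p):
    """Apply a 4x4 homogeneous transform T (row-major nested lists)
    to a 3D point p, returning the transformed 3D point."""
    x, y, z = p
    v = (x, y, z, 1.0)  # homogeneous coordinates
    return tuple(sum(T[r][c] * v[c] for c in range(4)) for r in range(3))

# Example hand-eye transform: camera rotated 180 degrees about X
# (looking down at the workstation) and offset (0.3, 0.0, 0.8) m
# from the robot base. Placeholder values, not from the paper.
T_cam_to_base = [
    [1.0,  0.0,  0.0, 0.3],
    [0.0, -1.0,  0.0, 0.0],
    [0.0,  0.0, -1.0, 0.8],
    [0.0,  0.0,  0.0, 1.0],
]

# Object center detected at (0.05, -0.02, 0.50) m in the camera frame.
center_base = transform_point(T_cam_to_base, (0.05, -0.02, 0.50))
```

In practice, `T_cam_to_base` would come from a hand-eye calibration, and minimizing the positional error of this matrix is what keeps the end effector aligned with the object's center point.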
