Abstract

In this paper, we present a generalized, holistic method for automated robotic arm handling of manufactured components in an industrial setting using computer vision. In particular, we address scenarios in which a high volume of manufactured parts moves along a conveyor belt at random locations and orientations, with multiple robotic arms available for manipulation. We also present specific, tested solutions to each stage of the framework, as well as alternative methods drawn from a literature review. The framework consists of three stages: (1) visual data capture, (2) data interpretation, and (3) command generation and output to the robotic arms. In the visual data capture stage, a multi-component computer vision system takes in a live camera feed and exports it to an external processor. In the data interpretation stage, this video feed is analyzed using tools such as 3D point clouds and object detection/tracking models to extract information including the number of objects and their locations, velocities, and orientations. Lastly, the command generation and output stage converts the information produced by the data interpretation stage into control instructions for the robotic arms. While a full-scale, cohesive system has yet to be tested, our solutions to each stage demonstrate the feasibility of implementing such a system in an industrial setting.
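As a rough, hypothetical sketch of how the three-stage pipeline described above might be composed in code (the class names, field names, units, and stub stage implementations below are illustrative assumptions, not the implementation evaluated in the paper):

from dataclasses import dataclass
from typing import Callable, List, Sequence

# Hypothetical record for one detected part on the conveyor.
@dataclass
class PartState:
    position: tuple      # (x, y, z) in the conveyor frame, metres (assumed)
    velocity: tuple      # (vx, vy, vz) in metres per second (assumed)
    orientation: float   # rotation about the vertical axis, radians (assumed)

# Hypothetical instruction handed to a robotic arm controller.
@dataclass
class ArmCommand:
    arm_id: int
    target_position: tuple
    target_orientation: float

class HandlingPipeline:
    """Three-stage pipeline: capture -> interpretation -> command generation."""

    def __init__(self,
                 capture: Callable[[], object],
                 interpret: Callable[[object], List[PartState]],
                 generate: Callable[[Sequence[PartState]], List[ArmCommand]]):
        self.capture = capture       # stage 1: grab a frame from the live camera feed
        self.interpret = interpret   # stage 2: detect/track parts, estimate pose and velocity
        self.generate = generate     # stage 3: turn part states into arm instructions

    def step(self) -> List[ArmCommand]:
        frame = self.capture()
        parts = self.interpret(frame)
        return self.generate(parts)

# Dummy stage implementations so the sketch runs end to end.
def capture_stub():
    return "frame"                   # stand-in for an image or point-cloud frame

def interpret_stub(frame) -> List[PartState]:
    return [PartState((0.4, 0.1, 0.0), (0.2, 0.0, 0.0), 1.57)]

def generate_stub(parts: Sequence[PartState]) -> List[ArmCommand]:
    # Trivial assignment of parts to arms for illustration only.
    return [ArmCommand(arm_id=i % 2, target_position=p.position,
                       target_orientation=p.orientation)
            for i, p in enumerate(parts)]

if __name__ == "__main__":
    pipeline = HandlingPipeline(capture_stub, interpret_stub, generate_stub)
    print(pipeline.step())

In a real deployment, the stub functions would be replaced by the camera-capture, point-cloud/detection, and robot-control components discussed in the paper.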
