Abstract
Automation via robotic systems is becoming widely adopted across many industries, but intelligent autonomy in dynamic environments remains challenging to implement due to the difficulty of 3D vision. This paper proposes a novel method that uses in-situ 2D image processing to simplify 3D segmentation for robotic workspace detection in industrial applications. Depth images of the workspace are collected with a ToF (time-of-flight) sensor mounted on a robotic arm. The algorithm identifies the contour of a table, filters extraneous data points, and converts only the relevant data to a 3D point cloud. This point cloud is then processed to identify the precise location of the workspace with respect to the robot. The method has been shown to be 10% more accurate and over 10,000% faster than a human analyzing the data in GUI-based software with an octree region-based segmentation algorithm, and it produces consistent results, limited only by the resolution of the camera itself.
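The following is a minimal sketch of the kind of pipeline the abstract describes (2D contour detection on the depth image, filtering, back-projection to a point cloud, and plane estimation). It is not the authors' implementation: the function name, depth thresholds, camera intrinsics, and the least-squares plane fit are all illustrative assumptions.

```python
# Hypothetical sketch of 2D-contour-guided depth filtering; names, thresholds,
# and intrinsics are assumptions, not the paper's actual implementation.
import cv2
import numpy as np

def locate_workspace(depth_mm: np.ndarray, fx: float, fy: float,
                     cx: float, cy: float):
    """Find the largest contour in a depth image, keep only those pixels,
    back-project them to 3D, and fit a plane to estimate the table surface."""
    # 1. Binarize the depth image (valid returns within an assumed range).
    valid = ((depth_mm > 0) & (depth_mm < 2000)).astype(np.uint8) * 255
    contours, _ = cv2.findContours(valid, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    table = max(contours, key=cv2.contourArea)      # assume table dominates view

    # 2. Mask out everything outside the table contour (the 2D filtering step).
    mask = np.zeros_like(valid)
    cv2.drawContours(mask, [table], -1, 255, thickness=cv2.FILLED)
    v, u = np.nonzero(mask)                          # retained pixel coordinates
    z = depth_mm[v, u].astype(np.float64) / 1000.0   # metres

    # 3. Back-project only the retained pixels to a 3D point cloud (pinhole model).
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.column_stack((x, y, z))

    # 4. Least-squares plane fit via SVD to locate the workspace surface.
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]                                  # unit normal of fitted plane
    return centroid, normal, points
```

In this sketch the expensive 3D processing runs only on pixels inside the detected table contour, which is the efficiency argument made in the abstract.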