Abstract

A fundamental task in robotic assembly is the pick-and-place operation. Generally, this operation consists of three subtasks: guiding the robot to the target and positioning the manipulator in an appropriate pose, picking up the object, and moving the object to a new location. In situations where the pose of the target may vary in the workspace, sensory feedback becomes indispensable for guiding the robot to the object. Ideally, local image features should be clearly visible and unoccluded in multiple views of the object. In reality, this may not always be the case. Local image features are often rigidly constrained to a particular target and may require specialized feature localization algorithms. We present a visual positioning system that addresses feature extraction issues for a class of objects that have smooth or curved surfaces. In this work, the visual sensor consists of an arm-mounted camera and a grid pattern projector that produces images with a local surface description of the target. The projected pattern is always visible in the image, and it is sensitive to variations in the object's pose. A set of low-order geometric moments globally characterizes the observed pattern, eliminating the need for feature localization and overcoming the point correspondence problem. A neural network then learns the complex relationship between the robot's pose displacements and the observed variations in the image features. After training, visual feedback guides the robot to the target from any arbitrary location in the workspace. The system's applicability is demonstrated on a five-degrees-of-freedom (DOF) industrial robot.
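The low-order geometric moments mentioned above are a standard global image descriptor: the raw moment of order (p, q) is defined as m_pq = Σ_x Σ_y x^p y^q I(x, y), where I(x, y) is the pixel intensity. The following is a minimal sketch of how such moments could be computed for a projected-pattern image; the function name, the synthetic test pattern, and the choice of NumPy are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def geometric_moments(image, max_order=2):
    """Compute raw geometric moments m_pq = sum_x sum_y x^p y^q I(x, y)
    for all orders with p + q <= max_order.

    `image` is a 2-D array of pixel intensities; x indexes columns,
    y indexes rows. Returns a dict mapping (p, q) to the moment value.
    """
    h, w = image.shape
    y, x = np.mgrid[0:h, 0:w]  # per-pixel row (y) and column (x) coordinates
    moments = {}
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            moments[(p, q)] = float(np.sum((x ** p) * (y ** q) * image))
    return moments

# Illustrative synthetic "projected pattern": a bright rectangular region
pattern = np.zeros((8, 8))
pattern[2:6, 3:7] = 1.0

m = geometric_moments(pattern)
# The centroid (first-order moments normalized by the zeroth-order moment)
# shifts as the pattern moves, so these features respond to pose changes.
cx = m[(1, 0)] / m[(0, 0)]
cy = m[(0, 1)] / m[(0, 0)]
```

In a setup like the one described, a fixed-length vector of such moments could serve directly as the input features for the neural network, since no individual point correspondences are needed.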
