Abstract

This paper discusses the use of multiple vision sensors and a proximity sensor to obtain a three-dimensional occupancy profile of a robotic workspace, identify key features, and build a 3-D model of the objects in the workspace. The present research uses three identical vision sensors. Two of these sensors are mounted on a stereo rig on the sidewall of the robotic workcell; the third is located above the workcell. The vision sensors on the stereo rig provide the three-dimensional position of any point in the robotic workspace. The camera-to-robot calibration for these sensors in the stereo configuration has been obtained with a three-layered feedforward neural network, and the Sum of Squared Differences (SSD) algorithm has been used for stereo matching. Similarly, the camera-to-robot transformation for the camera located above the workcell has been obtained with a three-layered feedforward neural network. Three-dimensional positional information from the vision sensors on the stereo rig, two-dimensional positional information from the camera above the workcell, and readings from a proximity sensor mounted on the robot wrist have been fused with a Bayesian technique to obtain more accurate positional information about locations in the workspace.
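
The abstract does not give the exact form of the Bayesian fusion used; as a minimal sketch, assuming each sensor's position estimate is modeled as an independent Gaussian, combining them reduces to inverse-variance weighting (a product of Gaussians). The sensor names and noise values below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the authors' code): inverse-variance Bayesian fusion of
# independent Gaussian estimates of the same workspace coordinate, as one
# common way to combine stereo, overhead-camera, and proximity measurements.
import numpy as np

def fuse_gaussian_estimates(means, variances):
    """Fuse independent 1-D Gaussian estimates of the same quantity.

    means, variances: per-sensor estimates and their variances.
    Returns the fused mean and variance (product of Gaussians).
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances                  # inverse-variance weights
    fused_var = 1.0 / weights.sum()
    fused_mean = fused_var * (weights * means).sum()
    return fused_mean, fused_var

# Hypothetical example: fusing the x-coordinate (mm) of a workspace point
# reported by the stereo rig, the overhead camera, and the wrist-mounted
# proximity sensor.
x_fused, var_fused = fuse_gaussian_estimates(
    means=[152.0, 149.5, 151.2],
    variances=[4.0, 9.0, 1.0],
)
print(f"fused x = {x_fused:.2f} mm, variance = {var_fused:.2f}")
```

Under this model, the sensor with the smallest variance (here the proximity sensor) dominates the fused estimate, which matches the paper's goal of using the wrist-mounted sensor to refine the vision-based position.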
