Abstract

Assembly robots that use an active camera system for visual feedback can achieve greater flexibility, including the ability to operate in an uncertain and changing environment. Incorporating active vision into a robot control loop involves inherent difficulties, including calibration and the need to redefine the servoing goal as the camera configuration changes. In this paper, we propose a novel self-organizing neural network that learns a calibration-free spatial representation of 3D point targets, invariant to changing camera configurations. This representation is used to develop a new framework for robot control with active vision. The salient feature of this framework is that it decouples active camera control from robot control. The feasibility of this approach is established through computer simulations and experiments with the University of Illinois Active Vision System (UIAVS).
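
To give a concrete flavor of the general idea, the sketch below trains a minimal Kohonen-style self-organizing map on feature vectors that combine a target's image-plane coordinates from two cameras with the active-camera configuration, so the winning map node can serve as a spatial code for the target. This is not the paper's network: the SOM class, the [u1, v1, u2, v2, pan, tilt] input layout, and all parameter values are assumptions made here for illustration, and a plain SOM alone does not guarantee configuration invariance, which is what the paper's specialized architecture addresses.

```python
import numpy as np

# Minimal Kohonen-style self-organizing map (illustrative sketch only;
# the paper's actual network architecture is not reproduced here).
# Each input is a hypothetical feature vector combining the image-plane
# coordinates of a 3D point target seen by two cameras with the active
# camera configuration (e.g., pan/tilt angles), so the map can associate
# observations of the same target made under different configurations.

class SOM:
    def __init__(self, grid=(10, 10), dim=6, seed=0):
        rng = np.random.default_rng(seed)
        self.grid = grid
        self.weights = rng.normal(size=(grid[0], grid[1], dim))

    def winner(self, x):
        # Best-matching unit: node whose weight vector is closest to x.
        d = np.linalg.norm(self.weights - x, axis=2)
        return np.unravel_index(np.argmin(d), self.grid)

    def train_step(self, x, lr=0.1, sigma=2.0):
        wi, wj = self.winner(x)
        ii, jj = np.meshgrid(np.arange(self.grid[0]),
                             np.arange(self.grid[1]), indexing="ij")
        # Gaussian neighborhood around the winning node pulls nearby
        # nodes' weights toward the input.
        h = np.exp(-((ii - wi) ** 2 + (jj - wj) ** 2) / (2 * sigma ** 2))
        self.weights += lr * h[..., None] * (x - self.weights)

# Hypothetical usage: [u1, v1, u2, v2, pan, tilt] packs two cameras'
# pixel coordinates of the same target with the configuration that
# produced them.
som = SOM()
sample = np.array([120.0, 85.0, 132.0, 90.0, 0.3, -0.1])
for _ in range(100):
    som.train_step(sample)
print(som.winner(sample))  # map node acting as the spatial code
```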
