Abstract

A robot sensory system developed for industrial robotics is described. Television frames and inputs from other sensors are interpreted by a hierarchically organized group of microprocessors. The system uses knowledge of object prototypes, and of robot action, to generate visual expectancies for each frame. At each level of the hierarchy, interpretative processes are guided by expectancy-generating modeling processes. The modeling processes are driven by a priori knowledge, by knowledge of the robot's movements, and by feedback from the interpretative processes. At the lowest level, other senses (proximity, tactile, force) are handled separately; above this level, they are integrated with vision into a multi-modal world model. At successively higher levels, the interpretative and modeling processes describe the world with successively higher order constructs, and over longer time periods. All levels of the hierarchy provide output, in parallel, to guide corresponding levels of a hierarchical robot control system.
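The expectancy-guided loop the abstract describes — a modeling process predicting each frame, an interpretative process measuring how the input departs from that prediction, and the error fed back to the model while every level reports in parallel — can be caricatured in a few lines. Everything below (the class names, the scalar "feature" standing in for a frame, the correction gain) is an invented illustration, not the paper's design:

```python
# Illustrative sketch only: one interpret/model pair per hierarchy level.
# The model's "expectancy" is its prediction for the next input; the
# interpreter returns the prediction error and feeds it back to the model.

class Level:
    def __init__(self, name, gain=0.5):
        self.name = name
        self.expectancy = 0.0   # modeling process: prediction for next input
        self.gain = gain        # how strongly feedback corrects the model

    def interpret(self, observation):
        """Expectancy-guided interpretation: measure and report error."""
        error = observation - self.expectancy
        # Feedback from interpretation drives the modeling process.
        self.expectancy += self.gain * error
        return error

class Hierarchy:
    """Lower levels see raw features; each higher level sees the coarser,
    slower-changing summary (here, the expectancy) of the level below."""
    def __init__(self, names):
        self.levels = [Level(n) for n in names]

    def process_frame(self, feature):
        outputs = {}
        signal = feature
        for level in self.levels:
            error = level.interpret(signal)
            outputs[level.name] = error   # parallel output toward control
            signal = level.expectancy     # abstracted input passed upward
        return outputs

h = Hierarchy(["edges", "objects", "world"])
for frame in [1.0, 1.0, 1.0, 1.0]:
    out = h.process_frame(frame)
# With a constant input, prediction error shrinks frame by frame.
```

The point of the sketch is structural: each level owns both an interpreter and a model, the model is corrected by interpretation feedback, and all levels emit output in parallel rather than only the top one.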
