Abstract

Robust vision in dynamic environments with limited processing power is one of the main challenges in robot vision. This is especially true for biped humanoids that rely on low-end computers. Techniques such as active vision, context-based vision, and multi-resolution processing are currently used to meet these demanding requirements. Motivated by the need for robust, high-performing robot vision systems that can operate in dynamic environments with limited computational resources, we propose a spatiotemporal context integration framework that improves the perceptual capabilities of a given robot vision system. Rather than treating vision, tracking, and self-localization separately, we link them through a context filter so that their joint performance improves beyond what can be achieved by improving each part in isolation. The framework computes: (i) an estimate of the poses of visible and non-visible objects using Kalman filters; (ii) the spatial coherence of each current detection with all other simultaneous detections and with all tracked objects; and (iii) the spatial coherence of each tracked object with all current detections. Using a Bayesian approach, we calculate the a-posteriori probability of each detected and tracked object, which is then used in a filtering stage. As a first application of the framework, we address the detection of static objects in the RoboCup Standard Platform League, where Nao humanoid robots are employed. The proposed system is validated in simulations and on real video sequences. In noisy environments, it greatly reduces the number of false detections and effectively improves the robot's self-localization.
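The pipeline the abstract describes (Kalman pose prediction, spatial coherence scoring, and Bayesian posterior filtering) can be illustrated in miniature. The following is a hedged 1-D sketch, not the paper's implementation: the class and function names, the noise parameters, and the false-detection rate are all assumptions made for the example.

```python
import math

class KalmanTrack:
    """Minimal 1-D constant-position Kalman filter for one tracked object.
    Noise values (q, r) are illustrative, not taken from the paper."""
    def __init__(self, x0, p0=1.0, q=0.01, r=0.25):
        self.x, self.p = x0, p0   # pose estimate and its variance
        self.q, self.r = q, r     # process and measurement noise

    def predict(self):
        # Uncertainty grows while the object is not observed.
        self.p += self.q

    def update(self, z):
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)

def coherence(z, x, p, r=0.25):
    """Spatial coherence of a detection z with a tracked pose (x, p),
    modeled here as a Gaussian likelihood of the innovation."""
    var = p + r
    return math.exp(-0.5 * (z - x) ** 2 / var) / math.sqrt(2 * math.pi * var)

def posterior(prior, coh, false_rate=0.05):
    """Bayesian a-posteriori probability that a detection is a true object,
    combining the detector's prior confidence with its spatial coherence.
    false_rate is an assumed density for spurious detections."""
    true_ev = prior * coh
    false_ev = (1.0 - prior) * false_rate
    return true_ev / (true_ev + false_ev)

track = KalmanTrack(x0=0.0)
track.update(0.1)

# A detection near the predicted pose vs. a spatially incoherent one:
good = posterior(0.8, coherence(0.12, track.x, track.p))
bad = posterior(0.8, coherence(3.0, track.x, track.p))
```

A filtering stage would then keep only detections whose posterior exceeds a threshold, so `good` survives while `bad` is rejected even though both detections carried the same prior confidence from the detector.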
