Abstract

This paper presents a scalable architecture with an integrated computer vision subsystem for fast and robust navigation of semi-autonomous mobile systems in dynamic environments. The principal approach uses not only the robots' on-board sensor systems but also fixed external vision sensors to build the required environment models. The measurements of the mobile and external sensors are fused to improve the quality of the input data. This sensor fusion is performed by an active dynamic environment model that also provides an optimised data layer for different path-planning systems. To achieve the required scalability, a distributed approach is followed: instead of one large global environment model, a distributed redundant environment model is employed, which simplifies local path planning and reduces bottlenecks in data transmission. Accordingly, the path-planning system is split into a local path planner and a global planning system.
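As a minimal illustrative sketch (not taken from the paper), fusing a mobile robot's on-board estimate with a fixed external camera's estimate into a shared environment model could be done by inverse-variance weighting; the function and all numbers below are hypothetical assumptions for illustration only:

```python
# Hypothetical sketch: variance-weighted fusion of two scalar position
# estimates - one from the robot's on-board sensors, one from a fixed
# external camera. Illustrates the general idea of sensor fusion; this
# is not the paper's actual method.

def fuse(est_a, var_a, est_b, var_b):
    """Fuse two scalar estimates by inverse-variance weighting.

    Returns the fused estimate and its (reduced) variance.
    """
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Example: on-board odometry reports x = 2.0 m (variance 0.5), while the
# external camera reports x = 2.4 m (variance 0.1); the fused estimate
# lies closer to the more certain external measurement.
x, v = fuse(2.0, 0.5, 2.4, 0.1)
```

The fused variance is smaller than either input variance, which is the sense in which combining mobile and external sensors "improves the quality of the input data" as the abstract states.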
