Abstract
Autonomous navigation is a traditional research topic in intelligent robotics and vehicles: a robot must perceive its environment through onboard sensors, such as cameras or laser scanners, in order to drive to its goal. Most research to date has focused on developing a large and smart "brain" to give robots autonomous capability. An autonomous mobile robot must answer three fundamental questions: 1) Where am I going? 2) Where am I? and 3) How do I get there? To answer these questions, a robot requires a massive spatial memory and considerable computational resources for perception, localization, path planning, and control. It is not yet feasible to deliver this centralized intelligence for real-life applications such as autonomous ground vehicles and wheelchairs in care centers. In fact, most autonomous robots try to mimic how humans navigate, interpreting images taken by cameras and then making decisions accordingly, and this approach encounters a number of difficulties.
Highlights
Autonomous navigation is a traditional research topic in intelligent robotics and vehicles; it requires a robot to perceive its environment through onboard sensors, such as cameras or laser scanners, to enable it to drive to its goal.
This paper presents an efficient scheme for unifying routing, path planning, trajectory generation, and motion control for distributed wireless sensors.
The wireless visual sensors in the WiME were provided with unambiguous semantics for routing, control, and image processing to support the navigation of "non-intelligent robots".
Summary
Research on intelligent vehicle and mobile robot navigation has focused mostly on the development of a large and smart "brain" to gain autonomous capability by mimicking Homo sapiens. This paper reports an intelligent environment with a mosaic of wireless camera eyes to support the navigation and control of mobile robots. The mosaic of camera eyes distributes the massive on-board intelligence required for autonomous systems into the environment, so that a robot with less intelligence can exhibit sophisticated mobility. The solution reported here uses multiple Bloom filters for the efficient storage of routing information and an active-contour-based scheme for path planning, trajectory generation, and motion control. A prototype intelligent environment consisting of 30 wireless visual sensors was developed for indoor navigation. The integrated experiments demonstrated the mobility of an environment-controlled wheelchair.
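The paper's multiple-Bloom-filter routing scheme is not detailed here, but the underlying data structure is standard: a Bloom filter stores set membership in a fixed number of bits, with false positives possible but no false negatives, which suits memory-constrained wireless sensors. The following Python sketch is an illustration only; the node and destination names, filter sizes, and per-link organization are assumptions, not the paper's design.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: space-efficient set membership with
    possible false positives and no false negatives."""

    def __init__(self, num_bits=256, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0  # bit array packed into a single integer

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

# Hypothetical use: one filter per outgoing link of a sensor node.
# The node tests which link's filter contains the destination and
# forwards the robot toward that neighbour.
link_a = BloomFilter()
link_b = BloomFilter()
for dest in ("room_101", "room_102"):
    link_a.add(dest)
link_b.add("room_201")

print("room_101" in link_a)  # True
```

A node storing only a few hundred bits per link can thus answer "does this link lead toward the destination?" without keeping explicit routing tables, at the cost of occasionally forwarding along a wrong link when a false positive occurs.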