Abstract

Exploration and navigation (human or robot) take place at two distinct scales of space. Small-scale space lies within the sensory horizon of the agent, where the agent can reliably localize itself and build a metrically accurate map within a local frame of reference. Large-scale space is space whose structure extends beyond the sensory horizon of the agent. This is the space of the cognitive map, which must be learned by merging information gathered during exploration.

Inspired by the properties of the human cognitive map, the Spatial Semantic Hierarchy (SSH) shows how several different ontologies can be used together to represent knowledge of large-scale and small-scale space. The basic SSH uses hill-climbing and trajectory-following control laws to explore the environment even with very limited prior knowledge of sensor semantics, but its knowledge of local space is quite limited. The Hybrid SSH (HSSH) exploits prior knowledge of the sensors to build local metrical maps of small-scale space. These maps can be abstracted to capture the qualitative decision structure of local space, making it possible to build a global topological map, which can serve as a skeleton for building a global metrical map when resources permit. Factoring spatial knowledge in this way avoids problems that afflict other approaches to exploration and mapping.

Computer vision provides far richer sensor data than the range sensors used in the original development of the SSH and HSSH. We have been developing vision-based methods for detecting hazards and identifying the local structure of the environment; the other levels of representation are largely unchanged. The multiple ontologies in the HSSH naturally support robust representation and learning of spatial knowledge, as well as multiple levels of human-robot interaction.

