Abstract

Humans have an innate ability to model, perceive, and plan within their environment while simultaneously performing tasks; replicating this ability remains a challenging problem in robotic cognition. We address this issue by proposing a neuro-inspired cognitive navigation framework composed of three major components: a semantic modeling framework (SMF), a semantic information processing (SIP) module, and a semantic autonomous navigation (SAN) module, which together enable the robot to perform cognitive tasks. The SMF creates an environment database using the Triplet Ontological Semantic Model (TOSM) and builds semantic models of the environment. The environment maps generated from these semantic models are stored in an on-demand database and downloaded by the SIP and SAN modules when the robot requires them. The SIP module contains active environment perception components for recognition and localization; it also feeds relevant perception information to the behavior planner so that tasks can be performed safely. The SAN module uses a behavior planner connected to a knowledge base and a behavior database, which it queries during action planning and execution. The main contributions of our work are the development of the TOSM, the integration of the SMF, SIP, and SAN modules into a single framework, and the interaction between these components based on findings from cognitive science. We deploy the cognitive navigation framework on a mobile robot platform, considering implicit and explicit constraints for autonomous robot navigation in a real-world environment. The robotic experiments demonstrate the validity of the proposed framework.
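
As a rough illustration of the on-demand database idea described above, the sketch below shows how a module might query environment knowledge stored as subject-predicate-object triples before loading a map. This is a minimal sketch under that assumption; the names used here (Triple, OnDemandDB, query) are hypothetical and do not reproduce the paper's actual interfaces.

```python
# Minimal, hypothetical sketch of an on-demand semantic knowledge query.
# Assumes TOSM-style knowledge is stored as (subject, predicate, object) triples;
# class and method names are illustrative, not the paper's API.

from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    subject: str    # e.g., "printer_room_1"
    predicate: str  # e.g., "contains"
    obj: str        # e.g., "printer_3"

class OnDemandDB:
    def __init__(self, triples):
        self.triples = list(triples)

    def query(self, subject=None, predicate=None, obj=None):
        """Return all triples matching the given (partially specified) pattern."""
        return [t for t in self.triples
                if (subject is None or t.subject == subject)
                and (predicate is None or t.predicate == predicate)
                and (obj is None or t.obj == obj)]

# Example: a perception module asks which objects a place is expected to contain
# before trying to recognize that place from camera detections.
db = OnDemandDB([
    Triple("printer_room_1", "contains", "printer_3"),
    Triple("printer_room_1", "connected_to", "corridor_2"),
    Triple("computer_lab_1", "contains", "monitor_7"),
])
expected_objects = [t.obj for t in db.query(subject="printer_room_1",
                                            predicate="contains")]
print(expected_objects)  # ['printer_3']
```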

Highlights

  • Environment modeling, recognition, and planning are fundamental capabilities that enable intelligent systems, such as mobile robots, to understand and perceive complex environments much as a human does and to perform tasks reliably

  • Our semantic information processing (SIP) module endows the robot with a cognitive vision capability inspired by the human tendency to recognize places by the objects present in them; terms such as “computer lab” or “printer room” are how humans commonly label places based on their contents (see the sketch after this list)

  • When the mobile robot navigates a real-world environment, it sends sensory data to the SIP module, which passes the pre-processed data to a recognition model held in working memory; the model receives periodic visual updates as the robot navigates (see the sketch after this list)
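
The sketch below illustrates the two highlights above: object-based place recognition running inside a perception loop that keeps recent detections in working memory. The object-to-place table, the WorkingMemory class, and detect_objects are illustrative assumptions, not the paper's SIP implementation.

```python
# Illustrative sketch of object-based place recognition inside a perception loop.
# The detector, memory structure, and object-to-place table are assumptions;
# the paper's SIP module and recognition model are not reproduced here.

from collections import Counter

# Hypothetical prior: which objects are typical of which place category.
OBJECT_TO_PLACE = {
    "monitor": "computer_lab",
    "keyboard": "computer_lab",
    "printer": "printer_room",
    "paper_tray": "printer_room",
}

class WorkingMemory:
    """Holds the most recent detections between periodic visual updates."""
    def __init__(self):
        self.recent_detections = []

    def update(self, detections):
        self.recent_detections = detections

def detect_objects(frame):
    # Stand-in for a learned object detector; here each "frame" is already a list of labels.
    return frame

def recognize_place(detections):
    """Vote for the place whose typical objects were seen most often."""
    votes = Counter(OBJECT_TO_PLACE[d] for d in detections if d in OBJECT_TO_PLACE)
    return votes.most_common(1)[0][0] if votes else "unknown"

def perception_loop(frames, memory):
    """For each periodic visual update, refresh working memory and estimate the place."""
    for frame in frames:
        memory.update(detect_objects(frame))
        yield recognize_place(memory.recent_detections)

memory = WorkingMemory()
print(list(perception_loop([["monitor", "keyboard"], ["printer"]], memory)))
# ['computer_lab', 'printer_room']
```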

Summary

Introduction

Environment modeling, recognition, and planning are fundamental capabilities that enable intelligent systems, such as mobile robots, to understand and perceive complex environments much as a human does and to perform tasks reliably. The layered structure of a CNN model is well suited to representing the lower areas of the human visual cortex [9]. Motivated by these developments, we combine the strengths of a deep learning-based CNN model with an on-demand database for semantic object-based place recognition and robot localization. Aiming to solve POMDPs across a variety of domains, reinforcement learning approaches use large amounts of data to approximate a policy capable of selecting an optimal action based solely on the current state. Our framework uses the on-demand database to ensure real-time capability, and it integrates the semantic modeling, information processing, and autonomous navigation modules to perform a task.
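
The reinforcement-learning idea mentioned above, approximating a policy that selects an action from the current state, can be illustrated with a minimal tabular Q-learning sketch. This is a generic example under standard assumptions (discrete states and actions, a per-step reward), not the learning setup used in the paper.

```python
# Generic tabular Q-learning sketch: learn a policy mapping state -> action.
# Illustrates approximating a policy from experience; the paper's actual RL
# setup for navigation is not specified here.

import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # learning rate, discount, exploration

q_table = defaultdict(float)             # (state, action) -> estimated return

def choose_action(state, actions):
    """Epsilon-greedy action selection from the current Q estimates."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: q_table[(state, a)])

def update(state, action, reward, next_state, actions):
    """One Q-learning step toward reward + discounted best next value."""
    best_next = max(q_table[(next_state, a)] for a in actions)
    q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                         - q_table[(state, action)])

# Toy usage: a 1-D corridor where the agent should move right toward the goal at 3.
ACTIONS = ["left", "right"]
for _ in range(500):
    state = 0
    while state < 3:
        action = choose_action(state, ACTIONS)
        next_state = min(state + 1, 3) if action == "right" else max(state - 1, 0)
        reward = 1.0 if next_state == 3 else 0.0
        update(state, action, reward, next_state, ACTIONS)
        state = next_state

print(choose_action(0, ACTIONS))  # usually "right" after training
```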

Related Work
Knowledge-Based Navigation Frameworks
Map Representation
Ontology-Based Knowledge Model
Object Recognition
Place Recognition
Localization
Planning
Mission Planning
Behavior Planning
Motion Planning
Deep Reinforcement Learning
TOSM-Based Autonomous Navigation Framework
Semantic Modeling Framework
TOSM-Based Environment Modeling
On-Demand Database
Semantic Information Processing
Semantic Autonomous Navigation
Task Planner
Behavior Planner
Action Planner
Mental Simulation
Reinforcement Learning
Experimental Environments
Experimental Sequences
Method
Conclusions