Abstract

In nature, mammals rely on vision and self-motion cues to distinguish directions and navigate accurately and stably. Inspired by the way neurons in the mammalian brain represent the spatial environment, we propose a brain-inspired positioning method based on multi-sensor input to achieve accurate navigation in the absence of satellite signals. In applied brain-inspired engineering research, it is still uncommon to fuse information from multiple sensors to improve positioning accuracy and to decode navigation parameters from the encoded activity of a brain-inspired model. This paper therefore establishes a head-direction cell model and a place cell model with application potential, based on continuous attractor neural networks (CANNs), to encode visual and inertial inputs, and then decodes direction and position from the firing response of the neuron population. The experimental results confirm that the brain-inspired navigation model integrates multiple sources of information, outputs more accurate and stable navigation parameters, and generates motion paths. The proposed model supports the further development of brain-inspired navigation research.
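
To make the encoding and decoding steps concrete, the following minimal sketch (in Python) simulates a ring-attractor head-direction cell population whose activity bump is driven by a gyroscope yaw rate, and decodes the heading from the population firing response. The cell count, kernel width, gain, and time constants are illustrative assumptions rather than the parameters used in the paper.

import numpy as np

# Minimal sketch of a ring-attractor (CANN) head-direction cell population.
# The cell count, kernel width, gain, and time constants are illustrative
# assumptions, not the parameters used in the paper.
N = 100                                                   # head-direction cells
pref = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)   # preferred headings (rad)

# Excitatory-centre / inhibitory-surround recurrent weights on the ring.
diff = pref[:, None] - pref[None, :]
K = np.exp(8.0 * (np.cos(diff) - 1.0))
W = 1.5 * (K / K.sum(axis=1, keepdims=True) - 1.0 / N)    # local excitation - global inhibition

def step(r, yaw_rate, dt=0.01, tau=0.05):
    """One Euler step: attractor dynamics plus gyroscope-driven transport.

    The recurrent term maintains a localized activity bump; the transport
    term -yaw_rate * dr/dtheta moves the bump at the measured yaw rate (rad/s).
    """
    recurrent = (-r + np.maximum(W @ r, 0.0)) / tau
    transport = -yaw_rate * np.gradient(r, pref)
    r = np.maximum(r + dt * (recurrent + transport), 0.0)
    return r / r.sum()                                     # keep total activity fixed

def decode_heading(r):
    """Population-vector decoding of the bump position (estimated heading)."""
    return np.angle(np.sum(r * np.exp(1j * pref))) % (2.0 * np.pi)

# Initialise a bump at heading 0 and integrate a constant turn of 0.5 rad/s for 2 s.
r = np.exp(8.0 * (np.cos(pref) - 1.0))
r /= r.sum()
for _ in range(200):
    r = step(r, yaw_rate=0.5)
print("decoded heading (rad):", decode_heading(r))         # roughly 1.0 rad expected

Because the transport term advects the bump at the measured yaw rate, the decoded bump position tracks the integrated heading during dead reckoning; visual cues would enter as an additional external input that pins the bump to a recognized direction.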

Highlights

  • Unmanned mobile platforms have a wide range of applications in many industries

  • Location estimation methods are usually based on probabilistic models, such as the extended Kalman filter (EKF) [3], the unscented Kalman filter (UKF) [4], and the particle filter (PF) [5]

  • The curve results show that both the EKF and our proposed model can effectively integrate inertial measurement unit (IMU) and visual odometry data to improve positioning accuracy (a minimal EKF fusion sketch follows these highlights)
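
For reference, the sketch below shows the kind of conventional EKF fusion referred to above: a minimal planar filter that predicts with IMU-derived speed and yaw rate and corrects with a visual-odometry position measurement. The state layout and noise covariances are assumptions chosen for illustration, not values from the paper.

import numpy as np

# Minimal planar EKF sketch of IMU / visual-odometry fusion.
# State x = [px, py, yaw]; the IMU supplies forward speed v and yaw rate w.
# Noise values below are assumed, purely for illustration.

def f(x, v, w, dt):
    """Motion model driven by IMU-derived speed and yaw rate."""
    px, py, yaw = x
    return np.array([px + v * dt * np.cos(yaw),
                     py + v * dt * np.sin(yaw),
                     yaw + w * dt])

def F_jac(x, v, dt):
    """Jacobian of the motion model with respect to the state."""
    _, _, yaw = x
    return np.array([[1.0, 0.0, -v * dt * np.sin(yaw)],
                     [0.0, 1.0,  v * dt * np.cos(yaw)],
                     [0.0, 0.0,  1.0]])

H = np.array([[1.0, 0.0, 0.0],           # visual odometry observes position only
              [0.0, 1.0, 0.0]])
Q = np.diag([0.01, 0.01, 0.001])         # process noise (assumed)
R = np.diag([0.05, 0.05])                # visual-odometry noise (assumed)

def ekf_step(x, P, v, w, z_vo, dt=0.1):
    # Predict with the IMU-driven motion model.
    x_pred = f(x, v, w, dt)
    F = F_jac(x, v, dt)
    P_pred = F @ P @ F.T + Q
    # Update with the visual-odometry position measurement.
    y = z_vo - H @ x_pred
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new

# Example: one predict/update cycle starting from the origin.
x, P = np.zeros(3), np.eye(3) * 0.1
x, P = ekf_step(x, P, v=1.0, w=0.1, z_vo=np.array([0.11, 0.0]))
print("fused state:", x)

The brain-inspired model replaces this explicit filter with population coding, but both approaches aim to correct inertial drift with visual measurements.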


Summary

Introduction

Unmanned mobile platforms (such as robots, unmanned vehicles, and unmanned aerial vehicles) have a wide range of applications in many industries. Yu et al. proposed NeuroSLAM, a brain-inspired 4DoF (degrees of freedom) SLAM system based on computational models of 3D head-direction cells and 3D grid cells, with visual odometry providing self-motion cues [17]. However, most brain-inspired navigation and positioning methods take their perceptual input from a single sensor, such as a visual sensor, and research on decoding navigation parameters from the perceptual information of multiple sensors is lacking. We propose an effective and robust positioning method that combines inertial and visual sensor data for brain-inspired navigation, and we develop a brain-inspired inertial/visual navigation model for positioning in satellite-jamming environments. Specifically, we propose a head-direction cell model and a place cell encoding model based on continuous attractor neural networks to fuse inertial and visual information. The sensory input module consists of an inertial measurement unit (IMU) and a camera, the IMU being composed of gyroscopes and accelerometers.
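
As a rough illustration of place cells' encoding and multi-sensor fusion, the sketch below represents position with Gaussian-tuned place cells on a 2D grid, combines the population responses produced by an inertial estimate and a visual estimate, and decodes the fused position from the population firing. The grid size, tuning width, and the multiplicative combination rule are simplifying assumptions, not the paper's exact model.

import numpy as np

# Minimal sketch of place-cell encoding/decoding on a 2D grid. The grid size,
# tuning width, and the way inertial and visual cues are combined are
# illustrative assumptions.
G = 40                                     # place cells per axis (G*G cells)
xs = np.linspace(0.0, 10.0, G)             # preferred x positions (m)
ys = np.linspace(0.0, 10.0, G)             # preferred y positions (m)
PX, PY = np.meshgrid(xs, ys, indexing="ij")

def encode(pos, sigma=0.5):
    """Gaussian place-cell firing rates for a position estimate."""
    d2 = (PX - pos[0]) ** 2 + (PY - pos[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def decode(rates):
    """Centre-of-mass (population) decoding of position from firing rates."""
    w = rates / rates.sum()
    return np.array([np.sum(w * PX), np.sum(w * PY)])

# Dead-reckoned (inertial) and visual-odometry position estimates disagree;
# multiplying their population responses pulls the fused bump between them.
inertial_pos = np.array([4.8, 5.1])
visual_pos = np.array([5.2, 4.9])
fused_rates = encode(inertial_pos) * encode(visual_pos)
print("fused position:", decode(fused_rates))

Multiplying the two population responses simply illustrates how consistent estimates reinforce each other; in the CANN-based model, the recurrent attractor dynamics play this corrective role.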

  • Brain-Inspired Navigation Model Composition
  • Head-Direction Cells’ Encoding
  • Place Cells’ Encoding
  • Simulation Data Experiment
  • Real-World Data Experiment
  • Discussion
  • Model Parameter Adjustment
  • Findings
  • Other Dataset Experiments
