Abstract

Insect neural systems are a promising source of inspiration for new navigation algorithms, especially on low size, weight, and power platforms. There have been unprecedented recent neuroscience breakthroughs with Drosophila in behavioral and neural imaging experiments as well as the mapping of detailed connectivity of neural structures. General mechanisms for learning orientation in the central complex (CX) of Drosophila have been investigated previously; however, it is unclear how these underlying mechanisms extend to cases where there is translation through an environment (beyond only rotation), which is critical for navigation in robotic systems. Here, we develop a CX neural connectivity-constrained model that performs sensor fusion, as well as unsupervised learning of visual features for path integration; we demonstrate the viability of this circuit for use in robotic systems in simulated and physical environments. Furthermore, we propose a theoretical understanding of how distributed online unsupervised network weight modification can be leveraged for learning in a trajectory through an environment by minimizing orientation estimation error. Overall, our results may enable a new class of CX-derived low power robotic navigation algorithms and lead to testable predictions to inform future neuroscience experiments.

Highlights

  • Insect neural systems are a promising source of inspiration for new navigation algorithms, especially on low size, weight, and power platforms

  • Our model transforms angular velocity and visual features into a fused representation of orientation using a total of 141 neurons distributed across five populations (Fig. 1A), constrained by the connectivity patterns of neuron types observed across glomeruli in Drosophila, with plastic synaptic connections that enable the learning of visual landmarks

  • The five modeled neuron types all synapse in either the protocerebral bridge (PB) or ellipsoid body (EB) regions of the CX and include: (1) ring neurons, which are receptive to visual inputs; (2) PB-EB-Noduli (P-EN) neurons, which receive angular velocity inputs; (3) EB-PB-Gall (E-PG) neurons; (4) PB-EB-Gall (P-EG) neurons; and (5) intrinsic neurons of the PB (PIntr), referred to as Δ7 neurons (a simplified rate-based sketch of this circuit follows this list)
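
To make the circuit concrete, the following is a minimal rate-based sketch of how these five populations could be composed in Python. The 16-glomerulus layout follows the highlight above, but the population sizes, gains, time step, activity normalization, and the anti-Hebbian rule on inhibitory ring-to-E-PG synapses are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

# Minimal rate-based sketch of the five-population CX circuit described
# above. Sizes (beyond the 16-glomerulus layout), gains, dt, and the
# learning rate are illustrative assumptions, not the paper's values.

N = 16          # heading bins (glomeruli / EB wedges)
N_RING = 32     # visually receptive ring neurons

def relu(x):
    return np.maximum(x, 0.0)

# One-glomerulus rotations applied by the two P-EN copies.
shift_L = np.roll(np.eye(N), -1, axis=1)
shift_R = np.roll(np.eye(N), +1, axis=1)

def step(epg, W_ring, ring_act, ang_vel, dt=0.05):
    """One Euler step of the rate dynamics."""
    pen_L = epg * max(-ang_vel, 0.0)   # left-turn P-EN copy, velocity-gated
    pen_R = epg * max(+ang_vel, 0.0)   # right-turn P-EN copy
    peg = epg                          # P-EG feedback sustains the bump
    delta7 = epg.mean()                # Delta7 simplified as broad inhibition
    drive = (shift_L @ pen_L + shift_R @ pen_R + peg
             - delta7 - W_ring @ ring_act)
    epg = epg + dt * (-epg + relu(drive))
    return epg / (epg.sum() + 1e-9)    # keep total activity bounded

def learn(W_ring, epg, ring_act, lr=0.1):
    """Anti-Hebbian update on inhibitory ring -> E-PG synapses: inhibition
    from a ring neuron onto the currently active wedge is weakened, binding
    that visual feature to the current heading estimate."""
    return np.clip(W_ring - lr * np.outer(epg, ring_act), 0.0, 1.0)

# usage: integrate a constant right turn under a fixed visual scene
epg = np.zeros(N); epg[0] = 1.0
ring_act = np.zeros(N_RING); ring_act[5] = 1.0
W_ring = 0.5 * np.ones((N, N_RING))    # plastic inhibitory visual weights
for _ in range(500):
    epg = step(epg, W_ring, ring_act, ang_vel=1.0)
    W_ring = learn(W_ring, epg, ring_act)
heading_estimate = np.argmax(epg) * 2 * np.pi / N
```

The two P-EN copies project the E-PG activity bump back one glomerulus to either side, so gating them by signed angular velocity integrates rotation, while the plastic ring-neuron inhibition anchors the bump to learned visual features.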

Introduction

Insect neural systems are a promising source of inspiration for new navigation algorithms, especially on low size, weight, and power platforms. Many visual odometry approaches for state estimation exist[7], including high-performing Visual-Inertial Odometry (VIO) systems for state estimation on rapidly moving platforms[6]. These approaches require simplifying linearization assumptions[8], which can be addressed with more computationally intensive approaches such as particle filtering[9]. Promising approaches in deep learning for unsupervised training of visual odometry include the intermediate calculation of depth from images, which can be refined in tandem with pose estimation by offline network optimization on a collected dataset[21]. In general, training of the neural networks utilized in visual navigation is not performed during robotic system deployment, due to the computational requirements of training and the number of required training samples.
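
As a point of comparison for the filtering approaches cited above, the sketch below shows a minimal particle filter for planar heading estimation; it avoids linearization by propagating a population of heading hypotheses through nonlinear motion and measurement models. The noise scales, the single-landmark bearing measurement, and the circular-mean readout are our illustrative assumptions, not details of the cited systems.

```python
import numpy as np

# Minimal particle filter for heading estimation: the nonlinear
# alternative to linearized (EKF-style) VIO mentioned above.
rng = np.random.default_rng(0)
N_P = 500
particles = rng.uniform(-np.pi, np.pi, N_P)   # heading hypotheses (rad)
weights = np.full(N_P, 1.0 / N_P)

def wrap(a):
    """Wrap angles to [-pi, pi)."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def predict(particles, ang_vel, dt, motion_noise=0.05):
    """Propagate hypotheses through the (nonlinear) motion model."""
    return wrap(particles + ang_vel * dt
                + rng.normal(0.0, motion_noise, particles.size))

def update(weights, particles, measured_bearing, landmark_bearing,
           meas_noise=0.1):
    """Reweight by the likelihood of an observed bearing to a landmark
    whose world-frame bearing is known (illustrative sensor model)."""
    predicted = wrap(landmark_bearing - particles)
    err = wrap(measured_bearing - predicted)
    weights = weights * np.exp(-0.5 * (err / meas_noise) ** 2)
    return weights / weights.sum()

def resample(particles, weights):
    """Multinomial resampling when the effective sample size collapses."""
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights

# usage: one filter cycle, then a circular-mean heading estimate
particles = predict(particles, ang_vel=0.2, dt=0.1)
weights = update(weights, particles, measured_bearing=0.4,
                 landmark_bearing=0.5)
particles, weights = resample(particles, weights)
heading = np.arctan2(np.sum(weights * np.sin(particles)),
                     np.sum(weights * np.cos(particles)))
```

The per-step cost scales with the particle count, which is why such filters are considered more computationally intensive than linearized alternatives.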

