In recent years, various computational models have been proposed in neuroscience, but only a few have been applied to robot visual navigation through engineering approaches. Moreover, these engineering methods lack environmental adaptability, particularly for visual place recognition under weak or uncontrollable lighting fluctuations. To address this issue and improve visual template matching and map loop-closure detection in challenging lighting environments, this paper proposes a bionic visual navigation model that combines two neural network architectures: the Pulse Coupled Neural Network (PCNN) and the Continuous Attractor Neural Network (CANN). In our navigation model, the visual features of the environment are encoded as temporal information by the spiking model and fed into the local view cells for visual template matching. A pose cell network then combines the similarity between the current and previous templates with odometry data to encode spatial information, which is stored as an experience map. To validate the effectiveness of the proposed model, we conducted evaluations on datasets collected in our library and campus as well as on an open-source office dataset. The experimental results show that our algorithm increases the F1-score of template matching by approximately 10.5%, 35.8%, 61.7%, and 1.9% in each dataset, compared with the conventional RatSLAM method. Furthermore, our algorithm generates a more accurate map that closely matches the real-world trajectory without compromising computation time. These results suggest that our bionic visual navigation model is reliable under both standard and extreme lighting conditions.
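The abstract does not give implementation details of the PCNN-based encoding, so the sketch below is only a rough illustration of the general idea: a standard simplified PCNN turns a grayscale frame into a temporal firing signature, and a hypothetical cosine-similarity matcher compares that signature against stored local view templates. All parameter names and values (`beta`, `alpha_f`, the normalization, the 8-neighbour coupling) are assumptions, not the authors' actual settings.

```python
import numpy as np

def pcnn_signature(img, n_iters=40, beta=0.2,
                   alpha_f=0.1, alpha_l=1.0, alpha_e=1.0,
                   v_f=0.5, v_l=0.2, v_e=20.0):
    """Encode a grayscale image as a temporal firing signature with a
    simplified Pulse Coupled Neural Network (one neuron per pixel).
    Parameter values are illustrative assumptions, not the paper's."""
    s = img.astype(np.float64) / 255.0      # external stimulus per pixel
    f = np.zeros_like(s)                    # feeding input
    l = np.zeros_like(s)                    # linking input
    e = np.ones_like(s)                     # dynamic threshold
    y = np.zeros_like(s)                    # binary pulse output
    signature = np.zeros(n_iters)

    for t in range(n_iters):
        # 8-neighbour coupling of the previous pulse map
        p = np.pad(y, 1)
        neigh = (p[:-2, :-2] + p[:-2, 1:-1] + p[:-2, 2:] +
                 p[1:-1, :-2] +               p[1:-1, 2:] +
                 p[2:,  :-2] + p[2:,  1:-1] + p[2:,  2:])
        f = np.exp(-alpha_f) * f + v_f * neigh + s
        l = np.exp(-alpha_l) * l + v_l * neigh
        u = f * (1.0 + beta * l)            # internal activity
        y = (u > e).astype(np.float64)      # neurons fire above threshold
        e = np.exp(-alpha_e) * e + v_e * y  # raise threshold where fired
        signature[t] = y.sum()              # pulses per step -> temporal code

    return signature / (signature.sum() + 1e-12)

def template_similarity(sig_a, sig_b):
    """Cosine similarity between two firing signatures (hypothetical matcher)."""
    return float(np.dot(sig_a, sig_b) /
                 (np.linalg.norm(sig_a) * np.linalg.norm(sig_b) + 1e-12))
```

In such a pipeline, the current frame's signature would be compared against every stored local view template; a similarity above a chosen threshold would reactivate that template and, together with pose cell activity and odometry, could support loop-closure detection in the experience map.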