Abstract

In a small operational space, e.g., at the mesoscale or microscale, movements must be controlled carefully because the manipulated objects are fragile. This article proposes a novel spiking-neural-network structure that imitates the joint function of multiple brain regions during visual guidance in such a space and provides two channels to achieve collision-free movements. For state sensation, we simulate the primary visual cortex to extract features directly from multiple input images and the high-level visual cortex to obtain the object distance, which is only indirectly measurable, in Cartesian coordinates. Our approach emulates the prefrontal cortex in two respects: multiple liquid state machines predict the distances of the next several steps from the preceding trajectory, and a block-based excitation-inhibition feedforward network plans movements according to the target and the prediction. Because responding to "too close" states requires rich temporal information, we leverage a cerebellar network for the subconscious reaction. From the viewpoint of the inner pathway, these modules also form two channels. One channel runs from state extraction to attraction movement planning, both in camera coordinates, and acts as visual-servo control. The other is the collision-avoidance channel, which calculates distances, predicts trajectories, and produces the repulsion reaction, all in Cartesian coordinates. We provide appropriate supervised signals for coarse training and apply reinforcement learning to modify the synapses in accordance with reality. Simulation and experiment results validate the proposed method.
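The article's spiking implementation is not reproduced here. As a rough illustration of the prediction stage in the collision-avoidance channel, the sketch below uses a rate-based random reservoir with a linear readout (an echo-state simplification of a liquid state machine) to predict the obstacle distance several steps ahead from the preceding trajectory. All class names, parameters, and the synthetic trajectory are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class ReservoirPredictor:
    """Rate-based stand-in for one liquid state machine: a fixed random
    recurrent reservoir driven by the recent distance trajectory, with a
    ridge-regression readout predicting the distance `horizon` steps ahead."""

    def __init__(self, n_in=1, n_res=200, horizon=3, leak=0.3, rho=0.9):
        self.W_in = rng.uniform(-1, 1, (n_res, n_in))
        W = rng.standard_normal((n_res, n_res))
        # Scale the recurrent weights to spectral radius `rho` for stability.
        self.W = rho * W / np.max(np.abs(np.linalg.eigvals(W)))
        self.leak, self.horizon = leak, horizon
        self.W_out = None

    def _states(self, traj):
        # Run the leaky-tanh reservoir over the distance sequence.
        x = np.zeros(self.W.shape[0])
        states = []
        for d in traj:
            pre = self.W_in @ np.atleast_1d(d) + self.W @ x
            x = (1 - self.leak) * x + self.leak * np.tanh(pre)
            states.append(x.copy())
        return np.array(states)

    def fit(self, traj, ridge=1e-4):
        # Pair each reservoir state with the distance `horizon` steps later.
        X = self._states(traj)[: -self.horizon]
        y = np.asarray(traj)[self.horizon :]
        A = X.T @ X + ridge * np.eye(X.shape[1])
        self.W_out = np.linalg.solve(A, X.T @ y)

    def predict_next(self, traj):
        # Predicted distance `horizon` steps after the end of `traj`.
        return self._states(traj)[-1] @ self.W_out

# Usage on a synthetic obstacle-distance trajectory:
t = np.linspace(0, 6 * np.pi, 400)
traj = 1.0 + 0.5 * np.sin(t)
lsm = ReservoirPredictor(horizon=3)
lsm.fit(traj[:300])
print(lsm.predict_next(traj[:350]))
```

In the paper's architecture, several such predictors (one per prediction step) would feed the planning network; here a single readout with a fixed horizon stands in for that ensemble.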
