In pursuing artificial intelligence for efficient collision avoidance in robots, researchers draw inspiration from the locust's looming-sensitive visual neural circuit to build efficient neural networks for collision detection. However, existing bio-inspired collision detection networks struggle with jitter streaming, a phenomenon commonly experienced, for example, when a ground robot moves across uneven terrain. Visual input under jitter streaming induces large fluctuations in grey values, which distract existing bio-inspired networks from extracting visual looming features. To overcome this limitation, we draw inspiration from the capacity of feedback loops to help the brain generate a coherent visual perception. We introduce a novel dynamic temporal variance feedback loop, regulated by a scalable functional, into the traditional bio-inspired collision detection neural network. This feedback mechanism extracts dynamic temporal variance information from the output of higher-order neurons in the conventional network to assess the fluctuation level of local neural responses, and regulates it with a scalable functional to discriminate variance induced by incoherent visual input. The regulated signal is then reintegrated into the input through a negative feedback loop, reducing signal incoherence within the network. Numerical experiments substantiate the effectiveness of the proposed feedback loop in promoting collision detection under jitter streaming. This study extends the capabilities of bio-inspired collision detection neural networks to jitter streaming conditions, offering new insight into the potential of feedback mechanisms for enhancing visual neural abilities.
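The feedback mechanism described above can be sketched roughly as follows: estimate the local temporal variance of recent responses, map it through a scaling nonlinearity, and subtract the result from the input as negative feedback. This is only an illustrative sketch under assumed choices; the sigmoid regulator and all parameter names (`window`, `gain`, `k`, `theta`) are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def variance_feedback(frames, window=5, gain=2.0, k=50.0, theta=0.05):
    """Illustrative temporal-variance negative feedback loop.

    frames : (T, H, W) array of grey-scale input in [0, 1].
    Returns a feedback-regulated input of the same shape.
    The sigmoid regulator and parameter values are assumptions.
    """
    T = frames.shape[0]
    regulated = np.empty_like(frames, dtype=float)
    for t in range(T):
        lo = max(0, t - window + 1)
        # Local temporal variance over a sliding window, a proxy for
        # the fluctuation level of local neural responses.
        var = frames[lo:t + 1].var(axis=0)
        # Assumed "scalable functional": a sigmoid that maps high
        # variance (jittery, incoherent input) toward 1 and low
        # variance (coherent input) toward 0.
        reg = 1.0 / (1.0 + np.exp(-k * (var - theta)))
        # Negative feedback: suppress the input where the local
        # response fluctuation is judged incoherent.
        regulated[t] = frames[t] * np.clip(1.0 - gain * reg, 0.0, 1.0)
    return regulated
```

With this choice of regulator, a steady stimulus passes through almost unattenuated, while pixels whose grey values fluctuate strongly over the window are suppressed before they reach the looming-detection stages.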