Abstract

Spatio-temporal information processing is fundamental to both brain function and AI applications. Current strategies for spatio-temporal pattern recognition usually involve explicit feature extraction followed by feature aggregation, which requires a large amount of labeled data. In the present study, motivated by the subcortical visual pathway and the early stages of the auditory pathway for motion and sound processing, we propose a novel brain-inspired computational model for generic spatio-temporal pattern recognition. The model consists of two modules: a reservoir module, which projects complex spatio-temporal patterns into spatially separated neural representations via its recurrent dynamics, and a decision-making module, which reads out those representations by integrating information over time. The two modules are linked together using labeled examples. Using synthetic data, we demonstrate that the model can extract the frequency and order information of temporal inputs. We then apply the model to successfully reproduce the looming-pattern discrimination behavior observed in experiments. Furthermore, we apply the model to a gait recognition task and demonstrate that it accomplishes recognition in an event-based manner and outperforms deep learning counterparts when training data is limited.
