Abstract

Word representations, usually derived from a large corpus and endowed with rich semantic information, have been widely applied to natural language tasks. Traditional deep language models, built on dense word representations, require large amounts of memory and computing resources. Brain-inspired neuromorphic computing systems offer better biological interpretability and lower energy consumption, but still face major difficulties in representing words in terms of neuronal activities, which has restricted their application to more complicated downstream language tasks. Comprehensively exploring the diverse neuronal dynamics of both integration and resonance, we investigate three spiking neuron models for post-processing the original dense word embeddings, and test the generated sparse temporal codes on several tasks covering both word-level and sentence-level semantics. The experimental results show that our sparse binary word representations perform on par with, or even better than, the original word embeddings in capturing semantic information, while requiring less storage. Our methods provide a robust foundation for representing language in terms of neuronal activities, which could potentially be applied to future downstream natural language tasks on neuromorphic computing systems.
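
To illustrate the general idea of converting a dense embedding into a sparse binary spike code, the sketch below drives one simple leaky integrate-and-fire (LIF) neuron per embedding dimension and records the resulting spike raster. This is a minimal illustration under assumed parameters (`timesteps`, `tau`, `threshold`, `gain`), not the three neuron models actually evaluated in the paper.

```python
# Minimal illustrative sketch: encode a dense word embedding as a sparse
# binary spike raster with a per-dimension LIF neuron. The model choice and
# all parameters are assumptions for illustration only.
import numpy as np

def lif_encode(embedding, timesteps=50, tau=10.0, threshold=1.0, gain=2.0):
    """Drive one LIF neuron per dimension with a constant input current
    derived from that dimension's value; return a (dims x timesteps)
    binary spike raster."""
    dims = embedding.shape[0]
    # Rescale values to non-negative currents; gain > threshold lets the
    # largest dimensions actually reach the firing threshold (assumption).
    current = gain * (embedding - embedding.min()) / (np.ptp(embedding) + 1e-8)
    v = np.zeros(dims)                               # membrane potentials
    spikes = np.zeros((dims, timesteps), dtype=np.uint8)
    for t in range(timesteps):
        v += (current - v) / tau                     # leaky integration step
        fired = v >= threshold
        spikes[fired, t] = 1                         # emit binary spikes
        v[fired] = 0.0                               # reset after firing
    return spikes

rng = np.random.default_rng(0)
dense = rng.standard_normal(300)                     # stand-in for a 300-d embedding
raster = lif_encode(dense)
print("sparsity:", raster.mean())                    # fraction of active bits
```

Under this encoding, larger embedding values fire earlier and more often, so semantic magnitude is preserved in spike timing and rate while most of the raster stays zero, which is what makes the representation cheap to store.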
