Abstract

To account for the rapidity of visual processing, we explore visual coding strategies using a one-pass, feed-forward spiking neural network. We based our model on the work of Van Rullen and Thorpe [Neural Comput. 13 (6) (2001) 1255], which constructs a retinal representation using an orthogonal wavelet transform. This strategy provides a spike code thanks to a rank order coding scheme, which offers an alternative to the classical spike frequency coding scheme. We extended this model to efficient representations in arbitrary linear generative models by implementing lateral interactions on top of this feed-forward model. The method uses a matching pursuit scheme, recursively detecting in the image the best match with the elements of a dictionary and then subtracting it, and may similarly define a visual spike code. In particular, this transform can be used with large, arbitrary dictionaries, so that we may define an over-complete representation supporting an efficient sparse spike coding scheme in arbitrary multi-layered architectures. We show here extensions of this method of computing with spike events, introducing an adaptive scheme leading to the emergence of V1-like receptive fields and a model of bottom-up saliency pursuit.
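The matching pursuit step described above can be illustrated with a minimal sketch. This is not the authors' implementation; it only shows the generic greedy loop the abstract names: at each iteration, pick the dictionary element best correlated with the residual, emit that choice as a rank-ordered spike event, and subtract its contribution. The function name, the convention that atoms are unit-norm rows of `dictionary`, and the fixed iteration count are all illustrative assumptions.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy matching pursuit sketch (not the paper's code).

    `dictionary` rows are assumed to be unit-norm atoms. Each iteration
    selects the atom best correlated with the current residual, records
    a (rank-ordered) spike event, and subtracts the atom's contribution.
    """
    residual = np.asarray(signal, dtype=float).copy()
    events = []  # list of (atom index, coefficient), ordered by rank
    for _ in range(n_iter):
        correlations = dictionary @ residual
        k = int(np.argmax(np.abs(correlations)))   # best-matching atom
        coeff = correlations[k]
        events.append((k, coeff))
        residual -= coeff * dictionary[k]           # subtract the match
    return events, residual
```

Note that the ordered list `events` is the spike code: the rank of each event, rather than a firing rate, carries the information, which is what lets an over-complete dictionary yield a sparse representation.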
