Abstract

We develop a hierarchical generative model that learns features of simple and complex cells in the primary visual cortex via multiplicative interactions, and we address the problem of learning invariant visual representations from natural image sequences.

Generally, the objective of a generative sparse coding model (Olshausen and Field, Nature, 1996) is to reconstruct the input *x* via a linear combination of feature bases *A = [A~1~, A~2~, ..., A~M~]*, such that *x = As*, where *s* is the latent representation under a sparseness constraint. In the proposed model, we add another hidden layer of complex cells to modulate the latent representation *s*, so that *s* is generated as the element-wise product of the bottom-up representation *y = A^T^x* and the top-down representation *z = Bc*, i.e., *s = y ⊙ z*. Here *c* denotes the top layer of complex cells, and *B* is the feature matrix connecting the simple cell layer to the complex cell layer.

As in the well-known bilinear models for invariant representation (Tenenbaum and Freeman, Neural Computation, 2000), the complex cell layer here represents the content variables and the simple cell layer represents the style variables. However, instead of using a third-order tensor of weight parameters as in bilinear models, our model provides a factorized formulation as the product of two second-order (matrix) weight parameters, *A* and *B*, which makes learning simpler and more efficient. Moreover, whereas bilinear models deal with two latent variables, our model contains only one latent variable, *c*, for invariant representation.

Penalties encoding sparseness and slowness priors are integrated into the model as well. In addition, non-negativity constraints on the latent variable *c* and the weight matrices *A* and *B* are enforced to better fit the known neurophysiology. We train the model on natural image sequences with a gradient descent algorithm. A dictionary of Gabor-like, orientation-selective features develops in the simple cell layer, where a topography of receptive fields emerges automatically. In this way, similar simple cell receptive fields are pooled together to produce locally invariant representations in the complex cells.
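To make the factorized generative process concrete, here is a minimal NumPy sketch of a single inference/reconstruction pass. The dimensions, the random initialization, and the function names are illustrative assumptions; the abstract specifies the model only at the level of the equations above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper):
# D-dimensional image patch, M simple cells, K complex cells.
D, M, K = 256, 100, 25

# Non-negative weight matrices, as the model constrains them.
A = np.abs(rng.normal(size=(D, M)))   # simple-cell feature bases
B = np.abs(rng.normal(size=(M, K)))   # simple-to-complex connections

def represent(x, c):
    """Latent code s as the element-wise product of the
    bottom-up drive y = A^T x and the top-down drive z = B c."""
    y = A.T @ x          # bottom-up representation
    z = B @ c            # top-down modulation from complex cells
    return y * z         # s = y (element-wise *) z

def reconstruct(s):
    """Sparse-coding style linear reconstruction x_hat = A s."""
    return A @ s

x = np.abs(rng.normal(size=D))        # placeholder input patch
c = np.abs(rng.normal(size=K))        # non-negative complex-cell activity
s = represent(x, c)
x_hat = reconstruct(s)
```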

Highlights

  • A computational model to learn the feature bases (receptive fields) of simple and complex cells in the primary visual cortex

  • Addresses the translation invariance developed in complex cells via natural image sequences

  • Total energy function: min *E*, where *E = E~0~ + E~sp~ + E~cr~* (see the sketch after this list)
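As a rough illustration of how such an energy could be assembled and minimized, the sketch below reads *E~0~* as the squared reconstruction error, *E~sp~* as the sparseness penalty on *s*, and *E~cr~* as the slowness (temporal coherence) term on *c* across consecutive frames, which is one plausible reading of the priors named in the abstract. The concrete penalty forms (L1 sparseness, squared temporal difference), the weights, and the step size are assumptions; the projected gradient step keeps *c* non-negative, matching the stated constraint.

```python
import numpy as np

def energy(x, c, c_prev, A, B, lam_sp=0.1, lam_cr=0.1):
    """Total energy E = E0 + Esp + Ecr for one frame.

    E0  : squared reconstruction error ||x - A s||^2
    Esp : L1 sparseness penalty on the simple-cell code s
    Ecr : squared temporal difference on c (slowness prior)
    Penalty forms and weights are illustrative assumptions.
    """
    y = A.T @ x
    s = y * (B @ c)                    # s = y (element-wise *) B c
    r = x - A @ s                      # reconstruction residual
    E0 = r @ r
    Esp = lam_sp * np.sum(np.abs(s))
    Ecr = lam_cr * np.sum((c - c_prev) ** 2)
    return E0 + Esp + Ecr

def infer_c_step(x, c, c_prev, A, B, lam_sp=0.1, lam_cr=0.1, eta=0.01):
    """One projected gradient-descent step on c (kept non-negative)."""
    y = A.T @ x
    s = y * (B @ c)
    r = x - A @ s
    grad = (-2.0 * B.T @ (y * (A.T @ r))        # from E0
            + lam_sp * B.T @ (y * np.sign(s))   # from Esp
            + 2.0 * lam_cr * (c - c_prev))      # from Ecr
    return np.maximum(c - eta * grad, 0.0)      # non-negativity projection
```

With the *A*, *B*, *x*, and *c* from the previous sketch (and a previous-frame code *c~prev~*), repeated calls to `infer_c_step` drive *c* toward a minimum of this energy; analogous gradient steps on *A* and *B*, also clipped at zero, would train the weights.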


