Abstract

A learning algorithm for a model binocular cell was derived from an information maximization principle using a low signal-to-noise-ratio approximation. The algorithm updates the cell's synaptic weights so that the information conveyed by the cell's output increases. Model binocular cells were trained with this algorithm using computer-generated stereo images as training data. As a result, cells tuned to various disparities were generated. The generated synaptic weight patterns resembled Gabor wavelets and the receptive fields of simple cells in the visual cortex; thus the cells were selective for orientation and spatial frequency as well as for disparity. Gabor functions were fitted to the generated weight patterns, and the fits indicated that the generated cells encode disparity in terms of phase disparity and/or position disparity. This result agrees with the experimental findings of Anzai et al. [J Neurophys 82 (1999) 874] and is consistent with ICA-based theoretical results [Network: Comput Neural Syst 11 (2000) 191].

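The abstract does not give the explicit update rule. As a rough illustration only, the sketch below uses a standard consequence of a low signal-to-noise-ratio assumption: for a single linear unit with additive Gaussian output noise, maximizing the information carried by the output approximately reduces to maximizing output variance under a weight-norm constraint, which a normalized Hebbian rule implements. The data shapes, patch sizes, learning rate, and variable names here are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in training data: each row concatenates a left-eye and a right-eye
# image patch (the paper uses computer-generated stereo images; the sizes
# here are assumed for illustration).
n_samples, patch_dim = 2000, 2 * 64           # two 8x8 patches per sample (assumed)
X = rng.standard_normal((n_samples, patch_dim))

# Model binocular cell: a single linear unit y = w . x.
w = rng.standard_normal(patch_dim)
w /= np.linalg.norm(w)

eta = 1e-3                                    # learning rate (assumed)
for epoch in range(5):
    for x in X:
        y = w @ x                             # cell output for this stereo patch
        w += eta * y * x                      # Hebbian step: pushes up output variance
        w /= np.linalg.norm(w)                # renormalize to keep the weights bounded

# After training, w points approximately along the direction of maximal output
# variance (the leading principal component of the patch ensemble), the quantity
# that infomax favors in the low-SNR limit for a single noisy linear unit.
```

With natural stereo patches in place of the Gaussian placeholder data, the learned weight vector would reflect the correlated structure of the left and right images, which is the setting in which the paper reports Gabor-like, disparity-selective weight patterns.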