Abstract
Hierarchical generative models, such as Bayesian networks, and belief propagation have been shown to provide a theoretical framework that can account for perceptual processes, including feedforward recognition and feedback modulation. The framework explains both psychophysical and physiological experimental data and maps well onto the hierarchical distributed cortical anatomy. However, the complexity required to model cortical processes makes inference, even using approximate methods, very computationally expensive. Consequently, existing object perception models based on this approach are typically limited to tree-structured networks with no loops, use small toy examples, or fail to account for certain perceptual aspects such as invariance to transformations or feedback reconstruction. In this study we develop a Bayesian network with an architecture similar to that of HMAX, a biologically inspired hierarchical model of object recognition, and use loopy belief propagation to approximate the model operations (selectivity and invariance). Crucially, the resulting Bayesian network extends the functionality of HMAX by including top-down recursive feedback. Thus, the proposed model not only achieves successful feedforward recognition invariant to noise, occlusions, and changes in position and size, but is also able to reproduce modulatory effects such as illusory contour completion and attention. Our methodology covers key aspects such as learning using a layerwise greedy algorithm, combining feedback information from multiple parents, and reducing the number of operations required. Overall, this work extends an established model of object recognition to include high-level feedback modulation, based on state-of-the-art probabilistic approaches. The methodology employed, consistent with evidence from the visual cortex, can potentially be generalized to build models of hierarchical perceptual organization that include top-down and bottom-up interactions, for example, in other sensory modalities.
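For readers unfamiliar with the inference machinery, the sketch below illustrates loopy belief propagation on a toy cyclic graph in Python/NumPy. It is purely illustrative: the three-node cycle, binary states and potentials are assumptions made for the example, not the HMAX-like network or the specific approximations described in the paper.

```python
import numpy as np

# Minimal sum-product loopy belief propagation on a pairwise MRF.
# Illustrative only: a 3-node cycle with binary variables, not the
# HMAX-like Bayesian network described in the paper.

edges = [(0, 1), (1, 2), (2, 0)]            # the cycle makes BP "loopy"
unary = np.array([[0.7, 0.3],               # phi_i(x_i) for each node
                  [0.4, 0.6],
                  [0.5, 0.5]])
pairwise = np.array([[0.9, 0.1],            # psi_ij favouring agreement
                     [0.1, 0.9]])

# messages[(i, j)] = message from node i to node j, over states of x_j
messages = {(i, j): np.ones(2) for a, b in edges for i, j in [(a, b), (b, a)]}

def neighbours(i):
    return [b if a == i else a for a, b in edges if i in (a, b)]

for _ in range(50):                          # fixed-point iteration
    new = {}
    for i, j in messages:
        # product of the unary potential and all incoming messages except j's
        incoming = unary[i].copy()
        for k in neighbours(i):
            if k != j:
                incoming = incoming * messages[(k, i)]
        m = pairwise.T @ incoming            # marginalise over x_i
        new[(i, j)] = m / m.sum()            # normalise for stability
    messages = new

# Belief at each node: unary potential times all incoming messages.
for i in range(3):
    b = unary[i].copy()
    for k in neighbours(i):
        b = b * messages[(k, i)]
    print(f"node {i}: belief = {b / b.sum()}")
```

On tree-structured graphs these message updates compute exact marginals; on graphs with loops, as here and in the proposed model, iterating them yields only approximate beliefs, which is why the abstract speaks of approximate inference.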
Highlights
The Bayesian brain hypothesis: experimental evidence shows that feedback originating in higher-level areas, such as V4, inferotemporal (IT) cortex, the lateral occipital complex (LOC) or middle temporal (MT) cortex, which have larger and more complex receptive fields, can modify and shape V1 responses, accounting for contextual or extra-classical receptive field effects [1,2,3]. While there is relative agreement that feedback connections play a role in integrating global and local information from different cortical regions to generate an integrated percept [4,5], several differing approaches have attempted to explain the underlying mechanisms.
The model can be trained with just one image per category, an approach sometimes referred to as one-shot learning, because it employs weight sharing. Weight sharing simulates the temporal variation of the input that would naturally occur with dynamic input or with a mechanism accounting for eye saccades, so effectively it is as if the network had been trained with images at all possible locations; the weight-sharing sketch after these highlights illustrates this.
Inference in Convolutional Deep Belief Networks (CDBNs) is implemented using Gibbs sampling, whereas we employ loopy belief propagation together with a number of approximations that simplify the computations; the Gibbs sketch below illustrates the contrast.
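To make the weight-sharing argument concrete, here is a toy sketch in Python/NumPy: a single template "learned" from one example is applied with the same weights at every position, so a max over positions (C-layer-style pooling) returns the same score wherever the pattern appears. The 1-D signal, template and sizes are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Why weight sharing permits one-shot training: a template learned from
# a single example responds at every position, and pooling over positions
# yields a translation-invariant score. Toy 1-D example only.

template = np.array([1.0, -1.0, 1.0])        # "learned" from one example

def shared_weight_response(signal, w):
    """Apply the same weights w at every valid offset (weight sharing)."""
    n = len(signal) - len(w) + 1
    return np.array([signal[i:i + len(w)] @ w for i in range(n)])

pattern = np.array([1.0, -1.0, 1.0])
for shift in range(4):                       # same pattern, shifted input
    signal = np.zeros(8)
    signal[shift:shift + 3] = pattern
    s = shared_weight_response(signal, template)
    print(f"shift={shift}: max response = {s.max():.1f}")  # always 3.0
```

Because the pooled response is identical at every shift, training with a single image behaves as if the network had seen the pattern at all locations.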
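For the CDBN comparison, the sketch below shows the sampling-based alternative: one Gibbs conditional for the hidden units of a toy RBM-style layer, with Monte Carlo averages converging on the exact conditionals. The weights, layer sizes and seed are assumptions made for the example; this is not the CDBN implementation being contrasted.

```python
import numpy as np

# Gibbs sampling estimates marginals by drawing samples, in contrast to
# the deterministic message passing of belief propagation. Toy RBM-style
# conditional for illustration only.

rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(4, 3))       # visible-to-hidden weights
b = np.zeros(3)                              # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_hidden(v):
    """One Gibbs step: sample the hidden units given the visible units."""
    p = sigmoid(v @ W + b)                   # P(h_j = 1 | v)
    return (rng.random(p.shape) < p).astype(float), p

v = np.array([1.0, 0.0, 1.0, 1.0])
samples = np.array([sample_hidden(v)[0] for _ in range(5000)])
print("Monte Carlo marginals:", samples.mean(axis=0))
print("exact conditionals:   ", sample_hidden(v)[1])
```

The Monte Carlo estimates approach the true conditionals only as the number of samples grows, which is the computational trade-off relative to the deterministic message updates of belief propagation.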
Summary
Experimental evidence indicates that feedback from higher-level visual areas can modify and shape V1 responses, and several differing approaches have attempted to explain the underlying mechanisms. Overall, increasing evidence supports the proposal that Bayesian inference provides a theoretical framework that maps well onto cortical connectivity, explains both psychophysical and neurophysiological results, and can be used to build biologically plausible models of brain function [6,10,11,12]. Within this framework, Bayesian networks and belief propagation provide a rigorous mathematical foundation for these principles. Belief propagation is well suited to neural implementation owing to its hierarchical distributed organization and its homogeneous internal structure and operations [5,13,14,15,16].
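As a worked illustration of the framework summarized above, the sketch below combines a top-down prior (feedback) with ambiguous bottom-up evidence (feedforward likelihood) via Bayes' rule. The hypotheses and numbers are hypothetical, chosen only to echo the illusory-contour example from the abstract.

```python
import numpy as np

# Top-down feedback as a prior, combined with bottom-up evidence via
# Bayes' rule. Hypothetical numbers, purely illustrative.

hypotheses = ["contour present", "contour absent"]
prior = np.array([0.8, 0.2])       # top-down expectation from context
likelihood = np.array([0.4, 0.6])  # weak, ambiguous local evidence

posterior = prior * likelihood     # Bayes' rule (unnormalised)
posterior /= posterior.sum()

for h, p in zip(hypotheses, posterior):
    print(f"P({h} | evidence) = {p:.2f}")
# The prior tips ambiguous local evidence towards the globally consistent
# interpretation, analogous to illusory contour completion.
```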