Abstract

Neural learning plays an important role in many applications. In this paper, we propose a new learning paradigm for neural networks. Most existing neural models train network parameters, including connection weights and biases, by optimizing a loss or energy function. Inspired by associative learning in the brain, we instead propose to associate different patterns by modeling their joint distribution with a hierarchical architecture. We first define an energy function based on the distance between the hierarchical features of different patterns, and then construct a Gibbs distribution from this energy field. Optimizing the model requires estimating the expectation of the gradient via sampling. Unlike probabilistic neural models with simple architectures, such as the restricted Boltzmann machine, the difficulty of optimizing this model lies in the intractability of the probability from which samples must be drawn. We therefore propose an optimization-based sampling method. After learning, the conditional distribution can be derived, and an unknown pattern can likewise be generated by sampling. Compared with existing neural learning models, the proposed deep associative learning directly associates different patterns and can be applied to many learning problems. Experiments on classification, image transformation, and image change detection verify the effectiveness of the proposed learning paradigm.
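To make the described formulation concrete, the following is a minimal, hypothetical sketch of the kind of model the abstract outlines: two feature hierarchies for the two patterns, an energy equal to the squared distance between their features, a Gibbs distribution p(x, y) ∝ exp(-E(x, y)), and gradient-based (Langevin-style) sampling standing in for the paper's optimization-based sampler. The encoder sizes, the squared-distance energy, and the contrastive training step are assumptions for illustration, not the paper's actual definitions.

```python
# Hypothetical sketch (not the paper's implementation) of an associative
# energy model: E(x, y) = ||f(x) - g(y)||^2 and p(x, y) proportional to exp(-E).
import torch
import torch.nn as nn

def mlp(in_dim, hid, out_dim):
    # Small stand-in for a "hierarchical" feature extractor.
    return nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU(), nn.Linear(hid, out_dim))

class AssociativeEnergy(nn.Module):
    """Energy between the hierarchical features of two associated patterns."""
    def __init__(self, dx, dy, hid=64, dz=32):
        super().__init__()
        self.f = mlp(dx, hid, dz)   # features of pattern x
        self.g = mlp(dy, hid, dz)   # features of pattern y
    def forward(self, x, y):
        return ((self.f(x) - self.g(y)) ** 2).sum(dim=-1)

def sample_y_given_x(model, x, dy, steps=50, lr=0.1, noise=0.01):
    """Approximate y ~ p(y|x) by gradient descent on the energy plus noise
    (a Langevin-style surrogate for the optimization-based sampler)."""
    y = torch.randn(x.size(0), dy, requires_grad=True)
    for _ in range(steps):
        e = model(x, y).sum()
        grad, = torch.autograd.grad(e, y)
        with torch.no_grad():
            y -= lr * grad
            y += noise * torch.randn_like(y)
        y.requires_grad_(True)
    return y.detach()

def contrastive_step(model, opt, x, y, dy):
    """Maximum-likelihood-style update for the Gibbs distribution:
    lower energy on observed pairs, raise it on model samples."""
    y_neg = sample_y_given_x(model, x, dy)
    loss = model(x, y).mean() - model(x, y_neg).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

if __name__ == "__main__":
    dx, dy = 8, 8
    model = AssociativeEnergy(dx, dy)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(128, dx)
    y = x.flip(-1)                  # toy "associated" pattern pairs
    for _ in range(100):
        loss = contrastive_step(model, opt, x, y, dy)
    print("final contrastive loss:", loss)
```

After training, the same gradient-based sampler plays the role of inference: given an observed pattern x, the associated pattern is generated by drawing y from the learned conditional distribution.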
