Abstract
After a short tutorial on the fundamentals of Bayes approaches and Bayesian Ying-Yang (BYY) harmony learning, this paper reports recent progress. A generic information harmonising dynamics of BYY harmony learning is proposed with the help of a Lagrange variety preservation principle, which provides Lagrange-like implementations of Ying-Yang alternative nonlocal search for various learning tasks and unifies attention, detection, problem solving, adaptation, learning and model selection from an information harmonising perspective. Within this framework, new algorithms are developed to implement Ying-Yang alternative nonlocal search for learning Gaussian mixtures and several typical exemplars of the linear matrix system, including factor analysis (FA), mixture of local FA, binary FA, non-Gaussian FA, de-noised Gaussian mixture, sparse multivariate regression, temporal FA and temporal binary FA, as well as a generalised bilinear matrix system that covers not only these linear models but also manifold learning, gene regulatory networks and the generalised linear mixed model. These algorithms feature automatic model selection and a unified formulation for performing unsupervised and semi-supervised learning. We also propose a principle of preserving multiple convex combinations, which leads to another type of alternative search algorithm. Finally, we provide a chronological outline of the history of BYY learning studies.
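To make the flavour of such algorithms concrete, the following is a minimal illustrative sketch, not the paper's exact harmony-learning updates: a hard-cut Ying-Yang style alternation for a one-dimensional Gaussian mixture in which components whose mixing weight decays below a threshold are discarded, mimicking the automatic model selection behaviour described above. The function name byy_style_gmm and all parameter values are our own assumptions for illustration.

```python
import numpy as np

def byy_style_gmm(x, k_init=8, prune_tol=0.02, n_iter=100, seed=0):
    """Illustrative alternating search for a 1-D Gaussian mixture with
    component pruning (a stand-in for automatic model selection)."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=k_init, replace=False).astype(float)
    var = np.full(k_init, x.var())
    w = np.full(k_init, 1.0 / k_init)
    for _ in range(n_iter):
        # Yang step: map each sample x to its best inner code y = argmax_l log p(l, x)
        log_p = (np.log(w) - 0.5 * np.log(2 * np.pi * var)
                 - 0.5 * (x[:, None] - mu) ** 2 / var)
        y = log_p.argmax(axis=1)
        # Ying step: re-estimate each component from the samples assigned to it
        for l in range(len(mu)):
            sel = x[y == l]
            if sel.size:
                w[l] = sel.size / x.size
                mu[l] = sel.mean()
                var[l] = sel.var() + 1e-6
            else:
                w[l] = 0.0
        # "Automatic model selection": drop components with negligible weight
        keep = w > prune_tol
        mu, var, w = mu[keep], var[keep], w[keep]
        w = w / w.sum()
    return w, mu, var

# Example: data from 3 Gaussians; the extra initial components get pruned away.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(m, 0.5, 300) for m in (-4.0, 0.0, 4.0)])
w, mu, var = byy_style_gmm(x)
print(len(mu), np.sort(mu))  # typically 3 components near -4, 0, 4
```

Starting from a deliberately oversized k_init and letting redundant components die out is the behaviour the abstract's "favourable nature of automatic model selection" refers to; the paper's own algorithms achieve it through the harmony-learning objective rather than this simple hard-cut heuristic.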
Highlights
Bayes approach and automatic model selection: learning in an intelligent system is characterised by three levels of inverse problems, the details of which are given in Sect. 1 of Xu (2010a, b)
A principle of preserving multiple convex combinations for implementing Bayesian Ying-Yang (BYY) harmony learning, which leads to another type of Ying-Yang alternative nonlocal search algorithm
Most of the fundamentals and major implementing techniques of BYY harmony learning were developed between 1995 and 2001; a chronological outline is given in Table 3, organised by the times at which the major innovative studies started
Summary
Bayes approach and automatic model selection: learning in an intelligent system is characterised by three levels of inverse problems, the details of which are given in Sect. 1 of Xu (2010a, b). Learning tasks associated with the front level can be viewed from the perspective of learning a mapping x → y, called the representative model, by which an observed sample x in a visible domain X is mapped into its corresponding encoding y as a signal or inner code for performing a problem-solving task such as abstraction, classification, inference or control. Such approaches roughly fall into two families. One is featured by learning the mapping x → y according to whether a principle is satisfied by the resulting inner encodings y, without explicitly taking the reverse mapping y → x into consideration. The other widely studied family is supervised learning by a linear or nonlinear mapping that makes samples of y approach the desired target samples.
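As a toy illustration of this two-family distinction (our own sketch, not taken from the paper): the first snippet below learns x → y solely from a principle imposed on the encodings, here whitening, so that the outputs y are decorrelated with unit variance, with no reverse mapping y → x involved; the second fits a supervised linear mapping by least squares so that y approaches given target samples T. Both maps are hypothetical stand-ins for the families described above.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))

# Family 1: learn x -> y from a principle on the encodings alone.
# Here the principle is that the outputs be decorrelated with unit
# variance, which a whitening transform satisfies.
C = np.cov(X, rowvar=False)
eigval, eigvec = np.linalg.eigh(C)
W_unsup = eigvec / np.sqrt(eigval)            # y = X @ W_unsup is whitened
Y = X @ W_unsup
print(np.round(np.cov(Y, rowvar=False), 2))   # ~ identity matrix

# Family 2: supervised learning, fitting a linear map so that the
# outputs y approach desired target samples T (least squares).
T = X @ np.array([[1.0], [-2.0], [0.5]]) + 0.1 * rng.normal(size=(200, 1))
W_sup, *_ = np.linalg.lstsq(X, T, rcond=None)
print(np.round(W_sup.ravel(), 2))             # ~ [1, -2, 0.5]
```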