Abstract

After a short tutorial on the fundamentals of Bayes approaches and Bayesian Ying-Yang (BYY) harmony learning, this paper introduces recent progress. A generic information-harmonising dynamics of BYY harmony learning is proposed with the help of a Lagrange variety preservation principle, which provides Lagrange-like implementations of Ying-Yang alternative nonlocal search for various learning tasks and unifies attention, detection, problem solving, adaptation, learning and model selection from an information-harmonising perspective. Within this framework, new algorithms are developed to implement Ying-Yang alternative nonlocal search for learning Gaussian mixtures and several typical exemplars of the linear matrix system, including factor analysis (FA), mixture of local FA, binary FA, non-Gaussian FA, de-noised Gaussian mixture, sparse multivariate regression, temporal FA and temporal binary FA, as well as a generalised bilinear matrix system that covers not only these linear models but also manifold learning, gene regulatory networks and the generalised linear mixed model. These algorithms feature automatic model selection and a unified formulation for performing both unsupervised and semi-supervised learning. We also propose a principle of preserving multiple convex combinations, which leads to alternative search algorithms. Finally, we provide a chronological outline of the history of BYY learning studies.

Highlights

  • Bayes approach and automatic model selection: learning in an intelligent system is characterised by three levels of inverse problems, for which details are given in Sect. 1 of Xu (2010a,b)

  • A principle of preserving multiple convex combinations for implementing Bayesian Ying-Yang (BYY) harmony learning, which leads to another type of Ying-Yang alternative nonlocal search algorithm

  • Most of the fundamentals and major implementing techniques of BYY harmony learning were developed in the period from 1995 to 2001; Table 3 outlines them chronologically, organised by the times at which the major innovative studies began


Introduction

Bayes approach and automatic model selection. Learning in an intelligent system is characterised by three levels of inverse problems, for which details are given in Sect. 1 of Xu (2010a,b). Learning tasks associated with the front level can be viewed from the perspective of learning a mapping x → y, called a representative model, by which an observed sample x in a visible domain X is mapped into a corresponding encoding y, a signal or inner code used to perform a problem-solving task such as abstraction, classification, inference or control. Such tasks fall into two families. One learns the mapping x → y according to whether a chosen principle is satisfied by the resulting inner encodings y, without explicitly taking the other directional mapping y → x into consideration. The other, widely studied, family is supervised learning of a linear or nonlinear mapping that makes the samples of y approach desired target samples.
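The automatic model selection mentioned above can be illustrated in spirit with a small sketch: fitting a Gaussian mixture that starts with more components than needed and prunes components whose mixing weight collapses during learning, so the effective model order is decided inside a single run rather than by an outer search over k. This is a generic EM-with-pruning toy under assumed settings (the threshold, data and initialisation are all illustrative), not the paper's BYY harmony algorithm.

```python
import numpy as np

# Toy data: two well-separated 1-D Gaussian clusters.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-5.0, 1.0, 200), rng.normal(5.0, 1.0, 200)])

# Deliberately over-specify the model with k = 4 components.
k = 4
means = np.linspace(x.min(), x.max(), k)
variances = np.full(k, x.var())
weights = np.full(k, 1.0 / k)

for _ in range(200):
    # E-step: posterior responsibility of each component for each sample.
    dens = (weights / np.sqrt(2.0 * np.pi * variances)
            * np.exp(-(x[:, None] - means) ** 2 / (2.0 * variances)))
    resp = dens / dens.sum(axis=1, keepdims=True)

    # M-step: re-estimate mixing weights, means and variances.
    nk = resp.sum(axis=0)
    weights = nk / len(x)
    means = (resp * x[:, None]).sum(axis=0) / nk
    variances = np.maximum(
        (resp * (x[:, None] - means) ** 2).sum(axis=0) / nk, 1e-6)

    # Pruning: discard components whose weight has fallen below an
    # (illustrative) threshold and renormalise the survivors.
    keep = weights > 1e-2
    if not keep.all():
        means, variances, weights = means[keep], variances[keep], weights[keep]
        weights /= weights.sum()

print(len(weights), np.sort(np.round(means, 1)))
```

In this sketch the encoding y of a sample x is simply its (soft) component assignment, a minimal instance of the representative mapping x → y discussed above.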
