Abstract

A unified statistical learning approach called the Bayesian Ying-Yang (BYY) system and theory has been developed by the present author in recent years. It functions as a general theory for supervised, unsupervised, and semi-unsupervised learning, covering parameter learning, regularization, complexity selection, and architecture design. This paper reports several new advances: 1) Ying-dominated BYY learning is further discussed; 2) a general stochastic implementation procedure with detailed algorithms is proposed to overcome the difficulties encountered in the integral and summation operations, so that parameter learning and model selection become implementable not only for Yang-dominated but also for Ying-dominated BYY learning; and 3) developments on BYY three-layer forward learning are provided, including new and simple criteria for selecting the best number of hidden units and for regularizing parameter learning.
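The abstract does not spell out the stochastic procedure, but the underlying idea it names, replacing an intractable integral or summation with a sampling-based estimate, can be illustrated generically. The sketch below is not taken from the paper; the function names and the uniform-distribution example are illustrative assumptions only.

```python
import random

def mc_expectation(f, sampler, n=100_000):
    """Monte Carlo estimate of E[f(x)] under the sampler's distribution:
    an average over random draws stands in for an intractable
    integral (continuous case) or summation (discrete case)."""
    return sum(f(sampler()) for _ in range(n)) / n

random.seed(0)
# Illustrative check: E[x^2] for x ~ Uniform(0, 1) equals 1/3,
# so the stochastic estimate should land close to that value.
est = mc_expectation(lambda x: x * x, random.random)
```

In a learning context, such estimates would be recomputed inside each parameter-update step, trading exact evaluation for tractable approximation.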
