Abstract

Following the development of the very fast and efficient discriminative broad learning system (BLS), which takes advantage of a flatted structure and incremental learning, a mathematical proof of the universal approximation property of the BLS is provided here. In addition, the framework of several BLS variants, together with their mathematical modeling, is given. The variations include cascade, recurrent, and broad-deep combination structures. The experimental results show that the BLS and its variations outperform several existing learning algorithms in regression performance on function approximation, time series prediction, and face recognition databases. In addition, experiments on the extremely challenging MS-Celeb-1M data set are presented. Compared with other convolutional networks, the effectiveness and efficiency of the BLS variants are demonstrated.
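
As a rough illustration of the flatted structure the abstract refers to, the sketch below builds a minimal BLS-style model: random feature nodes, nonlinear enhancement nodes, and a ridge-regression output layer solved in closed form. This is a simplified reading, not the authors' implementation; the function names, node counts, activation, and regularization constant are assumptions made for the example, and the paper's incremental-learning updates and structural variants (cascade, recurrent, broad-deep) are omitted.

```python
import numpy as np

def train_bls(X, Y, n_feature_nodes=100, n_enhance_nodes=200, reg=1e-3, seed=0):
    """Minimal BLS-style sketch (assumed, illustrative): random feature nodes,
    nonlinear enhancement nodes, and a ridge-regression output layer."""
    rng = np.random.default_rng(seed)
    n, d = X.shape

    # Random linear mapping of the input to feature nodes: Z = X W_e + b_e
    We = rng.standard_normal((d, n_feature_nodes))
    be = rng.standard_normal(n_feature_nodes)
    Z = X @ We + be

    # Nonlinear enhancement nodes computed from the feature nodes
    Wh = rng.standard_normal((n_feature_nodes, n_enhance_nodes))
    bh = rng.standard_normal(n_enhance_nodes)
    H = np.tanh(Z @ Wh + bh)

    # Flat structure: concatenate feature and enhancement nodes side by side
    A = np.hstack([Z, H])

    # Output weights by ridge regression (regularized pseudoinverse)
    W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
    return We, be, Wh, bh, W

def predict_bls(model, X):
    We, be, Wh, bh, W = model
    Z = X @ We + be
    H = np.tanh(Z @ Wh + bh)
    return np.hstack([Z, H]) @ W

# Example usage: fit a noisy 1-D function (illustrative only)
X = np.linspace(-1, 1, 400).reshape(-1, 1)
Y = np.sin(3 * X) + 0.05 * np.random.randn(400, 1)
model = train_bls(X, Y)
print(np.mean((predict_bls(model, X) - Y) ** 2))
```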
