Abstract

Recently, the broad learning system (BLS) has been confirmed, both theoretically and experimentally, to be an efficient incremental learning system. To avoid a deep architecture, BLS adopts the same architecture and learning mechanism as the well-known functional link neural network (FLNN), but learns in a broad manner over both the randomly mapped features of the original input data and the randomly generated enhancement nodes. As a result, BLS often requires a very large number of hidden nodes to reach the prescribed or satisfactory performance, which inevitably causes both an overwhelming storage requirement and overfitting. In this study, a stacked architecture of broad learning systems, called D&BLS, is proposed to achieve enhanced performance while simultaneously downsizing the system architecture. By boosting the residuals between the previous and current layers, and by augmenting the original input space with the outputs of the previous layer to form the inputs of the current layer, D&BLS stacks several lightweight BLS sub-systems to guarantee stronger feature-representation capability and better classification/regression performance. Three fast incremental learning algorithms for D&BLS are also developed, without the need for full retraining. Experimental results on several popular datasets demonstrate the effectiveness of D&BLS in terms of both enhanced performance and a reduced system architecture.
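To make the stacking mechanism concrete, the following minimal sketch shows how a D&BLS-style model could be assembled: each lightweight BLS sub-system maps its input through random feature nodes and random enhancement nodes and solves for its output weights with a ridge-regularized pseudoinverse; each subsequent sub-system receives the original input augmented with the previous layer's output and is trained on the residual left by the earlier layers. The hyperparameter names (n_feature_nodes, n_enhance_nodes, lam, n_layers), the tanh activation, and the ridge solve are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def train_bls_layer(X, Y, n_feature_nodes=20, n_enhance_nodes=50, lam=1e-3, rng=None):
    """Train one lightweight BLS sub-system: random feature nodes,
    random enhancement nodes, and a ridge-regularized readout."""
    rng = np.random.default_rng(0) if rng is None else rng
    Wf = rng.standard_normal((X.shape[1], n_feature_nodes))       # random feature mapping
    Z = X @ Wf                                                     # mapped feature nodes
    We = rng.standard_normal((n_feature_nodes, n_enhance_nodes))   # random enhancement weights
    H = np.tanh(Z @ We)                                            # enhancement nodes
    A = np.hstack([Z, H])                                          # broad hidden layer
    # Ridge-regularized (pseudoinverse-style) solution for the output weights
    W_out = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)
    return Wf, We, W_out

def bls_layer_predict(X, params):
    Wf, We, W_out = params
    Z = X @ Wf
    H = np.tanh(Z @ We)
    return np.hstack([Z, H]) @ W_out

def train_stacked_bls(X, Y, n_layers=3, seed=0):
    """Stack sub-systems: each layer augments the original input with the
    previous layer's output and fits the residual left by earlier layers."""
    rng = np.random.default_rng(seed)
    Y = np.asarray(Y, dtype=float).reshape(len(Y), -1)
    layers, inputs = [], X
    pred_total = np.zeros_like(Y)
    for _ in range(n_layers):
        residual = Y - pred_total                 # boost the remaining residual
        params = train_bls_layer(inputs, residual, rng=rng)
        layers.append(params)
        out = bls_layer_predict(inputs, params)
        pred_total = pred_total + out
        inputs = np.hstack([X, out])              # augment original input with layer output
    return layers
```

Prediction would follow the same recursion: feed the original input to the first sub-system, augment the next sub-system's input with each layer's output, and sum the per-layer outputs to form the final prediction.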

Highlights

  • As is well known, over the past decade deep learning systems have achieved great success in many application fields [1, 2] and have been attracting increasing attention in academic and industrial communities.

  • Because the mapped features and enhancement nodes are randomly generated, a broad learning system (BLS) often needs a very large number of enhancement nodes to achieve the prescribed performance, which inevitably causes both an overwhelming storage requirement and overfitting; how to downsize a BLS while keeping the strong capability of the whole system has therefore become an urgent demand.

  • The first group of experiments, on classification, is carried out on five image datasets and one UCI [24] classification dataset; the second group, on regression, uses ten UCI regression datasets; the third group, on incremental learning, uses the popular MNIST dataset [25]; and the last compares the running time of BLS and D&BLS.


Summary

Introduction

As is well known, over the past decade deep learning systems have achieved great success in many application fields [1, 2] and have been attracting increasing attention in academic and industrial communities. Because the training of each sub-system in one incremental case deals only with the generation of the incremental enhancement nodes and the calculation of the corresponding pseudoinverse, we mainly observe the corresponding steps in Algorithm 2. In another incremental case, the training of each sub-system has three stages, namely the generation of the additional feature nodes, the generation of the corresponding additional enhancement nodes, and the calculation of the pseudoinverse; its computational complexity is likewise governed by these three stages. According to Algorithm 4, the computational complexity from step 4 to step 7, which generates the feature nodes, is O(K T D n_i d_i). In the incremental-learning experiments, the number of additional hidden nodes for each sub-system is set to 10, and the number of additional input data is set to 3000.
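As a rough illustration of the enhancement-node case, the sketch below follows the Greville-style block pseudoinverse update commonly used in BLS incremental learning: new enhancement-node columns are appended to the hidden matrix, and the pseudoinverse and output weights are updated without retraining the whole sub-system. The function and variable names, the tolerance, and the exact update form are assumptions for illustration; the paper's Algorithm 2 may differ in details.

```python
import numpy as np

def add_enhancement_nodes(A, A_pinv, W, Y, H):
    """Append new enhancement-node columns H to the hidden matrix A and
    incrementally update its pseudoinverse A_pinv and output weights W.
    Y holds the training targets (one row per training sample)."""
    D = A_pinv @ H                      # projection of the new columns onto span(A)
    C = H - A @ D                       # component of H outside span(A)
    if np.linalg.norm(C) > 1e-10:       # new columns add new directions
        B_T = np.linalg.pinv(C)
    else:                               # new columns are (nearly) linearly dependent
        B_T = np.linalg.solve(np.eye(D.shape[1]) + D.T @ D, D.T @ A_pinv)
    A_new = np.hstack([A, H])                        # enlarged hidden matrix
    A_pinv_new = np.vstack([A_pinv - D @ B_T, B_T])  # updated pseudoinverse
    W_new = np.vstack([W - D @ (B_T @ Y), B_T @ Y])  # updated output weights
    return A_new, A_pinv_new, W_new
```

In a stacked setting, such an update would presumably be applied within each sub-system in turn, after which the residual targets passed to the later sub-systems are refreshed with the new predictions.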

