Abstract

To overcome the extremely time-consuming training of deep learning (DL), the broad learning system (BLS) was proposed as an alternative; the model is simple, fast, and easy to update. To ensure its fitting and generalization ability, however, the hidden layer of BLS is usually given far more neurons than are actually needed, so many of them are redundant. Greedy BLS (GBLS) is proposed in this paper to address this redundancy of the hidden layer from another perspective. Unlike BLS, the structure of GBLS can be viewed as a combination of unsupervised multi-layer feature representation and supervised classification or regression. GBLS is trained with a greedy layer-wise scheme: principal component analysis (PCA) is performed on the previous hidden layer to form a set of compressed nodes, which are then mapped to enhancement nodes and activated by nonlinear functions. The new hidden layer is the concatenation of all newly generated compressed nodes and enhancement nodes, and the procedure repeats layer by layer. The last hidden layer, which contains the higher-order and abstract essential features of the original data, is connected to the output layer. Each time a new layer is added to the model, there is no need to retrain from scratch; the new layer is built only from the previous one. Experimental results demonstrate that the proposed GBLS model outperforms BLS in both classification and regression.
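As a rough illustration of the layer-wise procedure described above, the following sketch builds compressed nodes with PCA, generates enhancement nodes from them with a random mapping and a nonlinear activation, stacks layers greedily, and solves the output weights by regularized least squares. The node counts, the tanh activation, the random-weight scheme, and the ridge solver are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

def add_layer(H_prev, n_compressed=20, n_enhance=100):
    """Build the next hidden layer from the previous one:
    PCA -> compressed nodes; random mapping + tanh -> enhancement nodes."""
    pca = PCA(n_components=n_compressed).fit(H_prev)    # fitted on training data only
    Z = pca.transform(H_prev)                           # compressed nodes
    W = rng.standard_normal((n_compressed, n_enhance))  # random enhancement weights (assumed scheme)
    E = np.tanh(Z @ W)                                  # enhancement nodes
    return np.hstack([Z, E]), (pca, W)                  # new hidden layer + saved transforms

def fit_output(H_last, Y, lam=1e-3):
    """Regularized least squares (ridge) from the last hidden layer to the
    targets, analogous to the pseudoinverse output training of standard BLS."""
    d = H_last.shape[1]
    return np.linalg.solve(H_last.T @ H_last + lam * np.eye(d), H_last.T @ Y)

# Greedy construction: each new layer is built only from the previous one,
# so adding a layer never requires retraining the earlier layers.
X = rng.random((200, 30))                         # toy input
Y = np.eye(3)[rng.integers(3, size=200)]          # toy one-hot labels
H, saved = X, []
for _ in range(3):                                # three hidden layers as an example
    H, transforms = add_layer(H)
    saved.append(transforms)                      # kept so unseen data can be mapped identically
beta = fit_output(H, Y)                           # output weights from the last hidden layer
```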

Highlights

  • In recent years, neural networks (NNs) have been widely used in a series of challenging fields such as image recognition [1], computer vision [2], [3], and large-scale data processing [4], [5].

  • In the preliminary work, we briefly introduce several basic concepts, including the standard broad learning system (BLS) and the theory of principal component analysis (PCA).

  • It is worth noting that only the training set is used to fit the dimensionality reduction, and the resulting conversion matrix is saved to reduce the testing set; the testing set is unknown until the model has been generated, so no information from it can be used (see the sketch below).
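A minimal sketch of this point, with a placeholder dataset and component count: the PCA transform is fitted on the training set only, and the saved transformation is reused for the testing set.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

X = np.random.rand(500, 64)                      # placeholder data
X_train, X_test = train_test_split(X, test_size=0.2, random_state=0)

pca = PCA(n_components=10).fit(X_train)          # conversion matrix learned from the training set only
Z_train = pca.transform(X_train)                 # used while the hidden layers are built
Z_test = pca.transform(X_test)                   # test set reduced with the saved matrix at prediction time
```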


Summary

INTRODUCTION

Neural networks (NNs) have been widely used in a series of challenging fields such as image recognition [1], computer vision [2], [3], and large-scale data processing [4], [5]. However, training deep networks is extremely time-consuming. To solve this problem, Chen proposed the broad learning system (BLS) [7] in 2017 and proved that it has universal approximation capability [8]. To obtain acceptable performance on more complex data sets, Liu et al. [18] used the K-means clustering algorithm as an improved feature extraction method to improve the fitting ability of the model. To fully learn the information in the input data and ensure the function approximation and generalization ability of the system, BLS often sets too many nodes in its hidden layer, and some of these nodes are unnecessary.

PRELIMINARY WORK
CASE STUDIES
Findings
CONCLUSION