Abstract

In this paper, we propose an efficient algorithm to accelerate the existing Broad Learning System (BLS) algorithm for newly added nodes. The existing BLS algorithm computes the output weights from the pseudoinverse with the ridge regression approximation and updates the pseudoinverse iteratively. By contrast, the proposed BLS algorithm computes the output weights from the inverse Cholesky factor of the Hermitian matrix that appears in the calculation of the pseudoinverse, and updates this inverse Cholesky factor efficiently. Since that Hermitian matrix is smaller than the pseudoinverse, the proposed BLS algorithm reduces the computational complexity, and usually requires less than $\frac {2}{3}$ of the complexity of the existing BLS algorithm. Our experiments on the Modified National Institute of Standards and Technology (MNIST) dataset show that the speedups of the proposed BLS over the existing BLS are 24.81% to 37.99% in accumulative training time and 36.45% to 58.96% in each additional training time, while the speedup in total training time is 37.99%. In our experiments, the proposed BLS and the existing BLS achieve the same testing accuracy once the tiny differences (≤ 0.05%) caused by numerical errors are neglected; these differences become zero and the numerical errors become negligible when the ridge parameter is not too small.
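As a rough illustration of the two routes to the output weights, the following sketch (our own, not the authors' implementation; the function names, the ridge parameter `lam`, and the toy matrix sizes are assumptions) computes the ridge-regression weights once via the explicit pseudoinverse and once via a Cholesky factorization of the smaller $k \times k$ Hermitian matrix $A^{T}A + \lambda I$. The paper itself works with the inverse Cholesky factor and updates it incrementally for added nodes; this sketch only shows the one-shot computation.

```python
import numpy as np

def output_weights_pinv(A, Y, lam=1e-8):
    """Ridge-regression approximation of the pseudoinverse:
    A^+ ~= (A^T A + lam*I)^(-1) A^T, then W = A^+ Y."""
    k = A.shape[1]
    A_pinv = np.linalg.solve(A.T @ A + lam * np.eye(k), A.T)
    return A_pinv @ Y

def output_weights_chol(A, Y, lam=1e-8):
    """Solve (A^T A + lam*I) W = A^T Y through the Cholesky factor of the
    small k-by-k Hermitian matrix, without forming the explicit pseudoinverse."""
    k = A.shape[1]
    L = np.linalg.cholesky(A.T @ A + lam * np.eye(k))  # lower-triangular factor
    # Two triangular solves: L (L^T W) = A^T Y.
    return np.linalg.solve(L.T, np.linalg.solve(L, A.T @ Y))

# Toy check: both routes give (numerically) the same output weights.
rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 50))   # expanded node matrix [Z^n | H^m]
Y = rng.standard_normal((1000, 10))   # training targets
print(np.allclose(output_weights_pinv(A, Y), output_weights_chol(A, Y)))
```

The saving claimed in the paper comes from updating the factor of this smaller Hermitian matrix when nodes are added, rather than updating the full pseudoinverse.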

Highlights

  • Deep neural networks (DNNs) have achieved great success in many applications [1], [2], such as image recognition [3] and speech recognition [4]

  • The connections of all the mapped features $Z^n$ and the enhancement nodes $H^m$ are fed into the output $Y = [Z^n \,|\, H^m] W^{n,m} = A^{n,m} W^{n,m}$, where the output weights $W^{n,m}$ are computed from the ridge regression approximation of the pseudoinverse by $W^{n,m} = [Z^n \,|\, H^m]^{+} Y = (A^{n,m})^{+} Y$

  • In the incremental learning for added nodes [19], the stepwise updating algorithm is utilized to update the pseudoinverse of the column-partitioned matrix $A_{k+q}$ iteratively; a common form of this stepwise update is sketched after this list
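For reference, the stepwise (Greville-type) update of the pseudoinverse of a column-partitioned matrix $A_{k+q} = [A_k \,|\, H]$ is usually written as follows; the notation and block layout here are our own rendering and may differ from the exact equations in [19]:

$$
(A_{k+q})^{+} = \begin{bmatrix} (A_k)^{+} - D B^{T} \\ B^{T} \end{bmatrix},
\qquad D = (A_k)^{+} H, \qquad C = H - A_k D,
$$
$$
B^{T} = \begin{cases} C^{+}, & C \neq 0, \\ (I + D^{T} D)^{-1} D^{T} (A_k)^{+}, & C = 0. \end{cases}
$$

The proposed algorithm avoids this pseudoinverse update and instead updates the inverse Cholesky factor of the smaller Hermitian matrix, as summarized in the abstract.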

Summary

INTRODUCTION

Deep neural networks (DNNs) have achieved great success in many applications [1], [2], such as image recognition [3] and speech recognition [4]. The Broad Learning System (BLS) was proposed as a flat network that feeds mapped features and enhancement nodes directly into the output and supports incremental learning for newly added nodes [19]. More complex representations in the form of cascaded connections between the enhancement nodes and feature nodes are developed in [20], and several variants have been proposed successively, including fuzzy BLS [24] and recurrent BLS [23]. For such variants of BLS, it is important to further reduce the complexity of their incremental learning algorithms because of the additional parameters introduced by their more complex structures.

EXISTING INCREMENTAL BLS ON ADDED NODES
INCREMENTAL LEARNING FOR BROAD EXPANSION IN NODES
RIDGE REGRESSION APPROXIMATION OF GENERALIZED INVERSE
DISCUSSION ON REDUCING THE COMPLEXITY OF BLS
THE DERIVATION OF THE PROPOSED EQUATIONS
COMPLEXITY COMPARISON AND NUMERICAL EXPERIMENTS
NUMERICAL EXPERIMENTS
CONCLUSION