Abstract

Neural networks have become one of the most important tools in artificial intelligence. The broad learning system (BLS) is a shallow feedforward neural network whose training is based on random feature generation and pseudoinverse computation; when nodes are added, it does not need a complete retraining cycle to obtain new parameters. Instead, it rapidly updates the existing parameters through a series of dynamic update algorithms, which allows BLS to combine efficiency with accuracy in a flexible way. This training strategy differs fundamentally from the mainstream strategy based on gradient descent, and its advantages have been demonstrated in many experiments. This article applies an elegant pseudoinversion method to the weight-updating process in BLS and employs it as an alternative to the dynamic update algorithms of the original BLS. Theoretical analyses and numerical experiments demonstrate the efficiency and effectiveness of BLS aided by this method. The research presented in this article can be regarded as an extended study of BLS theory, offering a new idea and direction for future research on BLS.
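To make the pseudoinverse-based training strategy concrete, the following is a minimal sketch of how a BLS-style shallow network can obtain its output weights in one shot, without gradient descent. The variable names, shapes, and the use of `tanh` as the feature mapping are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

X = rng.standard_normal((100, 5))   # input samples
Y = rng.standard_normal((100, 3))   # training targets

# Random feature mapping: the input weights are drawn once and never trained,
# as is typical of random-weight networks such as BLS.
W_in = rng.standard_normal((5, 20))
A = np.tanh(X @ W_in)               # hidden-layer activation matrix

# One-shot least-squares solution via the Moore-Penrose pseudoinverse:
# W_out = A+ Y minimizes ||A W - Y||_F with no iterative training loop.
W_out = np.linalg.pinv(A) @ Y

residual = np.linalg.norm(A @ W_out - Y)
```

When nodes are added, BLS avoids recomputing this pseudoinverse from scratch by updating it incrementally from the existing solution; the method studied in this article replaces those incremental update formulas with an alternative pseudoinversion scheme.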
