Abstract

Neural networks have become one of the most important tools in artificial intelligence. The broad learning system (BLS), a shallow feedforward neural network, is trained with random feature mappings and pseudoinverse computation, so it does not need a complete retraining cycle to obtain new parameters when nodes are added. Instead, it rapidly updates the existing parameters through a series of dynamic update algorithms, which lets BLS combine efficiency and accuracy flexibly. This training strategy is fundamentally different from the mainstream strategy based on gradient descent, and its advantages have been demonstrated in many experiments. This article applies an elegant pseudoinversion method to the weight-updating process in BLS and employs it as an alternative to the dynamic update algorithms of the original BLS. Theoretical analyses and numerical experiments demonstrate the efficiency and effectiveness of BLS aided by this method. The research presented in this article can be regarded as an extended study of BLS theory, providing a new idea and direction for future research on BLS.
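The abstract does not reproduce the update formulas, but the core idea — train output weights with a pseudoinverse, then widen the network and update that pseudoinverse incrementally rather than recomputing it — can be sketched as below. This is a minimal NumPy illustration assuming the standard block pseudoinverse update used in BLS-style broad expansion; the toy data, node counts, and the `feature_map` function are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: 100 samples, 5 inputs, 2 outputs (illustrative only).
X = rng.standard_normal((100, 5))
Y = rng.standard_normal((100, 2))

def feature_map(X, W, b):
    # Random (untrained) linear mapping followed by tanh, in the spirit of
    # BLS feature/enhancement nodes.
    return np.tanh(X @ W + b)

# Initial block of 10 mapped nodes.
W1, b1 = rng.standard_normal((5, 10)), rng.standard_normal(10)
A = feature_map(X, W1, b1)

# One-shot training: output weights come from the Moore-Penrose pseudoinverse,
# not from gradient descent.
A_pinv = np.linalg.pinv(A)
W_out = A_pinv @ Y

# Broad expansion: add 5 new nodes (columns H) and update the pseudoinverse
# incrementally instead of recomputing it from scratch.
W2, b2 = rng.standard_normal((5, 5)), rng.standard_normal(5)
H = feature_map(X, W2, b2)

D = A_pinv @ H
C = H - A @ D            # part of H outside the column space of A
B = np.linalg.pinv(C)    # C has full column rank here, so B = (C^T C)^{-1} C^T
A_new_pinv = np.vstack([A_pinv - D @ B, B])

# The incremental result matches a fresh pseudoinverse of the widened matrix.
A_new = np.hstack([A, H])
assert np.allclose(A_new_pinv, np.linalg.pinv(A_new), atol=1e-6)

# Updated output weights reuse the old W_out plus a correction term.
W_out_new = np.vstack([W_out - D @ (B @ Y), B @ Y])
assert np.allclose(W_out_new, np.linalg.pinv(A_new) @ Y, atol=1e-6)
```

The block formula is exact whenever the residual C has full column rank; the point of the sketch is that widening the network costs one small pseudoinverse (of C) rather than a full recomputation over all nodes.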


