Abstract

Stochastic configuration networks (SCNs) employ a supervisory mechanism to assign hidden-node parameters during incremental construction. SCNs offer the advantages of practical implementation, fast convergence, and good generalization performance. However, the high computational cost and limited scalability of the numerical algorithms used for the least-squares solution make SCNs ill-suited to very large datasets. This paper proposes fast SCNs (F-SCNs), whose output weights are determined by QR decomposition, i.e., factorization into an orthogonal matrix Q and an upper triangular matrix R. With this incremental technique, the network iteratively updates the output weights by reusing the output information of the preceding nodes. We analyze the computational complexity of SCNs and F-SCNs and show that F-SCNs are well suited to scenarios in which the hidden layer contains a large number of nodes. We evaluated the proposed method on four real-world regression datasets; the experimental results show that it offers notable advantages in learning speed and effectiveness.
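The abstract describes computing the output weights incrementally via a QR factorization of the hidden-output matrix rather than re-solving the full least-squares problem at each node addition. The snippet below is a minimal sketch of that general idea under standard assumptions (a Gram-Schmidt column-append QR update followed by a triangular solve); the function names and the random toy data are illustrative only, and it omits the SCN supervisory check on candidate nodes described in the paper.

```python
import numpy as np
from scipy.linalg import solve_triangular

def qr_append_column(Q, R, h_new):
    """Extend a thin QR factorization Q @ R = H when a new column h_new is appended to H."""
    r_col = Q.T @ h_new                      # coefficients of h_new in the current basis
    residual = h_new - Q @ r_col             # component of h_new orthogonal to span(Q)
    r_diag = np.linalg.norm(residual)
    q_new = residual / r_diag                # new orthonormal basis vector
    Q = np.hstack([Q, q_new[:, None]])
    R = np.block([[R, r_col[:, None]],
                  [np.zeros((1, R.shape[1])), np.array([[r_diag]])]])
    return Q, R

def output_weights(Q, R, y):
    """Least-squares output weights beta for H @ beta ~= y, via the triangular system R beta = Q^T y."""
    return solve_triangular(R, Q.T @ y)

# Toy usage: append hidden-node output columns one at a time and refresh the weights.
rng = np.random.default_rng(0)
N = 200                                      # number of training samples
y = rng.standard_normal(N)                   # target vector

h0 = rng.standard_normal(N)                  # output vector of the first hidden node
Q = (h0 / np.linalg.norm(h0))[:, None]
R = np.array([[np.linalg.norm(h0)]])

for _ in range(9):                           # add nine more nodes incrementally
    h_new = rng.standard_normal(N)           # in an SCN this column would come from a configured node
    Q, R = qr_append_column(Q, R, h_new)
    beta = output_weights(Q, R, y)           # cheap triangular solve instead of a full least-squares fit
```

The point of the update is that each node addition costs only a projection and a back-substitution, rather than refactorizing the entire hidden-output matrix, which is where the speed advantage for networks with many hidden nodes would come from.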
