Deep Stochastic Configuration Networks (DeepSCN) are a class of randomized learner models distinguished by their universal approximation capability, rapid learning, and ease of implementation. They surpass other randomized learners owing to an embedded supervisory mechanism. Although the feature matrix is carefully constructed, the predictive accuracy of DeepSCN also depends critically on the quality of the model parameters obtained by solving the corresponding system of linear equations. As the number of hidden neurons increases, the resulting expansion of the feature matrix renders matrix inversion computationally expensive, potentially unstable, and memory-intensive. This study addresses these challenges by integrating randomized low-rank matrix approximation to compute the model parameters efficiently. A theoretical error bound on the obtained model parameters is derived, ensuring their precision and accuracy. The effectiveness of the algorithm is validated on benchmark datasets for both classification and regression tasks. The results demonstrate that this integration significantly enhances the reliability and stability of DeepSCN, improving performance and scalability while greatly reducing computational demands. The method remains stable under substantial variations in the number of hidden neurons and mitigates the overfitting associated with direct matrix inversion, yielding reliable training outcomes. This study advances DeepSCN training methodologies and large-scale data processing.
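The core idea, replacing a direct (pseudo-)inverse of the feature matrix with a randomized low-rank factorization when solving for the output weights, can be illustrated with a minimal sketch. The Python snippet below shows a generic randomized-SVD least-squares solver in the spirit of standard sketching algorithms; it is an illustrative assumption, not the exact algorithm or error bound proposed in this study, and the function name, rank, and oversampling parameters are hypothetical.

```python
import numpy as np

def randomized_lowrank_solve(H, T, rank, oversample=10, seed=0):
    """Approximately solve min_beta ||H @ beta - T|| via a randomized
    rank-`rank` factorization of the feature matrix H (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n, m = H.shape
    k = min(rank + oversample, m)
    # Random test matrix and sketch of the column space of H.
    Omega = rng.standard_normal((m, k))
    Q, _ = np.linalg.qr(H @ Omega)            # orthonormal basis, shape (n, k)
    # Project onto the sketched subspace: H ~ Q @ B with B = Q^T H.
    B = Q.T @ H                                # small matrix, shape (k, m)
    U_b, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_b[:, :rank]                      # approximate left singular vectors
    s, Vt = s[:rank], Vt[:rank]
    # Pseudo-inverse through the truncated factors: beta ~ V S^{-1} U^T T.
    beta = Vt.T @ ((U.T @ T) / s[:, None])
    return beta

# Toy usage with a random "feature matrix" H and target matrix T.
H = np.random.randn(500, 200)
T = np.random.randn(500, 3)
beta = randomized_lowrank_solve(H, T, rank=50)
print(beta.shape)   # (200, 3)
```

Under these assumptions, the dominant cost shifts from decomposing the full feature matrix to working with a sketch of k much smaller than the number of hidden-layer columns, which is what makes such approximations attractive as the hidden layer grows.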