Abstract

Ensuring the prediction accuracy of a learning algorithm on a theoretical basis is crucial for establishing its reliability. This paper analyzes the prediction error of least squares estimation in the generalized extreme learning machine (GELM), which applies the limiting behavior of the Moore–Penrose generalized inverse (M–P GI) to the output matrix of the ELM; the ELM is a random vector functional link (RVFL) network without direct input-to-output links. Specifically, we analyze the tail probabilities associated with upper and lower bounds on the error expressed in terms of norms. The analysis employs the L2 norm, the Frobenius norm, the stable rank, and the M–P GI, and it extends to the RVFL network. In addition, we provide a criterion for tighter bounds on the prediction error, which may yield stochastically better network configurations. The analysis is applied to simple examples and to large datasets to illustrate the procedure and to verify the analysis and execution speed on big data. Based on this study, the upper and lower bounds of prediction errors and their associated tail probabilities can be obtained immediately through matrix calculations arising in the GELM and RVFL. This provides criteria for assessing the reliability of a network's learning performance in real time and for choosing network structures with better performance reliability, and it can be applied in various areas where the ELM and RVFL are adopted. The proposed analytical method may also guide the theoretical analysis of errors in DNNs trained with gradient descent algorithms.
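The quantities the abstract names can be illustrated concretely. The sketch below, a minimal illustrative example not taken from the paper (the data, activation, and layer sizes are arbitrary assumptions), trains an ELM by least squares via the Moore–Penrose generalized inverse and computes the spectral norm, Frobenius norm, and stable rank of the hidden-layer output matrix, which are the ingredients of the norm-based error bounds described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (hypothetical; for illustration only).
X = rng.normal(size=(200, 5))           # n samples, d input features
y = np.sin(X @ rng.normal(size=5))      # target vector

# ELM: input weights and biases are drawn at random and then fixed.
n_hidden = 50
W = rng.normal(size=(5, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                  # hidden-layer output matrix

# Output weights by least squares via the M-P generalized inverse.
beta = np.linalg.pinv(H) @ y
y_hat = H @ beta                        # in-sample prediction

# Norm quantities appearing in the error-bound analysis:
fro = np.linalg.norm(H, 'fro')          # Frobenius norm ||H||_F
spec = np.linalg.norm(H, 2)             # spectral (L2) norm ||H||_2
stable_rank = (fro / spec) ** 2         # stable rank ||H||_F^2 / ||H||_2^2

print(stable_rank, np.linalg.norm(y - y_hat))
```

The stable rank is at most the ordinary rank of H (hence at most `n_hidden`), and the least-squares residual norm never exceeds the norm of the target, since `y_hat` is an orthogonal projection of `y` onto the column space of `H`.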
