Random Vector Functional Link (RVFL) is a widely used learning technique owing to its low computational complexity, fast learning speed, and ease of implementation. However, the generalization ability of RVFL suffers because its randomly generated hidden-layer parameters can push the input data distribution into the saturated regime of the activation function. To keep the data evenly distributed within the hidden layer, this paper proposes a robust and efficient affine-transformation-based RVFL approach with optimized parameters, termed RT-ATRVFL. The proposed RT-ATRVFL introduces a sparse and shallow network into RVFL that uses a subset of the hidden-layer structure and obtains optimized, robust affine parameters for the activation function. In this way, it not only avoids the saturated regime but also learns the non-linearity more effectively and efficiently. This paper also investigates different variants of the proposed approach, viz. RT-ATRVFL-ORTHO, RT-ATRVFL-CLOG, and RT-ATRVFL-CLOG-ORTHO, which use orthogonalization and the cloglogm activation function. For a thorough investigation of the proposed approaches, we conduct extensive experiments on 28 benchmark classification datasets with the number of Monte Carlo runs ranging over [60, 2000]. We show that our proposed approaches are more generalized and reliable, and outperform the affine-transformation-based extreme learning machine (ATELM) and its variants in terms of accuracy and computational time. Furthermore, analysis of the results through well-accepted metrics such as Levene's test, interquartile range, mean absolute deviation, and standard deviation confirms that the proposed RT-ATRVFL and its variants are more robust than their respective counterparts.
Results of two well-known statistical significance tests, the Wilcoxon test and Friedman ranking, together with a time-complexity analysis, further establish the superiority of the proposed RT-ATRVFL approach and its variants over their respective counterparts.
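To make the core idea concrete, the following is a minimal illustrative sketch of an RVFL network whose randomly weighted pre-activations are rescaled by affine parameters before the sigmoid, so the activation operates away from its saturated (flat) regime. The affine parameters `a` and `b`, the ridge regularizer, and the toy data are assumptions for illustration only; they are not the paper's optimized RT-ATRVFL procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def rvfl_fit(X, Y, n_hidden=50, a=1.0, b=0.0, reg=1e-3):
    """Minimal RVFL sketch: hidden weights are random and stay fixed;
    an affine transform (a, b) rescales the pre-activation so the
    sigmoid stays out of its saturated regime (illustrative values,
    not the paper's optimized affine parameters)."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    bias = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(a * (X @ W + bias) + b)))  # affine-transformed sigmoid
    D = np.hstack([X, H])  # direct link: raw inputs concatenated with hidden features
    # Only the output weights are learned, via ridge-regularized least squares
    beta = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ Y)
    return W, bias, beta

def rvfl_predict(X, W, bias, beta, a=1.0, b=0.0):
    H = 1.0 / (1.0 + np.exp(-(a * (X @ W + bias) + b)))
    return np.hstack([X, H]) @ beta

# Toy usage: two well-separated Gaussian blobs with one-hot targets
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
Y = np.vstack([np.tile([1, 0], (50, 1)), np.tile([0, 1], (50, 1))])
W, bias, beta = rvfl_fit(X, Y)
acc = (rvfl_predict(X, W, bias, beta).argmax(1) == Y.argmax(1)).mean()
```

Because only `beta` is solved in closed form while the hidden weights remain random, training cost is a single regularized least-squares solve, which is the source of RVFL's speed advantage noted above.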