Abstract
The incremental version of randomized neural networks provides a greedy constructive algorithm for shallow networks, adding new hidden nodes through stochastic methods rather than gradient optimization. However, the potential of this random incremental mechanism remains underutilized in deep structures. To address this research gap, we propose an unsupervised algorithm termed the incremental randomization-based autoencoder (IR-AE) for greedy feature learning, which applies an integrated, optimized constructive algorithm to train the feature extractor. Using IR-AE as a hierarchical stacked block, we synthesize the deep incremental random vector functional-link (DI-RVFL) network, which builds a deep structure with overall feature-output links in a feedforward manner. Furthermore, we introduce a novel data-driven initialization that employs the feedforward constructive sketch (CoSketch) as a pre-trained model for the multi-layer perceptron. Simulation results empirically demonstrate that the proposed IR-AE achieves higher reconstruction efficiency than the standard autoencoder (AE) and the randomization-based AE. Moreover, DI-RVFL shows the advantages of deep structures in higher-level feature extraction compared to other stacked random structures, and deep RVFLs outperform multi-layer extreme learning machines overall. As a data-driven initialization, CoSketch significantly improves the convergence of gradient descent.
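To make the greedy constructive mechanism concrete, the following is a minimal sketch of an incremental randomized network in the general spirit described above (not the authors' exact IR-AE/DI-RVFL algorithm): hidden nodes are added one at a time with randomly drawn input weights and biases, and only the output weights are re-solved analytically by least squares, so the hidden layer never undergoes gradient optimization. All function names and parameter ranges here are illustrative assumptions.

```python
import numpy as np

def incremental_random_net(X, y, max_nodes=40, rng=None):
    """Greedily grow a random-feature network (illustrative sketch).

    Each step draws a random hidden node (weights/bias are never trained)
    and re-solves the linear output weights by least squares.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    H = np.empty((n, 0))  # hidden-layer outputs accumulated so far
    for _ in range(max_nodes):
        w = rng.uniform(-3.0, 3.0, size=d)   # random input weights (assumed range)
        b = rng.uniform(-1.0, 1.0)           # random bias (assumed range)
        h = np.tanh(X @ w + b)               # new candidate node's activations
        H = np.column_stack([H, h])
        # Analytic output weights over all nodes added so far
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return H, beta

# Toy usage: fit a 1-D nonlinear target without any gradient descent
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(3 * X[:, 0])
H, beta = incremental_random_net(X, y, max_nodes=40, rng=0)
residual = np.linalg.norm(H @ beta - y) / np.linalg.norm(y)
```

Because the least-squares fit is re-solved over a strictly growing set of basis columns, the training residual is non-increasing as nodes are added, which is the property that makes the greedy constructive scheme well behaved.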