Abstract

Extreme learning machine (ELM) and its variants have been widely used for object recognition and other complex classification tasks. Traditional deep learning architectures such as the Convolutional Neural Network (CNN) can extract high-level features, which are key to making correct decisions. However, these deep architectures must solve a difficult non-convex optimization problem, which is time-consuming. In this paper, we propose two hierarchical models, Random Recursive Constrained ELM (R2CELM) and Random Recursive Local-Receptive-Fields-Based ELM (R2ELM-LRF), constructed by stacking CELM or ELM-LRF modules, respectively. In addition, inspired by the stacked generalization philosophy, random projection and kernelization are incorporated as their constitutive elements. R2CELM and R2ELM-LRF not only fully inherit the merits of ELM, but also exploit the respective strengths of CELM and ELM-LRF in image recognition. The essence of CELM is to constrain the weight vectors from the input layer to the hidden layer to align with directions from one class to another, while ELM-LRF excels at exploiting local structures in images through many local receptive fields. Empirically, R2CELM and R2ELM-LRF achieve higher testing accuracy on six benchmark image recognition datasets than their base learners and other state-of-the-art algorithms. Moreover, the two proposed deep ELM models require less training time than traditional Deep Neural Network (DNN) based models.
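The abstract's description of CELM (hidden weights constrained to directions from one class to another, followed by the standard ELM least-squares readout) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the exact weight scaling and bias placement are assumptions based on the general CELM idea of placing each hidden hyperplane between a randomly drawn pair of samples from two different classes.

```python
import numpy as np

def celm_hidden_weights(X, y, n_hidden, seed=None):
    """Sketch of CELM-style hidden weights: each weight vector is the
    difference between two randomly chosen samples of different classes,
    so the hidden unit's hyperplane is a discriminative direction.
    Scaling/bias choices here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = np.empty((n_hidden, d))
    b = np.empty(n_hidden)
    for k in range(n_hidden):
        i = rng.integers(n)
        j = rng.integers(n)
        while y[j] == y[i]:          # resample until the pair spans two classes
            j = rng.integers(n)
        w = X[j] - X[i]              # direction from class of i to class of j
        norm2 = w @ w
        W[k] = 2.0 * w / norm2       # normalize so the pair maps to +/-1 pre-bias
        b[k] = -(X[i] + X[j]) @ w / norm2  # hyperplane passes between the pair
    return W, b

def elm_fit(X, T, W, b, reg=1e-3):
    """Standard ELM readout: sigmoid hidden layer, then a
    ridge-regularized least-squares solve for the output weights."""
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    return np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ T)
```

Because only the output weights are solved for (in closed form), training avoids the iterative non-convex optimization of DNNs, which is the source of the training-time advantage claimed above.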
