Abstract

Over the last decade, substantial advances have been made in computer vision, many of them built on the convolutional neural network (CNN) architecture. Typically, a CNN is trained with stochastic gradient descent using back-propagation (BP), but this training process suffers from slow convergence and requires extensive parameter tuning. In this paper, we propose a new CNN architecture and training algorithm based on the Extreme Learning Machine (ELM) to overcome these drawbacks. The proposed algorithm trains the CNN layer by layer, alternating between random convolutional filters and semi-supervised filters to combine the advantages of both approaches. At each semi-supervised layer, the CNN efficiently solves a convex optimization problem based on a nonlinear random projection, which is faster and requires less human effort than BP-based training. We experimentally validated the proposed method on well-known character and object recognition benchmarks. In our experiments, the method achieves accuracy comparable to approaches based on deep features and higher accuracy than other unsupervised feature-learning methods.
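
The semi-supervised step described above follows the standard ELM recipe: inputs are passed through a fixed nonlinear random projection, and only the output weights are learned by solving a convex (ridge-regression) problem in closed form. The sketch below illustrates that general recipe in NumPy; the function names, the tanh activation, and the regularization value are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of an ELM-style layer: fixed random projection followed by
# a closed-form ridge-regression solve for the output weights.
# Hypothetical names and parameters; not the authors' exact implementation.
import numpy as np

def elm_fit(X, T, n_hidden=256, reg=1e-3, seed=None):
    """X: (n_samples, n_features) inputs, T: (n_samples, n_targets) targets."""
    rng = np.random.default_rng(seed)
    # Random input weights and biases are drawn once and never trained.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    # Nonlinear random projection of the inputs.
    H = np.tanh(X @ W + b)
    # Convex problem: ridge regression for the output weights, solved in closed form.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    # Reuse the same fixed random projection, then apply the learned output weights.
    return np.tanh(X @ W + b) @ beta
```

Because the random weights W and b are never updated, the only trained parameters are the output weights beta, which is why an ELM layer is fitted with a single linear solve rather than iterative BP.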
