Abstract

In recent years, deep convolutional neural network models have been increasingly used in computer vision tasks such as license plate recognition, object recognition, automatic digit recognition, and medical applications that support diagnosis from signals or images. A disadvantage of these networks is their long training time: adjusting the weights with iterative, gradient-descent-based methods can take days, which is an obstacle in applications that require frequent retraining or real-time operation. Fast convolutional networks avoid gradient-based methods by efficiently defining the filters used for feature extraction and the weights used for classification. The open issue is how to set the convolutional filter banks when they are not learned by backpropagation of gradients. In this work we propose a deep, fast convolutional neural network based on the extreme learning machine and a fixed bank of filters. We demonstrate that the model can be trained on cost-effective, non-specialized computer hardware, completing training faster than models running on GPUs. Results were obtained on the EMNIST dataset, which represents the widely studied problem of digit recognition. We provide a deep convolutional extreme learning machine (CELM) with two feature-extraction stages and combinations of selected filters. For the proposed network, we find that the empirical generalization error is explained by an error model based on a theorem by Rahimi and Recht. Compared to the state of the art, the proposed network achieved superior accuracy as well as competitive training time, even relative to approaches that rely on GPU processing.
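To make the general idea concrete, the sketch below outlines a convolutional extreme learning machine pipeline in the spirit of the abstract: two feature-extraction stages built from fixed (untrained) filter banks, followed by an ELM whose output weights are obtained in closed form by regularized least squares rather than gradient descent. This is a minimal illustration under stated assumptions, not the authors' exact architecture; the filter sizes, the ReLU/max-pooling choices, the random hidden layer, and the regularization constant are all hypothetical placeholders.

```python
import numpy as np

def conv2d(x, filters):
    """Valid 2-D convolution of a single-channel image with a bank of k x k filters."""
    k = filters.shape[-1]
    h, w = x.shape
    out = np.empty((len(filters), h - k + 1, w - k + 1))
    for i, f in enumerate(filters):
        for r in range(h - k + 1):
            for c in range(w - k + 1):
                out[i, r, c] = np.sum(x[r:r + k, c:c + k] * f)
    return out

def max_pool(maps, size=2):
    """Non-overlapping max pooling over each feature map."""
    n, h, w = maps.shape
    h, w = h // size * size, w // size * size
    maps = maps[:, :h, :w].reshape(n, h // size, size, w // size, size)
    return maps.max(axis=(2, 4))

def extract_features(images, filters1, filters2):
    """Two fixed-filter convolution + pooling stages, then flatten (assumed ReLU activations)."""
    feats = []
    for img in images:
        s1 = max_pool(np.maximum(conv2d(img, filters1), 0))                    # stage 1
        s2 = [max_pool(np.maximum(conv2d(m, filters2), 0)) for m in s1]        # stage 2
        feats.append(np.concatenate([m.ravel() for m in s2]))
    return np.array(feats)

# --- ELM classifier on top of the fixed convolutional features --------------
rng = np.random.default_rng(0)
filters1 = rng.standard_normal((8, 5, 5))   # hypothetical fixed filter bank, stage 1
filters2 = rng.standard_normal((4, 3, 3))   # hypothetical fixed filter bank, stage 2

X_train = rng.random((32, 28, 28))          # stand-in for 28x28 EMNIST digit images
y_train = rng.integers(0, 10, size=32)      # stand-in class labels
T = np.eye(10)[y_train]                     # one-hot targets

H = extract_features(X_train, filters1, filters2)

# Random hidden projection followed by the ELM closed-form solution
# (ridge-regularized least squares) for the output weights.
W_in = rng.standard_normal((H.shape[1], 512))
A = np.tanh(H @ W_in)
lam = 1e-3
beta = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ T)

pred = np.argmax(A @ beta, axis=1)          # training-set predictions, sanity check
```

Because no gradients are propagated, the only "training" cost is one feature-extraction pass plus a single linear solve, which is why such models remain practical on non-specialized hardware.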
