Abstract

Convolutional Neural Networks (CNNs) are among the most important deep learning algorithms for classifying images based on their visual features. CNN architectures are composed of convolution, pooling, and fully-connected (FC) layers, whose parameters significantly affect classification performance. This paper proposes Convolution Parameter Optimization for CNNs, referred to as CPOCNN. To the best of our knowledge, this is the first optimization model that assigns adaptive upper bounds to the convolution parameters depending on the data dimension in the current layer and the number of layers remaining before the output layer. For this task, a comprehensive mathematical model is presented for CNNs with fixed structures, and it is proven that a larger optimization space is explored than in all state-of-the-art methods. In the optimization process, the number of convolution filters and the type of pooling filters are selected randomly, while the dimensions of the pooling filters, zero-padding, and stride are held constant. CPOCNN has been evaluated on 7 publicly available datasets and compared with 53 competitive CNN models with fixed and optimized structures. The results show that CPOCNN not only outperforms state-of-the-art CNN methods but also strengthens weak CNN models, improving their accuracies by more than 35%. The source code is available online at GitHub: https://github.com/kohzadi/CHOCNN.git
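To make the adaptive upper-bound idea concrete, the following is a minimal illustrative sketch, not the paper's actual formula: it assumes stride 1 and no zero-padding, and bounds the kernel size so that the feature map can still shrink through all remaining convolution layers without collapsing below 1×1. The function name and the exact bound are hypothetical, chosen only to show how a bound can depend on the current data dimension and the number of remaining layers.

```python
def max_kernel_size(dim, layers_remaining):
    """Hypothetical adaptive upper bound on the convolution kernel size.

    Assumes stride 1 and no zero-padding, so a k x k convolution shrinks
    a (dim x dim) feature map to (dim - k + 1) x (dim - k + 1). The bound
    guarantees the spatial size stays >= 1 after all remaining layers.
    """
    if layers_remaining <= 0 or dim < 2:
        return 1  # nothing left to shrink, or no layers left
    # Spreading the total possible shrinkage (dim - 1) over the
    # remaining layers gives this layer's largest admissible kernel.
    return max(1, (dim - 1) // layers_remaining + 1)


# Example: a 32x32 input with 4 convolution layers remaining.
dim = 32
for remaining in range(4, 0, -1):
    k = max_kernel_size(dim, remaining)
    dim = dim - k + 1  # feature-map size after this convolution
```

Under these assumptions, deeper positions in the network (fewer remaining layers, smaller feature maps) automatically receive tighter kernel-size bounds, which is the kind of dimension-aware search-space restriction the abstract describes.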
