Abstract

Convolutional neural networks (CNNs) have been successfully applied to many computer vision tasks [1], especially image classification, where most network structures have been designed manually. With the aid of skip connections and dense connections, models are becoming deeper and their layers wider in order to tackle the challenge of large-scale datasets. However, large convolutional layers become inefficient because of redundant channels in the input feature maps. In this paper, we aim to automatically optimize the topology of DenseNet by removing unnecessary convolutional kernels. To achieve this, we present a training pipeline that generates the network structure with a genetic algorithm. We first propose two encoding methods that represent the structure of the model as a fixed-length binary string. A three-step evolutionary process consisting of selection, crossover, and mutation then optimizes the structure. We also present a pretrained-weight inheritance method that greatly reduces the total time consumption of the genetic process. Experimental results demonstrate that our proposed model achieves accuracy comparable to state-of-the-art models across a wide range of image recognition and classification datasets, while significantly reducing the number of parameters.
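The selection–crossover–mutation loop over fixed-length binary strings described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's method: the genome length, population size, and mutation rate are hypothetical, and the fitness function is a toy stand-in for the paper's actual objective (validation accuracy of the decoded DenseNet against its parameter count).

```python
import random

GENOME_LEN = 16   # hypothetical fixed-length binary encoding of the topology
POP_SIZE = 8      # assumed population size
GENERATIONS = 20  # assumed number of evolutionary rounds
MUT_RATE = 0.05   # assumed per-bit mutation probability

def fitness(genome):
    # Stand-in for the paper's objective (accuracy vs. parameter count);
    # here we simply reward genomes with more 1-bits as a toy example.
    return sum(genome)

def select(pop):
    # Tournament selection: keep the fitter of two random individuals.
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # Single-point crossover preserves the fixed genome length.
    point = random.randrange(1, GENOME_LEN)
    return p1[:point] + p2[point:]

def mutate(genome):
    # Flip each bit independently with probability MUT_RATE.
    return [bit ^ 1 if random.random() < MUT_RATE else bit for bit in genome]

def evolve():
    # Random initial population of fixed-length binary strings.
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Each child is produced by the three-step process:
        # selection -> crossover -> mutation.
        pop = [mutate(crossover(select(pop), select(pop)))
               for _ in range(POP_SIZE)]
    return max(pop, key=fitness)

best = evolve()
print(len(best))  # genome length is preserved: 16
```

In the paper's setting, each binary string would be decoded into a DenseNet variant (with pretrained weights inherited to avoid retraining from scratch) and its fitness evaluated on a validation set; the loop structure itself is unchanged.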
