Abstract

Recent advances in convolutional neural networks (CNNs) have achieved remarkable results in image classification. Image datasets from different domains require different CNN architectures to reach exceptional performance; however, designing a good CNN architecture is computationally expensive and requires expert knowledge. In this paper, we propose an effective framework for solving different image classification tasks using convolutional neural architecture search (CNAS). The framework is inspired by current research on NAS, which automatically learns the best architecture for a specific training dataset, such as MNIST or CIFAR-10. Many search algorithms have been proposed for implementing NAS; however, insufficient attention has been paid to the selection of primitive operations (POs) in the search space. We propose a more efficient search space for learning the CNN architecture. Our search algorithm is based on DARTS (a differentiable architecture search method), but it considers different numbers of intermediate nodes and replaces some unused POs with the channel shuffle and squeeze-and-excitation operations. We achieve better performance than DARTS on the CIFAR-10/CIFAR-100 and Tiny-ImageNet datasets. When the none operation is retained while deriving the architecture, the model's performance decreases slightly, but the number of architecture parameters is reduced by approximately 40%. To balance performance against the number of architecture parameters, the framework can learn a dense architecture for high-performance machines, such as servers, and a sparse architecture for resource-constrained devices, such as embedded systems or mobile devices.
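
As an illustration of how such a modified search space could be assembled, the sketch below (in PyTorch) defines a small set of candidate primitive operations that includes channel shuffle and squeeze-and-excitation blocks, combined on a single edge by a DARTS-style softmax over architecture weights. This is a minimal sketch, not the authors' code: the names make_candidate_ops, MixedOp, SEBlock, and ChannelShuffle, the group count of 4, the SE reduction ratio of 4, and the restriction to stride-1 edges are all illustrative assumptions.

```python
import torch
import torch.nn as nn


class Zero(nn.Module):
    # The "none" operation: outputs zeros, so the edge contributes nothing.
    def forward(self, x):
        return x * 0.0


class SEBlock(nn.Module):
    # Squeeze-and-excitation: re-weights channels using global average context.
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w


class ChannelShuffle(nn.Module):
    # Channel shuffle: permutes channels across groups to mix information cheaply.
    def __init__(self, groups=4):
        super().__init__()
        self.groups = groups

    def forward(self, x):
        b, c, h, w = x.shape
        x = x.view(b, self.groups, c // self.groups, h, w)
        return x.transpose(1, 2).reshape(b, c, h, w)


def make_candidate_ops(channels):
    # Candidate primitive operations on one stride-1 edge of the searched cell.
    return nn.ModuleDict({
        "none": Zero(),
        "skip_connect": nn.Identity(),
        "sep_conv_3x3": nn.Sequential(  # depthwise + pointwise convolution
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False),
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        ),
        "channel_shuffle": ChannelShuffle(groups=4),
        "squeeze_excite": SEBlock(channels, reduction=4),
    })


class MixedOp(nn.Module):
    # DARTS-style mixed operation: a softmax-weighted sum of all candidate ops.
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList(make_candidate_ops(channels).values())
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))  # architecture weights

    def forward(self, x):
        weights = torch.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))


if __name__ == "__main__":
    x = torch.randn(2, 16, 32, 32)   # batch of 16-channel feature maps
    edge = MixedOp(channels=16)
    print(edge(x).shape)             # torch.Size([2, 16, 32, 32])
```

After the search, edges whose softmax weight concentrates on "none" can either be pruned (yielding the sparser architecture described above) or kept dense, which is how the framework trades off accuracy against parameter count for servers versus resource-constrained devices.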

