Abstract

Deep Convolutional Neural Networks (DCNNs) have led to a series of breakthroughs in image classification. With growing demand to run DCNN-based models on mobile platforms that have minimal computing power and limited storage, the challenge is to optimize these models for less computation and a smaller memory footprint. This paper presents a highly efficient, modularized Deep Neural Network (DNN) model for image classification that outperforms state-of-the-art models in both speed and accuracy. The proposed model is constructed by repeating a building block that aggregates a set of transformations with the same topology. To make the model lighter, it uses depthwise separable convolution, grouped convolution, and identity shortcut connections. It reduces computation by approximately 100M FLOPs compared to MobileNet, with a slight improvement in accuracy when validated on the CIFAR-10, CIFAR-100, and Caltech-256 datasets.
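The FLOP savings the abstract claims come from the standard cost arithmetic of these convolution variants. The sketch below uses the textbook per-layer FLOP formulas for standard, depthwise separable, and grouped convolutions; the layer shapes are illustrative assumptions, not the paper's actual architecture.

```python
# Back-of-the-envelope FLOP counts for the convolution variants named in
# the abstract. Shapes below are illustrative assumptions only.

def standard_conv_flops(h, w, c_in, c_out, k):
    # Each output pixel needs k*k*c_in multiply-adds per output channel.
    return h * w * c_out * k * k * c_in

def depthwise_separable_flops(h, w, c_in, c_out, k):
    # Depthwise step: one k*k filter per input channel, followed by a
    # 1x1 pointwise convolution that mixes channels.
    depthwise = h * w * c_in * k * k
    pointwise = h * w * c_in * c_out
    return depthwise + pointwise

def grouped_conv_flops(h, w, c_in, c_out, k, groups):
    # Each group convolves c_in/groups input channels to c_out/groups
    # output channels, cutting the cost by the group count.
    return h * w * c_out * k * k * (c_in // groups)

if __name__ == "__main__":
    h, w, c_in, c_out, k = 32, 32, 128, 128, 3
    std = standard_conv_flops(h, w, c_in, c_out, k)
    sep = depthwise_separable_flops(h, w, c_in, c_out, k)
    grp = grouped_conv_flops(h, w, c_in, c_out, k, groups=4)
    print(f"standard:            {std:>12,}")
    print(f"depthwise separable: {sep:>12,}  ({std / sep:.1f}x fewer)")
    print(f"grouped (g=4):       {grp:>12,}  ({std / grp:.1f}x fewer)")
```

For a 3x3 layer with 128 input and output channels, the depthwise separable variant costs roughly an eighth of the standard convolution, and a grouped convolution with g groups costs 1/g of it; stacking many such blocks is how reductions on the order of 100M FLOPs accumulate.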
