Abstract
Convolutional neural networks have revolutionized the computer vision domain. They have proven to be a dominant technology for tasks such as image classification, semantic segmentation, and object detection, surpassing the performance of classical feature-based algorithms such as SIFT and HOG. Instead of manually engineering features, supervised learning learns the low-level and high-level features necessary for classification. As a result, convolutional neural networks have become a popular tool for computer vision problems. However, because of the model size of deep convolutional neural networks, they are computationally and memory intensive to train and deploy. Research in design space exploration (DSE) of neural networks and in compression techniques for building compact architectures has made convolutional neural networks more memory and computationally efficient, and has improved the feasibility of deploying them on embedded targets. This paper explores the concept of compact convolution filters to reduce the number of parameters in a convolutional neural network. The intuition behind the approach is that replacing standard convolution filters with a stack of compact convolution filters yields a compact architecture with competitive accuracy. The paper examines the fire module, a compact convolution filter, and proposes a method of recreating the state-of-the-art VGG-16 architecture using fire modules to develop a compact architecture, which is then trained on the CIFAR-10 dataset and deployed on a real-time embedded platform, the NXP Bluebox 2.0, using the RTMaps software framework.
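For reference, the sketch below illustrates the general structure of a fire module as introduced in SqueezeNet: a 1x1 "squeeze" convolution followed by parallel 1x1 and 3x3 "expand" convolutions whose outputs are concatenated. This is a minimal PyTorch sketch for illustration only; the channel sizes shown are assumptions and do not reflect the specific configuration used in the paper's VGG-16-based architecture.

```python
import torch
import torch.nn as nn

class FireModule(nn.Module):
    """Minimal fire module: 1x1 squeeze conv, then parallel 1x1 and 3x3
    expand convs whose outputs are concatenated along the channel axis."""
    def __init__(self, in_channels, squeeze_channels,
                 expand1x1_channels, expand3x3_channels):
        super().__init__()
        self.squeeze = nn.Conv2d(in_channels, squeeze_channels, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze_channels, expand1x1_channels, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_channels, expand3x3_channels,
                                   kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Squeeze reduces the channel count before the more expensive expand step.
        x = self.relu(self.squeeze(x))
        # Concatenate the two expand branches to form the module output.
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

# Hypothetical usage: a fire module producing 64 + 64 = 128 output channels
# from a 64-channel input, via a 16-channel squeeze layer.
fire = FireModule(in_channels=64, squeeze_channels=16,
                  expand1x1_channels=64, expand3x3_channels=64)
out = fire(torch.randn(1, 64, 32, 32))  # shape: (1, 128, 32, 32)
```

The parameter saving comes from the squeeze layer: most 3x3 filters operate on the reduced channel count rather than on the full input depth, which is the mechanism the compact-architecture approach described above relies on.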