Abstract

The deep network in network (DNIN) model is an efficient instance and an important extension of the convolutional neural network (CNN), consisting of alternating convolutional and pooling layers. In this model, a multilayer perceptron (MLP), a nonlinear function, replaces the linear filter for convolution. Increasing the depth of DNIN can also improve classification accuracy, but training becomes more difficult, learning slows down, and accuracy saturates and then degrades. This paper presents a new deep residual network in network (DrNIN) model, a deeper variant of DNIN. The model is an attractive architecture for on-chip FPGA implementations and can be applied to a variety of image recognition tasks. It has a homogeneous, multilength architecture governed by the hyperparameter “L”, which defines the model length. In this paper, we apply the residual learning framework to DNIN and explicitly reformulate its convolutional layers as residual learning functions to mitigate the vanishing gradient problem and to ease and speed up training. We provide a comprehensive study showing that DrNIN models can gain accuracy from significantly increased depth. On the CIFAR-10 dataset, we evaluate the proposed models with a depth of up to L = 5 DrMLPconv layers, 1.66x deeper than DNIN. The experimental results demonstrate the efficiency of the proposed method and show that it gives the model a greater capacity to represent features, leading to better recognition performance.
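To make the residual reformulation concrete, the following is a minimal PyTorch-style sketch of a residual MLPconv ("DrMLPconv") block: a spatial convolution followed by 1x1 convolutions (the per-pixel MLP of network in network), wrapped in an identity shortcut so the block learns a residual function F(x) and outputs F(x) + x. The block name, kernel sizes, channel counts, and layer ordering are illustrative assumptions, not the paper's exact configuration.

import torch.nn as nn

class DrMLPConvBlock(nn.Module):
    # Illustrative residual MLPconv block: one k x k convolution followed by
    # two 1 x 1 convolutions (the per-pixel "micro MLP"), wrapped in an
    # identity shortcut. Hypothetical configuration for this sketch only.
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size, padding=kernel_size // 2),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),  # 1x1 conv = MLP layer across channels
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Residual reformulation: the block learns F(x); its output is F(x) + x,
        # which eases gradient flow through deep stacks of DrMLPconv layers.
        return self.relu(self.body(x) + x)

# A model of "length" L stacks L such blocks, e.g. L = 5 for the deepest
# configuration evaluated on CIFAR-10.
model_trunk = nn.Sequential(*[DrMLPConvBlock(64) for _ in range(5)])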

Highlights

  • With the increase in the depth of the deep network in network (DNIN) model, a degradation of training accuracy is unexpectedly exposed: accuracy saturates and then degrades rapidly. This degradation is not caused by overfitting

  • We address the degradation problem by introducing an efficient deep neural network architecture for computer vision, the deep residual network in network, whose name combines the deep network in network article [1] with the well-known “deep residual learning for image recognition” [14]

  • The advantages of the architecture are verified experimentally on the CIFAR-10 classification challenge. The contributions of this work are as follows: (i) we propose a new residual architecture for the DMLPconv layers that yields deep residual network in network (DrNIN) models with considerably improved performance; (ii) we propose a new way to use batch normalization and dropout in the DrNIN model so that it is properly regularized and normalized and does not overfit during training (a sketch follows this list); (iii) we present a detailed experimental study of multilength deep model architectures that examines several important aspects of DrMLPconv layers in depth; (iv) we show that our proposed DrNIN architectures obtain interesting results on CIFAR-10, considerably improving the accuracy and training speed of DrNIN
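As a concrete illustration of contribution (ii), the sketch below shows one plausible placement of batch normalization and dropout inside a DrMLPconv block (PyTorch-style). The exact placement and dropout rate used by the authors are not specified in this summary; batch normalization after each convolution and dropout before the final 1x1 convolution are assumptions made for illustration.

import torch.nn as nn

def drmlpconv_bn_dropout(channels, kernel_size=3, p_drop=0.5):
    # Hypothetical DrMLPconv body with batch normalization and dropout.
    # BN placement and the dropout rate are assumptions, not the paper's values.
    return nn.Sequential(
        nn.Conv2d(channels, channels, kernel_size, padding=kernel_size // 2),
        nn.BatchNorm2d(channels),   # normalizes activations, stabilizing training of deep stacks
        nn.ReLU(inplace=True),
        nn.Conv2d(channels, channels, kernel_size=1),
        nn.BatchNorm2d(channels),
        nn.ReLU(inplace=True),
        nn.Dropout(p_drop),         # regularization to reduce overfitting
        nn.Conv2d(channels, channels, kernel_size=1),
        nn.BatchNorm2d(channels),
    )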


Summary

Introduction

With the increase in the depth of the DNIN model, a degradation of training accuracy is unexpectedly exposed: accuracy saturates and then degrades rapidly. This degradation is not caused by overfitting. The contributions of this work are as follows: (i) we propose a new residual architecture for the DMLPconv layers that yields DrNIN models with considerably improved performance; (ii) we propose a new way to use batch normalization and dropout in the DrNIN model so that it is properly regularized and normalized and does not overfit during training; (iii) we present a detailed experimental study of multilength deep model architectures that examines several important aspects of DrMLPconv layers in depth; (iv) we show that our proposed DrNIN architectures obtain interesting results on CIFAR-10, considerably improving the accuracy and training speed of DrNIN. The rest of this article is organized as follows: Section 2 presents an overview of related work.

Related Works
Proposed Model
Experimental Results
Conclusion