Abstract

Deep convolutional neural networks (DCNNs) are an influential tool for solving various problems in machine learning and computer vision. Recurrent connectivity is a very important component of visual information processing within the human brain. However, the idea of recurrent connectivity is rarely applied within convolutional layers; the exceptions are a few DCNN architectures, including the recurrent convolutional neural network (RCNN) of Liang and Hu (in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2015) and Pinheiro and Collobert (in: ICML, 2014). On the other hand, the Inception network architecture has become popular in the computer vision community (Szegedy et al. in Inception-v4, Inception-ResNet and the impact of Residual connections on learning, 2016. arXiv:1602.07261). In this paper, we introduce a deep learning architecture called the Inception Recurrent Convolutional Neural Network (IRCNN), which combines the power of an Inception network with recurrent convolutional layers. Although the inputs are static, the recurrent property plays a significant role in modeling contextual information for object recognition tasks and thus improves overall training and testing accuracy. In addition, the proposed architecture generalizes both the Inception and RCNN models. We have empirically evaluated the recognition performance of the proposed IRCNN model on several benchmark datasets, including MNIST, CIFAR-10, CIFAR-100, and SVHN. The experimental results show higher recognition accuracy compared to most popular DCNNs, including the RCNN. Furthermore, we have compared IRCNN against equivalent Inception networks (EIN) and equivalent Inception-Residual networks (EIRN) on the CIFAR-100 dataset. On the augmented CIFAR-100 dataset, we achieved improvements of about 3.5%, 3.47%, and 2.54% in classification accuracy over the RCNN, EIN, and EIRN, respectively. We have also conducted experiments on the Tiny ImageNet-200 dataset with IRCNN, EIN, EIRN, RCNN, DenseNet (Huang et al. in Densely connected convolutional networks, 2016. arXiv:1608.06993), and DenseNet with recurrent convolution layers, where the proposed model shows significantly better performance than the baseline models.
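As a rough illustration of the recurrent convolutional layer (RCL) that the abstract builds on, the sketch below unfolds a shared recurrent convolution over a static input for a fixed number of steps, re-injecting the feed-forward response at each step. This is a minimal PyTorch sketch based on the general RCL idea; the kernel sizes, normalization choice, and number of unfolding steps are assumptions for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn as nn


class RecurrentConvLayer(nn.Module):
    """Recurrent convolutional layer (RCL) sketch: a shared recurrent
    convolution is unfolded for a fixed number of steps over a static input,
    with the feed-forward response added back in at every step.

    Hyperparameters below (3x3 kernels, batch norm, 3 unfolding steps) are
    illustrative assumptions, not the paper's reference implementation.
    """

    def __init__(self, in_channels: int, out_channels: int, steps: int = 3):
        super().__init__()
        self.steps = steps
        self.feed_forward = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.recurrent = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
        self.norms = nn.ModuleList(nn.BatchNorm2d(out_channels) for _ in range(steps + 1))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # Feed-forward pass at step t = 0.
        x = self.relu(self.norms[0](self.feed_forward(u)))
        # Recurrent unfolding: refine the state while the static
        # feed-forward drive from the input is added at every step.
        for t in range(self.steps):
            x = self.relu(self.norms[t + 1](self.feed_forward(u) + self.recurrent(x)))
        return x


# Example usage with a dummy batch of 32x32 RGB images.
if __name__ == "__main__":
    layer = RecurrentConvLayer(in_channels=3, out_channels=64, steps=3)
    out = layer(torch.randn(8, 3, 32, 32))
    print(out.shape)  # torch.Size([8, 64, 32, 32])
```

In an IRCNN-style block, layers of this kind would presumably take the place of the plain convolutions inside each Inception branch, so that every branch refines its response over the unfolding steps before the branch outputs are concatenated.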
