Abstract

Since Alex Krizhevsky won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012 with a deep convolutional neural network (D-CNN), more and more researchers have engaged in the research and development of D-CNNs. However, recent research on deep convolutional neural networks is mostly based on the ImageNet dataset. Models built for such a large dataset tend to blindly increase the number of network layers, ignoring the fact that most datasets in practical applications are orders of magnitude smaller than ImageNet. Such deep networks tend to perform poorly on small datasets such as CIFAR-10, since deep models overfit easily. In this paper, we apply several of the more effective methods proposed in recent years to a traditional deep convolutional neural network. We propose a modified AlexNet and use this model to fit CIFAR-10. By adding Batch Normalization, using Dilated Convolution, and replacing the Fully Connected (FC) layers with Global Average Pooling (GAP), we achieve an 8.6% error rate on CIFAR-10 without severe overfitting. Our results show that, with proper modifications, deep CNNs can fit small datasets well and achieve much better results than the original network.
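
The abstract does not give the exact layer configuration, so the following is only a minimal sketch, assuming a PyTorch implementation, of how the three stated modifications (Batch Normalization, Dilated Convolution, and GAP in place of FC layers) can be combined in an AlexNet-style network for CIFAR-10. All channel counts and kernel sizes below are illustrative assumptions, not the paper's actual architecture.

import torch
import torch.nn as nn

class ModifiedAlexNet(nn.Module):
    """AlexNet-style network for 32x32 CIFAR-10 inputs.
    Layer sizes are hypothetical, not the paper's configuration."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            # Convolution followed by Batch Normalization to stabilize training
            nn.Conv2d(3, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
            # Dilated convolution enlarges the receptive field
            # without extra parameters or loss of resolution
            nn.Conv2d(64, 128, kernel_size=3, padding=2, dilation=2),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),  # 16x16 -> 8x8
            nn.Conv2d(128, 256, kernel_size=3, padding=1),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
        )
        # Global Average Pooling replaces the large FC layers of the
        # original AlexNet, cutting parameters and reducing overfitting
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(256, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = self.gap(x).flatten(1)
        return self.classifier(x)

if __name__ == "__main__":
    model = ModifiedAlexNet()
    logits = model(torch.randn(1, 3, 32, 32))  # one CIFAR-10-sized image
    print(logits.shape)  # torch.Size([1, 10])

Because GAP reduces each feature map to a single value, the classifier head shrinks from the tens of millions of FC parameters in the original AlexNet to a single small linear layer, which is the main source of the reduced overfitting the abstract reports.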
