Abstract

In this paper, we investigate how to improve the generalization ability of simple convolutional neural networks. Previous work showed that techniques such as regularization, transfer learning, and data augmentation can improve generalization. We first focus on data augmentation and show that generalization can be substantially improved by augmenting the training samples. However, the increased size and difficulty of the training set slows down training. To speed up the model while keeping performance high, we apply a Squeeze-and-Excitation block to our convolutional neural network. Our results show a large improvement from data augmentation, and the model trains faster after the Squeeze-and-Excitation block is added. Our method achieves a six-fold improvement in convergence time and a 37.65% improvement in accuracy, which validates the effectiveness of the proposed approach.
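For readers unfamiliar with the Squeeze-and-Excitation mechanism referenced above, the following is a minimal sketch of an SE block, assuming a PyTorch implementation; the channel count and reduction ratio are illustrative and not taken from the paper.

```python
# Minimal sketch of a Squeeze-and-Excitation block (Hu et al., 2018).
# The reduction ratio of 16 is a common default, not the paper's setting.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Squeeze: global average pooling collapses each feature map to a scalar.
        self.squeeze = nn.AdaptiveAvgPool2d(1)
        # Excitation: a two-layer bottleneck learns per-channel attention weights.
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.squeeze(x).view(b, c)        # (B, C) channel descriptors
        w = self.excite(w).view(b, c, 1, 1)   # per-channel weights in (0, 1)
        return x * w                          # rescale the input feature maps
```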
