Convolutional Neural Networks (CNNs) have become the gold standard in many visual recognition tasks, including medical applications. Due to their high variance, however, these models are prone to overfitting the data they are trained on. One of the most common strategies to mitigate this problem is data augmentation, with rotation, scaling, and translation being typical operations. In this work we propose an alternative to rotation-based data augmentation in which the rotation transformation is performed inside the CNN architecture: in each training batch, the weights of all convolutional layers are rotated by the same random angle. We validate the proposed method empirically, showing its usefulness under different scenarios.
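The core idea — rotating convolutional kernels rather than input images — can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the weights of a convolutional layer are available as a NumPy array of shape `(out_channels, in_channels, k, k)` and uses bilinear interpolation from `scipy.ndimage` to rotate each 2-D kernel in place.

```python
import numpy as np
from scipy.ndimage import rotate

def rotate_conv_weights(weights, angle):
    """Rotate every 2-D kernel in a conv weight tensor by `angle` degrees.

    weights: array of shape (out_channels, in_channels, k, k).
    reshape=False keeps the kernel size fixed; order=1 is bilinear
    interpolation; mode="nearest" pads edge values during rotation.
    """
    return rotate(weights, angle, axes=(2, 3),
                  reshape=False, order=1, mode="nearest")

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 3, 5, 5))  # a hypothetical conv layer's weights

# One shared random angle per training batch, applied to all conv layers.
angle = rng.uniform(0.0, 360.0)
w_rot = rotate_conv_weights(w, angle)
assert w_rot.shape == w.shape  # kernel dimensions are unchanged
```

In a real training loop this rotation would be applied to every convolutional layer at the start of each batch, so that a single forward/backward pass sees a consistently rotated filter bank.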