Abstract

Convolutional Neural Networks (CNNs) have become the gold standard in many visual recognition tasks, including medical applications. Due to their high variance, however, these models are prone to overfit the data they are trained on. One of the most common strategies to mitigate this problem is data augmentation, with rotation, scaling, and translation being typical operations. In this work we propose an alternative to rotation-based data augmentation in which the rotation transformation is performed inside the CNN architecture: in each training batch, the weights of all convolutional layers are rotated by the same random angle. We validate the proposed method empirically, showing its usefulness under different scenarios.
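
Since the abstract only outlines the mechanism, the snippet below is a minimal illustrative sketch of the idea, assuming a PyTorch implementation. The helper name, the angle range, the use of torchvision's rotate with bilinear interpolation, and the in-place update are all assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): rotate the kernels of every
# convolutional layer by one shared random angle at the start of each batch.
# Whether the original (unrotated) weights are restored after the batch is
# not specified in the abstract; this sketch simply applies the rotation
# in place for demonstration purposes.
import random

import torch
import torch.nn as nn
import torchvision.transforms.functional as TF


def rotate_conv_weights(model: nn.Module, angle: float) -> None:
    """Rotate the spatial dimensions of every Conv2d kernel in-place."""
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, nn.Conv2d):
                # Weight shape is (out_channels, in_channels, kH, kW);
                # TF.rotate treats the last two dimensions as the image plane.
                # Bilinear interpolation on small kernels (e.g. 3x3) is crude
                # and is only one possible choice.
                module.weight.copy_(
                    TF.rotate(
                        module.weight,
                        angle,
                        interpolation=TF.InterpolationMode.BILINEAR,
                    )
                )


# Hypothetical usage inside a training loop (`model`, `loader`, `optimizer`
# are placeholders):
# for images, labels in loader:
#     angle = random.uniform(0.0, 360.0)  # same angle for all layers this batch
#     rotate_conv_weights(model, angle)
#     loss = nn.functional.cross_entropy(model(images), labels)
#     optimizer.zero_grad()
#     loss.backward()
#     optimizer.step()
```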
