Abstract

While recognizing and detecting objects with good accuracy is a trivial task for the human visual system, achieving the same feat with computer vision algorithms remains an active area of research. Humans recognize objects seen only once with high accuracy even when their appearance is altered by transformations such as rotation, translation, scaling, distortion and occlusion, making the human visual system a remarkably spatially invariant one. To make computer algorithms such as Convolutional Neural Networks (CNNs) spatially invariant, one popular practice is to introduce variations into the dataset through data augmentation. This achieves good results but comes with increased computational cost. In this paper, we address rotation transformations: instead of using data augmentation, we propose a novel method that allows CNNs to improve rotation invariance by augmenting feature maps. This is achieved through a rotation transformer layer, the Rotation Invariance Transformer (RiT), which can be placed at the output end of a convolution layer. Incoming features are rotated by a given set of rotation parameters and then passed to the next layer. We test our technique on the benchmark CIFAR10 and MNIST datasets in a setting where the RiT layer is placed between the feature extraction and classification layers of the CNN. Our results show promising improvements in the network's ability to be rotation invariant across classes, with no increase in model parameters.
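The abstract describes a parameter-free layer that rotates incoming feature maps by a fixed set of rotation parameters before passing them on. A minimal NumPy sketch of that idea is shown below, assuming 90-degree rotation steps and a cyclic assignment of angles to channels; the function name `rit_layer`, the angle set, and the per-channel assignment are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def rit_layer(feature_maps, angles=(0, 90, 180, 270)):
    """Sketch of a rotation transformer layer (RiT-style).

    Rotates channel i of `feature_maps` (shape: channels x H x W) by
    angles[i % len(angles)] degrees. Restricting to multiples of 90
    degrees keeps the operation exact; non-zero rotations assume square
    feature maps so the output shape matches the input. Because the
    layer only permutes pixel positions, it adds no model parameters.
    """
    out = np.empty_like(feature_maps)
    for i, fmap in enumerate(feature_maps):
        k = (angles[i % len(angles)] // 90) % 4  # number of 90-degree turns
        out[i] = np.rot90(fmap, k=k)
    return out

# Example: two 3x3 feature maps; channel 0 keeps angle 0, channel 1 gets 90 deg.
x = np.arange(2 * 3 * 3, dtype=float).reshape(2, 3, 3)
y = rit_layer(x)
```

In a CNN this would sit between the feature-extraction and classification stages, as the abstract describes, so the classifier is trained on rotated variants of the learned features rather than on rotated input images.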

