• We propose a rotation invariant Gabor convolutional neural network (RIGCN).
• We learn Gabor-guided convolutional features in a Siamese network architecture.
• We explore a convolutional fusion operator to obtain rotation invariant features.
• We show the effectiveness of RIGCN for rotation invariant image classification.

Gabor filters have recently been integrated with deep convolutional neural networks to learn better features with fewer model parameters. However, rotation invariance is not well addressed during feature learning. In this paper, we propose a rotation invariant Gabor convolutional neural network (RIGCN) for image classification. First, we transform each input image to generate multiple rotated instances and feed them into a weight-sharing Siamese network architecture to learn Gabor-guided deep convolutional features. Then, we compute the maximum and average feature responses over all rotated instances of the same input image and pass them to a convolutional fusion module to obtain a rotation invariant image representation. Finally, we apply the cross-entropy loss for classification. In our method, the Siamese architecture enables us to obtain rotation invariant features from rotated image instances, while the convolutional fusion operator yields richer statistical features with high efficiency. Experimental results on several benchmark datasets demonstrate the effectiveness of RIGCN for rotation invariant image classification.
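The rotate, weight-share, and max/average-fuse pipeline described above can be sketched as follows. This is a minimal toy in plain Python, not the paper's implementation: `conv_feature` is a stand-in for the Gabor-guided convolutional branch (a valid cross-correlation with ReLU and mean pooling), element-wise max and average pooling stand in for the learned convolutional fusion module, and all names are hypothetical. It illustrates why the construction is invariant to 90-degree rotations: rotating the input only permutes the instance set, so the pooled statistics are unchanged.

```python
def rot90(m):
    # Rotate a square matrix (list of lists) 90 degrees clockwise.
    return [list(row) for row in zip(*m[::-1])]

def conv_feature(img, kernel):
    # Toy stand-in for one Gabor-guided branch: valid cross-correlation,
    # ReLU, then global mean pooling to a single scalar response.
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    vals = []
    for i in range(h - kh + 1):
        for j in range(w - kw + 1):
            s = sum(img[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            vals.append(max(s, 0.0))  # ReLU
    return sum(vals) / len(vals)      # global average pool

def rigcn_descriptor(img, kernels):
    # Generate four rotated instances and apply the SAME (weight-shared)
    # feature extractor to each, as in a Siamese architecture.
    instances = [img]
    for _ in range(3):
        instances.append(rot90(instances[-1]))
    feats = [[conv_feature(x, k) for k in kernels] for x in instances]
    # Max and average responses across instances; concatenation is a
    # simplified stand-in for the paper's convolutional fusion module.
    max_f = [max(col) for col in zip(*feats)]
    avg_f = [sum(col) / len(col) for col in zip(*feats)]
    return max_f + avg_f
```

Because the four rotated instances of `rot90(img)` are, as a set, the same four instances generated from `img`, the max and average over instances are identical for both inputs, so the descriptor is invariant to 90-degree rotations by construction.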