Abstract

Fine-grained visual classification (FGVC) is challenging due to the high interclass similarity and large intraclass variation in datasets. In this work, we explore the merit of complex values, whose imaginary part provides a means of modeling data uncertainty (e.g., different points on the complex plane can describe the same state), together with graph convolutional networks (GCNs), which learn interdependencies among classes, to tackle these two challenges simultaneously. To this end, we propose a novel approach, termed text-assisted complex-valued fusion network (TA-CFN). Specifically, we expand each feature from a 1-D real value to a 2-D complex value by disassembling feature maps, thereby extending traditional deep convolutional neural networks to the complex domain. Then, we fuse the real and imaginary parts of the complex features through a complex projection and modulus operation. Finally, we build an undirected graph over the object labels with the assistance of a text corpus, and a GCN is learned to map this graph into a set of classifiers. The benefits are twofold: 1) complex features provide a richer algebraic structure that better models the large variation within the same category and 2) the interclass dependencies captured by the GCN help identify the key factors behind the slight variation among different categories. Extensive experiments verify that our model achieves state-of-the-art performance on two widely used FGVC datasets.
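The pipeline described above can be summarized as: split backbone features into real and imaginary halves, fuse them via the complex modulus, and score the fused features against classifiers generated by a GCN over a text-derived label graph. The PyTorch sketch below illustrates this flow only; the layer sizes, the two-layer GCN formulation, and all names (e.g., ComplexFusionHead) are assumptions made for illustration and do not represent the authors' exact TA-CFN implementation.

```python
import torch
import torch.nn as nn


class ComplexFusionHead(nn.Module):
    """Illustrative head (not the authors' code): splits a real-valued feature
    vector into real/imaginary halves, fuses them via the complex modulus, and
    classifies with per-class weights produced by a small GCN over the label
    graph. The two-layer GCN form and dimensions are assumptions."""

    def __init__(self, feat_dim, word_dim=300, gcn_hidden=512):
        super().__init__()
        assert feat_dim % 2 == 0, "feature dim must split into real/imag halves"
        self.half = feat_dim // 2
        # Two-layer GCN mapping label word embeddings -> class-specific classifiers.
        self.gcn_w1 = nn.Linear(word_dim, gcn_hidden, bias=False)
        self.gcn_w2 = nn.Linear(gcn_hidden, self.half, bias=False)

    def forward(self, feats, label_embed, adj):
        # feats:       (B, feat_dim)  pooled backbone features
        # label_embed: (C, word_dim)  text-derived label embeddings
        # adj:         (C, C)         normalized label relation graph
        real, imag = feats[:, :self.half], feats[:, self.half:]
        # Complex modulus fuses real and imaginary parts into one magnitude feature.
        fused = torch.sqrt(real ** 2 + imag ** 2 + 1e-8)        # (B, half)
        # GCN propagation: H = ReLU(A X W1), W = A H W2
        h = torch.relu(adj @ self.gcn_w1(label_embed))          # (C, gcn_hidden)
        classifiers = adj @ self.gcn_w2(h)                      # (C, half)
        return fused @ classifiers.t()                          # (B, C) logits


# Minimal usage example with random placeholder inputs.
head = ComplexFusionHead(feat_dim=2048)
logits = head(torch.randn(8, 2048),      # batch of pooled features
              torch.randn(200, 300),     # e.g., word embeddings of 200 labels
              torch.eye(200))            # placeholder normalized adjacency
```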

