This work investigates the problem of efficiently learning discriminative low-dimensional (LD) representations of multiclass image objects. We propose a generic end-to-end approach, named SparConvLow, that jointly optimizes a sparse dictionary and convolutional features for learning low-dimensional discriminative image representations, taking advantage of convolutional neural networks (CNNs), dictionary learning, and orthogonal projections. The learning process can be summarized as follows. First, a CNN module is employed to extract high-dimensional (HD) preliminary convolutional features. Second, to avoid the high computational cost of direct sparse coding on HD CNN features, we learn a sparse representation (SR) over a task-driven dictionary in an orthogonally projected feature space. We then apply a discriminative projection to the SR. The entire pipeline is formulated as an end-to-end joint optimization problem of trace-quotient maximization. The cost function is well-defined on the product of the CNN parameter space, the Stiefel manifold, the Oblique manifold, and the Grassmann manifold. Using explicitly derived gradients, the cost function is optimized via a geometric stochastic gradient descent (SGD) algorithm together with the chain rule and backpropagation. The experimental results show that the proposed method achieves highly competitive performance compared with state-of-the-art (SOTA) methods for image classification, object categorization, and face recognition, under both supervised and semi-supervised settings. The code is available at https://github.com/MVPR-Group/SparConvLow.
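To make the described pipeline concrete, the following is a minimal, hypothetical sketch in PyTorch of the stages named in the abstract: CNN features, an orthogonal projection, sparse coding over a dictionary, a discriminative projection, and a trace-quotient objective. The toy network, dimensions, ISTA-based sparse coding, and loss form are illustrative assumptions rather than the authors' implementation, and the paper's geometric SGD on the Stiefel, Oblique, and Grassmann manifolds is approximated here with orthogonal parametrizations and plain autograd-based SGD.

```python
# Hypothetical sketch of a SparConvLow-style pipeline; not the authors' code.
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import orthogonal


class SparConvLowSketch(nn.Module):
    def __init__(self, feat_dim=128, proj_dim=64, n_atoms=96, out_dim=16,
                 ista_steps=5, lam=0.1):
        super().__init__()
        # Toy CNN backbone producing high-dimensional (HD) features.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 16, feat_dim),
        )
        # Orthogonal projection to a low-dimensional space (Stiefel-like constraint).
        self.P = orthogonal(nn.Linear(feat_dim, proj_dim, bias=False))
        # Dictionary whose atoms are normalized at use time (Oblique-like constraint).
        self.D = nn.Parameter(torch.randn(proj_dim, n_atoms))
        # Discriminative projection on the sparse codes (Grassmann-like subspace).
        self.W = orthogonal(nn.Linear(n_atoms, out_dim, bias=False))
        self.ista_steps, self.lam = ista_steps, lam

    def sparse_code(self, z):
        # Unrolled ISTA: a few proximal-gradient steps for
        # min_a 0.5 * ||z - D a||^2 + lam * ||a||_1 (keeps gradients w.r.t. D and z).
        D = self.D / self.D.norm(dim=0, keepdim=True)        # unit-norm atoms
        L = torch.linalg.matrix_norm(D, ord=2) ** 2 + 1e-6    # step-size bound
        a = torch.zeros(z.size(0), D.size(1), device=z.device)
        for _ in range(self.ista_steps):
            grad = (a @ D.t() - z) @ D
            a = torch.nn.functional.softshrink(a - grad / L, self.lam / L.item())
        return a

    def forward(self, x):
        z = self.P(self.cnn(x))   # project HD CNN features to a LD space
        a = self.sparse_code(z)   # sparse representation over the dictionary
        return self.W(a)          # discriminative low-dimensional embedding


def trace_quotient_loss(emb, labels):
    # Minimize within-class scatter over between-class scatter,
    # i.e. tr(S_w) / tr(S_b), a common surrogate for trace-quotient maximization.
    mu = emb.mean(0, keepdim=True)
    sw, sb = 0.0, 0.0
    for c in labels.unique():
        ec = emb[labels == c]
        mc = ec.mean(0, keepdim=True)
        sw = sw + ((ec - mc) ** 2).sum()
        sb = sb + ec.size(0) * ((mc - mu) ** 2).sum()
    return sw / (sb + 1e-8)


if __name__ == "__main__":
    model = SparConvLowSketch()
    x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 4, (8,))
    loss = trace_quotient_loss(model(x), y)
    loss.backward()  # end-to-end gradients flow through the CNN, P, D, and W
```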