Abstract

Representing images and videos with Symmetric Positive Definite (SPD) matrices and utilizing the intrinsic Riemannian geometry of the resulting manifold has proved successful in many computer vision tasks. Since SPD matrices lie in a nonlinear space known as a Riemannian manifold, researchers have recently shown a growing interest in learning discriminative SPD matrices with appropriate Riemannian metrics. However, the computational complexity of analyzing high-dimensional SPD matrices is nonnegligible in practical applications. Inspired by the theory of nonparametric estimation, we propose a probability distribution-based approach to overcome this drawback by learning a mapping from the manifold of high-dimensional SPD matrices to a lower-dimensional manifold, which can be expressed as an optimization problem on the Grassmann manifold. Specifically, we perform dimensionality reduction for high-dimensional SPD matrices with popular Riemannian metrics and an affinity matrix constructed from an estimated probability distribution function (PDF) to achieve maximum class separability. Evaluations on several classification tasks show the competitiveness of the proposed approach compared with state-of-the-art methods.
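The mapping described above can be illustrated with a small sketch. In this family of methods (following the scheme of the works the paper cites), a d x d SPD matrix X is mapped to a smaller k x k SPD matrix via the bilinear form W^T X W, where W is a d x k matrix with orthonormal columns, i.e., a point on the Stiefel/Grassmann manifold. The sketch below uses a random orthonormal W purely for illustration; in the paper W is learned by optimization, and the function name `project_spd` is an assumption, not from the source.

```python
import numpy as np

def project_spd(X, W):
    """Map a d x d SPD matrix X to a k x k SPD matrix via X -> W^T X W.

    W is a d x k matrix with orthonormal columns (a point on the
    Stiefel manifold; its column span is a point on the Grassmannian).
    Because W has full column rank, the result is again SPD.
    """
    return W.T @ X @ W

rng = np.random.default_rng(0)
d, k = 10, 3
A = rng.standard_normal((d, d))
X = A @ A.T + d * np.eye(d)  # SPD by construction

# Random orthonormal W via QR (stand-in for the learned projection).
W, _ = np.linalg.qr(rng.standard_normal((d, k)))
Y = project_spd(X, W)

print(Y.shape)                       # (3, 3)
print(np.linalg.eigvalsh(Y).min() > 0)  # True: Y remains SPD
```

By the min-max characterization of eigenvalues, the smallest eigenvalue of W^T X W is bounded below by the smallest eigenvalue of X when W has orthonormal columns, which is why the projected matrix stays on the (lower-dimensional) SPD manifold.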

Highlights

  • In recent years, Symmetric Positive Definite (SPD) matrices have received significant attention in the field of computer vision, due to their capacity to characterize data variations

  • We propose an improved manifold to manifold dimensionality reduction approach for SPD matrices that shares the same scheme as [17]–[19], yet we enhance the discriminative power by exploiting the probability distribution of SPD matrices to define a weighted affinity measure for the optimization

  • We study the effectiveness of the proposed probability distribution-based SPD dimensionality reduction (PDSPDDR) approach by presenting the results of major experiments involving four different computer vision tasks: object classification, material categorization, face recognition, and scene classification


Summary

INTRODUCTION

In recent years, Symmetric Positive Definite (SPD) matrices have received significant attention in the field of computer vision, due to their capacity to characterize data variations. In an attempt to generalize algorithms designed for Euclidean spaces to Riemannian manifolds, several researchers [2], [3], [13] have used popular Riemannian metrics [8], [9], [14] to account for the geometry of SPD matrices. These works can be categorized into two major types: the first aims to obtain a locally flattened approximation of the manifold in the tangent space, where linear geometry applies [6], [7], [15]; the second embeds the manifold into a high-dimensional Reproducing Kernel Hilbert Space (RKHS) with an implicit map and uses kernel-based methods [2], [3], [13], [16].
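One of the popular Riemannian metrics referred to above is the log-Euclidean metric, which measures the distance between two SPD matrices as the Frobenius norm of the difference of their matrix logarithms. As a minimal sketch (the helper names below are assumptions for illustration, not identifiers from the paper), the matrix logarithm of an SPD matrix can be computed stably through its eigendecomposition:

```python
import numpy as np

def spd_log(X):
    """Matrix logarithm of an SPD matrix via eigendecomposition:
    log(X) = U diag(log(lambda)) U^T, valid because all eigenvalues
    of an SPD matrix are strictly positive."""
    w, U = np.linalg.eigh(X)
    return (U * np.log(w)) @ U.T

def log_euclidean_dist(X, Y):
    """Log-Euclidean distance: ||log(X) - log(Y)||_F."""
    return np.linalg.norm(spd_log(X) - spd_log(Y), ord="fro")

# Random SPD matrices for a quick check.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); X = A @ A.T + 5 * np.eye(5)
B = rng.standard_normal((5, 5)); Y = B @ B.T + 5 * np.eye(5)

print(log_euclidean_dist(X, X))  # ~0: distance to itself vanishes
print(log_euclidean_dist(X, Y) > 0)  # True
```

This metric effectively flattens the SPD manifold through the logarithm map, which is why it is a common building block both for tangent-space approximations and for Riemannian kernels in the RKHS-based methods mentioned above.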

PRELIMINARIES
DIMENSIONALITY REDUCTION FOR SPD MATRICES
AFFINITY MATRIX
COVARIANCE DESCRIPTORS
EVALUATIONS
IMPLEMENTATION DETAILS
MATERIAL CATEGORIZATION
FACE RECOGNITION
Findings
CONCLUSIONS