Abstract

The M-P (Moore–Penrose) pseudoinverse has as a key application the computation of least-squares solutions of inconsistent systems of linear equations. Even when a given input matrix is sparse, its M-P pseudoinverse can be dense, potentially leading to high computational burden, especially for high-dimensional matrices. The M-P pseudoinverse is uniquely characterized by four properties, but only two of them need to be satisfied for the computation of least-squares solutions. Fampa and Lee (2018) and Xu, Fampa, Lee, and Ponte (2019) propose local-search procedures to construct sparse block-structured generalized inverses that satisfy the two key M-P properties, plus one more (the so-called reflexive property). That additional M-P property is equivalent to imposing a minimum-rank condition on the generalized inverse. (Vector) 1-norm minimization is used to induce sparsity and, importantly, to keep the magnitudes of entries under control in the constructed generalized inverses. Here, we investigate the trade-off between low 1-norm and low rank for generalized inverses that can be used in the computation of least-squares solutions. We propose several algorithmic approaches that start from a 1-norm minimizing generalized inverse that satisfies the two key M-P properties, and gradually decrease its rank by iteratively imposing the reflexive property. The algorithms iterate until the generalized inverse has the least possible rank. During the iterations, we produce intermediate solutions, trading off low 1-norm (and typically high sparsity) against low rank.
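The abstract's central observation, that only two of the four M-P properties are needed for least-squares solutions, can be checked numerically. A minimal NumPy sketch (the construction H = A† + (I − A†A)Z and all variable names are ours, not from the paper): any such H satisfies P1 (AHA = A) and the ah-symmetry property ((AH)ᵀ = AH), yet generally violates the reflexive property P2, while Hb still yields a least-squares solution.

```python
import numpy as np

rng = np.random.default_rng(2)
# A rank-deficient 6x4 matrix (rank 2), so non-pseudoinverse
# ah-symmetric generalized inverses exist.
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))
b = rng.standard_normal(6)

pinv = np.linalg.pinv(A)
# H = A† + (I - A†A) Z satisfies P1 and (AH)^T = AH for any Z.
Z = rng.standard_normal((4, 6))
H = pinv + (np.eye(4) - pinv @ A) @ Z

assert np.allclose(A @ H @ A, A)          # P1 holds
assert np.allclose((A @ H).T, A @ H)      # ah-symmetry holds
assert not np.allclose(H @ A @ H, H)      # P2 (reflexive) generically fails

# H b is nonetheless a least-squares solution: its residual matches lstsq's.
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
assert np.isclose(np.linalg.norm(A @ (H @ b) - b),
                  np.linalg.norm(A @ x_ls - b))
```

This is why the paper can relax P2: dropping it enlarges the feasible set over which the 1-norm is minimized without sacrificing least-squares correctness.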

Highlights

  • The well-known M-P (Moore–Penrose) pseudoinverse is used in several linear-algebra applications, for example, to compute least-squares solutions of inconsistent systems of linear equations

  • If A = UΣVᵀ is the real singular value decomposition of A ∈ ℝᵐˣⁿ, the M-P pseudoinverse of A can be defined as A† := VΣ†Uᵀ ∈ ℝⁿˣᵐ, where the diagonal matrix Σ† has the shape of the transpose of the diagonal matrix Σ, and is derived from Σ by taking reciprocals of the non-zero elements of Σ

  • We investigate the trade-off between 1-norm minimization and low rank of H, by iteratively imposing P2 on problem (1)
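The SVD-based definition of A† in the highlights translates directly into code. A minimal NumPy sketch (matrix sizes and tolerance are ours): build Σ† by reciprocating the nonzero singular values, then verify all four M-P properties and agreement with `np.linalg.pinv`.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))  # m x n

# A = U Sigma V^T  =>  A† = V Sigma† U^T, where Sigma† reciprocates
# the nonzero singular values (and transposes the shape of Sigma).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
s_inv = np.array([1.0 / x if x > 1e-12 else 0.0 for x in s])
A_pinv = Vt.T @ np.diag(s_inv) @ U.T

# The four M-P properties:
assert np.allclose(A @ A_pinv @ A, A)             # P1
assert np.allclose(A_pinv @ A @ A_pinv, A_pinv)   # P2 (reflexive)
assert np.allclose((A @ A_pinv).T, A @ A_pinv)    # P3
assert np.allclose((A_pinv @ A).T, A_pinv @ A)    # P4
assert np.allclose(A_pinv, np.linalg.pinv(A))
```

Note that even when A is sparse, V and U are generally dense, which is why A† itself is typically dense, the motivation for the sparse generalized inverses studied in the paper.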


Summary

Introduction

The well-known M-P (Moore–Penrose) pseudoinverse is used in several linear-algebra applications, for example, to compute least-squares solutions of inconsistent systems of linear equations. The first four proposed algorithms exploit an interesting feature specific to our problem of constructing ah-symmetric generalized inverses: the equivalence between satisfaction of P2 and the least-rank condition. Setting aside this equivalence, a standard approach to balancing the objectives of low 1-norm and low rank when constructing a matrix is to solve a convex optimization problem in which the nuclear norm is employed as a surrogate for the rank of the matrix, and the objective is a weighted combination of the 1-norm and the nuclear norm. This problem can be recast as a semidefinite programming (SDP) problem and has been widely investigated in the literature (see [1, 9, 7], for example). Using the nuclear norm as a surrogate for rank, however, we lose a nice feature of our other approaches: while the nuclear-norm approach converges to an ah-symmetric generalized inverse that is reflexive, we lose the guarantee that it has minimum 1-norm among all such ah-symmetric reflexive generalized inverses
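The nuclear-norm surrogate mentioned above is the sum of a matrix's singular values, a convex function, unlike rank. A minimal NumPy illustration (the matrix `B` and weight `lam` are ours, not from the paper) of the two quantities and of the weighted objective the convex relaxation trades off:

```python
import numpy as np

rng = np.random.default_rng(0)
# A 6x4 matrix of rank 2: rank is nonconvex in the entries of B,
# while the nuclear norm (sum of singular values) is convex.
B = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))

sv = np.linalg.svd(B, compute_uv=False)
nuclear = np.linalg.norm(B, 'nuc')  # sum of singular values

assert np.linalg.matrix_rank(B) == 2
assert np.isclose(nuclear, sv.sum())

# The weighted objective of the kind described: ||B||_1 + lam * ||B||_*
lam = 0.5
objective = np.abs(B).sum() + lam * nuclear
```

Minimizing such an objective over the ah-symmetry constraints would require a convex solver (the SDP reformulation cited in the text); the snippet only illustrates the quantities being traded off.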

Cutting-plane method for P2
Augmented Lagrangian method: dualizing P2
Penalty method
Nuclear-norm method
Findings
Conclusion