The two papers in this issue are concerned with matrices and sparsity, but from different points of view. Sparsity, in the first paper, means many zero elements in the matrix, while in the second paper it refers to many zero singular values, i.e., low rank.

The context of the first paper, “On the Block Triangular Form of Symmetric Matrices” by Iain Duff and Bora Uçar, is the solution of linear systems of equations $Ax = b$ whose coefficient matrix $A$ is sparse, i.e., has many zero elements. A direct method, such as Gaussian elimination, becomes more efficient if one can first permute the rows and columns of $A$ into block triangular form. The classical method for permuting a matrix to block triangular form is due to Dulmage and Mendelsohn and dates back to 1963. The idea is to represent the matrix $A$ as a bipartite graph whose nodes are the rows and columns of $A$, and whose edges correspond to the nonzero elements of $A$, and then to determine a matching of maximum cardinality in this graph; that is, a maximum number of nonzero elements no two of which belong to the same row or column. Duff and Uçar analyze the block triangular form for a particular class of square matrices $A$: these matrices are structurally symmetric, i.e., their zero/nonzero structure is symmetric; and they may be structurally rank deficient, i.e., their zero/nonzero structure alone forces a rank deficiency, regardless of the values assigned to the nonzero entries. This paper illustrates nicely how graph theory can contribute to improving the efficiency of sparse matrix methods. It should appeal to those who need to solve large sparse linear systems, as well as to those interested in graph theory.

The second paper, “Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization” by Benjamin Recht, Maryam Fazel, and Pablo Parrilo, has applications to model reduction and system identification, machine learning, and image compression, to name just a few. Given a set of affine constraints on a matrix, the problem is to find a matrix of minimum rank that satisfies these constraints: minimize $\operatorname{rank}(X)$ subject to $\mathcal{A}(X) = b$ for a given linear map $\mathcal{A}$ and vector $b$. The authors attack this hard nonconvex optimization problem by replacing it with a convex approximation: instead of minimizing the rank, they minimize the sum of the singular values, the so-called nuclear norm $\|X\|_* = \sum_i \sigma_i(X)$. Nuclear norm minimization thus extends the compressed sensing framework from finding sparse vectors via $\ell_1$ minimization to finding low-rank matrices via $\ell_1$ minimization of the (vector of) singular values. The purpose of the paper is to justify mathematically why nuclear norm minimization does so well in practice. In addition to extending the restricted isometry property from vectors to matrices, the authors also discuss algorithms, computational performance, and numerical issues. This is a fascinating and well-written paper that combines results from compressed sensing, matrix theory, optimization, and probability.
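To make the matching idea behind the first paper concrete, here is a minimal sketch, not taken from the paper itself, that computes a maximum matching of a small structurally symmetric pattern using SciPy's `maximum_bipartite_matching`; the example matrix is an illustrative assumption of my own.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

# Structurally symmetric pattern (only the zero/nonzero structure matters,
# not the numerical values).  Rows 2 and 3 both have their only nonzero in
# column 1, so no matching can cover all four rows: the matrix is
# structurally rank deficient.
A = csr_matrix(np.array([[1, 1, 0, 0],
                         [1, 0, 1, 1],
                         [0, 1, 0, 0],
                         [0, 1, 0, 0]]))

# For each row, the column it is matched with (-1 if the row is unmatched).
matching = maximum_bipartite_matching(A, perm_type='column')
structural_rank = int((matching != -1).sum())
print(matching)          # e.g. [ 0  2  1 -1]; the exact matching may vary
print(structural_rank)   # 3 < 4: structurally rank deficient
```

The cardinality of the matching is exactly the structural rank; SciPy also exposes this quantity directly as `scipy.sparse.csgraph.structural_rank`.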
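Similarly, the convex relaxation at the heart of the second paper can be sketched in a few lines with CVXPY's nuclear norm atom `normNuc`. This is an illustrative sketch, not the authors' code: the matrix sizes, the Gaussian measurement ensemble, and the number of measurements are assumptions chosen only to make the example run.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n, r, p = 8, 8, 2, 40   # illustrative sizes: 8x8 matrix, rank 2, 40 measurements

# A random rank-r matrix and p random affine measurements <A_i, X0> = b_i.
X0 = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
As = rng.standard_normal((p, m, n))
b = np.array([np.sum(Ai * X0) for Ai in As])

# Convex surrogate: minimize the nuclear norm (sum of singular values)
# subject to the affine constraints, instead of minimizing the rank.
X = cp.Variable((m, n))
constraints = [cp.sum(cp.multiply(Ai, X)) == bi for Ai, bi in zip(As, b)]
prob = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)
prob.solve()  # uses CVXPY's default solver

# With enough generic measurements, the minimizer typically recovers X0.
print(np.linalg.matrix_rank(X.value, tol=1e-6))
print(np.linalg.norm(X.value - X0) / np.linalg.norm(X0))
```

The rank objective is replaced by its convex envelope, so the relaxed problem is a semidefinite program that off-the-shelf solvers handle; the paper's contribution is the guarantee of when this relaxation provably returns the minimum-rank solution.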