Abstract

We describe a second-order accurate approach to sparsifying the off-diagonal matrix blocks in hierarchical approximate factorization methods for solving sparse linear systems. These methods repeatedly sparsify the fill-in matrix blocks that arise in block Gaussian elimination to compute an approximate factorization of the given matrix, assuming that the fill-in blocks are low-rank. The factorization is then used as a preconditioner in a Krylov subspace method, such as the conjugate gradient method (CG), BiCGSTAB, or GMRES. However, to achieve fast convergence on ill-conditioned systems, sparsifications may introduce only a small error, in which case they can be inefficient at restoring sparsity, and consequently the factorization can take a long time to compute. In the new approach to sparsification, the 2-norm of the error incurred when sparsifying a matrix block is squared compared to previous approaches, at no additional computational cost. We incorporate the new approach into the recent sparsified nested dissection algorithm and test it on a wide range of symmetric positive definite problems. The new approach halves the number of CG iterations needed for convergence, significantly improving the overall performance of the algorithm. Our approach can also be incorporated into other solvers that exploit the low-rank property of matrix blocks.