Sparse recovery (SR) has received considerable attention in recent decades. Various iterative algorithms have been proposed to solve the SR problem, but most remain unsatisfactory in terms of recovery accuracy and convergence speed. Recently, deep unfolding methods have achieved a dramatic improvement in convergence by unfolding iterative SR algorithms into deep neural networks and learning their parameters from training data, e.g., the Learning Proximal Operator Method (LePOM) and Analytical LePOM (ALePOM). However, theoretical analysis indicates that the recovery error bound of ALePOM can be further tightened, because its network parameters are learned to fit all training data jointly rather than adapted to each individual input. To address this problem, we design a new network named Improved ALePOM (I-ALePOM) by introducing a Long Short-Term Memory (LSTM) cell into each layer of ALePOM, which adaptively computes the threshold and step size for each input sample. By doing so, the recovery error bound becomes tighter and higher sparse recovery performance is achieved. Furthermore, we extend the proposed I-ALePOM from the 1-D vector form to the 2-D matrix form, reducing the memory and computational burdens in practical sparse matrix recovery applications. Numerical simulations verify that the proposed method outperforms state-of-the-art methods in the literature.
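The abstract gives no implementation details, but the following minimal PyTorch sketch illustrates the stated idea of one unfolded, ISTA-style layer in which an LSTM cell predicts a per-sample step size and soft threshold, rather than using fixed learned scalars as in ALePOM. The class name, the choice of LSTM input features, and the placeholder weight matrix `W` (which ALePOM-style methods would precompute analytically) are all illustrative assumptions, not the authors' design.

```python
# Minimal sketch (not the authors' code) of one unfolded SR layer with
# LSTM-adapted parameters: x <- soft(x + step * (y - A x) W, theta).
import torch
import torch.nn as nn
import torch.nn.functional as F

def soft_threshold(x, theta):
    # Proximal operator of the l1 norm: sign(x) * max(|x| - theta, 0).
    return torch.sign(x) * F.relu(torch.abs(x) - theta)

class AdaptiveUnfoldedLayer(nn.Module):
    """One unfolded iteration; an LSTM cell maps simple per-sample
    features to a positive (step, theta) pair for that sample."""
    def __init__(self, A, W, hidden_size=16):
        super().__init__()
        self.register_buffer("A", A)   # measurement matrix, m x n
        self.register_buffer("W", W)   # weight matrix, m x n (analytic in ALePOM-style methods)
        feat_dim = 2                   # assumed features: residual norm, sparsity proxy
        self.cell = nn.LSTMCell(feat_dim, hidden_size)
        self.head = nn.Linear(hidden_size, 2)  # hidden state -> (step, theta)

    def forward(self, x, y, state=None):
        residual = y - x @ self.A.T                       # batch x m
        feats = torch.stack([residual.norm(dim=1),
                             x.abs().mean(dim=1)], dim=1) # batch x 2
        h, c = self.cell(feats, state)                    # state=None -> zero init
        step, theta = F.softplus(self.head(h)).unbind(dim=1)  # keep both positive
        x = soft_threshold(x + step.unsqueeze(1) * (residual @ self.W),
                           theta.unsqueeze(1))
        return x, (h, c)
```

A full network would stack several such layers, threading the LSTM state `(h, c)` across them so that later layers can condition on the trajectory of earlier iterates, e.g.:

```python
m, n, batch = 20, 50, 8
A = torch.randn(m, n)
layer = AdaptiveUnfoldedLayer(A, W=A.clone())  # placeholder W; analytic in practice
y = torch.randn(batch, m)
x, state = torch.zeros(batch, n), None
for _ in range(5):                             # 5 unfolded iterations sharing one layer
    x, state = layer(x, y, state)
```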