We study the inverse problem of parameter identification in general saddle point problems, for which elliptic regularization is an essential tool: after discretization, a saddle point problem may lead to a non-invertible system, whereas its regularized counterpart yields an invertible one. Regularization methods have also been used in this context to mitigate the role of the inf-sup condition, also known as the Babuška-Brezzi condition. This work analyzes the impact of regularizing the saddle point problem on the inverse problem, which we investigate through an output least-squares objective. To exploit the regularization fully, we assume only that the solution set is nonempty. We regularize the saddle point problem and consider a family of optimization problems, based on the output least-squares objective for the regularized saddle point problem, in which the entire data set is contaminated by noise. We give a complete convergence analysis showing that these regularized output least-squares problems suitably approximate the original problem. We also derive first-order and second-order adjoint methods for computing the first- and second-order derivatives of the output least-squares objective, and we present some heuristic numerical results. Finally, in the context of the elasticity imaging inverse problem, we report detailed numerical experiments on synthetic data (to study the role of the regularization parameter) as well as on phantom data.
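As a minimal numerical illustration (not taken from the paper, and with an illustrative toy matrix) of why regularization restores invertibility after discretization, consider a discrete saddle point system whose off-diagonal block is rank-deficient, so the inf-sup condition fails; adding a small elliptic regularization term in the lower-right block makes the system invertible:

```python
import numpy as np

# Toy discretized saddle point system K [u; p] = [f; g] with
# K = [[A, B^T], [B, 0]]. Here B is rank-deficient, so the
# inf-sup (Babuska-Brezzi) condition fails and K is singular.
A = np.eye(2)
B = np.array([[1.0, 0.0],
              [1.0, 0.0]])  # rank 1: nontrivial kernel of B^T
K = np.block([[A, B.T],
              [B, np.zeros((2, 2))]])
print(np.linalg.matrix_rank(K))  # 3 < 4: the system is non-invertible

# Elliptic regularization: replace the zero block by -eps*C with C SPD.
# The Schur complement -(eps*C + B A^{-1} B^T) is negative definite,
# so the regularized matrix is invertible for every eps > 0.
eps = 1e-3
K_eps = np.block([[A, B.T],
                  [B, -eps * np.eye(2)]])
print(np.linalg.matrix_rank(K_eps))  # 4: the regularized system is invertible
```

This mirrors, in the simplest possible setting, the role the regularization parameter plays before any inverse-problem machinery is applied.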