In unconstrained maximum a posteriori (MAP) and maximum likelihood estimation, the inverse of minus the merit-function Hessian matrix is an approximation of the covariance matrix of the estimates. In the Bayesian context of MAP estimation, it is the covariance of a normal approximation of the posterior around its mode; in maximum likelihood estimation, it approximates the inverse Fisher information matrix, to which the covariance of efficient estimators converges. These measures are routinely used in system identification to evaluate estimate uncertainties and to diagnose problems such as overparameterization, improper excitation, and unidentifiability. A wide variety of estimation problems in systems and control, however, can be formulated as equality-constrained optimizations with additional decision variables, which exploit parallelism in computer hardware, simplify implementation, and enlarge the convergence basin and improve the efficiency of the nonlinear program solver. The introduction of the extra variables dissociates the inverse Hessian from the covariance matrix; instead, submatrices of the inverse Hessian of the constrained problem's Lagrangian must be used. In this paper, we derive these relationships, showing how the covariance of the estimates can be computed directly from the augmented problem. Application examples are given in system identification with the output-error method and in joint state-path and parameter estimation.