The problem of minimizing a function $f(x):\mathbb{R}^J \to \mathbb{R}$, subject to constraints on the vector variable $x$, occurs frequently in inverse problems. Even without constraints, finding a minimizer of $f(x)$ may require iterative methods. We consider here a general class of iterative algorithms that find a solution to the constrained minimization problem as the limit of a sequence of vectors, each solving an unconstrained minimization problem. Our sequential unconstrained minimization algorithm (SUMMA) is an iterative procedure for constrained minimization. At the $k$th step we minimize the function
$$G_k(x) = f(x) + g_k(x)$$
to obtain $x^k$. The auxiliary functions $g_k(x): D \subseteq \mathbb{R}^J \to \mathbb{R}_+$ are nonnegative on the set $D$, each $x^k$ is assumed to lie within $D$, and the objective is to minimize the continuous function $f:\mathbb{R}^J \to \mathbb{R}$ over $x$ in the set $C = \bar{D}$, the closure of $D$. We assume that such minimizers exist, and denote one such by $\hat{x}$. We assume that the functions $g_k(x)$ satisfy the inequalities
$$0 \le g_k(x) \le G_{k-1}(x) - G_{k-1}(x^{k-1}),$$
for $k = 2, 3, \ldots$. Using this assumption, we show that the sequence $\{f(x^k)\}$ is decreasing and converges to $f(\hat{x})$. If the restriction of $f(x)$ to $D$ has bounded level sets, which happens if $\hat{x}$ is unique and $f(x)$ is closed, proper and convex, then the sequence $\{x^k\}$ is bounded, and $f(x^*) = f(\hat{x})$ for any cluster point $x^*$. Therefore, if $\hat{x}$ is unique, $x^* = \hat{x}$ and $\{x^k\} \to \hat{x}$. When $\hat{x}$ is not unique, convergence can still be obtained in particular cases. The SUMMA includes, as particular cases, the well-known barrier- and penalty-function methods, the simultaneous multiplicative algebraic reconstruction technique (SMART), the proximal minimization algorithm of Censor and Zenios, the entropic proximal methods of Teboulle, as well as certain cases of gradient descent and the Newton–Raphson method. The proof techniques used for SUMMA can be extended to obtain related results for the induced proximal distance method of Auslender and Teboulle.
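To make the iteration concrete, the following is a minimal numerical sketch of one SUMMA instance, an entropic proximal method of the kind named above. The test problem $f(x) = (x+1)^2$ over $C = [0,\infty)$, the entropy kernel $h(x) = x\log x - x$, and all function names are illustrative choices made here, not taken from the paper.

```python
import math

# Illustrative problem (chosen here, not from the paper): minimize
# f(x) = (x + 1)^2 over C = [0, inf). The unconstrained minimizer is
# x = -1, so the constrained minimizer is x_hat = 0 with f(x_hat) = 1.
def f(x):
    return (x + 1.0) ** 2

# Auxiliary function g_k(x) = D_h(x, x_prev): the Bregman distance of the
# entropy kernel h(x) = x*log(x) - x, finite only on D = (0, inf), so every
# iterate automatically stays inside D.
def g(x, x_prev):
    return x * math.log(x / x_prev) - x + x_prev

# One SUMMA step: x^k minimizes G_k(x) = f(x) + g_k(x). The stationarity
# condition G_k'(x) = 2*(x + 1) + log(x / x_prev) = 0 has a unique root in
# (0, inf); we locate it by bisection.
def summa_step(x_prev):
    def dG(x):
        return 2.0 * (x + 1.0) + math.log(x / x_prev)
    lo, hi = 1e-300, max(x_prev, 1.0)
    while dG(hi) < 0.0:          # grow the bracket until it contains the root
        hi *= 2.0
    for _ in range(200):         # bisect; dG(lo) < 0 <= dG(hi) throughout
        mid = 0.5 * (lo + hi)
        if dG(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = 1.0                          # starting point inside D
for k in range(1, 16):
    x = summa_step(x)
    print(k, x, f(x))            # f(x^k) decreases toward f(x_hat) = 1
```

With these choices the Bregman distance $D_h(x,y) = x\log(x/y) - x + y$ plays the role of $g_k$, each unconstrained subproblem is solved by bisection on its stationarity condition, and the printed values of $f(x^k)$ decrease monotonically toward $f(\hat{x}) = 1$, as asserted above.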