An algorithm is proposed for the minimization of a smooth function subject to smooth inequality constraints. Unlike sequential quadratic programming type methods, this algorithm does not involve the solution of quadratic programs, but merely that of linear systems of equations. Locally, the iteration can be viewed as a perturbation of a quasi-Newton iteration on both the primal and dual variables for the solution of the equalities in the Kuhn-Tucker first order conditions of optimality. It is observed that, provided the current iterate is feasible and the current multiplier estimates are strictly positive, the primal component of the quasi-Newton direction is a direction of descent for the objective function. This fact is used to induce global convergence, without the need for a surrogate merit function. A careful “bending” of the search direction prevents any Maratos-like effect, and local superlinear convergence is proven. While the algorithm requires that an initial feasible point be available, the succe...
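For context, the iteration the abstract refers to can be sketched as follows; the notation here (problem statement, Hessian approximation H) is assumed for illustration and is not taken verbatim from the paper. For minimizing f(x) subject to g_i(x) ≤ 0, the equalities in the Kuhn-Tucker first order conditions, and one (quasi-)Newton step on them, read:

```latex
% Problem: minimize f(x) subject to g_i(x) \le 0, i = 1, \dots, m.
% Equalities in the Kuhn-Tucker first order conditions
% (stationarity of the Lagrangian and complementary slackness):
\[
\nabla f(x) + \sum_{i=1}^{m} \lambda_i \,\nabla g_i(x) = 0,
\qquad
\lambda_i \, g_i(x) = 0, \quad i = 1, \dots, m.
\]
% A quasi-Newton step on this system solves one linear system in
% (\Delta x, \Delta\lambda), where H approximates the Hessian of the
% Lagrangian, \Lambda = \operatorname{diag}(\lambda_1,\dots,\lambda_m),
% and G(x) = \operatorname{diag}(g_1(x),\dots,g_m(x)):
\[
\begin{pmatrix}
H & \nabla g(x) \\
\Lambda \, \nabla g(x)^{\mathsf T} & G(x)
\end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta \lambda \end{pmatrix}
= -
\begin{pmatrix}
\nabla f(x) + \nabla g(x)\,\lambda \\
\Lambda \, g(x)
\end{pmatrix}.
\]
```

This makes the descent observation plausible: at a strictly feasible point (g_i(x) < 0) with strictly positive multiplier estimates (λ_i > 0), the diagonal blocks Λ and G(x) are nonsingular, the system is well posed without any quadratic program, and the primal component Δx can serve as a descent direction for f.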