Abstract

We study a penalty reformulation of constrained convex optimization based on the softplus penalty function. For strongly convex objectives, we derive upper bounds on the objective value gap and the constraint violation of the solutions to the penalty reformulation by analyzing the solution path of the reformulation with respect to the smoothness parameter. We then use these upper bounds to analyze the complexity of applying gradient methods to the reformulation, an approach that is advantageous when the number of constraints is large.
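To make the setup concrete, the sketch below illustrates one plausible form of such a reformulation: the hinge penalty max(0, t) on each constraint is replaced by the smooth softplus surrogate mu * log(1 + exp(t / mu)), and plain gradient descent is run on the penalized objective. The specific penalty weight rho, smoothness parameter mu, problem data, and step-size rule are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def softplus(t, mu):
    # Smooth surrogate for max(0, t): mu * log(1 + exp(t / mu)).
    # As mu -> 0 this approaches the exact hinge penalty.
    return mu * np.logaddexp(0.0, t / mu)

def softplus_grad(t, mu):
    # d/dt softplus(t, mu) = sigmoid(t / mu); tanh form avoids overflow.
    return 0.5 * (1.0 + np.tanh(t / (2.0 * mu)))

# Illustrative problem (assumed, not from the paper): strongly convex
# objective f(x) = 0.5 * ||x - c||^2 with many linear constraints
# A x <= b, penalized as
#   F_mu(x) = f(x) + rho * sum_i softplus(a_i^T x - b_i, mu).
rng = np.random.default_rng(0)
n, m = 5, 50                      # many constraints relative to dimension
A = rng.standard_normal((m, n))
b = np.abs(rng.standard_normal(m)) + 0.5
c = 3.0 * rng.standard_normal(n)
rho, mu = 10.0, 0.1

# Gradient descent with step 1/L, where L bounds the Hessian of F_mu:
# L <= 1 + rho * ||A||_2^2 / (4 * mu), since sigmoid' <= 1/4.
L = 1.0 + rho * np.linalg.norm(A, 2) ** 2 / (4.0 * mu)
x = np.zeros(n)
for _ in range(20000):
    g = A @ x - b
    grad = (x - c) + rho * (A.T @ softplus_grad(g, mu))
    x -= grad / L

violation = max(0.0, float((A @ x - b).max()))
```

Because softplus is nonzero even on the feasible side of a constraint, the minimizer of F_mu is slightly biased relative to the true constrained solution; the paper's bounds quantify how both this objective gap and the residual constraint violation shrink as the smoothness parameter decreases.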
