Abstract

When solving a convex optimization problem through a Lagrangian dual reformulation, subgradient optimization methods are favorably utilized, since they often find near-optimal dual solutions quickly. However, an optimal primal solution is generally not obtained directly through such a subgradient approach unless the Lagrangian dual function is differentiable at an optimal solution. We construct a sequence of convex combinations of primal subproblem solutions, a so-called ergodic sequence, which is shown to converge to an optimal primal solution when the convexity weights are appropriately chosen. We generalize previous convergence results from linear to convex optimization and present a new set of rules for constructing the convexity weights that define the ergodic sequence of primal solutions. In contrast to previously proposed rules, they exploit more information from later subproblem solutions than from earlier ones. We evaluate the proposed rules on a set of nonlinear multicommodity flow problems and demonstrate that they clearly outperform the ones previously proposed.
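As a minimal sketch of the scheme the abstract describes (dual subgradient steps combined with an ergodic primal average), the following Python code may help fix ideas. The subproblem oracle `solve_subproblem`, the projection `project_dual`, the divergent-series step lengths, and the `weight` function are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def dual_subgradient_with_ergodic_average(solve_subproblem, project_dual, u0,
                                          num_iters=1000, weight=lambda t: 1.0):
    """Illustrative dual subgradient scheme with an ergodic primal average.

    solve_subproblem(u) -> (x, g): a primal minimizer of the Lagrangian at the
        dual point u and a corresponding subgradient g of the dual function.
    project_dual(u)     -> projection onto the dual feasible set (e.g. u >= 0).
    weight(t)           -> convexity weight assigned to iteration t; larger
        values for later t put more emphasis on recent subproblem solutions.
    (All three are hypothetical placeholders supplied by the user.)
    """
    u = np.asarray(u0, dtype=float)
    x_bar = None          # ergodic (weighted-average) primal iterate
    weight_sum = 0.0

    for t in range(num_iters):
        x_t, g_t = solve_subproblem(u)       # primal subproblem solution and subgradient

        # Update the ergodic primal iterate as a convex combination.
        w_t = weight(t)
        weight_sum += w_t
        if x_bar is None:
            x_bar = np.array(x_t, dtype=float)
        else:
            x_bar += (w_t / weight_sum) * (np.asarray(x_t, dtype=float) - x_bar)

        # Divergent-series step lengths, one common choice for dual convergence.
        step = 1.0 / (t + 1)
        u = project_dual(u + step * g_t)     # subgradient step in the dual

    return u, x_bar
```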

Highlights

  • Lagrangian relaxation is a frequently utilized tool for solving large-scale convex minimization problems due to its simplicity and its property of systematically providing optimistic estimates on the optimal value

  • As the dual iterates in a subgradient scheme converge towards an optimal dual solution, primal convergence towards a near-optimal primal solution is not in general achieved by using the subproblem solutions as primal iterates

  • We evaluate the new rules on a set of nonlinear multicommodity flow problems (NMFPs) and show that they clearly outperform the previously utilized ones


Summary

Introduction and motivation

Lagrangian relaxation is a frequently utilized tool for solving large-scale convex minimization problems due to its simplicity and its property of systematically providing optimistic estimates on the optimal value. To guarantee primal convergence for a linear program in a subgradient scheme, Shor [44, Chapter 4] and Larsson and Liu [27] (originally developed in [26]) utilize a strategy which, rather than using the subproblem solution as the primal iterate, uses a convex combination of previously found subproblem solutions, denoted an ergodic sequence. Nedić and Ozdaglar [32,33] study methods which utilize the average of all previously found iterates as primal solutions; the latter algorithms employ a constant step length due to its simplicity and practical significance. We present a new set of rules for constructing the convexity weights defining the ergodic sequence of primal iterates. Computational results for a set of NMFP test instances, employing the new rules for choosing the convexity weights, are also presented.
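To illustrate what "exploiting more information from later subproblem solutions" can mean, one simple family of convexity weights lets the weight grow polynomially with the iteration index. The rule below is a hypothetical example of such a family, compatible with the `weight` argument in the sketch above, and is not necessarily one of the exact rules proposed in the paper.

```python
def polynomial_weight(k):
    """Return a convexity-weight function w(t) = (t + 1)**k.

    For k = 0 every subproblem solution is weighted equally (a plain average);
    for k > 0 later solutions receive progressively larger weights, which is
    the kind of behaviour the new rules aim for. Illustrative only.
    """
    return lambda t: float(t + 1) ** k

# Example: normalized weights over the first 5 iterations for three choices of k.
for k in (0, 1, 4):
    w = [polynomial_weight(k)(t) for t in range(5)]
    total = sum(w)
    print(k, [round(v / total, 3) for v in w])
```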

Background
Subgradient optimization
Ergodic primal convergence
Feasibility in the limit
Optimality in the limit
Connection with previous results
Applications to multicommodity network flows
The nonlinear multicommodity network flow problem
A Lagrangian dual formulation
The algorithm
Implementation issues
Test problems
Convexity weight rules
Results
Conclusions and future research