Abstract

The mean value cross decomposition method for linear programming problems is a modification of ordinary cross decomposition that eliminates the need to use the Benders or Dantzig-Wolfe master problem. It is a generalization of the Brown-Robinson method for a finite matrix game and can also be considered a generalization of the Kornai-Liptak method. It is based on the subproblem phase of cross decomposition, in which we iterate between the dual subproblem and the primal subproblem. As input to the dual subproblem we use the average of a part of the dual solutions of the primal subproblem obtained so far, and as input to the primal subproblem we use the average of a part of the primal solutions of the dual subproblem obtained so far. In this paper we give a new proof of convergence for this procedure. Previously, convergence had only been shown for the application to a special separable case (which covers the Kornai-Liptak method), by showing equivalence to the Brown-Robinson method.
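
One plausible way to write out the iteration described above, under the illustrative assumption of a linear program $\min\{\,c^{\mathsf T}x + d^{\mathsf T}y : Ax + By \ge b,\ x \ge 0,\ y \in Y\,\}$ with complicating variables $y$, and with full averages used in place of the partial ("a part of") averages of the paper, is

\[
\begin{aligned}
\text{primal subproblem:}\quad & \hat u^{\,k} \ \text{is an optimal dual solution of}\ \min_{x \ge 0}\{\,c^{\mathsf T}x : Ax \ge b - B\bar y^{\,k-1}\,\},\\
\text{dual subproblem:}\quad & \hat y^{\,k} \in \operatorname*{arg\,min}_{y \in Y}\ \{\, d^{\mathsf T}y + (\bar u^{\,k})^{\mathsf T}(b - By) \,\},\\
\text{averages:}\quad & \bar u^{\,k} = \tfrac1k \sum_{i=1}^{k} \hat u^{\,i}, \qquad \bar y^{\,k} = \tfrac1k \sum_{i=1}^{k} \hat y^{\,i}.
\end{aligned}
\]

In this sketch the dual subproblem is the Lagrangian subproblem restricted to $y$, which is valid because the averaged prices $\bar u^{\,k}$ inherit dual feasibility in the $x$-part from the dual solutions of the primal subproblem; the problem form and the symbols $\hat u^{\,k}, \hat y^{\,k}, \bar u^{\,k}, \bar y^{\,k}$ are notation introduced here for illustration, not taken from the paper. Unlike in ordinary cross decomposition, no Benders or Dantzig-Wolfe master problem is ever solved.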
