Abstract

We study an efficient algorithm for solving the distributionally robust optimization (DRO) problem, which has recently attracted attention as a new paradigm for decision making under uncertainty. In traditional stochastic programming, a decision is sought that minimizes the expected cost over the probability distribution of the unknown parameters. In contrast, DRO derives robust decisions from data without assuming a probability distribution; it is therefore expected to provide a powerful method for data-driven decision making. However, the DRO problem is computationally difficult, and even state-of-the-art solvers can solve only limited problem sizes to optimality. We therefore propose an efficient algorithm for solving DRO based on consensus optimization (CO). CO is a distributed algorithm in which a large-scale problem is decomposed into smaller subproblems. Because solving the subproblems yields different local solutions, a consensus constraint is imposed to ensure that these solutions are equal, thereby guaranteeing global convergence. In numerical experiments, we applied the proposed method to linear programming, quadratic programming, and second-order cone programming and verified its effectiveness.
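The abstract describes the core CO idea: decompose a large problem into subproblems, solve them locally, and force the local solutions to agree through a consensus constraint. The sketch below illustrates this mechanism with consensus ADMM on a toy quadratic problem; it is an illustrative sketch of the generic consensus scheme, not the paper's CDRO algorithm, and all names (`consensus_admm`, the cost f_i(x) = (x - a_i)^2 / 2) are assumptions for the example.

```python
# Illustrative consensus-ADMM sketch (NOT the paper's CDRO method).
# Each "agent" i holds a local quadratic cost f_i(x) = (x - a_i)^2 / 2;
# the consensus problem min_x sum_i f_i(x) has the closed-form solution mean(a),
# so convergence of the local copies to that value is easy to check.

def consensus_admm(a, rho=1.0, iters=200):
    """Solve min_x sum_i (x - a_i)^2 / 2 via consensus ADMM.

    Local copies x_i are driven to a common value z by the consensus
    constraint x_i = z, enforced through scaled dual variables u_i.
    """
    n = len(a)
    x = [0.0] * n          # local solutions (one per subproblem)
    u = [0.0] * n          # scaled dual variables
    z = 0.0                # consensus (global) variable
    for _ in range(iters):
        # x-update: argmin_x f_i(x) + (rho/2)(x - z + u_i)^2, closed form here
        x = [(a[i] + rho * (z - u[i])) / (1.0 + rho) for i in range(n)]
        # z-update: averaging x_i + u_i enforces agreement
        z = sum(x[i] + u[i] for i in range(n)) / n
        # dual update: accumulate the consensus violation x_i - z
        u = [u[i] + x[i] - z for i in range(n)]
    return z

print(consensus_admm([1.0, 2.0, 6.0]))  # approaches the average of a, i.e. 3.0
```

In real DRO applications each x-update would itself be a small convex subproblem over one piece of the data, solved in parallel; the quadratic here stands in only so the update has a closed form.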

Highlights

  • The effects of uncertainty in decision making continue to increase in today's dynamic and volatile business environment

  • The computation times of ECOS, SCS, and CDRO are reported, where ECOS and SCS denote the distributionally robust counterpart (DRC) of problem (35) solved directly by each solver, and CDRO denotes the problem optimized by the proposed method

  • In the case of (n, m) = (200, 300), ECOS was faster for N ≤ 10³, and CDRO was faster at N = 10⁴


Summary

INTRODUCTION

The effects of uncertainty in decision making continue to increase in the dynamic and volatile business environment. Because each subproblem yields a different decision, a consensus constraint is imposed to ensure that these decisions agree. Although this CO method is inferior to second-order algorithms such as the interior-point method in terms of convergence rate, it has been established that the problem can be solved rapidly if high accuracy is not required. In a second-order method such as the interior-point method, the number of iterations is small because the approximation accuracy is high, but a computation time of at least O((n + m + N)³) is required to solve the approximation problem, where n is the number of decision variables, m is the number of constraints, and N is the sample size. Ben-Tal et al. [21] proposed a systematic means of constructing the robust counterpart of a nonlinear uncertain inequality that is concave in the uncertain parameters, using support functions, conjugate functions, and Fenchel duality.
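The consensus reformulation described above can be written in its standard generic form (the paper's exact splitting of the DRO objective may differ; the notation here is illustrative):

```latex
\min_{x_1,\dots,x_K,\, z} \;\; \sum_{k=1}^{K} f_k(x_k)
\quad \text{subject to} \quad x_k = z, \qquad k = 1, \dots, K,
```

where each subproblem k optimizes only its local copy x_k of the decision variable, and the consensus constraint x_k = z forces all local copies to agree on a single global decision z.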

DISTRIBUTIONALLY ROBUST OPTIMIZATION
The φ-divergence is defined as
FORMULATION
PROPOSED ALGORITHM
NUMERICAL EXAMPLES
KL DIVERGENCE
2) Results
CONCLUSIONS
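The outline above names the φ-divergence and the KL divergence as special topics; for reference, the standard definitions (which the paper's ambiguity-set formulation presumably builds on, though its exact notation may differ) are:

```latex
D_{\phi}(P \,\|\, Q) \;=\; \sum_{i} q_i \, \phi\!\left(\frac{p_i}{q_i}\right),
```

where φ is a convex function with φ(1) = 0. Choosing φ(t) = t log t recovers the Kullback–Leibler (KL) divergence, D_{KL}(P ‖ Q) = Σᵢ pᵢ log(pᵢ / qᵢ).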