Abstract

In this paper, we consider a class of structured optimization problems whose objective is the sum of two convex functions, f and h, neither of which is necessarily differentiable. We focus on the case where f is general and its exact first-order information (function value and subgradient) may be difficult to obtain, while h is relatively simple. We propose a generalized alternating linearization bundle method for this class of problems that can handle inexact first-order information of on-demand accuracy. The inexactness is quite general and covers various oracles, including inexact, partially inexact and asymptotically exact oracles. At each iteration, the algorithm solves two interrelated subproblems: one finds the proximal point of the polyhedral model of f plus the linearization of h; the other finds the proximal point of the linearization of f plus h itself. We establish global convergence of the algorithm under several types of inexactness. Finally, preliminary numerical results on a set of two-stage stochastic linear programming problems show that the method is promising.
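For orientation, the two subproblems can be sketched as follows. Writing f̌_k for the polyhedral (cutting-plane) model of f, f̄_k and h̄_k for affine linearizations of f and h, x^k for the current stability center, and t_k > 0 for the proximal parameter, the standard alternating-linearization form reads (a sketch under these notational assumptions, not necessarily the paper's exact formulation):

    x̂^k := argmin_x { f̌_k(x) + h̄_k(x) + (1/(2 t_k)) ‖x − x^k‖² },
    x̃^k := argmin_x { f̄_k(x) + h(x) + (1/(2 t_k)) ‖x − x^k‖² }.

When h is simple, the second subproblem often amounts to a single proximal-operator evaluation of h, which is what makes the alternating scheme attractive.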

Highlights

  • In this paper, we consider the following structured convex optimization problem:

        F∗ := min { F(x) := f(x) + h(x) : x ∈ Rn },    (1)

    where f : dom h → R and h : Rn → (−∞, ∞] are closed proper convex functions, but not necessarily differentiable, and dom h := { x : h(x) < ∞ } is the effective domain of h

  • We can take a subgradient of φ_uε at x, for an approximate maximizer uε ∈ U, as an approximate subgradient of f at x. Another example is two-stage stochastic programming, in which the function value is obtained by solving a series of linear programs; its accuracy is determined by the tolerance of the linear programming solver

  • Aiming at the special structure of problem (1), we present a slight variant of the oracles with on-demand accuracy proposed in Reference [23] as follows: for a given x ∈ Rn, a descent target γ_x and an error bound ε_x ≥ 0, the approximate values f_x, g_x and F_x satisfy a suitable on-demand accuracy condition (a typical form is sketched after this list)
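The condition itself is not reproduced in this summary. For reference, oracles with on-demand accuracy in the sense of Reference [23] are typically required to deliver a lower estimate and an approximate subgradient of roughly the following form (our illustrative paraphrase; the paper's variant for problem (1) also involves F_x and may differ in detail):

    f_x ≤ f(x),
    f(·) ≥ f_x + ⟨g_x, · − x⟩   on Rn,
    f(x) − f_x ≤ ε_x   whenever f_x ≤ γ_x.

That is, the oracle must reach the prescribed accuracy ε_x only when its estimate already meets the descent target γ_x; otherwise a coarse estimate suffices.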


Summary

Introduction

We consider problem (1), F∗ := min { F(x) := f(x) + h(x) : x ∈ Rn }, where f : dom h → R and h : Rn → (−∞, ∞] are closed proper convex functions, not necessarily differentiable, and dom h := { x : h(x) < ∞ } is the effective domain of h. If f has the form f(x) := sup{ φ_u(x) : u ∈ U }, where U is an infinite set and φ_u : Rn → R is convex for every u ∈ U, it is often difficult to calculate the exact function value f(x); we can instead take a subgradient of φ_uε at x, for an approximate maximizer uε ∈ U, as an approximate subgradient of f at x. Another example is two-stage stochastic programming (see, e.g., References [23,24]), in which the function value is obtained by solving a series of linear programs (details are given in the section on numerical experiments), so its accuracy is determined by the tolerance of the linear programming solver. The Euclidean inner product in Rn is denoted by ⟨x, y⟩ := xᵀy, and the associated norm by ‖ · ‖
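To make the sup-type example concrete, the following minimal Python sketch shows one way such inexact first-order information can be produced: since U is infinite, f is evaluated only over a finite sample of U, so the returned value is a lower estimate of f(x) and the returned vector is a subgradient of φ_u at x for the sampled approximate maximizer u. The names here (phi, grad_phi, U_samples) are illustrative assumptions, not from the paper.

    import numpy as np

    def inexact_oracle(x, phi, grad_phi, U_samples):
        # Evaluate phi(u, x) over a finite sample of the infinite set U.
        values = [phi(u, x) for u in U_samples]
        i_best = int(np.argmax(values))   # index of the approximate maximizer
        u_best = U_samples[i_best]
        f_x = values[i_best]              # f_x <= f(x): inexact function value
        g_x = grad_phi(u_best, x)         # subgradient of phi(u_best, .) at x,
                                          # used as approximate subgradient of f
        return f_x, g_x

    # Toy instance: f(x) = sup_{u in [-1, 1]} (u * x[0] - u**2).
    phi = lambda u, x: u * x[0] - u ** 2
    grad_phi = lambda u, x: np.array([u, 0.0])
    U_samples = np.linspace(-1.0, 1.0, 201)
    f_x, g_x = inexact_oracle(np.array([0.5, 0.0]), phi, grad_phi, U_samples)

Refining the sample of U drives the evaluation error to zero, which is the sense in which such an oracle can be made asymptotically exact; in the two-stage stochastic programming example, the analogous control knob is the tolerance of the linear programming solver.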

Preliminaries
The Generalized Alternating Linearization Bundle Method
Global Convergence
Numerical Experiments
Conclusions