Abstract
This paper presents a new global optimization algorithm for solving a class of linear multiplicative programming (LMP) problems. First, a new linear relaxation technique is proposed. Then, to improve the convergence speed of the algorithm, two pruning techniques are presented. Finally, a branch-and-bound algorithm is developed for solving the LMP problem. The convergence of this algorithm is proved, and numerical experiments are reported to illustrate its feasibility and efficiency.
Highlights
Consider the following linear multiplicative programming (LMP) problem: LMP: pV = min φ(x) = ∑_{i=1}^{p} (ciTx + di)(eiTx + fi) (1) s.t. x ∈ D = {x ∈ Rn | Ax ≤ b}, where p ≥ 2; ci = (ci1, ci2, ..., cin) ∈ Rn and ei = (ei1, ei2, ..., ein) ∈ Rn; di, fi ∈ R, i = 1, ..., p; A = (aij)m×n ∈ Rm×n is a matrix; b ∈ Rm×1; and D ⊆ Rn is nonempty and bounded. As a special case of nonconvex programming, the LMP problem has received considerable attention.
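As a concrete illustration of the problem form above, the sketch below evaluates the objective φ(x) = ∑ (ciTx + di)(eiTx + fi) for a small hypothetical instance and checks feasibility of a point; all numerical data here is invented for illustration and is not from the paper:

```python
import numpy as np

# Hypothetical toy LMP instance with p = 2 affine products in n = 2 variables
# (the numerical data below is illustrative only, not from the paper).
c = np.array([[1.0, 2.0], [0.5, 1.0]])   # row i is ciT
d = np.array([1.0, 2.0])
e = np.array([[2.0, 1.0], [1.0, 1.0]])   # row i is eiT
f = np.array([3.0, 1.0])
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 0.0, 0.0])            # encodes x1 + x2 <= 4, x >= 0

def phi(x):
    """phi(x) = sum_i (ciT x + di)(eiT x + fi), the LMP objective."""
    return float(np.sum((c @ x + d) * (e @ x + f)))

x0 = np.array([1.0, 1.0])
assert np.all(A @ x0 <= b)               # x0 lies in D = {x : Ax <= b}
print(phi(x0))                           # (4)(6) + (3.5)(3) = 34.5
```

Note that each factor ciTx + di and eiTx + fi is affine, so φ is a sum of products of affine functions, which is generally nonconvex.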
The purpose of this paper is to present an effective method for globally solving problem LMP
A lower bound of LMP problem and its partitioned subproblems can be obtained by solving a linear relaxation programming problem
Summary
In the past few decades, a number of practical algorithms have been proposed for globally solving problem LMP under the assumption that ciTx + di > 0 and eiTx + fi > 0 for all x ∈ D. A lower bound for the LMP problem and its partitioned subproblems can be obtained by solving a linear relaxation programming problem. To generate this linear relaxation, the strategy proposed in this paper is to underestimate the objective function φ(x) with a linear function. Based on the above discussion, the linear relaxation programming (LRP) problem, which provides a lower bound for the optimal value of the LMP problem over H, can be established as follows: LRP: min φl(x) s.t. Ax ≤ b, x ∈ H. Theorem 1 implies that φl(x) and φu(x) approximate the function φ(x) as Δx → 0.
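The bounding step can be sketched as follows. This is a minimal illustration, not the paper's exact relaxation: it assumes the standard convex-envelope underestimator yz ≥ zl·y + yl·z − yl·zl for each affine product over a box H, and uses hypothetical instance data:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical LMP instance: phi(x) = sum_i (ciT x + di)(eiT x + fi), Ax <= b.
c = np.array([[1.0, 2.0], [0.5, 1.0]]); d = np.array([1.0, 2.0])
e = np.array([[2.0, 1.0], [1.0, 1.0]]); f = np.array([3.0, 1.0])
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]); b = np.array([4.0, 0.0, 0.0])
lo, hi = np.zeros(2), np.full(2, 4.0)    # box H = [0, 4]^2 containing D

def affine_range(w, s):
    """Interval [min, max] of wT x + s over the box H = [lo, hi]."""
    return (s + np.where(w > 0, w * lo, w * hi).sum(),
            s + np.where(w > 0, w * hi, w * lo).sum())

# Underestimate each product y_i(x) * z_i(x) by zl*y_i(x) + yl*z_i(x) - yl*zl,
# which is linear in x; summing over i gives a linear underestimator phi_l(x).
cvec, const = np.zeros(2), 0.0
for i in range(2):
    yl, _ = affine_range(c[i], d[i])
    zl, _ = affine_range(e[i], f[i])
    cvec += zl * c[i] + yl * e[i]
    const += zl * d[i] + yl * f[i] - yl * zl

# LRP: minimize phi_l(x) subject to Ax <= b, x in H -- an ordinary LP.
res = linprog(cvec, A_ub=A, b_ub=b, bounds=list(zip(lo, hi)))
lower_bound = res.fun + const
phi_at_xstar = float(np.sum((c @ res.x + d) * (e @ res.x + f)))
assert lower_bound <= phi_at_xstar + 1e-9  # valid lower bound on the optimum
print(lower_bound)
```

In a branch-and-bound scheme of the kind described above, H would be split into subboxes and this LP re-solved on each, with the interval bounds (and hence the underestimator) tightening as Δx → 0.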