Abstract
For the inequality constrained optimization problem, we first propose a new smoothing method for the lower order exact penalty function, and then show that an approximate global solution of the original problem can be obtained by computing a global solution of the smoothed lower order exact penalty problem. We propose an algorithm based on the smoothed lower order exact penalty function and prove its global convergence under mild conditions. Numerical experiments demonstrate the efficiency of the proposed method.
Highlights
Consider the following inequality constrained optimization problem:

min f0(x)
s.t. fi(x) ≤ 0, i ∈ I = {1, 2, . . . , m},     (P)

where fi : Rn → R, i = 0, 1, . . . , m, are twice continuously differentiable functions.
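To make the setting concrete, here is a minimal sketch of a problem of the form (P), using a hypothetical two-variable instance (the objective, constraints, and the point x* = (1, 1) below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Hypothetical instance of (P) with n = 2, m = 2 (illustrative only):
#   min  f0(x) = (x1 - 2)^2 + (x2 - 1)^2
#   s.t. f1(x) = x1^2 - x2   <= 0
#        f2(x) = x1 + x2 - 2 <= 0
def f0(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def f(x):
    # Constraint values f_i(x), i in I = {1, 2}.
    return np.array([x[0] ** 2 - x[1], x[0] + x[1] - 2.0])

def is_feasible(x, tol=1e-9):
    # x belongs to X0 iff every constraint value is <= 0.
    return bool(np.all(f(x) <= tol))

print(is_feasible(np.array([1.0, 1.0])))   # True: both constraints are active here
print(is_feasible(np.array([2.0, 1.0])))   # False: the unconstrained minimizer of f0 is infeasible
```

For this instance the unconstrained minimizer (2, 1) of f0 lies outside X0, which is exactly the situation where a penalty method is needed.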
Throughout this paper, we use X0 = {x ∈ Rn | fi(x) ≤ 0, i ∈ I} to denote the feasible solution set. Problems of this form arise widely in transportation, economics, mathematical programming, regional science, etc. [1,2,3], and closely related problems, such as variational inequalities, equilibrium problems, and the minimization of convex functions, have received extensive attention.
For k ∈ (0, 1), the least exact penalty parameter of the lower order penalty function is much smaller than that of the l1 exact penalty function.
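This gap in the exactness threshold can be seen on a one-dimensional toy problem (an assumed example for illustration, not from the paper): min −x s.t. x − 1 ≤ 0, with solution x* = 1 and Lagrange multiplier 1. The l1 penalty is exact only for ρ ≥ 1, while for k = 1/2 the lower order penalty keeps x* = 1 a local minimizer for every ρ > 0:

```python
# Toy 1-D problem (assumed example):  min -x  s.t.  x - 1 <= 0,  x* = 1.
def l1_penalty(x, rho):
    return -x + rho * max(x - 1.0, 0.0)

def lower_order_penalty(x, rho, k=0.5):
    return -x + rho * max(x - 1.0, 0.0) ** k

rho = 0.5       # below the l1 exactness threshold rho >= 1
delta = 0.01
# x* = 1 fails to minimize the l1 penalty locally for this rho ...
l1_drops = l1_penalty(1.0 + delta, rho) < l1_penalty(1.0, rho)
# ... but remains a strict local minimizer of the k = 1/2 penalty.
lo_holds = (lower_order_penalty(1.0 + delta, rho) > lower_order_penalty(1.0, rho)
            and lower_order_penalty(1.0 - delta, rho) > lower_order_penalty(1.0, rho))
print(l1_drops, lo_holds)   # True True
```

The reason is that max{x − 1, 0}^k has an infinite one-sided slope at x = 1 for k ∈ (0, 1), so an arbitrarily small ρ already blocks descent into the infeasible region.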
Summary
Consider the following inequality constrained optimization problem:

min f0(x)
s.t. fi(x) ≤ 0, i ∈ I = {1, 2, . . . , m},     (P)

where fi : Rn → R, i = 0, 1, . . . , m, are twice continuously differentiable functions. The corresponding l1 penalty optimization problem is

min x∈Rn  f0(x) + ρ Σ i∈I max{fi(x), 0}.     (P1)

The non-smoothness of this penalty function restricts the application of gradient-type or Newton-type algorithms to problem (P1). To avoid this shortcoming, smoothings of the l1 exact penalty function were proposed in [17, 18]. Wu et al. [20] proposed the following lower order penalty function:

f0(x) + ρ Σ i∈I max{fi(x), 0}^k,  k ∈ (0, 1),

and proved that it is exact under mild conditions. The least exact penalty parameter corresponding to k ∈ (0, 1) is much smaller than that of the l1 exact penalty function, which avoids excessively large values of the parameter ρ in the algorithm.
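The overall approach can be sketched numerically. The smoothing below replaces max{t, 0} by the strictly positive smooth function 0.5(t + √(t² + ε²)); this is one standard smoothing device and is an assumption for illustration, not necessarily the authors' exact construction. The two-variable instance, the penalty parameters, and the plain gradient-descent solver are likewise hypothetical:

```python
import numpy as np

# Hypothetical instance of (P) (illustrative, not from the paper):
#   min  f0(x) = (x1 - 2)^2 + (x2 - 1)^2
#   s.t. x1^2 - x2 <= 0,  x1 + x2 - 2 <= 0    (constrained optimum x* = (1, 1))
def f0(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def constraints(x):
    return np.array([x[0] ** 2 - x[1], x[0] + x[1] - 2.0])

def smooth_plus(t, eps):
    # Smooth, strictly positive approximation of max(t, 0):
    # 0.5*(t + sqrt(t^2 + eps^2)) -> max(t, 0) as eps -> 0.
    return 0.5 * (t + np.sqrt(t ** 2 + eps ** 2))

def smoothed_penalty(x, rho, k, eps):
    # Smoothed analogue of f0(x) + rho * sum_i max{fi(x), 0}^k, k in (0, 1).
    # Because smooth_plus > 0 everywhere, the k-th power is differentiable.
    return f0(x) + rho * np.sum(smooth_plus(constraints(x), eps) ** k)

def num_grad(F, x, h=1e-6):
    # Central-difference gradient (keeps the sketch dependency-free).
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (F(x + e) - F(x - e)) / (2.0 * h)
    return g

def descend(F, x0, iters=8000):
    # Gradient descent with Armijo backtracking on the smoothed penalty.
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        g = num_grad(F, x)
        t, fx = 1.0, F(x)
        while F(x - t * g) > fx - 1e-4 * t * g.dot(g) and t > 1e-12:
            t *= 0.5
        x = x - t * g
    return x

x = descend(lambda y: smoothed_penalty(y, rho=10.0, k=0.5, eps=1e-4),
            np.array([0.0, 0.0]))
# x should approximate the constrained minimizer (1, 1).
```

Note that the smoothed problem is solved by an unconstrained method, which is precisely what the non-smooth penalty in (P1) prevents; the minimizer of the smoothed penalty sits slightly inside the feasible region and approaches x* as ε → 0.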