Abstract

Nonlinear constrained optimization problems in discrete and continuous spaces are an important class of problems studied extensively in artificial intelligence and operations research. These problems can be solved by a Lagrange-multiplier method in continuous space and by an extended discrete Lagrange-multiplier method in discrete space. When constraints are satisfied, these methods rely on gradient descent in the objective space to find high-quality solutions. When constraints are violated, they rely on gradient ascent in the Lagrange-multiplier space to increase the penalties on unsatisfied constraints and force them into satisfaction. The balance between descents and ascents depends on the relative weights of the objective function and the constraints, which indirectly control the convergence speed and solution quality of the Lagrangian method. To improve convergence speed without degrading solution quality, we propose an algorithm that dynamically controls the relative weights of the objective and the constraints. Starting from an initial weight, the algorithm automatically adjusts the weight based on the behavior of the search. With this strategy, we are able to eliminate divergence, reduce oscillation, and speed up convergence. We show improved convergence behavior of the proposed algorithm on both nonlinear continuous and discrete problems.
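
As a rough illustration of the mechanism the abstract describes, the sketch below alternates gradient descent in the variable space with multiplier ascent on violated constraints, using a weighted Lagrangian L(x, λ) = w·f(x) + Σ_i λ_i·max(g_i(x), 0). The function names (`grad_f`, `g`, `grad_g`), the fixed weight `w`, and the step sizes are illustrative assumptions for a minimal first-order scheme, not the paper's algorithm, which adjusts the weight dynamically during the search.

```python
import numpy as np

def weighted_lagrangian_search(grad_f, g, grad_g, x0, w=1.0,
                               alpha=1e-3, beta=1e-2, iters=10000):
    """First-order search on L(x, lam) = w*f(x) + sum_i lam_i*max(g_i(x), 0).

    Descends L in x; ascends in lam to penalize violated constraints
    (g_i(x) <= 0 is the feasible region). A sketch under the stated
    assumptions, with a static weight w.
    """
    x = np.asarray(x0, dtype=float)
    lam = np.zeros_like(np.asarray(g(x), dtype=float))
    for _ in range(iters):
        gx = np.asarray(g(x), dtype=float)
        active = gx > 0.0                          # violated constraints
        # Gradient descent in the objective (x) space on the weighted Lagrangian.
        grad_x = w * grad_f(x) + grad_g(x).T @ (lam * active)
        x = x - alpha * grad_x
        # Gradient ascent in the Lagrange-multiplier space:
        # raise penalties only where constraints are unsatisfied.
        lam = lam + beta * np.maximum(gx, 0.0)
    return x, lam

# Hypothetical usage: minimize (x0-2)^2 + (x1-2)^2 s.t. x0 + x1 <= 1.
grad_f = lambda x: np.array([2 * (x[0] - 2), 2 * (x[1] - 2)])
g = lambda x: np.array([x[0] + x[1] - 1.0])        # g(x) <= 0
grad_g = lambda x: np.array([[1.0, 1.0]])          # Jacobian, shape (1, 2)
x_star, lam_star = weighted_lagrangian_search(grad_f, g, grad_g, x0=[0.0, 0.0])
```

In this sketch, the ratio between `w` and the multiplier growth rate `beta` plays the role of the relative weights discussed above: a large `w` favors descent on the objective and slows constraint satisfaction, while a small `w` does the opposite, which is the oscillation/convergence trade-off the proposed dynamic-weight control is designed to manage.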
