Abstract

Nonlinear optimization problems with dynamical parameters arise widely in practical scientific and engineering applications, and various computational models have been presented for solving them under the hypothesis of short-time invariance. To eliminate the large lagging error in the solution of an inherently dynamic nonlinear optimization problem, the future unknown information must be estimated from present and previous data during the solving process; the problem formulated this way is termed the future dynamic nonlinear optimization (FDNO) problem. In this paper, to suppress noise and improve accuracy in solving FDNO problems, a novel noise-tolerant neural (NTN) algorithm based on zeroing neural dynamics is proposed and investigated. In addition, to reduce algorithmic complexity, the quasi-Newton Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is employed to eliminate the computationally intensive matrix inversion, yielding the NTN-BFGS algorithm. Moreover, theoretical analyses show that the proposed algorithms globally converge to a tiny error bound with or without the pollution of noise. Finally, numerical experiments validate the superiority of the proposed NTN and NTN-BFGS algorithms for the online solution of FDNO problems.
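
To make the ingredients concrete, the following is a minimal Python sketch of a noise-tolerant zeroing-neural-dynamics update combined with a BFGS inverse-Hessian estimate in place of explicit matrix inversion. The quadratic test problem, the gains lam and gamma, the Euler step size, and the noise level are all assumptions chosen for illustration; the paper's actual algorithms (its Eqs. (13) and (14)) are not reproduced here.

```python
import numpy as np

# Hypothetical test problem (an assumption for illustration, not from the paper):
# minimize f(x, t) = 0.5 x^T A x - b(t)^T x, whose moving minimizer is A^{-1} b(t).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = lambda t: np.array([np.sin(t), np.cos(t)])
db = lambda t: np.array([np.cos(t), -np.sin(t)])   # db/dt

grad = lambda x, t: A @ x - b(t)                   # g(x, t) = gradient of f in x
dg_dt = lambda t: -db(t)                           # explicit time derivative of g

lam, gamma = 10.0, 10.0     # ZND decay gain and integral (noise-rejecting) gain
h, T = 1e-3, 10.0           # Euler step size and time horizon
rng = np.random.default_rng(0)

x = np.array([1.0, 1.0])    # initial state
B = np.eye(2)               # BFGS estimate of the inverse Hessian (replaces inversion)
integ = np.zeros(2)         # running integral of the gradient error

for k in range(int(T / h)):
    t = k * h
    g = grad(x, t)
    integ += h * g
    noise = 0.1 * rng.standard_normal(2)           # additive noise on the dynamics
    # Noise-tolerant ZND flow: x' = -B (dg/dt + lam*g + gamma*INT g dtau) + noise.
    x_new = x + h * (-B @ (dg_dt(t) + lam * g + gamma * integ) + noise)
    # Standard BFGS inverse-Hessian update; the same-time gradient difference
    # isolates the spatial curvature from the temporal drift of b(t).
    s, y = x_new - x, grad(x_new, t) - g
    sy = s @ y
    if sy > 1e-12:                                 # skip degenerate curvature pairs
        rho, I = 1.0 / sy, np.eye(2)
        B = (I - rho * np.outer(s, y)) @ B @ (I - rho * np.outer(y, s)) \
            + rho * np.outer(s, s)
    x = x_new

print("final state     :", x)
print("moving minimizer:", np.linalg.solve(A, b(T)))
```

The integral term is what confers the noise tolerance in this sketch: a pure ZND flow (gamma = 0) only drives the instantaneous gradient toward zero, whereas the accumulated term additionally rejects constant additive noise.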

Highlights

  • It is worth pointing out that a large number of practical problems are dynamic in nature, with parameters that vary with time, thereby leading to a time-dependent theoretical solution

  • To date, due to the important role that the nonlinear optimization problem plays in various areas [1]–[11], many numerical methods and neural dynamics have been developed and extended to solve it, among which, gradient …

  • We show how the theoretical analyses of the noise-tolerant neural (NTN) algorithm (13) and the NTN-BFGS algorithm (14) are substantiated by experimental results


Introduction

It is worth pointing out that a large number of practical problems are dynamic in nature, with parameters that vary with time, thereby leading to a time-dependent theoretical solution. When solved by traditional algorithms, a dynamic optimization problem is assumed to be time-invariant during each computational interval, and the generated solution is applied directly to the problem at the next time instant. For a time-dependent problem handled by such a traditional model, a large lagging error is therefore unavoidable.
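
As a quick illustration of why the short-time-invariance assumption produces a persistent lag, the sketch below tracks a hypothetical moving optimum x*(t) = (sin t, cos t) (my own illustrative choice, not the paper's example): even the exact solution of the problem frozen at time t is already off by roughly h times the speed of x*(t) when it is applied one sampling period later.

```python
import numpy as np

# Hypothetical moving optimum x*(t) = (sin t, cos t), assumed for illustration.
xs = lambda t: np.array([np.sin(t), np.cos(t)])

h = 0.1                     # sampling period of the "short-time invariant" solver
for t in np.arange(0.0, 0.5, h):
    x_frozen = xs(t)        # exact solution of the problem frozen at time t
    lag = np.linalg.norm(x_frozen - xs(t + h))   # error once applied at t + h
    print(f"t = {t:.1f}: lagging error = {lag:.4f}")  # stays O(h), never vanishes
```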
