Abstract

In this paper, we consider the problem of selecting the most efficient optimization algorithm for the neural network approximation used to solve optimal control problems with mixed constraints. The original optimal control problem is reduced to a finite-dimensional optimization problem by applying the necessary optimality conditions, the Lagrange multiplier method, and the least squares method. Neural network approximation models are presented for the desired control functions, the trajectory, and the adjoint variables. The optimal weight coefficients of the neural network approximation were selected using the gravitational search algorithm, the basic particle swarm algorithm, and the genetic algorithm. Computational experiments showed that the evolutionary optimization algorithms required the fewest iterations to reach a given accuracy in comparison with the classical gradient optimization method; however, the multi-agent optimization methods required more time per individual operation. As a result, the genetic algorithm showed the fastest convergence relative to total execution time.
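To make the comparison concrete, the following is a minimal sketch (not the authors' implementation) of one of the compared methods: a basic particle swarm optimizer applied to the weight coefficients of a one-hidden-layer tanh network on a toy curve-fitting objective. All names, network sizes, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def pso_minimize(loss, dim, n_particles=30, n_iters=300,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic particle swarm optimization of a real-valued loss function."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # per-particle best positions
    pbest_val = np.array([loss(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()             # global best position
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Standard velocity update: inertia + cognitive + social terms.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([loss(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Toy objective (illustrative): fit a 5-unit tanh network to sin(pi*t).
t = np.linspace(0.0, 1.0, 50)
target = np.sin(np.pi * t)

def net(params, t, n_hidden=5):
    """One-hidden-layer tanh network with 15 flattened parameters."""
    W1 = params[:n_hidden]
    b1 = params[n_hidden:2 * n_hidden]
    W2 = params[2 * n_hidden:]
    h = np.tanh(np.outer(t, W1) + b1)   # hidden activations, shape (50, 5)
    return h @ W2

def loss(params):
    return np.mean((net(params, t) - target) ** 2)

best, best_val = pso_minimize(loss, dim=15)
```

The gravitational search algorithm and the genetic algorithm would slot into the same role as `pso_minimize` here: each is a derivative-free population method over the same flattened weight vector, which is what makes the per-iteration cost comparison in the abstract meaningful.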

Highlights

  • There are many computer modeling problems that require reduction to the class of optimal control problems (OCP) to automate the process of finding a solution, as well as to reduce the complexity of calculations

  • Based on the Karush–Kuhn–Tucker optimality conditions, the authors constructed an error function and formulated a nonlinear optimization problem in which neural network approximations are defined for the state function, the control, and the Lagrange multipliers

  • This paper describes the structure of a neural network solution that satisfies the conditions of Kreinovich's theorem and corresponds to a multilayer perceptron, and presents a general scheme for optimizing its parameters


Summary

Introduction

There are many computer modeling problems that require reduction to the class of optimal control problems (OCP) to automate the process of finding a solution, as well as to reduce the complexity of calculations. The most common tools for solving such problems are numerical methods, including the apparatus of artificial neural networks (ANN). The use of other numerical optimization methods requires subsequent interpolation of the discrete solution, which, in turn, introduces an additional error. The efficiency of the neural network approach for solving the OCP lies in the possibility of obtaining a solution that satisfies both the necessary optimality conditions and smoothness conditions. Based on the Karush–Kuhn–Tucker optimality conditions, the authors constructed an error function and formulated a nonlinear optimization problem in which neural network approximations are defined for the state function, the control, and the Lagrange multipliers. For the obtained scheme of dynamic optimization of the weight coefficients of the neural network solution, an analysis of stability and convergence is carried out.
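The scheme described above, in which an error function built from the optimality conditions is minimized over neural network approximations of the state, control, and multipliers, can be sketched on a toy problem. Assuming (purely for illustration, this example is not from the paper) the linear-quadratic OCP min ∫₀¹(x² + u²)dt subject to ẋ = u, x(0) = 1, the necessary conditions read x′ = u, λ′ = −2x, 2u + λ = 0, λ(1) = 0. The sketch forms the least-squares residual of these conditions at collocation points and minimizes it with a generic quasi-Newton optimizer rather than the population methods compared in the paper; all network sizes and names are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

n_h = 4                          # hidden units per approximator (assumption)
t = np.linspace(0.0, 1.0, 25)    # collocation points

def mlp(p, t):
    """One-hidden-layer tanh network and its analytic time derivative."""
    w1, b1, w2 = p[:n_h], p[n_h:2 * n_h], p[2 * n_h:]
    z = np.outer(t, w1) + b1
    y = np.tanh(z) @ w2
    dy = ((1.0 - np.tanh(z) ** 2) * w1) @ w2   # chain rule: d/dt tanh(w*t+b)
    return y, dy

def residuals(p):
    """Stacked residuals of the necessary optimality conditions."""
    px, pu, pl = np.split(p, 3)       # separate nets for x, u, lambda
    x, dx = mlp(px, t)
    u, _ = mlp(pu, t)
    lam, dlam = mlp(pl, t)
    r_state = dx - u                  # dynamics:       x' = u
    r_costate = dlam + 2.0 * x        # costate:        lambda' = -2x
    r_station = 2.0 * u + lam         # stationarity:   dH/du = 2u + lambda = 0
    bc = np.array([x[0] - 1.0, lam[-1]])  # x(0) = 1, lambda(1) = 0
    return np.concatenate([r_state, r_costate, r_station, bc])

def error_fn(p):
    """Least-squares error function over all residuals."""
    r = residuals(p)
    return np.mean(r ** 2)

rng = np.random.default_rng(1)
p0 = 0.1 * rng.standard_normal(3 * 3 * n_h)   # 12 weights per net, 3 nets
res = minimize(error_fn, p0, method="L-BFGS-B", options={"maxiter": 500})
```

Driving `error_fn` toward zero simultaneously enforces the dynamics, the costate equation, the stationarity condition, and both boundary conditions, which is the sense in which the neural network solution satisfies the necessary optimality conditions by construction while remaining smooth in `t`.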

