Abstract

This article is devoted to studying the dual regularization method applied to a parametric convex optimal control problem for the controlled third boundary-value problem for a parabolic equation with boundary control and with pointwise equality and inequality state constraints. The dual regularization method yields the corresponding necessary and sufficient conditions for minimizing sequences, namely, a sequential Lagrange principle in nondifferential form and a sequential Pontryagin maximum principle that are stable with respect to perturbations of the input data; in other words, a regularized Lagrange principle and a regularized Pontryagin maximum principle for the original problem. Regardless of whether the original optimal control problem is stable or unstable, these conditions stably generate minimizing approximate solutions in the sense of J. Warga for it. For this reason, the regularized Lagrange principle and Pontryagin maximum principle can be interpreted as tools for directly solving unstable optimal control problems and for reducing unstable inverse problems to them.

Highlights

  • The Pontryagin maximum principle is the central result of all optimal control theory, including optimal control for partial differential equations

  • Instability, a typical property of optimization problems in general, including constrained ones, fully manifests itself in optimal control problems

  • The above applies, in full measure, both to the optimal control problem discussed below, with pointwise state constraints for a linear parabolic equation in divergence form, and to the classical optimality conditions in the form of the Lagrange principle and the Pontryagin maximum principle for this problem



Introduction

The Pontryagin maximum principle is the central result of all optimal control theory, including optimal control for partial differential equations. In this paper we discuss how to overcome the instability of the classical optimality conditions in optimal control problems by applying the dual regularization method (see, e.g., [11,12,13]) together with a simultaneous transition to the concept of a minimizing sequence of admissible elements as the main concept of optimization theory. The role of the latter is played by the concept of the minimizing approximate solution in the sense of J. Warga. Regardless of the stability or instability of the original optimal control problem, the resulting conditions stably generate minimizing approximate solutions for it. For this reason, we can interpret the regularized Lagrange principle and Pontryagin maximum principle obtained in this article as tools for directly solving unstable optimal control problems and for reducing unstable inverse problems to them [10,14,15,16].
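As a rough illustration of the dual regularization idea (the notation here is assumed for exposition and is not taken from the article's exact construction), consider a convex problem with input data known only up to an error level $\delta$. The scheme stabilizes the dual problem with a Tikhonov-type term:

```latex
% Hedged sketch of a generic Tikhonov-type dual regularization scheme.
% Perturbed convex problem at data-accuracy level \delta:
%   minimize  f^{\delta}(z)  subject to  g^{\delta}(z) \le 0,\; z \in D.
\[
  L^{\delta}(z,\lambda)
    = f^{\delta}(z) + \langle \lambda,\, g^{\delta}(z) \rangle,
  \qquad \lambda \ge 0,
\]
\[
  \lambda^{\delta}
    = \arg\max_{\lambda \ge 0}
      \Bigl\{ \min_{z \in D} L^{\delta}(z,\lambda)
              \;-\; \alpha(\delta)\,\lVert \lambda \rVert^{2} \Bigr\},
\]
% where the regularization parameter satisfies a consistency condition
% of the type  \alpha(\delta) \to 0,\;  \delta / \alpha(\delta) \to 0
% as  \delta \to 0.  The elements
%   z^{\delta} \in \arg\min_{z \in D} L^{\delta}(z, \lambda^{\delta})
% then form a minimizing approximate solution in the sense of J. Warga,
% even when the unregularized dual problem is unstable or unsolvable.
```

The point of the quadratic term $\alpha(\delta)\,\lVert \lambda \rVert^{2}$ is that the regularized dual problem is uniquely solvable and its solution depends stably on the perturbed data, which is exactly what fails for the classical (unregularized) Lagrange multipliers under state constraints.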

Statement of optimal control problem
Basic concepts and auxiliary propositions
Stable sequential Pontryagin maximum principle