Abstract

The problem of optimal control over a finite time interval is posed for a mathematical model of a three-sector economic cluster. By means of transformations, the economic system is reduced to an optimal control problem for a class of nonlinear systems whose coefficients depend on the state of the controlled object. Two optimal control problems for this class of nonlinear systems are considered, with and without control constraints; in both, the nonlinear objective functional depends on the control and on the state of the object. Using the solutions of these finite-horizon problems, an algorithm is developed for solving the problem for the nonlinear system describing the three-sector economic cluster. A nonlinear control based on the feedback principle is constructed using Lagrange multipliers of a special form. The results obtained for nonlinear systems are then applied to construct the control parameters of the three-sector model on a finite time interval with a given functional and various initial conditions. The computed system states are shown in the figures, and the optimal controls satisfy the given constraints. The optimal distribution of labor and investment resources for the three-sector economic cluster is determined; it drives the system to an equilibrium state and satisfies the balance relations. These results are of practical importance because many optimal control problems require transferring a system from an initial state to a desired final state within a given time interval; such problems frequently arise for economic systems when a certain level of development must be reached.
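For orientation, the display below is a minimal sketch of what the balance relations on labor and investment resources typically look like in a three-sector setting; the notation (labor shares θ_i and investment shares s_i for sectors i = 0, 1, 2) is illustrative and is not taken from the paper itself.

\[
\theta_0 + \theta_1 + \theta_2 = 1, \qquad s_0 + s_1 + s_2 = 1, \qquad \theta_i \ge 0, \; s_i \ge 0, \quad i = 0, 1, 2,
\]

where θ_i denotes the share of the total labor force and s_i the share of total investment allocated to sector i. Constraints of this kind are what an admissible distribution of labor and investment must satisfy at every moment of the planning interval.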

Highlights

  • The research proposed in this paper belongs to one of the most promising and rapidly developing areas of mathematical control theory in recent years

  • The problem of optimal control for dynamic systems can be formulated as the problem of finding a program control or of constructing a synthesizing control that depends on the system state and the current time

  • There are a number of optimal control problems where it is necessary to move a system from an initial state to a desired final state over a specified time interval



Introduction

The research proposed in this paper belongs to one of the most promising and rapidly developing areas of mathematical control theory in recent years. Its relevance stems from the fact that control problems are encountered in almost all areas of human activity, including complex technical systems and technological processes. In such systems, the goal must be achieved by selecting optimal control actions while taking into account various constraints (requirements on the system trajectory, constraints on the control). The problem of optimal control for dynamic systems can be formulated as the problem of finding a program control or of constructing a synthesizing control that depends on the system state and the current time. The Pontryagin maximum principle provides necessary optimality conditions and allows one to obtain a program control as a function of the current time. It should be noted, however, that it is difficult to apply these methods directly to obtain the optimal control law.
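As a reminder of the form these necessary conditions take, the display below states the Pontryagin maximum principle for a generic finite-horizon problem; the notation (state x, control u, costate ψ, Hamiltonian H) is generic and not specific to the model studied in the paper. For the problem

\[
\dot{x} = f(x, u, t), \qquad x(t_0) = x_0, \qquad J = \int_{t_0}^{T} f_0(x, u, t)\, dt \to \min_{u \in U},
\]

introduce the Hamiltonian \( H(x, \psi, u, t) = \psi^{\top} f(x, u, t) - f_0(x, u, t) \). Along an optimal pair \( (x^*(t), u^*(t)) \) there exists a costate \( \psi(t) \) such that

\[
\dot{\psi} = -\frac{\partial H}{\partial x}\big(x^*, \psi, u^*, t\big), \qquad
u^*(t) = \arg\max_{u \in U} H\big(x^*(t), \psi(t), u, t\big),
\]

together with the boundary and transversality conditions appropriate to the endpoint constraints. The maximizing control is obtained as a function of ψ(t), i.e. of time rather than of the current state, which is why the principle yields a program control and why constructing a feedback (synthesizing) law requires additional effort.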
