Abstract

The present paper deals with impulse control. Unlike earlier work, the controls here are chosen in a class of inputs that admits first-order impulses (delta functions) as well as finitely many of their higher derivatives (generalized, or higher-order, impulses). In addition, the control is sought in the form of positional (feedback) strategies rather than open-loop solutions. The latter leads to a modified version of dynamic programming theory, adjusted for such problems and based on reducing the original problem to one posed in the class of first-order impulses only. In this modification, the Hamilton–Jacobi–Bellman (HJB) equation is replaced by variational inequalities of similar structure. However, solutions in the class of higher-order distributions do not necessarily admit physical realization. To make such solutions applicable, we suggest physically realizable approximations which converge to the exact solutions. Thus, in the class of higher-order distributions, it becomes possible to bring a controllable linear system from one given state to another in zero time, and the physical realization of such a solution permits one to solve the same problem in an arbitrarily small finite time; this leads to the notion of physically realizable “fast” controls. We also indicate the possibility of numerically constructing the reachability domains of linear systems, in the considered class of controls with higher-order impulses, by methods of ellipsoidal calculus. This can be done on the basis of the comparison principle for HJB-type equations and inequalities.

1. THE PROBLEM

Consider the control system described by the differential equation
$$\dot x(t) = A(t)x(t) + B(t)u(t), \qquad (1)$$
where $x(t) \in \mathbb{R}^n$ is the phase variable and $u(t) \in \mathbb{R}^m$ is the control. The matrix functions $A(t) \in \mathbb{R}^{n \times n}$ and $B(t) \in \mathbb{R}^{n \times m}$ are given and $k$ times differentiable on the interval $\alpha \le t \le \beta$.

The controls $u(t)$ are chosen in the class $\mathcal{D}^*_{k,m}[\alpha, \beta]$ of linear continuous functionals on the normed linear space $\mathcal{D}_{k,m}[\alpha, \beta]$ [1, 2], which consists of $k$ times differentiable functions $\varphi(t)\colon [\alpha, \beta] \to \mathbb{R}^m$ supported in the closed interval $[\alpha, \beta]$ and is equipped with the norm
$$\|\varphi\| = \max_{t \in [\alpha, \beta]} \gamma\bigl(\gamma_0(\varphi(t)), \gamma_1(\varphi'(t)), \ldots, \gamma_k(\varphi^{(k)}(t))\bigr),$$
where $\gamma_0, \ldots, \gamma_k$ and $\gamma$ are finite-dimensional norms in the spaces $\mathbb{R}^m$ and $\mathbb{R}^{k+1}$, respectively. The norm $\|\varphi\|$ defines the conjugate norm $\|u\|^*$ on the space $\mathcal{D}^*_{k,m}[\alpha, \beta]$. Therefore, the control is a distribution of order $k_u \le k$. In addition, the trajectories of system (1) are distributions lying in $\mathcal{D}^*_{k-1,n}[\alpha, \beta]$.

Admissible controls $u(t)$ are defined as distributions in $\mathcal{D}^*_{k,m}[\alpha, \beta]$ for which there exists a distribution $x(t) \in \mathcal{D}^*_{k-1,n}[\alpha, \beta]$ satisfying the equation
$$\dot x(t) = A(t)x + B(t)u + f^{(\alpha)} - f^{(\beta)}$$
and supported in the closed interval $[t_\alpha, t_\beta]$, where $\alpha < t_\alpha \le t_\beta < \beta$. Here $f^{(\alpha)}$ and $f^{(\beta)}$ are distributions in $\mathcal{D}^*_{k,n}[\alpha, \beta]$ supported at the points $t_\alpha$ and $t_\beta$, respectively. These distributions
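Two of the abstract's claims, that higher-order impulses can transfer the state of a linear system in zero time and that physically realizable approximations converge to these ideal distributional solutions, can be illustrated on a toy instance of system (1). The sketch below is not taken from the paper: the harmonic oscillator, the Gaussian mollifier, and the pulse widths are illustrative assumptions. It applies a smooth, realizable surrogate of a second-order impulse $u = \Delta p\,\delta'(t-\tau)$, which ideally shifts the position by $\Delta p$ instantaneously without a residual velocity change, and checks that the terminal state converges to the ideal one as the pulse width shrinks, in the spirit of the “fast” controls described above.

```python
# Illustrative sketch only (assumed example, not the paper's construction):
# oscillator x1' = x2, x2' = -x1 + u, driven by a mollified second-order impulse
# u(t) = dp * delta'_sigma(t - tau). The ideal impulse dp * delta'(t - tau) shifts
# the position by dp at t = tau with no lasting velocity change (the velocity only
# exhibits a delta-type transient), after which the system evolves freely.

import numpy as np

def delta_prime_approx(t, tau, sigma):
    """Derivative of a Gaussian bump: a realizable surrogate for delta'(t - tau)."""
    s = (t - tau) / sigma
    bump = np.exp(-0.5 * s**2) / (sigma * np.sqrt(2.0 * np.pi))  # ~ delta(t - tau)
    return -s / sigma * bump                                     # ~ delta'(t - tau)

def trapezoid(y, dx):
    """Composite trapezoid rule on a uniform grid."""
    return dx * (np.sum(y) - 0.5 * (y[0] + y[-1]))

def terminal_state(dp, tau, sigma, T, n_grid=1_000_001):
    """x(T) for the oscillator with x(0) = 0 under u = dp * delta'_sigma(t - tau),
    via the variation-of-constants formula x(T) = int_0^T exp(A(T-s)) B u(s) ds,
    where exp(A r) B = (sin r, cos r) for A = [[0, 1], [-1, 0]], B = (0, 1)."""
    s, dx = np.linspace(0.0, T, n_grid, retstep=True)
    u = dp * delta_prime_approx(s, tau, sigma)
    x1 = trapezoid(np.sin(T - s) * u, dx)
    x2 = trapezoid(np.cos(T - s) * u, dx)
    return np.array([x1, x2])

if __name__ == "__main__":
    dp, tau, T = 1.0, 0.5, 1.0
    # Ideal (distributional) effect of dp * delta'(t - tau): an instantaneous
    # position jump by dp at t = tau, followed by free motion until T.
    target = dp * np.array([np.cos(T - tau), -np.sin(T - tau)])
    print("ideal x(T) =", target)
    for sigma in (0.1, 0.05, 0.02, 0.01):
        x_T = terminal_state(dp, tau, sigma, T)
        err = np.linalg.norm(x_T - target)
        print(f"sigma = {sigma:5.2f}   x(T) = {x_T}   error = {err:.2e}")
```

In this toy setting the printed terminal error decreases roughly like the square of the pulse width, while the peak magnitude of the approximating control grows like $1/\sigma^2$, which mirrors the remark that higher-order distributional solutions are not themselves physically realizable and must be replaced by such converging approximations.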
