Robust pointwise second-order necessary conditions for singular stochastic optimal control with model uncertainty
- Research Article
15
- 10.1137/17m1148773
- Jan 1, 2018
- SIAM Review
The main purpose of this paper is to present some of our recent results about the second-order necessary conditions for stochastic optimal controls with the control variable entering into both the drift and the diffusion terms. In particular, when the control region is convex, a pointwise second-order necessary condition for stochastic singular optimal controls in the classical sense is established, whereas when the control region is allowed to be nonconvex, we obtain a pointwise second-order necessary condition for stochastic singular optimal controls in the sense of the Pontryagin-type maximum principle. Unlike deterministic optimal control problems or stochastic optimal control problems with control-independent diffusions, there exist some essential difficulties in deriving the pointwise second-order necessary optimality conditions from the integral conditions when the controls act in the diffusion terms of the stochastic control systems. Some techniques from Malliavin calculus are employed to overcome these difficulties. Moreover, it is found that, in contrast to the first-order necessary conditions, the correction part of the solution to the second-order adjoint equation appears in the pointwise second-order necessary conditions whenever the diffusion term depends on the control variable, even if the control region is convex.
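As a rough sketch of what "singular in the classical sense" means here (the notation below is assumed standard for this literature, not taken from the abstract itself): with Hamiltonian $H$, optimal pair $(\bar x,\bar u)$, diffusion $\sigma$, and second-order adjoint process $P$, a singular control satisfies, for a.e. $t$ and a.s.,

```latex
\partial_u H(t,\bar x_t,\bar u_t)=0,
\qquad
\partial_{uu} H(t,\bar x_t,\bar u_t)
 + \partial_u\sigma(t,\bar x_t,\bar u_t)^{\top} P_t\,
   \partial_u\sigma(t,\bar x_t,\bar u_t)=0 ,
```

so the first- and second-order terms of the usual expansion vanish and genuinely higher-order information is required; the abstract's point is that the correction part of the solution to the second-order adjoint equation also enters the resulting pointwise condition whenever $\sigma$ depends on the control.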
- Research Article
4
- 10.1007/s11425-015-5080-7
- Oct 26, 2015
- Science China Mathematics
The purpose of this paper is to derive some pointwise second-order necessary conditions for stochastic optimal controls in the general case that the control variable enters into both the drift and the diffusion terms. When the control region is convex, a pointwise second-order necessary condition for stochastic singular optimal controls in the classical sense is established, whereas when the control region is allowed to be nonconvex, we obtain a pointwise second-order necessary condition for stochastic singular optimal controls in the sense of the Pontryagin-type maximum principle. It is found that, quite different from the first-order necessary conditions, the correction part of the solution to the second-order adjoint equation appears in the pointwise second-order necessary conditions whenever the diffusion term depends on the control variable, even if the control region is convex.
- Research Article
47
- 10.1137/14098627x
- Jan 1, 2015
- SIAM Journal on Control and Optimization
This paper is the first part of our series of work to establish pointwise second-order necessary conditions for stochastic optimal controls. In this part, both drift and diffusion terms may contain the control variable but the control region is assumed to be convex. Under some assumptions in terms of the Malliavin calculus, we establish the desired necessary conditions for stochastic singular optimal controls in the classical sense.
- Research Article
19
- 10.1137/15m1045478
- Jan 1, 2017
- SIAM Journal on Control and Optimization
This paper is the second part of our series of work to establish pointwise second-order necessary conditions for stochastic optimal controls. In this part, we consider the general cases, i.e., the control region is allowed to be nonconvex, and the control variable enters into both the drift and the diffusion terms of the control systems. By introducing four variational equations and four adjoint equations (which are quite different from the case of convex control constraint), we obtain the desired necessary conditions for stochastic singular optimal controls in the sense of the Pontryagin-type maximum principle.
- Single Book
71
- 10.1016/s0076-5392(08)x6179-x
- Jan 1, 1975
Singular Optimal Control Problems
- Research Article
- 10.1109/tac.1978.1101683
- Feb 1, 1978
- IEEE Transactions on Automatic Control
For the last 30 years the optimization of nonsingular control problems has been an important part of control engineering, and its mathematical theory is well developed and widely known. On the other hand, singular control problems prove more difficult to analyse and, although necessary conditions for optimality of singular controls have been established over the past decade, it is only recently that sufficient, and necessary and sufficient, conditions have been formulated. The purpose of this book is to collect together all known results in optimal control theory (as well as appropriate computational methods) which can be applied to the singular problems in optimal control and which up to now have been scattered in numerous journals. Complete and self-contained, the volume begins with a historical survey of singular control problems and leads to the presentation of important recent results in the field. There are specific real-world applications, and the authors discuss those avenues of research which require further investigation. All those involved in the optimization of dynamical systems will welcome the publication of this book. In addition to advanced students, lecturers and research workers in universities, this will include practising mechanical, chemical and electrical engineers, builders, textile technologists, paper scientists and chemists, and many concerned with non-technical fields such as economics and business management. Contents: An historical survey of singular control problems (introduction; singular control in space navigation; method of Miele via Green's theorem; linear systems with quadratic cost; necessary conditions for singular optimal control; sufficient conditions and necessary and sufficient conditions for optimality; references). Fundamental concepts (introduction; the general optimal control problem; the first variation of J; the second variation of J; a singular control problem; references).
Necessary conditions for singular optimal control (introduction; the generalized Legendre-Clebsch condition; Jacobson's necessary condition; references). Sufficient conditions and necessary and sufficient conditions for non-negativity of nonsingular and singular second variations (introduction; preliminaries; the nonsingular case; strong positivity and the totally singular second variation; a general sufficiency theorem for the second variation; necessary and sufficient conditions for non-negativity of the totally singular second variation; necessary conditions for optimality; other necessary and sufficient conditions; sufficient conditions for a weak local minimum; existence conditions for the matrix Riccati differential equation; conclusion; references). Computational methods for singular control problems (introduction; computational application of the sufficiency conditions of the previous chapter; computation of optimal singular controls; joining of optimal singular and nonsingular sub-arcs; conclusion; references). Conclusion (the importance of singular optimal control problems; necessary conditions; necessary and sufficient conditions; computational methods; switching conditions; outlook for the future). Author index. Subject index.
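For orientation, the generalized Legendre-Clebsch condition listed in the contents above can be sketched in its standard deterministic form (scalar control $u$, Hamiltonian $H$, singular arc of order $q$; notation assumed, not taken from the book listing):

```latex
(-1)^{q}\,\frac{\partial}{\partial u}
\left[\frac{d^{2q}}{dt^{2q}}\,\frac{\partial H}{\partial u}\right]\;\ge\;0
\quad\text{along the singular arc.}
```

For $q=1$ this reduces to Kelley's condition $-\,\dfrac{\partial}{\partial u}\Bigl[\dfrac{d^{2}}{dt^{2}}\,\dfrac{\partial H}{\partial u}\Bigr]\ge 0$, the first of the higher-order tests that replace the (vanishing) classical Legendre-Clebsch condition on a singular arc.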
- Research Article
69
- 10.1137/s0363012901385769
- Jan 1, 2002
- SIAM Journal on Control and Optimization
This work is concerned with maximum principles for optimal control problems governed by the three-dimensional Navier--Stokes equations. Some types of state constraints (pointwise in the time variable) are considered.
- Research Article
34
- 10.1007/s10957-013-0361-1
- Jun 27, 2013
- Journal of Optimization Theory and Applications
Near-optimization is as sensible and important as optimization, for both theory and applications. This paper deals with necessary and sufficient conditions for near-optimal singular stochastic controls for nonlinear controlled stochastic differential equations of mean-field type, also called McKean–Vlasov-type equations. The proof of our main result is based on Ekeland's variational principle and some estimates of the state and adjoint processes. It is shown that an optimal singular control may fail to exist even in simple cases, while near-optimal singular controls always exist. This justifies the use of near-optimal stochastic controls, which exist under minimal hypotheses and are sufficient in most practical cases. Moreover, since there are many near-optimal singular controls, it is possible to select among them appropriate ones that are easier to analyse and implement. Under additional assumptions, we prove that the near-maximum condition on the Hamiltonian function is a sufficient condition for near-optimality. This paper extends the results obtained in (Zhou, X.Y.: SIAM J. Control Optim. 36(3), 929–947, 1998) to a class of singular stochastic control problems involving stochastic differential equations of mean-field type. An example is given to illustrate the theoretical results.
- Research Article
82
- 10.1109/70.313107
- Jan 1, 1994
- IEEE Transactions on Robotics and Automation
This paper presents a general necessary condition for singular time-optimal control of robotic manipulators moving along specified paths. Early work by Bobrow–Dubowsky (1985) and Shin–McKay (1985) ignored the issue of singular control, assuming bang-bang acceleration along the path. Recent work by Shiller–Lu (1992) has shown that the time-optimal control can be singular if one of the equations of motion reduces to a velocity constraint. This paper derives a more general necessary condition for singular control. It is also proven that singular control cannot exist if the set of admissible controls is strictly convex, as is demonstrated for a two-link planar manipulator with elliptical actuator constraints.
- Research Article
4
- 10.1002/mma.8373
- May 16, 2022
- Mathematical Methods in the Applied Sciences
In this paper, we study partially observed optimal stochastic singular control problems of general McKean–Vlasov type with correlated noises between the system and the observation. The control variable has two components, the first being absolutely continuous and the second of bounded variation, nondecreasing, and continuous on the right with left limits. The dynamic system is governed by an Itô-type controlled stochastic differential equation. The coefficients of the dynamics depend on the state process, on its probability law, and on the continuous control variable. By classical convex variational techniques, we establish a set of necessary conditions for optimal singular control in the form of a maximum principle. Our main result is proved by applying Girsanov's theorem and derivatives with respect to the probability law in Lions' sense. To illustrate the theoretical result, we study a partially observed linear-quadratic singular control problem of McKean–Vlasov type.
- Research Article
28
- 10.1007/s40304-014-0023-0
- Dec 1, 2013
- Communications in Mathematics and Statistics
This paper studies singular optimal control problems for systems described by nonlinear controlled stochastic differential equations of mean-field type (MFSDEs for short), in which the coefficients depend on the state of the solution process as well as on its expected value. Moreover, the cost functional is also of mean-field type. The control variable has two components, the first being absolutely continuous and the second singular. We establish necessary as well as sufficient conditions for optimal singular stochastic control where the system evolves according to MFSDEs. These conditions of optimality differ from the classical ones in the sense that here the adjoint equation turns out to be a linear mean-field backward stochastic differential equation. The proof of our result is based on the convex perturbation method applied to a given optimal control. The control domain is assumed to be convex. A linear-quadratic stochastic optimal control problem of mean-field type is discussed as an illustrative example.
- Research Article
15
- 10.1007/s40435-014-0080-y
- Mar 25, 2014
- International Journal of Dynamics and Control
In this paper, we study a class of singular stochastic optimal control problems for systems described by mean-field forward-backward stochastic differential equations, in which the coefficients depend not only on the state process but also on its marginal law, through its expected value. Moreover, the cost functional is also of mean-field type. The control variable has two components, the first being absolutely continuous and the second singular. Necessary conditions for optimal control of these systems, in the form of a Pontryagin maximum principle, are established by means of convex perturbation techniques for both the continuous and the singular parts. Our stochastic maximum principle differs from the classical one in the sense that here the adjoint equation is of mean-field type. The control domain is assumed to be convex. As an illustration of our results, we consider a mean-variance portfolio selection problem mixed with a recursive utility functional optimization involving singular control. The explicit optimal portfolio selection strategy is obtained in state-feedback form, involving both the state process and its marginal distribution, via the solutions of Riccati ordinary differential equations with a time-inconsistent solution.
- Research Article
34
- 10.1016/j.jde.2016.11.041
- Dec 6, 2016
- Journal of Differential Equations
First and second order necessary conditions for stochastic optimal controls
- Research Article
3
- 10.1134/s0081543819010024
- Jan 1, 2019
- Proceedings of the Steklov Institute of Mathematics
Using generalized needle variations, we derive second-order necessary optimality conditions for a strong minimum in an optimal control problem with endpoint constraints of general form.
- Research Article
5
- 10.1016/j.aml.2007.09.013
- Nov 5, 2007
- Applied Mathematics Letters
A new derivation of second-order conditions for equality control constraints