Uncertain stochastic hybrid systems and zero-sum games: saddle-point solution and application to counterterrorism
An uncertain stochastic dynamical system is a dynamical system driven by both uncertain noise and random noise. This paper investigates two-person zero-sum games (TPZSGs) for uncertain stochastic dynamical systems in both discrete and continuous time. First, recursive formulations for a two-person zero-sum game (TPZSG) subject to uncertain stochastic discrete-time dynamical systems are derived within chance theory using Bellman's optimality principle. These recursive formulations are then applied to TPZSGs governed by linear, bilinear, and nonlinear uncertain stochastic discrete-time dynamics. Subsequently, optimality equations are developed for a TPZSG subject to uncertain stochastic continuous-time dynamical systems, and the continuous-time game is solved with them. For illustration, the optimality equations are applied to a counterterrorism model involving a government and a terrorist group; the equilibrium controls of both players and their dependence on the income and resource states are examined. These findings show that the uncertain stochastic TPZSG is an effective tool for analyzing dynamic games driven by both uncertain noise and random noise.
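At the core of the discrete-time formulation is Bellman backward induction with a min-max (saddle-point) stage problem. The Python sketch below illustrates that recursion for an ordinary probability-only stochastic game on a one-dimensional state grid with finite action sets and pure strategies, so it computes the upper value of each stage game; the grid, dynamics, stage cost, and noise law are illustrative assumptions, and the paper's chance-theoretic treatment of combined uncertain and random noise is not reproduced.

```python
# A minimal sketch (not the paper's chance-theory formulation) of the classical
# Bellman recursion for a finite-horizon, discrete-time zero-sum game:
#   V_N(x) = g(x),   V_k(x) = min_u max_v E[ c(x,u,v) + V_{k+1}(f(x,u,v) + w) ].
# State grid, dynamics, costs, and the noise law below are illustrative choices.
import numpy as np

N = 20                                   # horizon
xs = np.linspace(-5.0, 5.0, 201)         # 1-D state grid
U = np.linspace(-1.0, 1.0, 11)           # minimizer's (player 1) actions
Vv = np.linspace(-0.5, 0.5, 11)          # maximizer's (player 2) actions
w_vals = np.array([-0.2, 0.0, 0.2])      # discrete noise support
w_prob = np.array([0.25, 0.5, 0.25])     # noise probabilities

def stage_cost(x, u, v):
    return x**2 + u**2 - 0.5 * v**2      # player 1 pays, player 2 receives

def dynamics(x, u, v, w):
    return 0.9 * x + u + v + w

V = xs**2                                # terminal cost V_N(x) = x^2
for k in range(N - 1, -1, -1):
    V_new = np.empty_like(V)
    for i, x in enumerate(xs):
        best_u = np.inf
        for u in U:                      # min over player 1's actions ...
            worst_v = -np.inf
            for v in Vv:                 # ... of max over player 2's actions
                nxt = dynamics(x, u, v, w_vals)
                cont = np.interp(nxt, xs, V) @ w_prob   # E[V_{k+1}]
                worst_v = max(worst_v, stage_cost(x, u, v) + cont)
            best_u = min(best_u, worst_v)
        V_new[i] = best_u
    V = V_new

print("upper game value at x0 = 1.0:", np.interp(1.0, xs, V))
```

Swapping the order of the min and max loops gives the lower value of each stage game; when the two coincide, the recursion yields the saddle-point value.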
- Research Article
11
- 10.1016/j.amc.2021.126337
- May 19, 2021
- Applied Mathematics and Computation
Optimal control for uncertain stochastic dynamic systems with jump and application to an advertising model
- Research Article
13
- 10.1177/1077546309103282
- May 19, 2010
- Journal of Vibration and Control
A minimax stochastic optimal control strategy for bounded-uncertain stochastic systems is proposed. The minimax dynamic programming equation for an uncertain stochastic control system is first derived from the optimality principle and the Itô differential rule. A new bang-bang-type constraint on the bounded uncertain disturbance is introduced to define a class of minimax stochastic optimal control problems, and the worst-case disturbance and the minimax optimal control are obtained for this bang-bang-type uncertain system under stochastic excitation. With this method, a quasi-linear control law is obtained for linear stochastic systems with bounded uncertainty, and a state-dependent quasi-Riccati equation is derived from the minimax dynamic programming equation. Furthermore, a minimax stochastic optimal control strategy for uncertain stochastic quasi-Hamiltonian systems is developed based on the stochastic averaging method and the minimax dynamic programming equation; the worst-case disturbance and the minimax optimal control for the stochastically averaged system are obtained by a similar procedure. The proposed strategies are illustrated with an example of a single-degree-of-freedom uncertain stochastic control system.
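When the bounded disturbance enters the dynamics linearly, maximizing the Hamiltonian over |w| <= w_bar pushes the worst-case disturbance onto its bound with the sign of dV/dx times the disturbance gain, which is the bang-bang structure referred to above. The Python sketch below illustrates that structure on a scalar linear system, using the disturbance-free LQR value function as a surrogate for V; this surrogate and all parameter values are illustrative assumptions, not the paper's state-dependent quasi-Riccati solution.

```python
# Illustration of the bang-bang worst-case disturbance w* = w_bar * sign(dV/dx * d)
# for dx = (a x + b u + d w) dt + sigma dW with cost integrand q x^2 + r u^2.
# The quadratic V(x) = p x^2 from the disturbance-free LQR Riccati equation is used
# as a surrogate value function (an assumption made for illustration only).
import numpy as np

a, b, d, sigma = -0.5, 1.0, 0.8, 0.3
q, r, w_bar = 1.0, 1.0, 0.5

# Scalar continuous-time Riccati: 2 a p - (b^2 / r) p^2 + q = 0, positive root.
p = (2 * a + np.sqrt(4 * a**2 + 4 * (b**2 / r) * q)) / (2 * (b**2 / r))

def u_star(x):                      # minimizing control u* = -(b p / r) x
    return -(b * p / r) * x

def w_worst(x):                     # maximizing disturbance, bang-bang in dV/dx * d
    return w_bar * np.sign(2 * p * x * d)

rng = np.random.default_rng(0)
dt, T = 1e-3, 10.0
x_w = x_n = 1.0                     # worst-case run and disturbance-free run
xs_worst, xs_nom = [x_w], [x_n]
for _ in range(int(T / dt)):
    dW = rng.normal(0.0, np.sqrt(dt))
    x_w += (a * x_w + b * u_star(x_w) + d * w_worst(x_w)) * dt + sigma * dW
    x_n += (a * x_n + b * u_star(x_n)) * dt + sigma * dW
    xs_worst.append(x_w)
    xs_nom.append(x_n)

print("mean |x| under worst-case disturbance:", np.mean(np.abs(xs_worst)))
print("mean |x| with no disturbance term:   ", np.mean(np.abs(xs_nom)))
```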
- Research Article
3
- 10.1080/00207721.2023.2208133
- May 3, 2023
- International Journal of Systems Science
Uncertainty theory is a branch of axiomatic mathematics for modelling belief degrees. Using uncertainty theory and the Hurwicz criterion, this article addresses optimal control and non-zero-sum differential games for uncertain delay dynamic systems described by uncertain differential equations with multiple input delays. Employing dynamic programming, the optimality principle is established and the corresponding optimality equation is formulated to solve the optimal control problem. An equilibrium equation is then derived from the proposed optimality equation to characterize Nash equilibria of the multi-player non-zero-sum uncertain differential game. An example is given to illustrate the applicability of the results.
- Research Article
- 10.1093/imamat/59.3.261
- Dec 1, 1997
- IMA Journal of Applied Mathematics
In the present article, the similarity method is formulated for stochastic dynamical systems described by stochastic differential equations of Stratonovich type. When a stochastic dynamical system admits symmetries, the order of the stochastic equations describing the system can be reduced; here a symmetry means a one-parameter continuous transformation that leaves the stochastic system invariant. Several examples are given, including a nonlinear stochastic system, stochastic Hamiltonian dynamical systems related to the harmonic oscillator, and a stochastic version of the neoclassical optimal growth model suggested by Samuelson.
- Research Article
5
- 10.1063/1.532036
- Jun 1, 1997
- Journal of Mathematical Physics
This paper deals with uncertain dynamical systems in which predictions about the future state of a system are assessed by so-called pseudomeasures. Two special cases are stochastic dynamical systems, where the pseudomeasure is the conventional probability measure, and fuzzy dynamical systems, in which the pseudomeasure is a so-called possibility measure. New results about possibilistic systems and their relation to deterministic and stochastic systems are derived using idempotent pseudolinear algebra. By expressing large-deviation estimates for stochastic perturbations in terms of possibility measures, we obtain a new interpretation of the Freidlin-Wentzell quasipotentials for stochastic perturbations of dynamical systems as invariant possibility densities.
- Research Article
27
- 10.1103/physreve.61.2490
- Mar 1, 2000
- Physical Review E
The dynamics of transitions between the cells of a finite phase-space partition is analyzed for deterministic and stochastic dynamical systems in continuous time. Special emphasis is placed on the dependence of the mean recurrence time on the resolution $\tau$ between successive observations, in the limit $\tau \to 0$. In deterministic systems the limit is found to exist and to depend only on the intrinsic parameters of the underlying dynamics. In stochastic systems two different cases are identified, leading to $\tau$-independent behavior or to $\tau^{1/2}$ behavior, depending on whether or not a finite speed of propagation of the signals exists. An extension of the results to the second moment of the recurrence time is outlined.
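A numerical probe of this dependence can be set up by simulating a diffusion on a fine time grid, subsampling it at resolution $\tau$, partitioning the state axis into cells, and recording the mean time between successive entries into a reference cell. The Python sketch below does this for an Ornstein-Uhlenbeck process; the process, cell, and $\tau$ values are illustrative assumptions, and the sketch is a measurement harness rather than a reproduction of the paper's analysis.

```python
# Numerical probe: mean recurrence time to a phase-space cell as a function of the
# observation resolution tau, for an Ornstein-Uhlenbeck process dx = -x dt + dW.
# Illustrative choices throughout; this is a measurement harness only.
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1e-3, 2000.0                       # fine simulation step and total time
n = int(T / dt)
x = np.empty(n)
x[0] = 0.0
noise = rng.normal(0.0, np.sqrt(dt), n - 1)
for i in range(n - 1):                     # Euler-Maruyama on the fine grid
    x[i + 1] = x[i] - x[i] * dt + noise[i]

cell = (0.0, 0.25)                         # reference cell of the partition

def mean_recurrence_time(x, dt, tau, cell):
    stride = max(1, int(round(tau / dt)))
    obs = x[::stride]                      # subsample at resolution tau
    inside = (obs >= cell[0]) & (obs < cell[1])
    # entry instants: observations where the cell is entered from outside
    entries = np.flatnonzero(inside[1:] & ~inside[:-1]) + 1
    if len(entries) < 2:
        return np.nan
    return np.mean(np.diff(entries)) * stride * dt

for tau in (0.001, 0.004, 0.016, 0.064):
    mrt = mean_recurrence_time(x, dt, tau, cell)
    print(f"tau = {tau:6.3f}   mean recurrence time = {mrt:.3f}")
```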
- Research Article
34
- 10.1002/rnc.1600
- Jan 11, 2011
- International Journal of Robust and Nonlinear Control
In this paper, the problem of delay-dependent stability for uncertain stochastic dynamic systems with time-varying delay is considered. Based on Lyapunov stability theory, improved delay-dependent stability criteria for the system are established in terms of linear matrix inequalities. Three numerical examples are given to show the effectiveness of the proposed method.
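LMI-based criteria of this kind are certified numerically with a semidefinite-programming solver. The Python/CVXPY sketch below checks the simplest such certificate, a delay-free Lyapunov inequality A'P + PA < 0 for a given matrix A; the delay-dependent criteria of the paper involve additional decision matrices and are not reproduced, and the matrix A and tolerance are illustrative assumptions.

```python
# Minimal LMI feasibility check with CVXPY: find P = P^T > 0 with A^T P + P A < 0,
# i.e. a quadratic Lyapunov certificate for dx/dt = A x. This is the delay-free
# building block of LMI-based stability analysis, not the paper's delay-dependent criteria.
import cvxpy as cp
import numpy as np

A = np.array([[-2.0, 1.0],
              [ 0.5, -1.5]])             # illustrative system matrix
n = A.shape[0]
eps = 1e-6

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
prob.solve()

if prob.status in ("optimal", "optimal_inaccurate"):
    print("Lyapunov LMI feasible; P =\n", P.value)
else:
    print("LMI infeasible for this A:", prob.status)
```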
- Research Article
- 10.21656/1000-0887.410323
- Jan 1, 2021
- Applied Mathematics and Mechanics
A class of nonlinear stochastic integro-differential dynamical systems is discussed. Necessary and sufficient conditions for the mean-square asymptotic stability of the zero solution are given by means of the Banach fixed point method, and a mean-square asymptotic stability theorem for neutral Volterra stochastic integro-differential dynamical systems with multiple delays is established. Unlike previous approaches, the operators are constructed by introducing functions adapted to each time delay of the multi-delay stochastic system, and the stability of the system is then studied with the Banach fixed point method. The conclusions improve and develop the results of several related papers. In addition, the results supplement and extend those obtained with the fixed point method in the study of the stability of zero solutions to nonlinear neutral variable-delay Volterra stochastic integro-differential dynamical systems.
- Research Article
8
- 10.1016/j.cnsns.2008.07.014
- Aug 6, 2008
- Communications in Nonlinear Science and Numerical Simulation
Finite dimensional Markov process approximation for stochastic time-delayed dynamical systems
- Research Article
- 10.4028/www.scientific.net/amm.631-632.688
- Sep 12, 2014
- Applied Mechanics and Materials
Stochastic systems are widespread in natural science, engineering, and social systems, and their study has become an important research topic for engineers. Focusing on nonlinear and uncertain stochastic systems, this paper presents a range of common theoretical and applied problems, and summarizes and reviews the main conclusions and ideas in the literature on such systems. We also propose some new research topics and directions, aiming to provide new methods for the control of nonlinear and uncertain stochastic systems.
- Research Article
19
- 10.1080/00207170802452096
- Jun 18, 2009
- International Journal of Control
This article presents the optimal quadratic-Gaussian controller for uncertain stochastic polynomial systems with unknown coefficients and matched deterministic disturbances over linear observations and a quadratic criterion. The optimal closed-form controller equations are obtained through the separation principle, whose applicability to the considered problem is substantiated. As intermediate results, this article gives closed-form solutions of the optimal regulator, controller and identifier problems for stochastic polynomial systems with linear control input and a quadratic criterion. The original problem for uncertain stochastic polynomial systems with matched deterministic disturbances is solved using the integral sliding mode algorithm. Performance of the obtained optimal controller is verified in the illustrative example against the conventional quadratic-Gaussian controller that is optimal for stochastic polynomial systems with known parameters and without deterministic disturbances. Simulation graphs demonstrating overall performance and computational accuracy of the designed optimal controller are included.
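The separation principle invoked here is, in its classical form, the statement that the optimal controller splits into an optimal state estimator followed by an optimal state-feedback regulator designed as if the state were measured exactly. The Python sketch below illustrates that structure for an ordinary discrete-time linear-quadratic-Gaussian problem, with a Kalman filter feeding an LQR gain; all matrices are illustrative assumptions, and the article's polynomial dynamics, parameter identification, and sliding-mode disturbance compensation are not reproduced.

```python
# Classical discrete-time LQG via the separation principle: a Kalman filter estimates
# the state, and an LQR gain (from the control Riccati equation) acts on the estimate.
# All matrices below are illustrative; this is not the article's polynomial/sliding-mode design.
import numpy as np

rng = np.random.default_rng(2)

A = np.array([[1.0, 0.1], [0.0, 1.0]])    # system matrix
B = np.array([[0.0], [0.1]])              # input matrix
C = np.array([[1.0, 0.0]])                # position is observed
Q, R = np.eye(2), np.array([[0.1]])       # LQR weights
W = 0.01 * np.eye(2)                      # process noise covariance
V = np.array([[0.05]])                    # measurement noise covariance

# LQR gain by iterating the discrete-time Riccati equation to (approximate) convergence.
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# Closed-loop simulation with a Kalman filter in the loop.
x = np.array([1.0, 0.0])                  # true state
x_hat = np.zeros(2)                       # state estimate
S = np.eye(2)                             # estimation error covariance
for t in range(100):
    u = -K @ x_hat                        # certainty-equivalent control
    x = A @ x + (B @ u).ravel() + rng.multivariate_normal(np.zeros(2), W)
    y = C @ x + rng.multivariate_normal(np.zeros(1), V)
    x_hat = A @ x_hat + (B @ u).ravel()   # Kalman predict
    S = A @ S @ A.T + W
    L = S @ C.T @ np.linalg.inv(C @ S @ C.T + V)
    x_hat = x_hat + L @ (y - C @ x_hat)   # Kalman update
    S = (np.eye(2) - L @ C) @ S

print("final true state:", x, " final estimate:", x_hat)
```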
- Dissertation
- 10.7907/6j83-7c18
- Jan 1, 2011
In order to accelerate computations and improve long time accuracy of numerical simulations, this thesis develops multiscale geometric integrators. For general multiscale stiff ODEs, SDEs, and PDEs, FLow AVeraging integratORs (FLAVORs) have been proposed for the coarse time-stepping without any identification of the slow or the fast variables. In the special case of deterministic and stochastic mechanical systems, symplectic, multisymplectic, and quasi-symplectic multiscale integrators are easily obtained using this strategy. For highly oscillatory mechanical systems (with quasi-quadratic stiff potentials and possibly high-dimensional), a specialized symplectic method has been devised to provide improved efficiency and accuracy. This method is based on the introduction of two highly nontrivial matrix exponentiation algorithms, which are generic, efficient, and symplectic (if the exact exponential is symplectic). For multiscale systems with Dirac-distributed fast processes, a family of symplectic, linearly-implicit and stable integrators has been designed for coarse step simulations. An application is the fast and accurate integration of constrained dynamics. In addition, if one cares about statistical properties of an ensemble of trajectories, but not the numerical accuracy of a single trajectory, we suggest tuning friction and annealing temperature in a Langevin process to accelerate its convergence. Other works include variational integration of circuits, efficient simulation of a nonlinear wave, and finding optimal transition pathways in stochastic dynamical systems (with a demonstration of mass effects in molecular dynamics).
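One building block the thesis relies on is symplectic integration, which can be illustrated with the Störmer-Verlet (leapfrog) scheme for a separable Hamiltonian H(q, p) = p^2/(2m) + U(q). The Python sketch below applies it to a harmonic oscillator and prints the long-time relative energy error, which stays bounded rather than drifting; the potential, step size, and horizon are illustrative assumptions, and the multiscale FLAVOR construction itself is not reproduced.

```python
# Stormer-Verlet (leapfrog) integration of a separable Hamiltonian H = p^2/(2m) + U(q),
# shown on a harmonic oscillator. Symplectic schemes like this keep the energy error
# bounded over long times, the property the thesis's multiscale integrators preserve.
import numpy as np

m, k = 1.0, 1.0                          # mass and spring constant (illustrative)
grad_U = lambda q: k * q                 # dU/dq for U(q) = k q^2 / 2
energy = lambda q, p: p**2 / (2 * m) + k * q**2 / 2

def verlet_step(q, p, h):
    p = p - 0.5 * h * grad_U(q)          # half kick
    q = q + h * p / m                    # drift
    p = p - 0.5 * h * grad_U(q)          # half kick
    return q, p

q, p, h = 1.0, 0.0, 0.05
E0 = energy(q, p)
for step in range(1, 200001):
    q, p = verlet_step(q, p, h)
    if step % 50000 == 0:
        err = abs(energy(q, p) - E0) / E0
        print(f"t = {step * h:8.1f}   relative energy error = {err:.2e}")
```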
- Dissertation
3
- 10.7907/c7w5-8c39
- Jan 1, 1973
The recursive estimation of states or parameters of stochastic dynamical systems with partial and imperfect measurements is generally referred to as filtering, and the estimator itself is called the filter. In this dissertation, optimal filters are derived for three important classes of nonlinear stochastic dynamical systems. The first class of systems, considered in Chapter II, is that governed by stochastic nonlinear hyperbolic and parabolic partial differential equations in which the dynamical disturbances in the system and in the boundary conditions can be both additive and nonadditive. This class of systems is important because it encompasses a large group of systems of practical interest, such as chemical reactors and heat exchangers. The optimal filter obtained can estimate not only the state but also constant parameters appearing at the boundary and in the volume of the system. The computational application of this filter is illustrated in an example of the feedback control of a styrene polymerization reactor. Many physical systems contain time delays in one form or another, often accompanied by other processes such as dissipation of mass and energy, fluid mixing, and chemical reaction. In Chapter III, new optimal filters are obtained within a single framework for the following classes of stochastic systems: 1. nonlinear lumped parameter systems containing multiple constant and time-varying delays; 2. mixed nonlinear lumped and hyperbolic distributed parameter systems; and 3. nonlinear lumped parameter systems with functional time delays. The performance of the filter is illustrated through estimates of the temperatures in a system consisting of a well-stirred chemical reactor and an external heat exchanger. In Chapter IV, filtering equations are derived for a completely general class of stochastic systems governed by coupled nonlinear ordinary and partial differential equations of either first-order hyperbolic or parabolic type with both volume and boundary random disturbances; the results of Chapter III are thus a special case of those obtained in Chapter IV. A concept closely related to filtering is observability. For deterministic linear lumped parameter systems, observability refers to the ability to recover some prior state of a dynamical system based on partial observations of the state over some period of time. Under certain conditions, observability of the corresponding deterministic system is a sufficient condition for convergence of the optimal linear filter for a linear system with white noise disturbances. In Chapter V, the concept of observability and filter convergence is developed for a class of stochastic linear distributed parameter systems whose solutions can be expressed as eigenfunction expansions. Two important questions examined are: (1) the effect of measurement locations on observability, and (2) the optimal location of measurements for state estimation.
- Conference Article
1
- 10.1109/iaeac54830.2022.9930096
- Oct 3, 2022
The stability of stochastic dynamic systems has broad application prospects and considerable theoretical significance, with applications in engineering control, communication equipment, military technology, biology, finance, and other fields. In this article, we consider the stability of a class of nonlinear impulsive neutral stochastic differential dynamic systems. A new set of conditions guaranteeing the mean-square stability of these systems is derived by means of the Banach fixed point theorem. Some well-known results are improved and generalized.
- Research Article
- 10.14288/1.0076161
- Jul 1, 2015
The SL-AVV approach to system level reliability-based design optimization of large uncertain and stochastic dynamic systems