Constrained absorbing continuous-time stochastic games
- Research Article
1
- 10.1016/j.tcs.2018.10.009
- Oct 11, 2018
- Theoretical Computer Science
A uniformization-based algorithm for continuous-time stochastic games model checking
- Single Report
14
- 10.3386/t0304
- Jan 1, 2005
Continuous-time stochastic games with a finite number of states have substantial computational and conceptual advantages over the more common discrete-time model. In particular, continuous time avoids a curse of dimensionality and speeds up computations by orders of magnitude in games with more than a few state variables. The continuous-time approach opens the way to analyze more complex and realistic stochastic games than is feasible in discrete-time models.
- Conference Article
12
- 10.4230/lipics.fsttcs.2009.2307
- Jan 1, 2009
We study continuous-time stochastic games with time-bounded reachability objectives. We show that each vertex in such a game has a "value" (i.e., an equilibrium probability), and we classify the conditions under which optimal strategies exist. Finally, we show how to compute optimal strategies in finite uniform games, and how to compute ε-optimal strategies in finitely-branching games with bounded rates (for finite games, we provide detailed complexity estimations).
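The uniformization idea behind algorithms for time-bounded reachability can be illustrated in the single-player special case (a CTMC rather than a game). This is a minimal sketch of the standard technique, not the paper's algorithm: make the goal states absorbing, uniformize the generator into a discrete-time chain, and take a Poisson-weighted sum of jump-step reachability probabilities.

```python
import numpy as np

def reach_within(Q, goal, t, eps=1e-10):
    """Pr[reach `goal` within time t] in a CTMC with generator Q,
    computed by uniformization.  `goal` is a set of state indices."""
    n = Q.shape[0]
    Q = Q.astype(float).copy()
    for g in goal:
        Q[g, :] = 0.0                    # make goal states absorbing
    lam = max(-Q[i, i] for i in range(n)) or 1.0   # uniformization rate
    P = np.eye(n) + Q / lam              # uniformized discrete-time chain
    v = np.zeros(n)
    v[list(goal)] = 1.0                  # Pr[in goal after 0 jumps]
    w = np.exp(-lam * t)                 # Poisson weight for k = 0 jumps
    acc, total, k = w * v, w, 0
    while total < 1.0 - eps:             # truncate when Poisson mass is spent
        k += 1
        v = P @ v                        # Pr[in goal after k jumps]
        w *= lam * t / k                 # next Poisson weight
        acc += w * v
        total += w
    return acc                           # per-state reachability probabilities

# Two-state check: state 0 jumps to goal state 1 at rate 2,
# so the exact answer is 1 - exp(-2 t).
Q = np.array([[-2.0, 2.0], [0.0, 0.0]])
p = reach_within(Q, {1}, t=1.0)
```

In the game setting, the Poisson-weighted iteration is interleaved with a min/max choice per state at each jump step; the CTMC sketch above shows only the numerical core.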
- Research Article
9
- 10.2139/ssrn.3235368
- Jan 1, 2018
- SSRN Electronic Journal
I study how the persistence of past choices can be used to create incentives in a continuous-time stochastic game in which a large player, such as a firm, interacts with a sequence of short-run players, such as customers. The long-run player faces moral hazard and her past actions are imperfectly observed – they are distorted by a Brownian motion. Persistence refers to the fact that actions impact a payoff-relevant state variable, e.g. the quality of a product depends on both current and past investment choices. I obtain a characterization of actions and payoffs in Markov Perfect Equilibria (MPE), for a fixed discount rate. I show that the perfect public equilibrium (PPE) payoff set is the convex hull of the MPE payoff set. Finally, I derive sufficient conditions for an MPE to be the unique PPE. Persistence creates effective intertemporal incentives to overcome moral hazard in settings where traditional channels fail. Several applications illustrate how the structure of persistence impacts the strength of these incentives.
- Research Article
- 10.2139/ssrn.3695184
- Nov 17, 2020
- SSRN Electronic Journal
We analyze the interaction between firms' payout policies and their decisions in product markets in a continuous-time stochastic game between two firms. One of these is financially constrained, whereas the other is not. Contrary to the standard literature we allow firms to choose production and payout strategies, and focus on the effect of predation incentives on both. We find that predation induces fewer dividend payouts. Furthermore, the liquidity position of the constrained firm has an economically significant effect on the production choices of both firms and, thus, on the evolution of profits, cash holdings and stock returns.
- Research Article
4
- 10.2139/ssrn.2889017
- Jan 14, 2017
- SSRN Electronic Journal
This paper studies how persistence can be used to create incentives in a continuous-time stochastic game in which a long-run player interacts with a sequence of short-run players. Observations of the long-run player's actions are distorted by a Brownian motion, and the actions of both players impact future payoffs through a state variable. For example, a firm or worker provides customers with a product, and the quality of this product depends on both current and past investment choices by the firm. I derive general conditions under which a Markov equilibrium emerges as the unique perfect public equilibrium, and characterize the equilibrium payoff and actions in this equilibrium, for any discount rate. I develop an application of persistent product quality to illustrate how persistence creates effective intertemporal incentives in a setting where traditional channels fail, and explore how the structure of persistence impacts equilibrium behavior. This demonstrates the power of the continuous-time setting to deliver sharp insights and a tractable equilibrium characterization for a rich class of dynamic games.
- Research Article
14
- 10.1007/s13235-012-0067-2
- Dec 19, 2012
- Dynamic Games and Applications
We study nonzero-sum continuous-time stochastic games, also known as continuous-time Markov games, of fixed duration. We concentrate on Markovian strategies. We show by way of example that equilibria need not exist in Markovian strategies, but they always exist in Markovian public-signal correlated strategies. To do so, we develop criteria for a strategy profile to be an equilibrium via differential inclusions, both directly and also by modeling continuous-time stochastic games as differential games and using the Hamilton–Jacobi–Bellman equations. We also give an interpretation of equilibria in mixed strategies in continuous time and show that approximate equilibria always exist.
- Research Article
14
- 10.1016/j.ic.2013.01.001
- Jan 15, 2013
- Information and Computation
Continuous-time stochastic games with time-bounded reachability
- Research Article
29
- 10.1016/j.geb.2017.02.004
- Mar 22, 2017
- Games and Economic Behavior
Continuous-time stochastic games
- Research Article
1
- 10.35634/vm210402
- Dec 1, 2021
- Vestnik Udmurtskogo Universiteta. Matematika. Mekhanika. Komp'yuternye Nauki
The paper is concerned with approximating the value function of a zero-sum differential game with minimal cost, i.e., a differential game whose payoff functional is determined by minimizing some quantity along the trajectory, by the solutions of continuous-time stochastic games with stopping governed by one player. Notice that the value function of the auxiliary continuous-time stochastic game is described by the Isaacs–Bellman equation with additional inequality constraints. The Isaacs–Bellman equation is a parabolic PDE in the case of a stochastic differential game, and it takes the form of a system of ODEs in the case of a continuous-time Markov game. The approximation developed in the paper is based on the concept of the stochastic guide first proposed by Krasovskii and Kotelnikova.
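For orientation, an obstacle-type Isaacs–Bellman system of the kind referred to above can be written schematically, in generic notation of my own rather than the paper's, for a finite-state continuous-time Markov game with a running-minimum payoff:

```latex
% Schematic only: V(t,x) = value, \sigma(x) = quantity minimized along the
% trajectory, q(y \mid x,u,v) = controlled transition rates (generic notation).
\dot{V}(t,x) + \operatorname{val}_{u,v}\Big[\sum_{y \neq x} q(y \mid x,u,v)\,\big(V(t,y) - V(t,x)\big)\Big] = 0
\quad \text{wherever } V(t,x) < \sigma(x),
\qquad V(t,x) \le \sigma(x) \ \text{everywhere}.
```

The inequality constraint reflects that the minimizing side can effectively stop the running minimum at its current level, which is what makes this a system of ODEs with side constraints rather than a plain ODE system.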
- Research Article
50
- 10.2139/ssrn.658242
- Mar 14, 2012
- SSRN Electronic Journal
Discrete-time stochastic games with a finite number of states have been widely applied to study the strategic interactions among forward-looking players in dynamic environments. However, these games suffer from a curse of dimensionality since the cost of computing players' expectations over all possible future states increases exponentially in the number of state variables. We explore the alternative of continuous-time stochastic games with a finite number of states, and show that continuous time has substantial computational and conceptual advantages. Most important, continuous time avoids the curse of dimensionality, thereby speeding up the computations by orders of magnitude in games with more than a few state variables. Overall, the continuous-time approach opens the way to analyze more complex and realistic stochastic games than currently feasible.
- Research Article
83
- 10.3982/qe153
- Mar 1, 2012
- Quantitative Economics
Discrete-time stochastic games with a finite number of states have been widely applied to study the strategic interactions among forward-looking players in dynamic environments. These games suffer from a "curse of dimensionality" when the cost of computing players' expectations over all possible future states increases exponentially in the number of state variables. We explore the alternative of continuous-time stochastic games with a finite number of states and argue that continuous time may have substantial advantages. In particular, under widely used laws of motion, continuous time avoids the curse of dimensionality in computing expectations, thereby speeding up the computations by orders of magnitude in games with more than a few state variables. This much smaller computational burden greatly extends the range and richness of applications of stochastic games.
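The dimensionality argument in this abstract can be made concrete with a back-of-the-envelope count. This is my illustration, not the paper's exact setup: assume N state variables, each taking k values, with jump dynamics under which at most one variable changes at any instant.

```python
# Discrete time: all N variables may transition simultaneously, so one
# expectation sums over k**N joint successor states.
def successors_discrete(N, k):
    return k ** N

# Continuous time with jump laws of motion: at most one variable changes
# at the next jump, to one of its k - 1 other values, so the sum over
# successor states has only N * (k - 1) terms.
def successors_continuous(N, k):
    return N * (k - 1)

# With k = 3 values per variable, the gap widens exponentially in N:
# N = 10 gives 3**10 = 59049 discrete-time terms vs. 10 * 2 = 20.
counts = [(N, successors_discrete(N, 3), successors_continuous(N, 3))
          for N in (2, 5, 10)]
```

The exponential-versus-linear growth in the number of expectation terms is the computational advantage the authors describe.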
- Research Article
65
- 10.1137/1130036
- Jun 1, 1986
- Theory of Probability & Its Applications
Semi-Markov and Jump Markov Controlled Models: Average Cost Criterion
- Research Article
12
- 10.2139/ssrn.2505129
- Jan 1, 2014
- SSRN Electronic Journal
This paper studies a class of continuous-time stochastic games in which the actions of a long-run player have a persistent effect on payoffs. For example, the quality of a firm's product depends on past as well as current effort, or the level of a policy instrument depends on a government's past choices. The long-run player faces a population of small players, and its actions are imperfectly observed. I establish the existence of Markov equilibria, characterize the Perfect Public Equilibria (PPE) payoff set as the convex hull of the Markov equilibria payoff set, and identify conditions for the uniqueness of a Markov equilibrium in the class of all PPE. The existence proof is constructive: it characterizes the explicit form of Markov equilibria payoffs and actions, for any discount rate. Action persistence creates a crucial new channel to generate intertemporal incentives in a setting where traditional channels fail, and can provide permanent non-trivial incentives in many settings. These results offer a novel framework for thinking about reputational dynamics of firms, governments, and other long-run agents.
- Research Article
3
- 10.1007/s00245-022-09878-9
- Jun 7, 2022
- Applied Mathematics & Optimization
We study nonzero-sum stochastic games for continuous time Markov decision processes on a denumerable state space with risk-sensitive ergodic cost criterion. Transition rates and cost rates are allowed to be unbounded. Under a Lyapunov type stability assumption, we show that the corresponding system of coupled HJB equations admits a solution which leads to the existence of a Nash equilibrium in stationary strategies. We establish this using an approach involving principal eigenvalues associated with the HJB equations. Furthermore, exploiting appropriate stochastic representation of principal eigenfunctions, we completely characterize Nash equilibria in the space of stationary Markov strategies.
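For orientation, the principal-eigenvalue form of a risk-sensitive ergodic HJB equation for a single continuous-time Markov decision process reads, schematically and in generic notation of my own; the coupled system described above pairs one such equation per player, each given the other's stationary strategy:

```latex
% \rho = principal eigenvalue (optimal risk-sensitive ergodic cost),
% \psi > 0 = principal eigenfunction, q(y \mid x,u) = transition rates
% (with q(x \mid x,u) = -\sum_{y \neq x} q(y \mid x,u)),
% c(x,u) = cost rate.  Generic notation, not the paper's.
\rho\,\psi(x) \;=\; \inf_{u}\Big[\, c(x,u)\,\psi(x) \;+\; \sum_{y} q(y \mid x,u)\,\psi(y) \Big].
```

The multiplicative coupling of the cost into the eigenfunction is what distinguishes the risk-sensitive criterion from the additive Poisson equation of ordinary ergodic cost, and it is why principal-eigenvalue methods apply.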
- Ask R Discovery