Abstract

The paper presents a characterization of equilibrium in a game-theoretic formulation of a discounted conditional stochastic linear-quadratic (LQ) optimal control problem, in which the controlled state process evolves according to a multidimensional linear stochastic differential equation whose noise is driven by a Poisson process and an independent Brownian motion, under the effect of Markovian regime switching. The running and terminal costs in the objective functional depend explicitly on several quadratic terms in the conditional expectation of the state process as well as on a nonexponential discount function, which together create the time-inconsistency of the model. Open-loop Nash equilibrium controls are described through necessary and sufficient equilibrium conditions. A state-feedback equilibrium strategy is obtained via a certain system of differential-difference ODEs. As an application, we study equilibrium investment–consumption and reinsurance/new-business strategies for insurers with mean-variance utility whose risk aversion is a function of the current wealth level. The financial market consists of one riskless asset and one risky asset whose price is modeled by a geometric Lévy process, and the surplus of the insurers is assumed to follow a jump-diffusion model in which the parameter values change according to a continuous-time Markov chain. A numerical example is provided to demonstrate the efficacy of the theoretical results.
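
For orientation, a minimal sketch of the class of models involved, written in generic notation assumed here rather than taken verbatim from the paper: the state follows a controlled regime-switching jump-diffusion,

\[
dX(s) = \big[A(s,\alpha(s))X(s) + B(s,\alpha(s))u(s)\big]\,ds
      + \big[C(s,\alpha(s))X(s) + D(s,\alpha(s))u(s)\big]\,dW(s)
      + \big[E(s,\alpha(s^-))X(s^-) + F(s,\alpha(s^-))u(s)\big]\,dN(s),
\]

where \(W\) is a Brownian motion, \(N\) an independent Poisson process and \(\alpha\) a continuous-time Markov chain, while a typical cost evaluated at time \(t\) takes the form

\[
J(t,x;u) = \mathbb{E}_t\Big[\int_t^T h(s-t)\big(\langle Q(s)X(s),X(s)\rangle + \langle R(s)u(s),u(s)\rangle\big)\,ds
          + \langle G\,X(T),X(T)\rangle\Big]
          - \big\langle \bar G\,\mathbb{E}_t[X(T)],\,\mathbb{E}_t[X(T)]\big\rangle,
\]

with a nonexponential discount function \(h\) and a quadratic term in the conditional expectation \(\mathbb{E}_t[X(T)]\); both ingredients break the tower-property argument behind dynamic programming and make the problem time-inconsistent.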

Highlights

  • For usual optimal control problems, by the dynamic programming principle of optimality [40] one may check that an optimal control remains optimal when restricted to a later time interval, meaning that optimal controls are time-consistent

  • The dynamic programming principle establishes relationships among a family of time-consistent optimal control problems parameterized by their initial pairs through the so-called Hamilton–Jacobi–Bellman (HJB) equation, a nonlinear partial differential equation (a generic form is sketched after this list)

  • Zeng and Li [43] were the first to study Nash equilibrium strategies for mean-variance insurers with constant risk aversion, where the surplus process of the insurers is described by a diffusion model and the price processes of the risky stocks are driven by geometric Brownian motions
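
As a hedged illustration of the second highlight, in generic notation not taken from the paper: for a standard time-consistent problem with value function \(V\), controlled generator \(\mathcal{L}^u\), running cost \(f\) and terminal cost \(g\), the HJB equation reads

\[
\partial_t V(t,x) + \sup_{u}\big\{\mathcal{L}^u V(t,x) + f(t,x,u)\big\} = 0, \qquad V(T,x) = g(x),
\]

where the nonlinearity enters through the pointwise supremum over the control \(u\). Under nonexponential discounting, or when the cost involves nonlinear functions of conditional expectations, a single equation of this type no longer characterizes optimal controls, which motivates the equilibrium approach taken in the paper.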

Summary

Introduction

For usual optimal control problems, by the dynamic programming principle of optimality [40] one may check that an optimal control remains optimal when restricted to a later time interval, meaning that optimal controls are time-consistent. Ekeland and Pirvu [11] gave a formal definition of feedback Nash equilibrium controls in a continuous-time setting in order to investigate the optimal investment–consumption problem under general discount functions in both deterministic and stochastic frameworks. Zeng and Li [43] were the first to study Nash equilibrium strategies for mean-variance insurers with constant risk aversion, where the surplus process of the insurers is described by a diffusion model and the price processes of the risky stocks are driven by geometric Brownian motions. Mean-variance asset-liability management problems in a continuous-time Markov regime-switching setup have been studied by Wei et al. [34], who explicitly deduced a time-consistent investment strategy using the method described in [3]. The paper concludes with an Appendix that collects some proofs.
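
A hedged sketch of the mean-variance criterion with state-dependent risk aversion used in the application, in generic notation: an insurer with wealth process \(X^u\) evaluates a strategy \(u\) at time \(t\) and wealth \(x\) by

\[
J(t,x;u) = \mathbb{E}_t\big[X^u(T)\big] - \frac{\gamma(x)}{2}\,\mathrm{Var}_t\big(X^u(T)\big),
\qquad \gamma(x) > 0,
\]

where \(\gamma\) is the risk aversion evaluated at the current wealth level. Since \(\mathrm{Var}_t(X^u(T)) = \mathbb{E}_t[X^u(T)^2] - (\mathbb{E}_t[X^u(T)])^2\) involves a nonlinear function of the conditional expectation, the criterion is time-inconsistent, and the strategies are characterized as Nash equilibria rather than optimal controls.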

Problem setting
Notations
Assumptions and problem formulation
The main results: characterization and uniqueness of equilibrium
Linear feedback stochastic equilibrium control
Uniqueness of the equilibrium control
Applications
Conditional mean-variance investment and reinsurance strategies
Classical Cramér–Lundberg model
The investment only
Conclusion
Proofs and technical results
Existence and uniqueness of solutions to SDE and BSDE