Abstract

We consider an investor facing a utility maximization problem in which the risky asset price process has pure-jump dynamics affected by an unobservable continuous-time finite-state Markov chain, whose intensity can also be controlled by actions of the investor. Using classical filtering theory, we reduce this problem with partial information to one with full information and solve it for logarithmic and power utility functions. In particular, we apply control theory for piecewise deterministic Markov processes (PDMPs) to our problem, derive the optimality equation for the value function, and characterize the value function as the unique viscosity solution of the associated dynamic programming equation. Finally, we provide a toy example in which the unobservable state process is driven by a two-state Markov chain, and we discuss how the investor's ability to control the intensity of the state process affects the optimal portfolio strategies and the optimal wealth in both the partial and full information cases.
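
As a point of orientation for the two-state toy example mentioned above, the following is a minimal sketch of the kind of filtering reduction the abstract describes; the notation ($\pi_t$, $\lambda_0$, $\lambda_1$, $q_{01}$, $q_{10}$) is illustrative and not taken from the paper, and the controls on the chain's intensity are suppressed for readability (with controls one would write $q_{01}(u_t)$, $q_{10}(u_t)$). Suppose the hidden chain $Y$ takes values in $\{0,1\}$ with transition intensities $q_{01}$ and $q_{10}$, and the asset price jumps according to a point process $N$ whose intensity is $\lambda_{Y_t}$. Then the conditional probability $\pi_t = \mathbb{P}(Y_t = 1 \mid \mathcal{F}^N_t)$ follows a Shiryaev--Wonham-type filter: between the jump times of $N$ it evolves deterministically,

\[
\frac{d\pi_t}{dt} = q_{01}(1 - \pi_t) - q_{10}\,\pi_t - \pi_t(1 - \pi_t)(\lambda_1 - \lambda_0),
\]

and at each jump time $T_n$ of $N$ it is updated by Bayes' rule,

\[
\pi_{T_n} = \frac{\lambda_1\,\pi_{T_n-}}{\lambda_1\,\pi_{T_n-} + \lambda_0\,(1 - \pi_{T_n-})}.
\]

Under this reduction the pair (wealth, $\pi_t$) moves along deterministic flows between jumps and is reset at jump times, which is precisely the piecewise deterministic structure that makes PDMP control theory applicable.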
