Abstract
We propose a continuous-time version of the adaptive robust methodology introduced in Bielecki et al. (2019). An agent solves a stochastic control problem in which the underlying uncertainty follows a jump-diffusion process whose drift parameters are unknown to the agent. The agent considers a set of alternative measures to make the control problem robust to model misspecification, and employs a continuous-time estimator to learn the value of the unknown parameters, making the control problem adaptive to the arrival of new information. We use measurable selection theorems to prove the dynamic programming principle for the adaptive robust problem and show that the value function of the agent is characterised by a non-linear partial differential equation. As an example, we derive in closed form the optimal adaptive robust strategy for an agent who acquires a large number of shares in an order-driven market, and we illustrate the financial performance of the execution strategy.