Abstract
This chapter deals with completely observable stochastic control problems for diffusion processes described by stochastic differential equations (SDEs). Given a control process, the response evolves according to a (controlled) SDE, and a payoff over a finite time interval is assigned; the controller wants to minimize (or maximize) this payoff by choosing an appropriate control process from among all admissible ones. Here we consider three types of control processes:

1. \((\mathcal{F}_{t})\)-progressively measurable processes.
2. Brownian-adapted processes.
3. Feedback controls.

To analyze these problems, we mainly use the dynamic programming principle (DPP) for the value function.

The remainder of this chapter is organized as follows. Section 2.1 presents the formulation of control problems and basic properties of value functions, as preliminaries for the later sections. Section 2.2 focuses on the DPP. Although the DPP is known as a two-stage optimization method, we formulate it by means of a semigroup and characterize the value function via that semigroup. Section 2.3 deals with verification theorems, which give recipes for finding optimal Markovian policies. Section 2.4 considers a class of Merton-type optimal investment models as an application of the preceding results.

Keywords: Optimal Markovian Policies · Dynamic Programming Principle (DPP) · Verification Theorem · Stochastic Control Problem · Two-Stage Optimization Method
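To fix ideas, the following is a minimal sketch of the finite-horizon setup summarized above; the notation \(b\), \(\sigma\), \(f\), \(g\), \(\mathcal{U}\), and \(W\) is assumed for illustration and need not match the chapter's own:

\[
dX_s = b(X_s, u_s)\,ds + \sigma(X_s, u_s)\,dW_s, \quad s \in [t, T], \qquad X_t = x,
\]
\[
V(t, x) = \inf_{u \in \mathcal{U}} \mathbb{E}\!\left[\int_t^T f(X_s, u_s)\,ds + g(X_T)\right],
\]

where \(W\) is a Brownian motion, \(\mathcal{U}\) is the set of admissible control processes, \(f\) and \(g\) are running and terminal costs, and \(V\) is the value function (with \(\inf\) replaced by \(\sup\) for maximization problems).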
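Under the same assumed notation, the two-stage character of the DPP can be sketched as

\[
V(t, x) = \inf_{u \in \mathcal{U}} \mathbb{E}\!\left[\int_t^{t+h} f(X_s, u_s)\,ds + V(t+h, X_{t+h})\right], \qquad 0 \le h \le T - t:
\]

one optimizes the control over \([t, t+h]\) (the first stage) and then proceeds optimally from the state reached at time \(t+h\) (the second stage). Iterating this identity in \(h\) is what gives rise to the semigroup formulation indicated above.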