Abstract
In the literature, the problem of maximizing the expected discounted reward over all stopping rules has been explicitly solved for a number of reward functions (including (max{x, 0})^ν, ν > 0, in particular) when the underlying process is either a random walk in discrete time or a Lévy process in continuous time. All such reward functions are increasing and logconcave, and the corresponding optimal stopping rules have the threshold form. In this paper, we explore the close connection between increasing, logconcave reward functions and optimal stopping rules of threshold form. In the discrete case, we show that if a reward function defined on Z is nonnegative, increasing and logconcave, then the optimal stopping rule is of threshold form, provided the underlying random walk is skip-free to the right. In the continuous case, it is shown that for a reward function defined on R which is nonnegative, increasing, logconcave and right-continuous, the optimal stopping rule is of threshold form, provided the underlying process is a spectrally negative Lévy process. Furthermore, we establish the necessity of logconcavity and monotonicity of a reward function in order for the optimal stopping rule to be of threshold form in the discrete (continuous, resp.) case when the underlying process belongs to the class of Bernoulli random walks (Brownian motions, resp.) with a downward drift. Together, these results provide a partial characterization of the threshold structure of optimal stopping rules.
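To fix ideas, the objective and the stopping rules in question can be written as follows (the notation here is ours and may not match the paper's exactly; γ ≥ 0 denotes the discount rate, as in the Summary below):

\[ V(x) = \sup_{\tau} \mathbb{E}_x\!\left[ e^{-\gamma \tau}\, g(X_\tau) \right], \qquad \tau_b = \inf\{\, t \ge 0 : X_t \ge b \,\}, \]

where the supremum runs over all stopping rules τ. A stopping rule is said to be of threshold form if it coincides with τ_b for some level b.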
Highlights
Let X = {Xt}t≥0 be a process with stationary independent increments defined on a probability space (Ω, F, P), where the time parameter t is either discrete (i.e. t ∈ Z+ = {0, 1, . . .}) or continuous (i.e. t ∈ R+ = [0, ∞)).
We have shown in Theorem 3.1 that for a nonnegative, increasing, logconcave and right-continuous reward function g, the optimal stopping rule is of threshold form when the underlying process is a general spectrally negative Lévy process (a standard first-passage heuristic for this is sketched after these highlights).
We have explored the close connection between increasing, logconcave reward functions and optimal stopping rules of threshold form, which yields a partial characterization of the threshold structure of optimal stopping rules.
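The spectrally negative case admits a transparent heuristic via a classical first-passage identity (this is standard Lévy-process theory, not a statement quoted from the paper): having no positive jumps, the process can only reach a level b ≥ x continuously from below, and with ψ(λ) = log E[e^{λX₁}] the Laplace exponent and Φ(γ) the largest root of ψ(λ) = γ,

\[ \mathbb{E}_x\!\left[ e^{-\gamma \tau_b}\, \mathbf{1}_{\{\tau_b < \infty\}} \right] = e^{-\Phi(\gamma)(b - x)}, \qquad x \le b. \]

The threshold rule τ_b therefore earns g(b)e^{−Φ(γ)(b−x)} from x, so the best threshold maximizes log g(b) − Φ(γ)b, a concave maximization problem precisely when g is logconcave.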
Summary
We consider γ = 0 (no future discount) and show in Section 4 (Section 5, resp.) that a nonnegative reward function defined on Z (R, resp.) is necessarily increasing and logconcave if the corresponding optimal stopping rule is of threshold form for all Bernoulli random walks (Brownian motions, resp.) with a downward drift.
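As a concrete illustration of the discrete setting, the following minimal sketch evaluates threshold rules for a ±1 Bernoulli random walk with downward drift, using the classical hitting probability (p/(1−p))^(b−x) for p < 1/2; the particular reward g(x) = (max{x, 0})² and the convention that the reward is 0 if the threshold is never reached are our own illustrative choices, not taken from the paper:

```python
# Illustrative sketch (assumptions: +/-1 Bernoulli walk, reward 0 if the
# threshold is never reached, g(x) = (max{x,0})^2 -- our choices, not the
# paper's). With gamma = 0, the rule "stop on first reaching b" earns
# g(b) * P_x(reach b) = g(b) * (p/(1-p))^(b-x) for b >= x.

p = 0.4                       # up-step probability; p < 1/2 => downward drift
r = p / (1 - p)               # probability of ever climbing one level

def g(x: int) -> float:
    """A nonnegative, increasing, logconcave reward function."""
    return max(x, 0) ** 2

def threshold_value(x: int, b: int) -> float:
    """Expected (undiscounted) reward of the threshold rule tau_b, from x."""
    if b <= x:
        return g(x)           # already at or above the threshold: stop now
    return g(b) * r ** (b - x)

x0 = 0
values = {b: threshold_value(x0, b) for b in range(x0, x0 + 60)}
b_star = max(values, key=values.get)
print(f"best threshold from x={x0}: b*={b_star}, value={values[b_star]:.4f}")
```

Because log g(b) − b·log(1/r) is concave in b, the computed values rise to a single peak and then fall off, which is exactly the threshold structure the paper characterizes.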