Abstract

We develop a solution method for American put options that directly employs the policy iteration principle of dynamic programming. The method iteratively improves exercise policies, yields monotonically increasing value functions, and converges quadratically under reasonable assumptions. We present a numerical implementation that exhibits these features. The same principle is also applied to obtain a monotonically improving policy iteration scheme for general free-boundary optimal control problems.
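To illustrate the policy iteration principle the abstract refers to, here is a minimal sketch on a Cox-Ross-Rubinstein binomial lattice rather than the continuous free-boundary formulation the paper treats. The idea is the same: fix an exercise policy, evaluate its value by backward recursion, then improve the policy wherever immediate exercise beats continuation, and repeat until the policy stabilizes. All parameter values and the function name are illustrative choices, not taken from the paper.

```python
import numpy as np

def american_put_policy_iteration(S0=100.0, K=100.0, r=0.05, sigma=0.2,
                                  T=1.0, N=100, max_iters=50):
    """Price an American put by policy iteration on a CRR binomial lattice.

    Illustrative sketch only: the paper works with the PDE free-boundary
    formulation; here the state space is discretized for simplicity.
    """
    dt = T / N
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)      # risk-neutral up probability
    disc = np.exp(-r * dt)                  # one-step discount factor

    # Stock prices and put payoffs at every lattice node (n, j).
    S = [S0 * u ** np.arange(n + 1) * d ** (n - np.arange(n + 1))
         for n in range(N + 1)]
    payoff = [np.maximum(K - s, 0.0) for s in S]

    # Initial policy: never exercise early (European behaviour);
    # exercise is forced at maturity.
    policy = [np.zeros(n + 1, dtype=bool) for n in range(N + 1)]
    policy[N][:] = True

    for _ in range(max_iters):
        # Policy evaluation: value of the *fixed* policy, by backward recursion.
        V = [np.empty_like(s) for s in S]
        V[N] = payoff[N]
        for n in range(N - 1, -1, -1):
            cont = disc * (p * V[n + 1][1:] + (1 - p) * V[n + 1][:-1])
            V[n] = np.where(policy[n], payoff[n], cont)

        # Policy improvement: exercise wherever the payoff beats continuation.
        changed = False
        for n in range(N - 1, -1, -1):
            cont = disc * (p * V[n + 1][1:] + (1 - p) * V[n + 1][:-1])
            new_policy = payoff[n] > cont
            if not np.array_equal(new_policy, policy[n]):
                policy[n] = new_policy
                changed = True
        if not changed:                     # policy is stable: optimal
            break

    return V[0][0]
```

Each improvement step can only raise the value function (the new policy is greedy with respect to the current values), so the iterates increase monotonically toward the American price, mirroring the monotonicity property stated in the abstract; the quadratic convergence rate claimed there pertains to the paper's continuous setting, not this lattice toy.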
