Abstract

This paper treats a class of finite-stage dynamic programming (DP) problems whose state space is an interval in the one-dimensional Euclidean space R¹. By introducing a notion of "reward space" we can define, within this class, an inverse problem (called the inverse DP) to a given DP (called the main DP); this is possible because of the monotonicity and continuity of the reward functions and state transformations. Roughly speaking, the inverse DP is obtained from the main DP by replacing "max" with "min", the "reward functions" with the "inverses (in some sense) of the state transformations", and the "state transformations" with the "inverses (in some sense) of the reward functions"; by replacing the "terminal reward function" with its own inverse; and by interchanging "reward spaces" with "state spaces", while the "action spaces" remain unchanged. According to the monotonicity of the reward functions and state transformations, we clarify the relation between the main DP and the inverse DP separately in four cases. Our main result is as follows: the pair of optimal reward functions and optimal policy for the main DP characterizes, in an inverse sense, the pair of optimal reward functions and optimal policy for the inverse DP, and vice versa (Section 2). Further, we show that our DP represents a mathematical programming problem having the property of "recursiveness with monotonicity" in [5], and that, conversely, such a problem can also be represented by our DP (Section 3). The last section illustrates several examples.
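The max/min duality described above can be sketched in a toy computation. The transformations and rewards below (T(x, a) = ax + a, terminal reward k(x) = x², actions {1, 2}, two stages, everything strictly increasing on [0, ∞)) are illustrative assumptions, not the paper's examples; they serve only to exhibit the claimed relation that the optimal reward function of the inverse DP is the functional inverse of that of the main DP.

```python
import itertools
import math

# Hypothetical two-stage DP on the interval [0, inf).
# All functions are strictly increasing, so each has a well-defined inverse,
# which is the monotone case in which the main/inverse duality is cleanest.
ACTIONS = (1, 2)

def T(x, a):
    """State transformation, strictly increasing in x."""
    return a * x + a

def T_inv(z, a):
    """Inverse of T in its state argument."""
    return z / a - 1

def k(x):
    """Terminal reward, strictly increasing on [0, inf)."""
    return x ** 2

def k_inv(y):
    """Inverse of the terminal reward."""
    return math.sqrt(y)

def main_dp(x, stages=2):
    """Maximal terminal reward attainable from initial state x."""
    best = -math.inf
    for policy in itertools.product(ACTIONS, repeat=stages):
        s = x
        for a in policy:
            s = T(s, a)
        best = max(best, k(s))
    return best

def inverse_dp(y, stages=2):
    """Minimal initial state from which reward y is attainable:
    "max" becomes "min", and the inverse reward and inverse state
    transformations are applied in reverse stage order."""
    best = math.inf
    for policy in itertools.product(ACTIONS, repeat=stages):
        s = k_inv(y)
        for a in reversed(policy):
            s = T_inv(s, a)
        best = min(best, s)
    return best

# The inverse DP's optimal value function is the functional inverse
# of the main DP's: inverse_dp(main_dp(x)) recovers x.
for x in (0.0, 0.5, 1.0, 3.0):
    assert abs(inverse_dp(main_dp(x)) - x) < 1e-9
```

Here the optimal policy always picks a = 2 in the main DP, giving V(x) = (4x + 6)², and the inverse DP attains W(y) = √y/4 − 3/2 = V⁻¹(y), matching the characterization stated in the abstract for the increasing-increasing case.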
