Abstract

A matrix game is played repeatedly, with the actions taken at each stage determining both a reward paid to Player I and the probability of continuing to the next stage. An infinite history of play determines a sequence (R_n) of such rewards, to which we assign the payoff lim sup_n (R_1 + ··· + R_n). Using the concept of playable strategies, we slightly generalize the definition of the value of a game. Then we find sufficient conditions for the existence of a value and for the existence of stationary optimal strategies (Theorems 8 and 9). An example shows that the game need not have a value (Example 4).
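The setup described above can be illustrated with a short simulation sketch. The matrix entries, strategies, and continuation probabilities below are hypothetical example data, not taken from the paper; the sketch only shows how a history of play generates the partial sums R_1 + ··· + R_n whose lim sup is the payoff. When play stops and no further rewards accrue, the partial sums are eventually constant, so the lim sup equals the final total.

```python
import random

def play(reward, cont_prob, sigma, tau, rng, max_stages=10_000):
    """Simulate one history of the repeated matrix game.

    reward[i][j]    : stage reward to Player I (hypothetical example data)
    cont_prob[i][j] : probability of continuing to the next stage
    sigma, tau      : stationary mixed strategies for the row / column player
    Returns the sequence of partial sums R_1 + ... + R_n.
    """
    partial_sums, total = [], 0.0
    for _ in range(max_stages):
        i = rng.choices(range(len(sigma)), weights=sigma)[0]
        j = rng.choices(range(len(tau)), weights=tau)[0]
        total += reward[i][j]
        partial_sums.append(total)
        if rng.random() > cont_prob[i][j]:
            break  # play stops; later partial sums stay constant at `total`
    return partial_sums

rng = random.Random(0)
reward = [[1.0, -1.0], [0.0, 2.0]]   # hypothetical stage rewards to Player I
cont = [[0.9, 0.5], [0.5, 0.9]]      # hypothetical continuation probabilities
sums = play(reward, cont, [0.5, 0.5], [0.5, 0.5], rng)
# Since rewards cease once play stops, the lim sup of the (eventually
# constant) partial sums is just the final total.
payoff = sums[-1]
```

If play never stops within `max_stages` stages, `sums[-1]` is only a truncated approximation of the lim sup; the eventually-constant argument applies only to histories that actually terminate.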
