Abstract

A matrix game is played repeatedly, with the actions taken at each stage determining both a reward paid to Player I and the probability of continuing to the next stage. An infinite history of play determines a sequence $(R_n)$ of such rewards, to which we assign the payoff $\limsup_n (R_1 + \cdots + R_n)$. Using the concept of playable strategies, we slightly generalize the definition of the value of a game. Then we find sufficient conditions for the existence of a value and for the existence of stationary optimal strategies (Theorems 8 and 9). An example shows that the game need not have a value (Example 4).

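To make the game dynamics concrete, here is a minimal simulation sketch under assumptions not stated in the abstract: the reward matrix R, the continuation-probability matrix P, and the stationary mixed strategies are hypothetical placeholders chosen only to illustrate how rewards accumulate until play fails to continue. It does not reproduce the paper's strategies or results, and a finite truncation stands in for the $\limsup$ payoff on infinite histories.

```python
import numpy as np

# Hypothetical 2x2 example (not from the paper): R[i, j] is the reward to
# Player I and P[i, j] is the probability of continuing to the next stage
# when Player I plays row i and Player II plays column j.
R = np.array([[1.0, -1.0],
              [0.0,  2.0]])
P = np.array([[0.5, 0.9],
              [0.8, 0.1]])

def simulate_total_reward(x, y, rng, max_stages=10_000):
    """Play the game once under stationary mixed strategies x (rows, Player I)
    and y (columns, Player II); return the accumulated reward R_1 + ... + R_n
    up to the stage at which play stops, truncating any run that has not
    stopped after max_stages stages."""
    total = 0.0
    for _ in range(max_stages):
        i = rng.choice(len(x), p=x)          # Player I samples a row
        j = rng.choice(len(y), p=y)          # Player II samples a column
        total += R[i, j]                     # stage reward paid to Player I
        if rng.random() >= P[i, j]:          # play stops w.p. 1 - P[i, j]
            break
    return total

rng = np.random.default_rng(0)
payoffs = [simulate_total_reward([0.5, 0.5], [0.5, 0.5], rng) for _ in range(1000)]
print("mean accumulated reward:", np.mean(payoffs))
```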