In this study, we examine single-agent stochastic games, specifically Markov reward games represented as decision trees, and propose an alternative solution method for them based on matrix norms. In contrast to existing approaches such as value iteration, policy iteration, and dynamic programming, which operate state by state and action by action, the proposed matrix norm-based method treats each stage and its actions as a whole and solves each stage holistically, without computing the effect of every action on every state's reward individually. The method transforms the decision tree into a payoff matrix for each stage and then applies a matrix norm to the resulting payoff matrix. In addition, the concept of a moving matrix is integrated into the method to incorporate the impact of all actions on a stage simultaneously, which renders the approach holistic. We also present an explanatory algorithm for implementing the method, together with a comprehensive solution diagram that illustrates it figuratively. Owing to the simplicity of computing matrix norms, the proposed method offers a new, alternative perspective for solving such games alongside the existing methods. To clarify the matrix norm-based method, we demonstrate its application on a benchmark Markov reward game with 2 stages and 2 actions and provide a comprehensive implementation on a game with 3 stages and 3 actions.
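The abstract does not specify which matrix norm the method uses, but the core building block, evaluating a norm of a stage's payoff matrix, can be sketched as follows. The payoff matrix below is a hypothetical example for a 2-stage, 2-action stage (rows index actions, columns index outcomes); the entries and the choice of norms are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical payoff matrix for one stage of a 2-stage, 2-action
# Markov reward game: rows are actions, columns are outcomes.
# The reward entries are illustrative, not from the paper.
payoff = np.array([[4.0, 1.0],
                   [2.0, 3.0]])

# Standard matrix norms such a stage-wise method might evaluate:
norm_1 = np.linalg.norm(payoff, 1)         # maximum absolute column sum
norm_inf = np.linalg.norm(payoff, np.inf)  # maximum absolute row sum
norm_fro = np.linalg.norm(payoff, 'fro')   # Frobenius (entrywise) norm
norm_2 = np.linalg.norm(payoff, 2)         # spectral norm (largest singular value)

print(norm_1, norm_inf, norm_fro, norm_2)
```

Each of these norms condenses the whole stage's payoff matrix into a single scalar, which is what allows a norm-based method to compare or rank stages holistically rather than reward-by-reward.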