Abstract

Predictive state representation (PSR) is a compact model of dynamical systems that represents state as a vector of predictions about future observable events. It is an alternative to the partially observable Markov decision process (POMDP) model for sequential decision-making problems under uncertainty. Most existing PSR research focuses on model learning in a single-agent setting. In this paper, we investigate learning a multi-agent PSR model from available agent interaction data. Learning such a model turns out to be rather difficult, especially with limited samples and a growing number of agents. We resort to tensor techniques to better represent the characteristics of the dynamical system and address the challenging task of learning multi-agent PSR models through tensor optimization. We first focus on a two-agent scenario and use a third-order tensor (the system dynamics tensor) to capture the system interaction data. Model discovery can then be formulated as a tensor optimization problem with group lasso, and an alternating direction method of multipliers (ADMM) is employed to solve the embedded subproblems. The prediction parameters and state vectors are learned directly from the optimization solutions, and the transition parameters are derived via linear regression. Subsequently, we generalize the tensor learning approach to the multi-agent (N > 2) PSR model and analyze the computational complexity of the learning algorithms. Experimental results show that the tensor optimization approaches provide promising performance in learning a multi-agent PSR model across multiple problem domains.
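As a rough illustration only (the abstract does not give the exact formulation), the sketch below shows how a third-order system dynamics tensor might be estimated from two-agent interaction counts, and how a group-lasso proximal step of the kind an ADMM subproblem would involve acts on one of its unfoldings. All function names, shapes, and the toy data are assumptions introduced here for illustration, not the paper's notation or API.

```python
import numpy as np

def build_system_dynamics_tensor(joint_counts, history_counts):
    """Estimate D[h, t1, t2] ~ p(agent-1 test t1, agent-2 test t2 | history h)
    as empirical ratios from interaction data (illustrative assumption)."""
    return joint_counts / np.maximum(history_counts, 1)[:, None, None]

def unfold(tensor, mode):
    """One mode-n matricization of a third-order tensor
    (column ordering may differ from other conventions)."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def group_soft_threshold(V, lam):
    """Group-lasso proximal operator, treating each column of V as a group."""
    norms = np.linalg.norm(V, axis=0, keepdims=True)
    return V * np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)

# Toy usage: random counts stand in for real agent interaction data.
rng = np.random.default_rng(0)
joint = rng.integers(0, 10, size=(5, 4, 4)).astype(float)   # (histories, tests1, tests2)
hist = joint.sum(axis=(1, 2))
D = build_system_dynamics_tensor(joint, hist)
D_h = unfold(D, 0)                        # histories x (test pairs)
Z = group_soft_threshold(D_h, lam=0.1)    # one proximal step of the group-lasso type
```

In the paper's approach, steps of this kind would sit inside an ADMM loop over the tensor optimization problem; the snippet only illustrates the building blocks, not the full learning algorithm.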
