Abstract

Advances in artificial intelligence create new opportunities for computers to support humans as peers on hybrid teams in complex problem-solving situations. This paper proposes a decision-making architecture for adaptively informing decisions in human-computer collaboration on large-scale competitive problems in dynamic environments. The proposed architecture integrates methods from sequence learning, model predictive control, and game theory. Computers in this architecture learn objectives and strategies from experimental data to support humans with strategic decisions, while operational decisions remain with humans. The paper also presents data-driven methods for partitioning tasks among a team of computers within this architecture. The generalized methodology is illustrated on the real-time strategy game StarCraft II. The results from this application show that low-performing players benefit from the game-theoretic decision support, whereas the support can be overly conservative for high-performing players. The proposed approach provides safe, though suboptimal, suggestions, particularly against an opponent with an unknown level of expertise. The results further show that solving the problem with a team of computers based on non-intuitive task partitioning significantly improves the quality of decisions compared to an all-in-one solution with a single computer.
