Abstract

Games have always been a popular domain of AI research, and they have been used for many recent competitions. Reaching human-level performance, however, often focuses either on comprehensive world knowledge or on solving decision-making problems with unmanageable solution spaces. Building on the popular Taboo board game, the Taboo Challenge Competition addresses a different problem: bridging the gap between the domain knowledge of heterogeneous agents trying to jointly identify a concept without making reference to its most salient features. The competition, which was run for the first time at the 2017 IJCAI conference, aims to provide a simple testbed for diversity-aware AI where the focus is on integrating independently engineered AI components, while offering a scenario that is challenging enough to test the concept, yet simple enough not to require mastering general commonsense knowledge or natural language understanding. We describe the design of and preparation for the competition, and discuss the results and lessons learned.

Highlights

  • Games have always been a popular domain of AI research, and they have been used for many recent competitions

  • In the Taboo board game, one agent guesses a concept that another agent describes without the use of taboo words that would make the concept too easy to guess

  • Achieving human-level performance at Taboo requires significant commonsense reasoning capabilities, but is limited to guessing or describing a target concept. It does not require a comprehensive knowledge of the world or a deep understanding of natural language, as, for example, the Winograd Schema Challenge does (Levesque 2011)


Summary

The Taboo Challenge Competition

Games have always been a popular domain of AI research, and they have been used for many recent competitions. Achieving human-level performance at Taboo requires significant commonsense reasoning capabilities, but is limited to guessing or describing a target concept. It does not require a comprehensive knowledge of the world or a deep understanding of natural language, as, for example, the Winograd Schema Challenge does (Levesque 2011). The game is interactive, which means that it requires agents to respond based on previous steps in the dialogue, rather than just identifying a correct solution from among several choices, as in Jeopardy, the Winograd Schema Challenge, or standardized academic tests (Clark and Etzioni 2016). This aspect of the game offers opportunities to develop diversity-aware AI methods, as participants submitting agent implementations to the competition have to face teammates who have been independently developed and who will have internal semantic processing and interactive decision-making strategies unknown to the agent. It allows for comparison both between different AI approaches and between AI solutions and human performance.
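To make the interaction concrete, the following minimal Python sketch shows one way such a guesser/describer protocol could be structured; the names (Guesser, Describer, play_round) and the turn limit are illustrative assumptions, not the competition's actual API.

```python
from abc import ABC, abstractmethod

class Guesser(ABC):
    """A guessing agent: sees only the clue history, never the target
    concept or the describer's internals."""

    @abstractmethod
    def guess(self, clues: list[str]) -> str:
        """Return a concept guess given all clues received so far."""

class Describer(ABC):
    """A describing agent: knows the target concept and the taboo
    words it must never utter."""

    @abstractmethod
    def next_clue(self, wrong_guesses: list[str]) -> str:
        """Return the next clue, adapting to the guesser's misses."""

def play_round(describer: Describer, guesser: Guesser,
               target: str, taboo_words: set[str],
               max_turns: int = 5) -> bool:
    """Run one interactive round; the describer loses the round if a
    clue mentions the target or any taboo word (all lowercase here)."""
    clues: list[str] = []
    wrong_guesses: list[str] = []
    forbidden = {target, *taboo_words}
    for _ in range(max_turns):
        clue = describer.next_clue(wrong_guesses)
        if any(word in forbidden for word in clue.lower().split()):
            return False          # taboo violation ends the round
        clues.append(clue)
        guess = guesser.guess(clues)
        if guess.lower() == target:
            return True           # concept identified
        wrong_guesses.append(guess)
    return False                  # turn budget exhausted
```

Because play_round pairs any Describer with any Guesser through this narrow interface, a submitted agent must cope with teammates whose internal semantics and strategies it cannot inspect, which is exactly the diversity-aware aspect the competition targets.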

