Abstract

Recent research on Distributed Artificial Intelligence (DAI) has focused on agents’ interaction in multiagent systems. This paper presents a text-understanding-oriented multiagent dynamic interaction testbed (TUMIT): its theoretical framework based on game theory, its free-market-like system architecture, and experiments carried out on TUMIT. Unlike other DAI testbeds, TUMIT treats different text understanding (TU) methods as different “computational resources” and lets agents choose different TU paths and computational resources according to the resource information posted on bulletins at their host computers. Task allocation in TUMIT is therefore wholly distributed, which makes the system work like a “free market”. In such a system, agents’ choices and resource loads may oscillate. It is shown theoretically and experimentally that if agents use multiple levels of “history information”, their behavior tends to converge to a Nash equilibrium, and that if agents apply a “recall-forget” strategy to this history information, convergence is accelerated and the agents can adapt to a changed environment. Compared with other DAI testbeds, TUMIT is more distributed, and its agents are more adaptive to dynamic environments.
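To make the described decision rule concrete, the following is a minimal, hypothetical sketch of an agent that reads per-resource load figures from its host’s bulletin, keeps a decaying (“recall-forget”) history of those observations, and picks the least-loaded resource. The class and parameter names (Agent, forget_rate, bulletin) are illustrative assumptions, not the paper’s actual interface or algorithm.

```python
from collections import defaultdict


class Agent:
    def __init__(self, forget_rate: float = 0.8):
        # forget_rate < 1 discounts older observations, so the agent can
        # adapt when resource loads in the environment change.
        self.forget_rate = forget_rate
        self.load_estimate = defaultdict(float)   # decayed sum of observed loads
        self.weight_sum = defaultdict(float)      # decayed count of observations

    def observe(self, bulletin: dict[str, float]) -> None:
        """Fold the latest bulletin (resource -> current load) into history."""
        for resource, load in bulletin.items():
            self.load_estimate[resource] = (
                self.forget_rate * self.load_estimate[resource] + load
            )
            self.weight_sum[resource] = self.forget_rate * self.weight_sum[resource] + 1.0

    def choose_resource(self) -> str:
        """Pick the resource whose recall-forget weighted mean load is lowest."""
        return min(
            self.load_estimate,
            key=lambda r: self.load_estimate[r] / self.weight_sum[r],
        )


# Example: after a few bulletins, the agent prefers the less loaded resource.
agent = Agent(forget_rate=0.8)
agent.observe({"parser_A": 0.9, "parser_B": 0.4})
agent.observe({"parser_A": 0.8, "parser_B": 0.5})
print(agent.choose_resource())  # -> "parser_B"
```

Under this kind of exponential forgetting, recent bulletin entries dominate the estimate, which is one plausible way the “recall-forget” strategy could both speed convergence and let agents track a changing environment.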
