Abstract

AI applications are increasingly moving to modular agents, i.e., systems that independently handle parts of a problem based on limited, locally stored information (Grosz and Davis 1994; Russell and Norvig 1995). Many such agents minimize inter-agent communication by relying on changes in the environment as their cue for action. Some early successes of this model, especially in robotics ("reactive agents"), have led to a debate over this class of models as a whole. One issue that has drawn attention is that of conflicts between such agents. In this work we investigate a cyclic conflict that results in infinite looping between agents and has a severely debilitating effect on performance. We present some new results in the debate, and compare this problem with similar cyclicity observed in planning systems, meta-level planners, distributed agent models, and hybrid reactive models. The main results of this work are: (a) the likelihood of such cycles developing increases as the behavior sets become more useful; (b) control methods for avoiding cycles, such as prioritization, are unreliable; and (c) behavior refinement methods that reliably avoid these conflicts (either by refining the stimulus or by weakening the action) lead to weaker functionality. Finally, we show how attempts to introduce learning into the behavior modules also increase the likelihood of cycles.
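The following is a minimal, hypothetical sketch (not taken from the paper) of the kind of cyclic conflict the abstract describes: two reactive behaviors whose actions each restore the other's stimulus condition, so the shared environment oscillates indefinitely. The behavior names and the "door" environment variable are illustrative assumptions.

```python
# Hypothetical sketch of a stimulus-response cycle between two reactive
# behaviors: each behavior's action re-creates the other's trigger, so the
# pair loops forever on the shared environment.

class Behavior:
    """Fires its action whenever its stimulus predicate holds for the environment."""
    def __init__(self, name, stimulus, action):
        self.name = name
        self.stimulus = stimulus  # callable: env -> bool
        self.action = action      # callable: env -> new env

    def step(self, env):
        return self.action(env) if self.stimulus(env) else env


# Illustrative behaviors: one opens a door to clear a passage, the other
# closes any open door to conserve heat. Each action is the other's trigger.
clear_passage = Behavior(
    "clear_passage",
    stimulus=lambda env: env["door"] == "closed",
    action=lambda env: {**env, "door": "open"},
)
conserve_heat = Behavior(
    "conserve_heat",
    stimulus=lambda env: env["door"] == "open",
    action=lambda env: {**env, "door": "closed"},
)

env = {"door": "closed"}
history = []
for _ in range(6):  # bounded run; without the bound the loop never terminates
    for behavior in (clear_passage, conserve_heat):
        env = behavior.step(env)
        history.append((behavior.name, env["door"]))

# The trace alternates open/closed indefinitely -- the infinite looping the
# abstract refers to, with neither behavior ever reaching a stable state.
for name, state in history:
    print(f"{name:15s} -> door {state}")
```

Note that neither behavior is faulty in isolation; the cycle only emerges from their interaction through the environment, which is why purely local control measures such as prioritization can fail to prevent it.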
