Abstract

From its beginnings, Artificial Intelligence (AI) has tried to synthesize intelligent behavior by reducing it to artifacts. Two families of approaches have emerged [Zie97]. On the one hand, Knowledge-Based AI (also called Classical AI or Top-Down AI, closely related to the classical view of the cognitive sciences) has developed methodologies for manipulating internal representational models of the world, using notions such as knowledge and information. This line of work has focused on building expert systems (see [FMN88] for an overview) that hold and manage ‘knowledge’ in a sufficiently narrow problem domain; research in the field has therefore concentrated on knowledge representation and knowledge engineering. Several such systems have performed well in domains like medical diagnosis and theorem proving.

On the other hand, Behavior-Based AI (also called Bottom-Up AI) considers interaction with an environment an essential feature of intelligent behavior. Research in this area studies systems with ‘life’-like attributes, in particular autonomous agents that persist within an environment. Autonomous agents are accordingly defined as embodied systems designed to fulfill internal goals through their own actions, in continuous long-term interaction with the environment in which they are situated [Zie97], [Bee95]. Autonomous agents possess two essential properties: autonomy and embodiment. Autonomy means that an agent acts on its own, without being driven by another agent (possibly a human): it has its own control over its actions and its internal state. Embodiment refers to the fact that an autonomous agent has a “body” that delineates it from the environment in which it is situated, and on which it senses and acts through its own specific means (sensors and effectors).
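The agent model just described — an embodied unit that owns its internal state and interacts with its environment only through its own sensors and effectors — can be sketched in code. The following is an illustrative sketch, not part of the paper; the `AutonomousAgent` interface and the toy `Thermostat` agent are hypothetical names chosen for the example.

```python
from abc import ABC, abstractmethod

class AutonomousAgent(ABC):
    """An embodied unit with its own state, sensors, and effectors."""

    def __init__(self):
        # internal state: owned and controlled by the agent alone
        self.state = {}

    @abstractmethod
    def sense(self, environment):
        """Read the environment through the agent's own sensors."""

    @abstractmethod
    def act(self, environment):
        """Change the environment through the agent's own effectors."""

    def step(self, environment):
        # autonomy: no external controller drives this perceive/act loop
        self.state["percept"] = self.sense(environment)
        self.act(environment)

class Thermostat(AutonomousAgent):
    """Toy software agent: senses a temperature, acts by heating."""

    def sense(self, environment):
        return environment["temperature"]

    def act(self, environment):
        if self.state["percept"] < 20:
            environment["temperature"] += 1  # effector action

env = {"temperature": 18}
agent = Thermostat()
agent.step(env)  # the agent perceives 18 and heats the environment
```

The point of the sketch is the boundary: other agents may share `env`, but nothing outside the agent can read or write `agent.state` or invoke its effectors on its behalf.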
The degree of embodiment defines several types of autonomous agent: physically embodied agents have a physical body, sensors, and effectors (they are robots); simulated embodied agents roam a simulated physical environment; and software agents lack a body but persist and interact in a complex software environment.

Interest has rapidly shifted from the single-agent case to the multi-agent case, in which several agents share a common environment and interact with one another. An agent then participates in a society of agents, a so-called multi-agent system. Such systems result from the organization of multiple agents within an environment, whereby the state of each individual agent changes as a result of its sensing and its own behavior rules. Since no global control is applied to the participating agents, every agent has a subjective view of the evolution of the world. Multi-agent research therefore adopts an interaction-centered perspective. Dealing with interactions leads naturally to the concept of emergence of behavior, which essentially means that the system’s behavior can only be specified using descriptive categories that do not apply to the behavior of its constituent components (the agents) [For90].
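The multi-agent picture above — each agent updating from its own local sensing under its own behavior rule, with no global controller — can be illustrated with a minimal sketch. This example is not from the paper: the ring world, the two-opinion agents, and the majority rule are hypothetical choices used only to show how a global pattern can emerge from purely local interactions.

```python
def sense(world, i):
    # subjective view: agent i sees only its two neighbors on a ring,
    # never the whole world
    n = len(world)
    return world[(i - 1) % n], world[(i + 1) % n]

def behave(own, neighbors):
    # local behavior rule: adopt the majority opinion among
    # the agent itself and its two neighbors
    votes = [own, *neighbors]
    return max(set(votes), key=votes.count)

def step(world):
    # every agent updates from its own sensing; no global control is applied
    return [behave(world[i], sense(world, i)) for i in range(len(world))]

world = list("ABBABAABBBAB")
for _ in range(20):
    world = step(world)
# emergence: contiguous blocks of agreement form, a global pattern that is
# not described by any single agent's rule
```

Each `behave` call mentions only three local opinions, yet the stable block structure of the whole ring is a property one can only describe at the system level — a small instance of the emergence the abstract refers to.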
