Abstract

Individual autonomous agents can employ adaptive artificial intelligence. Such an agent can respond to its environment in near real time, avoiding obstacles and collisions while keeping a safe distance. Although these systems learn to handle new conditions and situations, this does not, per se, guarantee that the system is robust, i.e., that it performs predictably when its variables and assumptions are altered. Nor does it mean that adaptive individual autonomous agents in multi-agent systems perform well in dynamic, distributed, and partially observable environments where unexpected events occur. This paper presents an approach to applying robustness analysis to adaptive artificial intelligence in multi-agent autonomous systems. Adaptiveness modifies the agents' learning by generating appropriate responses to new situations, yielding resilience to perturbations. The robustness analysis examines the quality of these responses so that an agent can continue to operate despite abnormal input and thus safely tolerate perturbations. A multi-agent prototype simulating the integration of adaptive agents with robustness analysis shows that robustness analysis can indeed be applied to the responses of adaptive artificial intelligence. In the system, each autonomous agent collects data about the surrounding environment and applies adaptation to produce options, i.e., possible actions and decisions. The robustness analysis examines these options in order to validate and adjust them as needed. With the updated options, the agents retrain to resist malfunction and achieve resilience and robustness in the given situations.
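The loop described above — each agent generates candidate options, a robustness analysis validates or adjusts them, and the agent acts on (and would retrain on) the result — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the perturbation values, and the safety-margin check are all assumptions made for the sake of the example.

```python
import random

def generate_options(observation, n=5):
    """Adaptation step (illustrative): propose candidate actions
    as steering adjustments around the current heading."""
    return [observation["heading"] + random.uniform(-1.0, 1.0) for _ in range(n)]

def robustness_analysis(options, observation, margin=0.5):
    """Validate each option under small perturbations of the sensed
    input; adjust options that would violate the safety margin."""
    validated = []
    for action in options:
        # Assumed check: the action must keep a safe distance even when
        # the sensed obstacle distance is slightly perturbed.
        worst_distance = min(
            observation["obstacle_distance"] + noise - abs(action)
            for noise in (-0.2, 0.0, 0.2)
        )
        if worst_distance >= margin:
            validated.append(action)
        else:
            # Adjust the option (dampen it) rather than discard it outright.
            validated.append(action * 0.5)
    return validated

def agent_step(observation):
    options = generate_options(observation)
    options = robustness_analysis(options, observation)
    # The agent would retrain on the validated options; here we
    # simply pick the least aggressive one as the next action.
    return min(options, key=abs)

action = agent_step({"heading": 0.0, "obstacle_distance": 2.0})
print(action)
```

In this sketch the robustness analysis is a worst-case check over perturbed inputs; the paper's actual analysis operates on the responses of the adaptive component, but the control flow (generate, validate/adjust, then act or retrain) follows the abstract.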
