Abstract

The environment of a multi-agent system is often very complex, so it is sometimes difficult, or even impossible, to specify and implement all system details a priori. Machine learning algorithms allow this problem to be overcome: one can implement an agent that is not perfect initially but improves its performance over time. Many learning methods can be used to generate knowledge or a strategy in a multi-agent system, and choosing one that fits a given problem can be a difficult task. The aim of the research presented here was to test the applicability of reinforcement learning and supervised rule learning to the same problem. Reinforcement learning is the most common technique in multi-agent systems; it generates a strategy for an agent in situations where the environment provides feedback after the agent has acted. Symbolic, supervised learning is not as widely used in multi-agent systems. Many methods of this class generate knowledge from data; here a rule induction algorithm is used. It produces a rule-based classifier that assigns a class to a given example, and as input it needs examples whose classes have been assigned by some teacher. We show how observation of other agents’ actions can be used instead of a teacher. The Fish Banks game is used as the environment: a simulation in which agents run fishing companies whose main task is to decide how many ships to send out fishing, and where to send them. Four types of agents are created. The reinforcement learning agent and the supervised learning agent improve their allocation performance using the corresponding learning strategy. As a reference, two additional types of agents are introduced: a random agent, which chooses its allocation action randomly, and a predicting agent, which assumes that fishing results will be the same as in the previous round and allocates ships using this simple prediction.
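The two learning setups described above can be illustrated with a minimal sketch. All names here are hypothetical (not taken from the chapter): a stateless Q-learning agent that picks one of two fishing grounds and updates its value estimates from the profit feedback of each round, and a helper that builds a labelled training set for a rule learner from observed actions of other agents instead of a teacher.

```python
import random

class AllocationAgent:
    """Minimal Q-learning sketch for ship allocation (hypothetical names)."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {a: 0.0 for a in actions}  # one Q-value per allocation action
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self):
        # Epsilon-greedy: explore occasionally, otherwise exploit best estimate.
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, action, reward):
        # The environment's feedback (profit of the round) drives the update.
        best_next = max(self.q.values())
        self.q[action] += self.alpha * (reward + self.gamma * best_next - self.q[action])

def labelled_examples(observations):
    """Supervised variant: label each observed game state with the action
    another agent actually took, keeping only profitable rounds — the
    observed agents play the role of the teacher."""
    return [(state, action) for state, action, profit in observations if profit > 0]
```

Under these assumptions, a round of play reduces to `choose()`, acting, and then `update(action, profit)`; the rule learner would be trained on the output of `labelled_examples` instead.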
In the next section, related research on learning in multi-agent systems is briefly presented. The third section explains details of the environment and of the architecture and behaviours of the agents. Next, the results of several experiments performed to compare the two learning methods are presented and discussed. The results show that both methods perform well, although each has its own advantages and disadvantages. The last two sections present conclusions and further research.
