Abstract

We present a method that acquires a state transition graph (STG) from the input/output sequences (training sequences) of an unknown finite state machine (FSM). The method is based on the genetic network programming (GNP) framework: STGs serve as individuals and are evolved by genetic operations such as crossover and mutation. The goal is to acquire an STG that is consistent with the training sequences and has as few states as possible. We then modify the method so that it acquires rules for an agent's decision making. In the modified method, rules are represented by a directed graph whose nodes denote situations the agent may be placed in and whose edges denote state transitions. Each edge carries two kinds of information: percepts and actions. The agent starts at the initial node, and an edge is selected according to the agent's current percepts; the agent performs the actions associated with that edge, and the next state is thereby decided. These directed graphs are used as individuals, and genetic operations are applied to them to obtain good rules. Both methods have been implemented, and experimental results are presented.
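To make the graph-based rule representation concrete, the following minimal Python sketch shows one possible encoding of nodes, edges with percept and action sets, and a single decision step by the agent. It is an illustration under assumed names (Edge, RuleGraph, step) and is not the authors' implementation.

    from dataclasses import dataclass, field

    @dataclass
    class Edge:
        percepts: frozenset   # percepts that must hold for this edge to be selected
        actions: tuple        # actions the agent performs when the edge is taken
        next_node: int        # destination node, i.e. the next situation/state

    @dataclass
    class RuleGraph:
        edges: dict = field(default_factory=dict)  # node index -> list of outgoing Edge
        initial_node: int = 0

        def step(self, node, current_percepts):
            # Select the first outgoing edge whose percepts are satisfied,
            # return the actions to perform and the next node.
            for edge in self.edges.get(node, []):
                if edge.percepts <= current_percepts:
                    return edge.actions, edge.next_node
            return (), node  # no edge matches: remain in the current situation

    # Hypothetical usage: the agent starts at the initial node and repeatedly
    # calls step() with its current percepts, executing the returned actions.
    graph = RuleGraph(edges={
        0: [Edge(frozenset({"wall_ahead"}), ("turn_left",), 1)],
        1: [Edge(frozenset(), ("move_forward",), 0)],
    })
    actions, node = graph.step(graph.initial_node, {"wall_ahead"})

In the evolutionary setting described above, such graphs would play the role of individuals, with crossover and mutation applied to their node and edge structure.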
