Abstract

In this article, a new self-organizing fuzzy neural network (FNN) model is presented that can simultaneously and accurately learn and reproduce different sequences. Multiple-sequence learning is important in performing habitual and skillful tasks, such as writing, signing signatures, and playing the piano; more generally, it is indispensable for pattern-generation applications. Since multiple sequences often share similar parts, local information such as a few previous samples is not sufficient to reproduce them efficiently. Instead, it is necessary to consider global, discriminative information, typically carried in the very first samples of each sequence, to first recognize the sequence and then predict its next sample from the current local information. Accordingly, the proposed network consists of two parts: 1) a sequence identifier, which computes a novel sequence identity value from the initial samples of a sequence and detects the sequence identity through appropriate fuzzy rules, and 2) a sequence locator, which locates the input sample within the sequence. By integrating the outputs of these two parts in the fuzzy rules, the network can produce the proper output for the current state of each sequence. To train the proposed structure, a gradual learning procedure is introduced. First, learning is performed by adding new fuzzy rules, based on a coverage measure, using the available correct data. Next, the initialized parameters are fine-tuned by the gradient descent algorithm, with the network's approximated output fed back as the next input. The proposed method has a dynamic structure that can learn new sequences online. Finally, to investigate the effectiveness of the presented approach, it is used to simultaneously learn and reproduce multiple sequences in different applications, including sequences with similar parts, different patterns, and the writing of different letters.
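To illustrate the idea of combining a global sequence identity with local context in a closed reproduction loop, the following is a minimal sketch. All names and the identity measure here are hypothetical stand-ins (the paper defines its own identity value and fuzzy-rule structure); the sketch only shows how a predicted sample can be fed back as the next input.

```python
def sequence_identity(initial_samples):
    # Hypothetical identity value: here, simply the mean of the initial
    # samples. The paper defines its own measure; this is only a stand-in.
    return sum(initial_samples) / len(initial_samples)

def reproduce(model, initial_samples, length):
    """Closed-loop reproduction: each predicted sample is fed back as the
    next input, mirroring the fine-tuning stage described in the abstract.

    `model` is any callable mapping a feature vector (global identity plus
    recent local samples) to the next sample.
    """
    identity = sequence_identity(initial_samples)
    outputs = list(initial_samples)
    while len(outputs) < length:
        # Combine global (identity) and local (recent samples) information.
        features = [identity] + outputs[-3:]
        outputs.append(model(features))
    return outputs
```

For example, a trivial `model` that repeats the last local sample reproduces a constant continuation; a trained FNN would instead generate the learned sequence conditioned on its identity.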
The performance of the proposed method is evaluated and compared with existing methods, including the adaptive network-based fuzzy inference system (ANFIS), GDFNN, CFNN, and long short-term memory (LSTM). In these experiments, the proposed method outperforms traditional FNNs and LSTM in learning multiple sequences.
