Abstract

Tani et al.'s recurrent neural network with parametric bias (RNNPB) is able to learn different time series patterns, and it has been successfully applied to action learning in robots. In this paper, we propose a novel type of RNNPB that uses an Elman-type model instead of the Jordan-type model in the conventional network. The proposed structure makes it easy to use the error back-propagation (BP) learning algorithm, which has a lower computational cost than the back-propagation through time (BPTT) method used in Tani et al.'s model. The effectiveness of the modified RNNPB was confirmed by applying it to a gesture learning experiment using a humanoid robot.

Highlights

  • Artificial neural networks (ANNs) have been studied since the 1950s, and many models have been successfully applied to adaptive control, time series forecasting, pattern recognition, and many other fields

  • After the recurrent neural network with parametric bias (RNNPB) is trained on different teacher signals using different parametric bias (PB) values, the network is able to generate different time series patterns according to the PB values

  • We propose to modify RNNPB using Elman type recurrent neural networks (RNNs) (3) (4) instead of the Jordan type used in the original RNNPB


Summary

Introduction

Artificial neural networks (ANNs) have been studied since the 1950s, and many models have been successfully applied to adaptive control, time series forecasting, pattern recognition, and many other fields. Among ANN models, recurrent neural networks (RNNs) are suitable for simulating dynamic systems or for controlling unknown systems as "inverse models". The Jordan type RNN (1) (2) and the Elman type RNN (3) (4) are the most well-known feed-forward multi-layer RNN models. As supervised learning models, these RNNs are usually trained to be identification models of certain systems. Tani et al.'s RNNPB extends the Jordan type RNN with parametric bias (PB) nodes: after the network is trained on different teacher signals with different PB values, it can generate different time series patterns according to the PB values. We propose to modify RNNPB using the Elman type RNN (3) (4) instead of the Jordan type used in the original RNNPB. The main merit of this modification is the simplicity of the Elman model: the training method can be the well-known error back-propagation (BP) algorithm (12), which is simpler than back-propagation through time (BPTT). Experimental results showed that the learning performance of the proposed model was higher than that of the conventional RNNPB.
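To make the computational difference concrete, the following minimal sketch (not the authors' code) implements an Elman-type RNN with PB inputs in Python/NumPy. The layer sizes, tanh nonlinearity, learning rates, and function names are illustrative assumptions. Because the Elman context layer simply copies the previous hidden activations and is treated as a fixed input at each step, the error can be back-propagated through a single step with plain BP, with no unrolling of the network through time as BPTT requires.

  # Minimal sketch of an Elman-type RNN with parametric bias (PB),
  # trained by plain one-step back-propagation. All sizes and learning
  # rates are illustrative assumptions, not values from the paper.
  import numpy as np

  rng = np.random.default_rng(0)
  n_in, n_pb, n_hid, n_out = 2, 2, 10, 2
  Wx = rng.normal(0.0, 0.1, (n_hid, n_in))    # input   -> hidden
  Wp = rng.normal(0.0, 0.1, (n_hid, n_pb))    # PB      -> hidden
  Wc = rng.normal(0.0, 0.1, (n_hid, n_hid))   # context -> hidden
  Wo = rng.normal(0.0, 0.1, (n_out, n_hid))   # hidden  -> output
  lr, lr_pb = 0.05, 0.01

  def forward(x, pb, c):
      # Elman step: the context c holds the previous hidden activations.
      h = np.tanh(Wx @ x + Wp @ pb + Wc @ c)
      return Wo @ h, h

  def train_sequence(xs, ts, pb):
      # Plain BP: c is treated as a constant input at every step, so the
      # gradient is computed over one step only (no unrolling as in BPTT).
      global Wx, Wp, Wc, Wo
      c = np.zeros(n_hid)
      for x, t in zip(xs, ts):
          y, h = forward(x, pb, c)
          e = y - t                        # output error
          dh = (Wo.T @ e) * (1.0 - h**2)   # back-prop through tanh only
          dpb = Wp.T @ dh                  # gradient w.r.t. the PB inputs
          Wo -= lr * np.outer(e, h)
          Wx -= lr * np.outer(dh, x)
          Wp -= lr * np.outer(dh, pb)
          Wc -= lr * np.outer(dh, c)       # c held constant at this step
          pb -= lr_pb * dpb                # PB values also adapt to the error
          c = h                            # Elman context update

In training, each teacher sequence would be presented together with its own PB vector; in generation, setting the PB vector to one of the learned values makes the network reproduce the corresponding pattern. Per time step this costs a single backward pass, whereas BPTT back-propagates the error through every preceding step of the unrolled sequence.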
