Abstract

In this paper, the quality and size of training data are investigated with the aim of improving the training efficacy of an artificial neural network (ANN) that reproduces the Lorenz chaotic system and predicts its time-series outputs using a Nonlinear Auto-Regressive (NAR) model. The designed NAR ANN model is intended for the simulation and analysis of electroencephalogram (EEG) signals captured from brain activity. A simple ANN topology with a single hidden layer is used, and ANN architectures with varying numbers of hidden neurons (n = 3 to 16) and input delays (d = 1 to 4) are trained with the Levenberg-Marquardt algorithm using the MATLAB Neural Network Toolbox. The training results are compared along two aspects of the training data: size and precision. It is found that, for any given ANN architecture, the training performance on the Lorenz system cannot be improved solely by increasing the training data size, which is useful knowledge for reducing the amount of EEG data required to train the ANN-based NAR model. On the other hand, training performance can be improved by training data of the same size but better precision. Moreover, when training data of the same size and precision are used, the training performance varies depending on the segment of the Lorenz chaotic trajectory used for training, and can worsen if the rate of change of the selected segment is high.
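The paper itself trains the NAR model with the MATLAB Neural Network Toolbox; as an illustration only, the sketch below shows in Python how the training data described above could be produced: a Lorenz trajectory integrated with classical RK4 (standard parameters σ = 10, ρ = 28, β = 8/3 are assumed, along with the step size `dt` and initial state), and how a lagged input/target dataset for a NAR model with input delay `d` could be constructed from one coordinate of that trajectory. Function names and parameters here are illustrative, not taken from the paper.

```python
import numpy as np

def lorenz_rk4(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0,
               x0=(1.0, 1.0, 1.0)):
    """Integrate the Lorenz system with the classical RK4 scheme.

    Returns an (n_steps, 3) array of states (x, y, z). Parameter
    defaults are the standard chaotic values, assumed here.
    """
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x),
                         x * (rho - z) - y,
                         x * y - beta * z])

    traj = np.empty((n_steps, 3))
    s = np.array(x0, dtype=float)
    for i in range(n_steps):
        traj[i] = s
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return traj

def nar_dataset(series, d):
    """Build NAR training pairs from a scalar time series.

    Each input row holds d consecutive lagged values; the target is
    the next value, i.e. the network learns series[t] from
    series[t-d], ..., series[t-1].
    """
    X = np.column_stack([series[i:len(series) - d + i] for i in range(d)])
    y = series[d:]
    return X, y
```

Varying `d` here corresponds to the input delays (d = 1 to 4) studied in the paper, and slicing different windows of `traj` before calling `nar_dataset` corresponds to selecting different segments of the chaotic trajectory.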


