Abstract

The Artificial Neural Network (ANN) is a familiar concept in methods whose task is, for example, the identification or approximation of the outputs of complex systems that are difficult to model. In general, the objective is to determine online the parameters that yield a better point-to-point convergence rate. This paper therefore presents parameter estimation for an equivalent ANN (EANN), obtaining a recursive identification for a stochastic system, first with constant parameters and then under nonstationary output conditions. In the latter case the parameters also have stochastic properties, so traditional approximation methods are inadequate because they lose their convergence rate. To address this problem, we propose a nonconstant exponential forgetting factor (NCEFF) with sliding modes, obtaining an exponentially decreasing convergence error at almost all points. Theoretical results of both identification stages are simulated in MATLAB® and compared, showing an improvement when the new proposal is applied under nonstationary output conditions.

Highlights

  • Artificial Neural Networks (ANNs) are computational models based on the description of synapses in Biological Neural Networks (BNNs)

  • Despite all the combinations already developed, in this paper we propose a novel estimation technique combining three traditional tools: (a) estimation using the Least Squares Method (LSM) with an instrumental variable, applying the reference signal and the convergence error together with its sign; (b) a sliding surface based on error properties, which allows a new evaluation strategy that minimizes the convergence error in less time than the traditional LSM [13, 14]; and (c) an innovative exponential forgetting factor (EFF) applying traditional sliding modes (SM)

  • To obtain an optimum coefficient which allows a better response, where the output follows the variations generated in the reference, we propose a nonconstant exponential FF (NCEFF), as follows: eff_k = sign(Â_k) · e^{sign(Â_k) · e_{r_k}}, (6)
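The highlights describe recursive least-squares identification whose forgetting factor is recomputed at every step from the convergence error. The sketch below illustrates that idea only; the specific factor `lam_k = exp(-|e_k|)` clipped to `[lam_min, 1]`, the initial covariance, and the name `rls_ncff` are illustrative assumptions, not the authors' exact NCEFF of Eq. (6).

```python
import numpy as np

def rls_ncff(phi_seq, y_seq, lam_min=0.9):
    """Recursive least squares with a nonconstant exponential
    forgetting factor. Assumed (hypothetical) form: the factor
    lam_k = exp(-|e_k|) is driven by the a priori error and
    clipped to [lam_min, 1], so large errors shorten the
    effective memory and small errors restore lam -> 1."""
    n = phi_seq.shape[1]
    theta = np.zeros(n)          # parameter estimate
    P = 1e3 * np.eye(n)          # covariance, large initial uncertainty
    for phi, y in zip(phi_seq, y_seq):
        e = y - phi @ theta                           # a priori error
        lam = np.clip(np.exp(-abs(e)), lam_min, 1.0)  # error-dependent FF
        K = P @ phi / (lam + phi @ P @ phi)           # estimation gain
        theta = theta + K * e                         # parameter update
        P = (P - np.outer(K, phi) @ P) / lam          # covariance update
    return theta
```

With noiseless regressors the estimate converges to the true parameter vector, while the error-driven factor lets the same loop track parameter changes faster than a constant factor would.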



Introduction

Artificial Neural Networks (ANNs) are computational models based on the description of synapses in Biological Neural Networks (BNNs). Biological neurons include a firing function that concatenates the neuron inputs with conditions given by external stimuli and excitation signals. The output of this function is transmitted and manipulated, creating interconnections with other neurons and integrating a complete network. ANN models are popular because they can learn and adjust their parameters dynamically, adding correction factors or combining different techniques, including expert systems [1]. Instead of investing high computational resources to represent an ANN by adding factors that adjust its hidden gains, [2] proposes three model approximations that allow the identification to select, in some sense, the gains that the neural net requires; it presents three different representations of ANNs, considers characteristics that make them ideal for modelling and identification, and indicates that nonlinear models can be interpreted as an ANN with specific properties. ANNs by themselves perform poorly in identification and estimation tasks for nonlinear systems and are not adequate for online requirements because of their complex algorithms, which are usually based on stable and invariant conditions [3].

