Abstract
Adjusting the parameters of a function to model a set of observations is a very frequent task in many applied areas of science. Sophisticated techniques exist for this purpose, such as regression, gradient-based methods, neural networks, neuro-fuzzy modeling, genetic algorithms, and swarm optimization. In this paper, numerical simulations are carried out on the efficiency and capacity of the Least Mean Square (LMS) algorithm to find an optimal set of parameters that fit a function to a set of observed data. Although the LMS method has been widely used for error minimization and noise removal in signal processing systems, its capabilities for regression and approximation have rarely been explored. Using simple examples, we study the conditions under which the learning parameters can be adjusted to model a set of training data, using an iterative learning process in which the approximation of the stochastic error is recalculated immediately after each parameter is updated. The convergence speed, as a function of the learning rate, is described for the cases under study.
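The iterative process described above can be sketched as follows. This is a minimal illustration of the standard LMS update rule applied to a simple regression task; the learning rate, the linear model form, and the synthetic data are illustrative assumptions, not taken from the paper.

```python
# Sketch of LMS-based parameter adjustment: the stochastic error e is
# recomputed immediately after every parameter update, as described above.
# The model form (w*x + b), learning rate mu, and synthetic data are
# hypothetical choices for illustration only.
import random

random.seed(0)

# Synthetic observations: d = 2*x + 1 plus small Gaussian noise.
data = [(i / 100.0, 2.0 * (i / 100.0) + 1.0 + random.gauss(0.0, 0.01))
        for i in range(100)]

w, b = 0.0, 0.0   # parameters to be adjusted
mu = 0.1          # learning rate; convergence speed depends on this value

for _ in range(200):          # iterative learning passes over the data
    for x, d in data:
        e = d - (w * x + b)   # stochastic error with the current parameters
        w += mu * e * x       # LMS update for each parameter
        b += mu * e

print(w, b)  # both should approach the generating values 2.0 and 1.0
```

A learning rate that is too large makes the updates diverge, while a very small one slows convergence, which is the trade-off the simulations in the paper characterize.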