Abstract
Gradient algorithms are popular because they are simple, easy to understand, and solve a large class of problems. The performance measure and the adaptive weights determine the nature of the performance surface. When the performance measure is a quadratic function of the weight settings, the performance surface is bowl-shaped with a minimum at the 'bottom of the bowl.' In this case, local optimization methods, such as gradient methods, can find the bottom. If the performance surface is irregular, having several relative optima or saddle points, then gradient-based minimum-seeking algorithms can get stuck in a local minimum. The gradient-based algorithms considered in this chapter are as follows: least mean square (LMS); Howells-Applebaum loop; differential steepest descent (DSD); accelerated gradient (AG); and steepest descent for power minimization.
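As a concrete illustration of the simplest of these methods, the sketch below implements a basic LMS weight update in Python. The function name, step size, tap count, and the system-identification test signal are illustrative assumptions for this sketch, not details taken from the chapter; the weight update w(k+1) = w(k) + 2*mu*e(k)*x(k) is the standard LMS recursion descending the quadratic performance surface.

```python
import numpy as np

def lms(x, d, num_taps=4, mu=0.01):
    """Least mean square (LMS) adaptive filter (illustrative sketch).

    x  : input samples (1-D array)
    d  : desired response (1-D array, same length as x)
    mu : step size; for a quadratic performance surface a sufficiently
         small mu lets the weights descend toward the bottom of the bowl
    """
    w = np.zeros(num_taps)                    # adaptive weight vector
    e = np.zeros(len(x))                      # error history
    for k in range(num_taps - 1, len(x)):
        u = x[k - num_taps + 1:k + 1][::-1]   # [x[k], x[k-1], ..., x[k-num_taps+1]]
        y = w @ u                             # current filter output
        e[k] = d[k] - y                       # error signal
        w = w + 2 * mu * e[k] * u             # gradient-descent weight update
    return w, e

# Hypothetical system-identification test: recover an "unknown" 4-tap filter.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([0.5, -0.3, 0.2, 0.1])           # unknown system to identify
d = np.convolve(x, h)[:len(x)]                # desired response
w, e = lms(x, d, num_taps=4, mu=0.01)
print(w)                                      # approaches h as the error decays
```

Because the mean-square error here is quadratic in the weights, the iteration converges to the single global minimum for a small enough step size; on an irregular, multimodal surface the same update could settle into a local minimum instead.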