Abstract

In this paper, a second-order hyperparameter tuning method is proposed to improve the performance of online gradient-descent optimization. Second-order gradient information of a cost function, obtained from extremum-seeking optimization, is embedded into the adaptation of states and parameters. This provides faster adaptation without computing the inverse Hessian matrix. The convergence of the adaptation dynamics under the proposed hyperparameter is shown using a Lyapunov approach. The proposed hyperparameters and conventional learning rates are compared in numerical applications of model-based and adaptive estimation as follows: i) model-based synchronization of chaotic Lü systems with time-varying parameters is performed using an efficient nonlinear observer, and ii) adaptive fuzzy neural-network observer-based state estimation is carried out for an unknown Duffing oscillator. In both cases, the online gradient-descent adaptations are driven by either the proposed hyperparameter or conventional learning rates, and their capabilities are measured in terms of root-mean-square error. The results show that the proposed hyperparameter tuning method yields more accurate performance; application results are illustrated in figures and a table.
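To make the idea concrete, the following is a minimal, illustrative sketch (not the paper's exact scheme) of online gradient descent in which the step size is adapted from second-order curvature information estimated along the gradient direction, so the Hessian is never formed or inverted. All names here (`J`, `grad_J`, `eta0`, `eps`) are assumptions introduced for illustration only.

```python
import numpy as np

def online_gd_second_order(J, grad_J, theta0, n_steps=200, eta0=0.1, eps=1e-3):
    """Gradient descent with a curvature-aware (second-order) step size.

    Illustrative sketch only: the curvature term g^T H g is approximated by a
    finite difference of the gradient along g, avoiding any Hessian inverse.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_steps):
        g = grad_J(theta)
        # Directional curvature estimate g^T H g via a Hessian-vector surrogate.
        gTHg = g @ (grad_J(theta + eps * g) - g) / eps
        # Newton-like scalar step along -g; fall back to the nominal rate eta0
        # when the curvature estimate is not positive (non-convex region).
        eta = (g @ g) / gTHg if gTHg > 0 else eta0
        theta = theta - eta * g
    return theta

# Usage example on a simple quadratic cost (assumed test problem)
if __name__ == "__main__":
    A = np.array([[3.0, 0.5], [0.5, 1.0]])
    J = lambda th: 0.5 * th @ A @ th
    grad_J = lambda th: A @ th
    print(online_gd_second_order(J, grad_J, theta0=[1.0, -2.0]))
```

For the quadratic test cost, the curvature-aware step reduces to the exact line-search step of steepest descent, which hints at why embedding second-order information can accelerate online adaptation relative to a fixed learning rate.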
