Abstract
A gradient aggregate asymptotical smoothing algorithm is proposed for training fuzzy neural networks (FNNs), extending the smoothing algorithm (SA) described in Li et al. (2017). By introducing an asymptotic-approximation framework, the error function of max–min FNNs can be approximated to any prescribed accuracy by an aggregate smoothing function with a variable precision parameter. The algorithm solves the nondifferentiable max–min optimization problem of max–min FNNs by minimizing a sequence of such asymptotically approximating functions with the steepest descent method. The proposed update rule for the precision parameter balances high-accuracy approximation against numerical ill-conditioning, and the algorithm is globally convergent under the Armijo line search. Simulation results on three artificial examples and a real-world fault-diagnosis problem show that, compared with SA, the proposed algorithm effectively suppresses numerical oscillations and achieves better overall performance.
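To make the smoothing idea concrete, the following is a minimal Python sketch under stated assumptions: it uses the classical exponential aggregate function, which over-approximates a maximum of m values by at most ln(m)/p, and an illustrative geometric update of the precision parameter p between outer sweeps. The function names, step-size constants, and the update rule p *= p_growth are hypothetical placeholders, not the paper's exact scheme.

import numpy as np

def aggregate_smooth_max(z, p):
    # Exponential aggregate smoothing of max(z):
    #   max(z) <= G_p(z) <= max(z) + ln(len(z)) / p,
    # so a larger precision parameter p tightens the approximation.
    # Shifting by max(z) keeps exp() from overflowing for large p.
    z = np.asarray(z, dtype=float)
    m = z.max()
    return m + np.log(np.exp(p * (z - m)).sum()) / p

def asymptotic_smoothing_descent(f, grad_f, x0, p0=1.0, p_growth=2.0,
                                 tol=1e-6, beta=0.5, sigma=1e-4,
                                 max_outer=20, max_inner=200):
    # Minimize a sequence of smoothed objectives f(x, p) by steepest
    # descent with Armijo backtracking, increasing p between outer
    # sweeps (illustrative geometric rule, assumed here).
    x, p = np.asarray(x0, dtype=float), p0
    for _ in range(max_outer):
        for _ in range(max_inner):
            g = grad_f(x, p)
            if np.linalg.norm(g) < tol:
                break
            t = 1.0
            # Armijo: accept t once f drops by at least sigma*t*||g||^2.
            while (f(x - t * g, p) > f(x, p) - sigma * t * g.dot(g)
                   and t > 1e-12):
                t *= beta
            x = x - t * g
        p *= p_growth  # tighten the approximation for the next sweep
    return x

For a max–min FNN, f(x, p) would be the aggregate-smoothed error function and grad_f its gradient; growing p too aggressively in this sketch reproduces exactly the ill-conditioning that the precision-parameter update rule in the paper is designed to avoid.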