Abstract

As a universal approximator, the first-order Takagi–Sugeno fuzzy system can approximate a wide range of nonlinear systems through a group of IF–THEN fuzzy rules. Although group lasso regularization has the advantage of inducing group sparsity and handling variable selection, applying it directly during training can cause numerical oscillations and poses the theoretical difficulty that its gradient is undefined at the origin. This paper addresses that obstacle by invoking a smoothing function to approximate the group lasso regularizer. On this basis, a gradient-based neuro-fuzzy learning algorithm with smoothing group lasso regularization for the first-order Takagi–Sugeno fuzzy system is proposed. The convergence of the proposed algorithm is rigorously proved under mild conditions. In addition, experimental results on two approximation and two classification simulations demonstrate that the proposed algorithm outperforms the algorithms with the original group lasso regularization and with L2 regularization in terms of error, pruned neurons, and accuracy. This is particularly evident in the pruned-neuron counts achieved through group sparsity: compared with the algorithm with L2 regularization, the proposed algorithm exhibits improvements of 6.3, 5.3, and 142.6 pruned neurons in the sin function, Gabor function, and Sonar benchmark dataset simulations, respectively.
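The core idea above, smoothing the group lasso penalty so its gradient exists at the origin, can be illustrated with a minimal sketch. The penalty, helper names, and the specific smoothing choice `sqrt(||w||^2 + eps^2)` are assumptions for illustration; the paper's actual smoothing function may differ.

```python
import numpy as np

def group_lasso(groups):
    # Original group lasso penalty: sum of Euclidean norms of weight groups.
    # Non-differentiable wherever an entire group equals zero.
    return sum(np.linalg.norm(g) for g in groups)

def smoothed_group_lasso(groups, eps=1e-3):
    # Smoothed variant (an assumed, common choice): sqrt(||g||^2 + eps^2)
    # approximates ||g|| but is differentiable everywhere, which avoids
    # the oscillation/gradient issues described in the abstract.
    return sum(np.sqrt(np.dot(g, g) + eps ** 2) for g in groups)

def smoothed_group_grad(g, eps=1e-3):
    # Gradient of the smoothed penalty w.r.t. one group:
    # g / sqrt(||g||^2 + eps^2), well-defined even at g = 0,
    # unlike the exact subgradient direction g / ||g||.
    return g / np.sqrt(np.dot(g, g) + eps ** 2)
```

As `eps` shrinks, the smoothed penalty converges to the exact group lasso penalty while keeping a usable gradient at zero groups, which is what makes a standard gradient-based training loop applicable.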
