Abstract
In this paper, we propose a new type of membership function (MSF) and its efficient use to improve the optimization of fuzzy reasoning based on a steepest descent method. For the self-tuning of fuzzy rules with the steepest descent method, an algorithm that avoids suboptimal solutions by modifying the learning coefficients has previously been proposed, in which piecewise linear MSFs were introduced. However, when the learning data have a radically changing distribution, that algorithm cannot avoid suboptimal solutions. To overcome this problem, we propose applying double right triangular MSFs to the self-tuning of fuzzy reasoning. With these MSFs, radically changing grades can be represented easily. In addition, using a simulated annealing (SA) technique, we propose moving the peak positions of the MSFs as learning progresses so that the MSFs are placed where the learning data change radically. Compared with the algorithm using piecewise linear MSFs, the proposed algorithm avoids suboptimal solutions more effectively. The advantages of the new technique are demonstrated by numerical examples involving function approximation. © 2003 Wiley Periodicals, Inc. Electr Eng Jpn, 144(4): 63–74, 2003; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/eej.10168
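The exact form of the double right triangular MSF is given in the body of the paper; purely as an illustration of the idea, the minimal Python sketch below assumes a parameterization by a peak position and independent left and right widths (hypothetical names), so that two right triangles sharing the peak can represent a grade that changes steeply on one side without affecting the other. This is a sketch under those assumptions, not the authors' exact formulation.

```python
import numpy as np

def double_right_triangular_msf(x, peak, left_width, right_width):
    """Illustrative MSF built from two right triangles meeting at `peak`
    (grade 1 at the peak, 0 outside the support).

    Assumed parameterization: the left triangle rises from 0 to 1 over
    [peak - left_width, peak]; the right triangle falls from 1 to 0 over
    [peak, peak + right_width]. Independent slopes let one side be steep
    where the learning data change radically.
    """
    x = np.asarray(x, dtype=float)
    grade = np.zeros_like(x)

    # Rising (left) right triangle.
    left = (x >= peak - left_width) & (x <= peak)
    grade[left] = 1.0 - (peak - x[left]) / left_width

    # Falling (right) right triangle.
    right = (x > peak) & (x <= peak + right_width)
    grade[right] = 1.0 - (x[right] - peak) / right_width

    return grade

# Example: an asymmetric MSF with a steep right-hand slope.
xs = np.linspace(0.0, 1.0, 11)
print(double_right_triangular_msf(xs, peak=0.5, left_width=0.4, right_width=0.1))
```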