Abstract

In our previous study, we proposed a parallel distributed model for speeding up fuzzy genetics-based machine learning (GBML). The model is an island model for the parallel implementation of fuzzy GBML algorithms in which the population is divided into multiple subpopulations, with a single subpopulation assigned to each island. The training data are also divided and distributed over the islands. With N islands (i.e., N CPUs for parallel computation), the speedup is of the order of the square of N, because both the population and the training data are divided into N subsets. One characteristic feature of our parallel distributed model is training data rotation over the islands: each of the N training data subsets is assigned to one of the N islands, and the assigned subsets are rotated over the islands periodically (e.g., every 100 generations). As a result, the environment of each island changes periodically. The focus of this paper is how to update the existing fuzzy rules at each island after the training data rotation. One extreme setting is to completely update the fuzzy rules using the newly assigned training data subset; the other extreme is to keep the existing fuzzy rules unchanged. In this paper, we examine incremental learning, which can be viewed as an intermediate mechanism between these two extreme settings.
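To make the rotation scheme concrete, the following is a minimal sketch (not the authors' implementation) of an island-model loop in which both the population and the training data are divided into N parts and the data subsets are cyclically rotated over the islands every fixed number of generations. The helpers `init_subpopulation` and `evolve_one_generation` are hypothetical placeholders for an actual fuzzy GBML algorithm.

```python
import random


def init_subpopulation(size):
    # Placeholder: each individual would be a fuzzy rule set in a real system.
    return [random.random() for _ in range(size)]


def evolve_one_generation(subpop, data_subset):
    # Placeholder: a real fuzzy GBML step would apply fitness evaluation on
    # data_subset, then selection, crossover, and mutation.
    return subpop


def parallel_distributed_gbml(training_data, n_islands=7, subpop_size=30,
                              generations=1000, rotation_interval=100):
    # Divide both the population and the training data into N parts.
    subpops = [init_subpopulation(subpop_size) for _ in range(n_islands)]
    data_subsets = [training_data[i::n_islands] for i in range(n_islands)]

    for gen in range(1, generations + 1):
        # Each island evolves its subpopulation on its current data subset.
        # In a real implementation this loop runs in parallel, one island per CPU.
        for i in range(n_islands):
            subpops[i] = evolve_one_generation(subpops[i], data_subsets[i])

        # Periodically rotate the training data subsets over the islands,
        # so the environment of each island changes every rotation_interval
        # generations.
        if gen % rotation_interval == 0:
            data_subsets = data_subsets[-1:] + data_subsets[:-1]

    return subpops
```

The point after each rotation is the question studied in the paper: whether island i should rebuild its rules from its new subset, keep them unchanged, or update them incrementally.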
