Abstract
The design of a distributed learning system (DLS) that combines the features of instance-space and hypothesis-space methods is described. The algorithm decomposes a set of training examples into subsets, applies an inductive learning program to each subset, and then synthesizes the results using a genetic algorithm. This parallel, distributed approach is shown to be more efficient, since each inductive learning program works on only a subset of the data; and because the genetic algorithm searches the hypothesis space globally, the approach also yields a more accurate concept description. The implementation of DLS in Common LISP is discussed, and its distributed approach is compared with the C4.5 and PLS1 algorithms.
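The pipeline the abstract describes (partition the training data, induce a hypothesis on each subset, then combine the hypotheses with a genetic search) can be sketched as follows. This is an illustrative toy in Python, not the paper's Common LISP implementation: the per-subset learner here is a simple threshold stump, and the genetic algorithm evolves weights for a weighted-vote combination; all function names and parameters are assumptions.

```python
# Hypothetical sketch of the DLS pipeline from the abstract: decompose the
# data, induce a hypothesis per subset, and synthesize via a genetic search.
import random

random.seed(0)

# Synthetic data: the true concept is "label = 1 iff x > 0.5".
data = [(random.random(),) for _ in range(300)]
labels = [1 if x[0] > 0.5 else 0 for x in data]

def partition(xs, ys, k):
    """Instance-space decomposition: split the training set into k subsets."""
    idx = list(range(len(xs)))
    random.shuffle(idx)
    return [([xs[i] for i in idx[j::k]], [ys[i] for i in idx[j::k]])
            for j in range(k)]

def learn_stump(xs, ys):
    """Stand-in inductive learner: pick the best one-feature threshold."""
    best_t, best_acc = 0.5, 0.0
    for t in [i / 20 for i in range(21)]:
        acc = sum((x[0] > t) == bool(y) for x, y in zip(xs, ys)) / len(xs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def predict(stumps, weights, x):
    """Weighted majority vote over the per-subset hypotheses."""
    vote = sum(w for t, w in zip(stumps, weights) if x[0] > t)
    return 1 if vote > sum(weights) / 2 else 0

def fitness(weights, stumps, xs, ys):
    """Fitness = accuracy of the combined hypothesis on the full data set."""
    return sum(predict(stumps, weights, x) == y
               for x, y in zip(xs, ys)) / len(xs)

def ga_combine(stumps, xs, ys, pop=20, gens=30):
    """Hypothesis-space synthesis: evolve combination weights globally."""
    popn = [[random.random() for _ in stumps] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda w: -fitness(w, stumps, xs, ys))
        survivors = popn[: pop // 2]          # elitist selection
        children = []
        while len(children) < pop - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(stumps))   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                # point mutation
                child[random.randrange(len(child))] = random.random()
            children.append(child)
        popn = survivors + children
    return max(popn, key=lambda w: fitness(w, stumps, xs, ys))

subsets = partition(data, labels, k=4)
stumps = [learn_stump(xs, ys) for xs, ys in subsets]
best_w = ga_combine(stumps, data, labels)
print(round(fitness(best_w, stumps, data, labels), 2))
```

The division of labor mirrors the abstract's claim: each learner sees only one subset (cheaper per-learner induction), while the genetic algorithm evaluates candidate combinations against the whole training set, searching the hypothesis space globally.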