Abstract

We analyze generalization and learning in XCS with gradient descent. First, we show that adding gradient descent to XCS may slow down learning because it indirectly decreases the learning rate. However, in contrast to what has been suggested elsewhere, gradient descent has no effect on the generalization achieved. We also show that when gradient descent is combined with roulette wheel selection, which is known to be sensitive to small learning rates, learning can slow down dramatically. Previous work reported no difference in the performance of XCS with gradient descent whether roulette wheel selection or tournament selection was used. In contrast, we suggest that gradient descent should always be combined with tournament selection, which is insensitive to the value of the learning rate. When gradient descent is combined with tournament selection, our results show that (i) the slowdown in learning is limited and (ii) the generalization capabilities of XCS are unaffected.
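To make the learning-rate argument concrete, here is a minimal sketch (not the paper's code) of the Widrow-Hoff prediction update used in XCS, p ← p + β(P − p), alongside the gradient-based variant, p ← p + β(F/ΣF)(P − p). Since the gradient factor F/ΣF (a classifier's fitness over the summed fitness of its action set) is at most 1, it acts as an indirect reduction of the effective learning rate β. All names and the specific fitness values below are illustrative assumptions.

```python
def update_prediction(p, P, beta, fitness=None, fitness_sum=None):
    """One Widrow-Hoff update of a classifier's prediction p toward target P.

    If fitness and fitness_sum are given, the gradient term F/sum_F scales
    the step, so the effective learning rate becomes beta * F/sum_F <= beta.
    This is why the gradient variant can learn more slowly.
    """
    grad = 1.0 if fitness is None else fitness / fitness_sum
    return p + beta * grad * (P - p)

# Illustration: same target and beta, but the gradient factor shrinks each step.
beta, target = 0.2, 1000.0
p_plain, p_grad = 0.0, 0.0
for _ in range(50):
    p_plain = update_prediction(p_plain, target, beta)
    # Hypothetical values: this classifier holds 25% of the action-set
    # fitness, so its effective learning rate is beta / 4.
    p_grad = update_prediction(p_grad, target, beta,
                               fitness=0.25, fitness_sum=1.0)

print(f"plain XCS update:   {p_plain:.1f}")  # ~1000.0, essentially converged
print(f"gradient update:    {p_grad:.1f}")   # ~923, still lagging behind
```

This also suggests why the choice of selection scheme matters: roulette wheel selection allocates reproduction probability in proportion to fitness magnitudes, which remain small and poorly differentiated when the effective learning rate is low, whereas tournament selection depends only on the fitness ordering within the tournament and is therefore unaffected by a uniformly reduced fitness scale.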
