Abstract
This paper adopts the idea of discretising continuous attributes (Fayyad and Irani 1993) and applies it to lazy learning algorithms (Aha 1990; Aha, Kibler and Albert 1991). This approach converts continuous attributes into nominal attributes at the outset. We investigate the effects of this approach on the performance of lazy learning algorithms and examine it empirically, using both real-world and artificial data, to characterise the benefits of discretisation in lazy learning. Specifically, we show that discretisation achieves an effect of noise reduction and increases lazy learning algorithms' tolerance of irrelevant continuous attributes. The proposed approach constrains the representation space of lazy learning algorithms to hyper-rectangular regions that are orthogonal to the attribute axes. The generally better results obtained with this more restricted representation language indicate that a more powerful representation language is not always the better choice for a learning algorithm, as it can lead to a loss of accuracy.
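To make the approach concrete, the sketch below illustrates the pipeline the abstract describes: discretise each continuous attribute up front, then run a lazy (nearest-neighbour) learner on the resulting nominal codes. It is a minimal illustration under stated assumptions, not the paper's exact procedure; the equal-width binning, the overlap (Hamming) distance, and the 1-NN majority vote are simplifying stand-ins for the entropy/MDL discretisation of Fayyad and Irani (1993) and the specific lazy learners studied here.

```python
# Sketch only: equal-width binning is an assumption standing in for
# Fayyad-Irani entropy/MDL discretisation; the overlap distance and
# k-NN rule are illustrative, not the paper's exact lazy learner.
import numpy as np

def discretise_fit(X, n_bins=5):
    """Compute equal-width interior cut points for each continuous attribute."""
    edges = []
    for col in X.T:
        lo, hi = col.min(), col.max()
        edges.append(np.linspace(lo, hi, n_bins + 1)[1:-1])
    return edges

def discretise_transform(X, edges):
    """Map each continuous value to the index of its bin (a nominal code)."""
    return np.column_stack([np.digitize(col, e) for col, e in zip(X.T, edges)])

def knn_predict(Xd_train, y_train, Xd_test, k=1):
    """Lazy learner: overlap (Hamming) distance on the nominal codes, majority vote."""
    preds = []
    for q in Xd_test:
        d = (Xd_train != q).sum(axis=1)          # count mismatching attributes
        nearest = np.argsort(d)[:k]
        vals, counts = np.unique(y_train[nearest], return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)

# Usage on toy data: only the first attribute is relevant to the class.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 3)); y_train = (X_train[:, 0] > 0).astype(int)
X_test = rng.normal(size=(20, 3));   y_test = (X_test[:, 0] > 0).astype(int)
edges = discretise_fit(X_train)
y_hat = knn_predict(discretise_transform(X_train, edges), y_train,
                    discretise_transform(X_test, edges))
print("accuracy:", (y_hat == y_test).mean())
```

Because every test instance is compared only through its bin memberships, the induced decision regions are unions of axis-orthogonal hyper-rectangles, which is the restricted representation the abstract refers to.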