Abstract

There has been a steady surge of interest across several sub-fields of machine learning in systems that learn in an open-ended manner. This is particularly visible in the fields of language grounding and data stream learning. These systems are designed to evolve as new data arrive, modifying and adjusting learned categories as well as accommodating new ones. Although open-ended learning shares some features with incremental learning, it cannot be characterized as standard incremental learning. This paper presents and discusses the key characteristics of open-ended learning, differentiating it from standard incremental approaches. The main contribution of this paper concerns the evaluation of these algorithms. The performance of learning algorithms is typically assessed using traditional train-test methods, such as holdout and cross-validation. These evaluation methods are not suited to applications where environments and tasks can change, so that the learning system frequently faces new categories. To address this, a well-defined and practical protocol is proposed. The utility of the protocol is demonstrated by evaluating and comparing a set of learning algorithms on the task of open-ended visual category learning.
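
To make the kind of protocol the abstract describes concrete, the sketch below simulates a teacher-learner interaction loop: categories are introduced one at a time, the learner is questioned on instances of all categories taught so far, receives corrective feedback on mistakes, and a new category is introduced only once recent precision clears a threshold; evaluation stops when the learner can no longer reach the threshold. This is a minimal illustration under stated assumptions, not the paper's implementation: the NearestMeanAgent, the teach/classify interface, the scalar instances, and the threshold and window values are all hypothetical choices made for the example.

import random


class NearestMeanAgent:
    """Toy learner (an assumption for this sketch): keeps a running mean
    per category and predicts the category with the nearest mean."""
    def __init__(self):
        self.means = {}  # category -> (running sum, count)

    def teach(self, category, instance):
        s, n = self.means.get(category, (0.0, 0))
        self.means[category] = (s + instance, n + 1)

    def classify(self, instance):
        if not self.means:
            return None
        return min(self.means,
                   key=lambda c: abs(instance - self.means[c][0] / self.means[c][1]))


def evaluate(agent, categories, sample, threshold=0.67, window=10, max_idle=200):
    """Simulated-teacher loop. Introduces categories one by one and only
    moves on when the agent's precision over the last `window` answers
    reaches `threshold`. Returns how many categories were learned before
    the agent stalled (no threshold crossing within `max_idle` questions)."""
    known, results, learned = [], [], 0
    for category in categories:
        agent.teach(category, sample(category))  # teach the new category
        known.append(category)
        for _ in range(max_idle):                # question/correction loop
            true_cat = random.choice(known)
            instance = sample(true_cat)
            correct = agent.classify(instance) == true_cat
            if not correct:
                agent.teach(true_cat, instance)  # corrective feedback
            results = (results + [correct])[-window:]
            if len(results) == window and sum(results) / window >= threshold:
                learned += 1
                break                            # ready for the next category
        else:
            return learned                       # breakpoint: agent stalled
    return learned


if __name__ == "__main__":
    cats = {f"cat{i}": 3.0 * i for i in range(20)}  # well-separated toy categories
    draw = lambda c: random.gauss(cats[c], 1.0)     # noisy instance generator
    print("categories learned:", evaluate(NearestMeanAgent(), cats, draw))

The reported measure, the number of categories learned before the breakpoint, is one plausible summary statistic for such a protocol; curves of precision against the number of known categories would serve the same comparative purpose.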
