Abstract

Due to advancements in data collection, storage, and processing techniques, machine learning has become a thriving and dominant paradigm. However, one of its main shortcomings is that the classical machine learning paradigm acts in isolation, without utilizing the knowledge gained from learning related tasks in the past. To circumvent this, the concept of Lifelong Machine Learning (LML) has been proposed, with the goal of mimicking how humans learn and acquire cognition. Research on human learning has revealed that the brain connects previously learned information to new information while learning from a single example or a small number of examples. Similarly, an LML system continually learns by storing and applying acquired knowledge. Starting with an analysis of how the human brain learns, this paper shows that the LML framework shares a functional structure with the brain when it comes to solving new problems using previously learned information. It then describes the LML framework, emphasizing its similarities to human brain learning, and presents citation graph generation and scientometric analysis algorithms for the LML literature, including information about the datasets and evaluation metrics that have been used in the empirical evaluation of LML systems. Finally, it presents outstanding issues and possible future research directions in the field of LML.

This article is categorized under: Technologies > Machine Learning
