Lifelong Machine Learning (LML) denotes a scenario in which a learner faces multiple sequential tasks, each accompanied by its own dataset and learning problem. In this setting, LML techniques focus on exploiting previously acquired knowledge to adapt to new tasks efficiently. In essence, LML is concerned with confronting new tasks while leveraging knowledge gathered from earlier ones, not only to support adaptation to the new tasks but also to enrich the understanding of past ones. This framing highlights one of the major challenges in LML: Knowledge Transfer (KT). This systematic literature review explores state-of-the-art KT techniques within LML and assesses the evaluation metrics and datasets commonly used in the field, thereby keeping the LML research community informed of the latest developments. From an initial pool of 417 articles drawn from four established databases, 30 were deemed highly pertinent for the information extraction phase. The analysis identifies four primary KT techniques: Replay, Regularization, Parameter Isolation, and Hybrid. This study examines the characteristics of these techniques across both neural network (NN) and non-neural network (non-NN) frameworks, highlighting the distinct advantages that have attracted researchers' interest. The majority of the reviewed studies were found to focus on supervised learning within an NN modelling framework, particularly employing Parameter Isolation and Hybrid techniques for KT. The paper concludes by pinpointing research opportunities, including investigating non-NN models for Replay and exploring applications beyond computer vision (CV).