Abstract

Transfer learning has become a promising area of machine learning owing to its wide range of applications, and its effectiveness has spawned a variety of methodologies and practices. Transfer learning refers to improving the performance of a target learner on a target domain by transferring knowledge from different yet related source domains. In other words, data from additional domains or tasks can be used to train a model with better generalization. With transfer learning, the dependence on large amounts of target-domain data for constructing target learners can be reduced. Recently, the fields of computer vision (CV) and natural language processing (NLP) have embraced transfer learning, which has significantly advanced the state of the art on a wide range of CV and NLP tasks. A typical approach for applying transfer learning to deep neural networks is to fine-tune a model pretrained on the source domain with data from the target domain. This paper proposes a novel framework based on this fine-tuning approach, called multilevel transfer learning (mLTL). Under this framework, we derive key findings and principles regarding the training sequence of related domain datasets and demonstrate their effectiveness on facial emotion recognition and named entity recognition tasks. According to the experimental results, deep neural network models trained with mLTL outperformed the original models on the target tasks.
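To make the fine-tuning approach underlying the framework concrete, the sketch below shows the general idea of fine-tuning a pretrained model sequentially across an ordered list of related domain datasets before the target domain. This is only an illustrative sketch, not the authors' exact mLTL procedure; the model choice, dataset loaders, class count, and hyperparameters are placeholder assumptions.

```python
# Illustrative sketch (not the authors' exact method): fine-tune a pretrained
# backbone over an ordered sequence of related domains, ending on the target
# domain. Loader names and hyperparameters below are hypothetical.
import torch
import torch.nn as nn
from torchvision import models

def fine_tune(model, loader, epochs=3, lr=1e-4, device="cpu"):
    """Fine-tune all parameters of `model` on one (intermediate or target) domain."""
    model.to(device).train()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model

# Start from an ImageNet-pretrained backbone (the source-domain knowledge).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 7)  # e.g., 7 facial-emotion classes

# Hypothetical ordered sequence of data loaders, from a related intermediate
# domain to the target domain; choosing this ordering is the question mLTL studies.
# for loader in [related_domain_loader, target_domain_loader]:
#     model = fine_tune(model, loader)
```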
