Abstract

Supervised learning models are typically trained on a single dataset, and their performance relies heavily on the size of that dataset, i.e., the amount of data available with ground truth. Learning algorithms try to generalize solely based on the data they are presented with during training. In this work, we propose an inductive transfer learning method that can augment learning models by infusing similar instances from different learning tasks in the Natural Language Processing (NLP) domain. We propose to use instance representations from a source dataset, without inheriting anything else from the source learning model. Representations of the instances of the source and target datasets are learned; relevant source instances are retrieved using a soft-attention mechanism and locality sensitive hashing, and are then infused into the model during training on the target dataset. Therefore, while learning from the training data, we also simultaneously exploit and infuse relevant local instance-level information from an external dataset. Using this approach, we show significant improvements over the baseline for three major news classification datasets. Experimental evaluations also show that the proposed approach reduces dependency on labeled data by a significant margin for comparable performance. With our proposed cross-dataset learning procedure, we show that one can achieve competitive or better performance than learning from a single dataset.
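
The sketch below illustrates the retrieval-and-fusion idea described in the abstract: source instance representations are indexed with locality sensitive hashing, nearest source instances are retrieved for a target instance, and a soft-attention weighted summary of them is combined with the target representation. This is a minimal illustration under our own assumptions (random-projection LSH, dot-product attention, illustrative names and dimensions), not the authors' released implementation.

```python
# Minimal sketch (illustrative, not the paper's code) of LSH retrieval of source
# instance representations followed by soft-attention fusion with a target instance.
import numpy as np

rng = np.random.default_rng(0)
dim, n_source, n_planes, top_k = 128, 10_000, 8, 5

source_reps = rng.normal(size=(n_source, dim))   # learned source instance representations
planes = rng.normal(size=(n_planes, dim))        # random hyperplanes for LSH

def lsh_bucket(x):
    """Hash a vector to a bucket id via the signs of its projections (random-projection LSH)."""
    return tuple((x @ planes.T > 0).astype(int))

# Build the LSH index: bucket id -> indices of source instances in that bucket.
index = {}
for i, s in enumerate(source_reps):
    index.setdefault(lsh_bucket(s), []).append(i)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def retrieve_and_fuse(target_rep):
    """Retrieve candidate source instances from the target's bucket and fuse them
    with soft-attention weights; fall back to the target alone if the bucket is empty."""
    candidates = index.get(lsh_bucket(target_rep), [])
    if not candidates:
        return target_rep
    cand_reps = source_reps[candidates]
    scores = cand_reps @ target_rep                      # dot-product relevance scores
    top = np.argsort(scores)[-top_k:]
    attn = softmax(scores[top])                          # soft-attention over retrieved instances
    source_summary = attn @ cand_reps[top]
    return np.concatenate([target_rep, source_summary])  # augmented target representation

augmented = retrieve_and_fuse(rng.normal(size=dim))
print(augmented.shape)  # (256,) when the target's bucket contains source instances
```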

Highlights

  • A fundamental issue with the performance of supervised learning techniques is the requirement of an enormous amount of labeled data, which in some scenarios may be expensive or impossible to acquire

  • We propose a deep transfer learning method that can enhance the performance of learning models by incorporating information from a secondary dataset belonging to a similar domain

  • We present our approach in an inductive transfer learning (Pan and Yang, 2010) framework: given a labeled source dataset (domain D_S, task T_S) and a labeled target dataset (domain D_T, task T_T), the aim is to boost the performance of the target predictive function f_T(·) using the knowledge available in D_S and T_S, given T_S = T_T


Summary

Introduction

A fundamental issue with the performance of supervised learning techniques (such as classification) is the requirement of an enormous amount of labeled data, which in some scenarios may be expensive or impossible to acquire. We present our approach in an inductive transfer learning (Pan and Yang, 2010) framework: given a labeled source dataset (domain D_S, task T_S) and a labeled target dataset (domain D_T, task T_T), the aim is to boost the performance of the target predictive function f_T(·) using the knowledge available in D_S and T_S, given T_S = T_T. We utilize the instance-level information in the source dataset and make the newly learned target instance representations similar to the retrieved source instances. This allows the learning algorithm to generalize better across the source and target datasets.
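
One way to make the target representations similar to the retrieved source instances, as described above, is an auxiliary similarity term added to the target-task loss. The PyTorch-style sketch below is our own illustrative rendering of that idea; the encoder, the retrieval function, and the weight `lam` are assumptions rather than the paper's exact formulation.

```python
# Sketch of joint training on the target dataset with an auxiliary term that pulls
# each target instance representation toward the attention-weighted summary of its
# retrieved source instances (illustrative, not the authors' exact objective).
import torch
import torch.nn.functional as F

def training_step(encoder, classifier, retrieve_source_summary, x, y, lam=0.1):
    """One target-dataset training step with the auxiliary source-similarity term."""
    target_rep = encoder(x)                                   # representations of the target instances
    logits = classifier(target_rep)
    cls_loss = F.cross_entropy(logits, y)                     # standard supervised loss on the target task

    with torch.no_grad():
        source_summary = retrieve_source_summary(target_rep)  # attention-weighted retrieved source reps

    sim_loss = 1.0 - F.cosine_similarity(target_rep, source_summary, dim=-1).mean()
    return cls_loss + lam * sim_loss                          # joint objective used for backpropagation
```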
