Abstract

Background: Transfer learning aims at enhancing machine learning performance on a problem by reusing labeled data originally designed for a related, but distinct, problem. In particular, domain adaptation consists, for a specific task, in reusing training data developed for the same task but for a distinct domain. This is particularly relevant to the applications of deep learning in Natural Language Processing, because these usually require large annotated corpora that may not exist for the targeted domain, but do exist for side domains.

Results: In this paper, we experiment with transfer learning for the task of relation extraction from biomedical texts, using the TreeLSTM model. We empirically show the impact of TreeLSTM alone and with domain adaptation, obtaining better performance than the state of the art on two biomedical relation extraction tasks and equal performance on two others, for which little annotated data is available. Furthermore, we propose an analysis of the role that syntactic features may play in transfer learning for relation extraction.

Conclusion: Given the difficulty of manually annotating corpora in the biomedical domain, the proposed transfer learning method offers a promising alternative for achieving good relation extraction performance in domains with scarce resources. Our analysis also illustrates the importance of syntax in transfer learning, underlining the value, in this domain, of approaches that embed syntactic features.

Highlights

  • A bottleneck for training deep learning-based architectures on text is the availability of large enough annotated training corpora

  • We first introduce the embedding input layer, which is common to both approaches (i.e., the MultiChannel CNN (MCCNN) and the TreeLSTM); we detail how each approach composes sequences of embeddings to compute a single vectorial sentence representation; and we present the scoring layer, which is also common to both approaches (a minimal sketch of the TreeLSTM composition follows this list)

  • We present an analysis of the role of syntactic features in this transfer learning setting
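
As a concrete illustration of the composition step, here is a minimal sketch of a Child-Sum TreeLSTM node update (Tai et al., 2015) in PyTorch. The class and parameter names (ChildSumTreeLSTMCell, input_dim, hidden_dim) are illustrative assumptions rather than the paper's actual code; in this setting, each token's embedding enters at its node in the parse tree, and the root's hidden state is the sentence representation passed to the scoring layer.

```python
import torch
import torch.nn as nn

class ChildSumTreeLSTMCell(nn.Module):
    """One node update of a Child-Sum TreeLSTM (Tai et al., 2015).
    Minimal sketch; gating variants, dropout, and batching used in
    the paper's actual implementation may differ."""

    def __init__(self, input_dim: int, hidden_dim: int):
        super().__init__()
        # Input projections for the i, o, u, and f gates, computed jointly.
        self.W = nn.Linear(input_dim, 4 * hidden_dim)
        # Recurrent projections: i, o, u use the summed children states.
        self.U_iou = nn.Linear(hidden_dim, 3 * hidden_dim, bias=False)
        # The forget gate gets one recurrent projection per child.
        self.U_f = nn.Linear(hidden_dim, hidden_dim, bias=False)

    def forward(self, x, child_h, child_c):
        # x: (input_dim,) embedding of the current node's token
        # child_h, child_c: (n_children, hidden_dim) children states
        h_tilde = child_h.sum(dim=0)               # sum of children hidden states
        wi, wo, wu, wf = self.W(x).chunk(4)
        ri, ro, ru = self.U_iou(h_tilde).chunk(3)
        i = torch.sigmoid(wi + ri)                 # input gate
        o = torch.sigmoid(wo + ro)                 # output gate
        u = torch.tanh(wu + ru)                    # candidate update
        f = torch.sigmoid(wf + self.U_f(child_h))  # one forget gate per child
        c = i * u + (f * child_c).sum(dim=0)       # new cell state
        h = o * torch.tanh(c)                      # new hidden state
        return h, c
```

A scoring layer in this sense can then be as simple as a linear map from the root's hidden state to logits over the relation labels.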

Summary

Introduction

A bottleneck for training deep learning-based architectures on text is the availability of large enough annotated training corpora. This is especially an issue in highly specialized domains such as biomedicine. Domain adaptation consists, for a specific task, in reusing training data developed for the same task but for a distinct domain. This is relevant to the applications of deep learning in Natural Language Processing, because these usually require large annotated corpora that may not exist for the targeted domain, but do exist for side domains.
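
A minimal sketch of the sequential-transfer recipe described above, assuming a PyTorch model and two corpora: pre-train on the large source-domain corpus, then fine-tune the same parameters on the small target-domain corpus. The function name, loader arguments, and the two-phase Adam schedule are illustrative assumptions, not the paper's exact training protocol.

```python
import torch
import torch.nn as nn

def domain_adapt(model: nn.Module, source_loader, target_loader,
                 source_epochs: int = 10, target_epochs: int = 5,
                 lr: float = 1e-3) -> nn.Module:
    """Sequential transfer: train on the source domain, then
    fine-tune every parameter on the scarce target domain."""
    loss_fn = nn.CrossEntropyLoss()
    for loader, epochs in ((source_loader, source_epochs),
                           (target_loader, target_epochs)):
        # Fresh optimizer state for each phase.
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            for sentences, labels in loader:
                opt.zero_grad()
                loss = loss_fn(model(sentences), labels)
                loss.backward()
                opt.step()
    return model
```

No target-specific layers are added here: the same network is simply re-trained, so whatever was learned on the source domain (including syntactic composition) remains available to the target task.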
