Abstract

Previous work on crosslingual Relation and Event Extraction (REE) suffers from the monolingual bias issue due to the training of models on only the source language data. An approach to overcome this issue is to use unlabeled data in the target language to aid the alignment of crosslingual representations, i.e., via fooling a language discriminator. However, as this approach does not condition on class information, a target language example of a class could be incorrectly aligned to a source language example of a different class. To address this issue, we propose a novel crosslingual alignment method that leverages class information of REE tasks for representation learning. In particular, we propose to learn two versions of representation vectors for each class in an REE task based on either source or target language examples. Representation vectors for corresponding classes will then be aligned to achieve class-aware alignment for crosslingual representations. In addition, we propose to further align representation vectors for language-universal word categories (i.e., parts of speech and dependency relations). As such, a novel filtering mechanism is presented to facilitate the learning of word category representations from contextualized representations on input texts based on adversarial learning. We conduct extensive crosslingual experiments with English, Chinese, and Arabic over REE tasks. The results demonstrate the benefits of the proposed method that significantly advances the state-of-the-art performance in these settings.
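To make the class-aware alignment idea above concrete, here is a minimal sketch, assuming a PyTorch setup in which each example already has a prediction representation vector: class prototypes are mean-pooled per language and corresponding classes are pulled together with a squared-distance penalty. The pooling, the specific loss, and the function names are illustrative assumptions rather than the paper's exact formulation; in particular, since the target-language data is unlabeled, the target "labels" here would have to come from model predictions.

```python
import torch


def class_prototypes(reps, labels, num_classes):
    """Mean representation vector per class (zero vector for absent classes)."""
    dim = reps.size(-1)
    protos = []
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos.append(reps[mask].mean(dim=0))
        else:
            protos.append(reps.new_zeros(dim))
    return torch.stack(protos)  # [num_classes, dim]


def class_alignment_loss(src_reps, src_labels, tgt_reps, tgt_labels, num_classes):
    """Pull source- and target-language prototypes of the SAME class together."""
    p_src = class_prototypes(src_reps, src_labels, num_classes)
    p_tgt = class_prototypes(tgt_reps, tgt_labels, num_classes)
    # Only align classes that appear in both language batches.
    shared = [c for c in range(num_classes)
              if (src_labels == c).any() and (tgt_labels == c).any()]
    if not shared:
        return src_reps.new_zeros(())
    return (p_src[shared] - p_tgt[shared]).pow(2).sum(dim=-1).mean()
```

In training, such an alignment term would be added to the supervised task loss computed on the source language.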

Highlights

  • Relation and Event Extraction (REE) are important tasks of Information Extraction (IE), whose goal is to extract structured information from unstructured text (Walker et al., 2006). The middle sub-figure in Figure 2 demonstrates the class misalignment of representation vectors in crosslingual REE; to this end, we propose a crosslingual alignment method that explicitly conditions on class information.

  • We study cross-lingual transfer learning for three REE tasks as defined in the ACE 2005 dataset (Walker et al., 2006), i.e., Relation Extraction (RE), Event Detection (ED), and Event Argument Extraction (EAE)

  • An overall word representation vector v_k for w_k is formed by the concatenation v_k = [z_k; z_k^{pos}; z_k^{dep}], where z_k^{pos} and z_k^{dep} are the embeddings of the universal part of speech and the dependency relation of w_k (a sketch of this concatenation is given after this list)
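As referenced in the last highlight, the sketch below builds v_k from the contextualized vector z_k plus universal POS and dependency-relation embeddings, assuming PyTorch; the embedding sizes and tag vocabulary counts are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn


class WordRepresentation(nn.Module):
    """v_k = [z_k; z_k^{pos}; z_k^{dep}] from the highlight above."""

    def __init__(self, hidden_dim=768, num_pos=18, num_dep=40, tag_dim=50):
        super().__init__()
        # 17 universal POS tags plus padding; ~37 universal dependency
        # relations rounded up; tag_dim is an arbitrary illustrative size.
        self.pos_emb = nn.Embedding(num_pos, tag_dim)
        self.dep_emb = nn.Embedding(num_dep, tag_dim)
        self.output_dim = hidden_dim + 2 * tag_dim

    def forward(self, z, pos_ids, dep_ids):
        # z: [batch, seq_len, hidden_dim] contextualized word vectors
        # pos_ids, dep_ids: [batch, seq_len] integer tag ids per word
        return torch.cat([z, self.pos_emb(pos_ids), self.dep_emb(dep_ids)], dim=-1)
```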


Summary

Problem Statement

We study cross-lingual transfer learning for three REE tasks as defined in the ACE 2005 dataset (Walker et al., 2006), i.e., Relation Extraction (RE), Event Detection (ED), and Event Argument Extraction (EAE). Each word w_k is represented by the average of its word-piece representations returned by mBERT. For ED, the final representation vector for trigger prediction, r^ED_{src,k}, is directly formed from the word representation z_k (i.e., r^ED_{src,k} = z_k). Afterward, this prediction representation is fed into a feed-forward network FFN_ED to obtain a score vector that exhibits the likelihoods for w_k to receive the possible BIO tags for the predefined event types: s^ED_{src,k} = FFN_ED(r^ED_{src,k}) for all 1 ≤ k ≤ n.

For RE and EAE, the input x_src additionally consists of the given trigger word and entity mentions, while y_src represents the golden relation type or argument role for the input. In addition, an unlabeled dataset D_tgt = {(x_tgt)} (|D_tgt| = N_tgt) in the target language is available, where x_tgt consists of similar information as x_src for the corresponding task. GATE (Ahmad et al., 2021) is the current SOTA model for crosslingual RE and EAE; given an input sentence w in x_src, it uses the same mBERT encoding step as BERTCRF to obtain the contextualized representations.
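The following is a minimal sketch of the trigger-prediction head described above, assuming PyTorch; the single hidden layer and its size are assumptions about FFN_ED, and in the real model z_k would come from averaging mBERT word-piece vectors.

```python
import torch.nn as nn


class TriggerTagger(nn.Module):
    """Score BIO tags over predefined event types for each word, as in
    s^ED_{src,k} = FFN_ED(r^ED_{src,k}) with r^ED_{src,k} = z_k."""

    def __init__(self, rep_dim, num_event_types, hidden_dim=300):
        super().__init__()
        # B-/I- per event type plus a single O tag.
        num_tags = 2 * num_event_types + 1
        self.ffn_ed = nn.Sequential(
            nn.Linear(rep_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_tags),
        )

    def forward(self, z):
        # z: [batch, seq_len, rep_dim]; returns one score vector per word w_k.
        return self.ffn_ed(z)
```

At inference time, the predicted BIO tag for each word is simply the argmax of its score vector.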

Baseline Methods
Using Unlabeled Target Language Data
Word Category-based Alignment
Findings
Experiments
