Abstract
Recently, BERT has achieved state-of-the-art performance in temporal relation extraction from clinical Electronic Medical Records text. However, the current approach is inefficient, as it requires multiple passes through each input sequence. We extend a recently proposed one-pass model for relation classification to a one-pass model for relation extraction. We augment this framework by introducing global embeddings to help with long-distance relation inference, and by multi-task learning to increase model performance and generalizability. Our proposed model produces results on par with the state of the art in temporal relation extraction on the THYME corpus and is much “greener” in computational cost.
Highlights
Introduction
The analysis of many medical phenomena (such as medications, treatment regimens, and outcomes) heavily depends on temporal relation extraction from the clinical free text embedded in Electronic Medical Records (EMRs). A clinical event can be linked to the document creation time (DCT) as a Document Time Relation (DocTimeRel), with possible values of BEFORE, AFTER, OVERLAP, etc. However, the current approach has an input representation that is highly wasteful: the same input sequence must be re-encoded for every candidate entity pair. Inspired by recent work in Green AI (Schwartz et al., 2019; Strubell et al., 2019) and one-pass encodings for multiple relation extraction (Wang et al., 2019), we propose a one-pass encoding mechanism for the CONTAINS relation extraction task, which can significantly increase efficiency and scalability.
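As a rough illustration of that efficiency claim (our own sketch, not from the paper), compare how many encoder forward passes each strategy needs for a sequence containing n annotated events and time expressions:

```python
# Illustrative only: counts encoder forward passes per input sequence.
from itertools import combinations

def encoder_passes_pairwise(n_entities: int) -> int:
    # A pair-by-pair approach re-encodes the sequence once per candidate pair.
    return len(list(combinations(range(n_entities), 2)))

def encoder_passes_one_pass(n_entities: int) -> int:
    # A one-pass approach encodes the sequence a single time,
    # then classifies every candidate pair from that one encoding.
    return 1

print(encoder_passes_pairwise(8))  # 28 encoder passes for 8 entities
print(encoder_passes_one_pass(8))  # 1 encoder pass
```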
Summary
Apache cTAKES (Savova et al., 2010) (http://ctakes.apache.org) is used to segment and tokenize the THYME corpus and to generate instances. For the CONTAINS task, we create relation candidates from all pairs of entities within an input sequence. Each candidate is represented by the concatenation of three embeddings, e_i, e_j, and G, as [G:e_i:e_j], where G is an average-pooled embedding over the entire sequence and is distinct from the embedding of the [CLS] token. This concatenated representation is fed to a linear classifier to predict the relation label:

P(r_ij | x) = softmax([G:e_i:e_j] W_L + b)

where W_L ∈ R^(3d_z × l_r), d_z is the dimension of the BERT embedding, l_r = 3 is the number of CONTAINS labels, b is the bias, and x is the input sequence.

For the DocTimeRel (dtr) task, we feed each entity's embedding e_i, together with the global pooling G, to another linear classifier that predicts the entity's "temporal status" among five values: TIMEX if the entity is a time expression, or the dtr type (BEFORE, AFTER, etc.) if the entity is an event.

The two tasks are trained jointly with a combined loss, where r̂_ij is the predicted relation type, d̂tr_i and d̂tr_j are the predicted temporal statuses of E_i and E_j, r_ij, dtr_i, and dtr_j are the corresponding gold labels, and α is a weight that balances the CONTAINS loss against the dtr loss.
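To make the design above concrete, here is a minimal sketch of the one-pass multi-task head, assuming a HuggingFace-style BERT encoder, mean pooling over the last hidden layer for G, and a cross-entropy combination weighted by α. The class and argument names (OnePassTemporalModel, entity_idx, pairs, alpha) are ours, and the exact loss combination is our reading of the description, not the authors' released code.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class OnePassTemporalModel(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased",
                 n_rel=3,    # CONTAINS label set (l_r = 3)
                 n_dtr=5,    # TIMEX + DocTimeRel values ("five temporal statuses")
                 alpha=0.5): # weight balancing the CONTAINS and dtr losses
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        d_z = self.encoder.config.hidden_size
        self.rel_head = nn.Linear(3 * d_z, n_rel)   # over [G : e_i : e_j]
        self.dtr_head = nn.Linear(2 * d_z, n_dtr)   # over [G : e_i]
        self.alpha = alpha
        self.ce = nn.CrossEntropyLoss()

    def forward(self, input_ids, attention_mask, entity_idx, pairs,
                rel_gold=None, dtr_gold=None):
        # One encoder pass over the whole sequence (batch size 1 for clarity).
        H = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state[0]
        # Global embedding G: average pooling over real tokens (not the [CLS] vector).
        m = attention_mask[0].bool()
        G = H[m].mean(dim=0)
        # Entity embeddings e_i taken at the entities' token positions.
        E = H[entity_idx]                                   # (n_entities, d_z)
        # CONTAINS logits for every candidate pair [G : e_i : e_j].
        i, j = pairs[:, 0], pairs[:, 1]
        rel_logits = self.rel_head(torch.cat(
            [G.expand(len(pairs), -1), E[i], E[j]], dim=-1))
        # DocTimeRel / temporal-status logits for every entity [G : e_i].
        dtr_logits = self.dtr_head(torch.cat(
            [G.expand(len(E), -1), E], dim=-1))
        if rel_gold is None:
            return rel_logits, dtr_logits
        # Multi-task loss: alpha trades off the CONTAINS and dtr objectives.
        loss = (self.alpha * self.ce(rel_logits, rel_gold)
                + (1 - self.alpha) * self.ce(dtr_logits, dtr_gold))
        return loss, rel_logits, dtr_logits
```

Because the encoder runs once per sequence, adding more candidate pairs only adds inexpensive linear-layer work, which is where the efficiency gain over pair-by-pair re-encoding comes from.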