Abstract

Chemical patents are an important resource for chemical information. However, few chemical Named Entity Recognition (NER) systems have been evaluated on patent documents, due in part to their structural and linguistic complexity. In this paper, we explore the NER performance of a BiLSTM-CRF model utilising pre-trained word embeddings, character-level word representations, and contextualized ELMo word representations for chemical patents. We compare word embeddings pre-trained on biomedical and chemical patent corpora. We also explore the effect of tokenizers optimized for the chemical domain on NER performance in chemical patents. The results on two patent corpora show that contextualized word representations generated by ELMo substantially improve chemical NER performance relative to the current state of the art. We also show that domain-specific resources, such as word embeddings trained on chemical patents and chemical-specific tokenizers, have a positive impact on NER performance.
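The CRF layer of the BiLSTM-CRF model named in the abstract selects the globally best tag sequence for a sentence rather than tagging each token independently; its inference step is Viterbi decoding. The sketch below shows that step in plain Python. The emission and transition scores here are hypothetical toy values for illustration — in the actual model, emissions come from the BiLSTM outputs and transitions are learned parameters.

```python
def viterbi_decode(emissions, transitions, tags):
    """Return the highest-scoring tag sequence for one sentence.

    emissions:   list of dicts, one per token, mapping tag -> score
    transitions: dict mapping (prev_tag, tag) -> transition score
    tags:        list of all tag names
    """
    # scores[t] = best score of any tag path ending in tag t so far
    scores = {t: emissions[0][t] for t in tags}
    back = []  # back[i][t] = best previous tag for tag t at position i+1
    for emit in emissions[1:]:
        new_scores, pointers = {}, {}
        for t in tags:
            best_prev = max(tags, key=lambda p: scores[p] + transitions[(p, t)])
            new_scores[t] = scores[best_prev] + transitions[(best_prev, t)] + emit[t]
            pointers[t] = best_prev
        scores = new_scores
        back.append(pointers)
    # Trace the best path backwards from the highest-scoring final tag
    best = max(tags, key=lambda t: scores[t])
    path = [best]
    for pointers in reversed(back):
        path.append(pointers[path[-1]])
    return path[::-1]
```

For example, with tags `["B", "I", "O"]` and a strongly negative score on the `O → I` transition, the decoder will avoid producing an `I` (inside-entity) tag right after an `O` (outside) tag even when the local emission score favours it — exactly the kind of label-sequence constraint the CRF layer contributes on top of the BiLSTM.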

Highlights

  • Chemical patents are an important starting point for understanding chemical compound purpose, properties, and novelty

  • New chemical compounds are often initially disclosed in patent documents; it may take 1-3 years for these chemicals to be mentioned in the chemical literature (Senger et al., 2015), suggesting that patents are a valuable but underutilized resource

  • The results show that contextualized word representations substantially improve chemical Named Entity Recognition (NER) performance


Summary

Introduction

Chemical patents are an important starting point for understanding chemical compound purpose, properties, and novelty. Authors of scientific publications strive to make their words as clear and straightforward as possible, whereas patent authors often seek to protect their knowledge from being fully disclosed (Valentinuzzi, 2017). In tension with this is the need to claim broad scope for intellectual property reasons, so patents typically contain more details and are more exhaustive than scientific papers (Lupu et al., 2011). The results show that word embeddings pre-trained on chemical patents outperform embeddings pre-trained on biomedical datasets, and that tokenizers optimized for the chemical domain can improve NER performance on chemical patent corpora.
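The claim about chemical-domain tokenizers can be illustrated with a minimal sketch. A generic tokenizer that splits on whitespace and punctuation fragments systematic chemical names such as "1,2-dichloroethane" at their commas and hyphens, scattering a single entity across several tokens; a chemical-aware tokenizer keeps such names intact. The regex below is a hypothetical simplification for illustration only, not the behaviour of the actual chemistry tokenizers evaluated in the paper:

```python
import re

def generic_tokenize(text):
    """Generic tokenizer: splits on whitespace and all punctuation,
    fragmenting systematic chemical names."""
    return re.findall(r"\w+|[^\w\s]", text)

# Chemical-aware sketch: a token is a run of word characters that may
# contain internal hyphens, commas, apostrophes, and brackets, so
# locants and substituent prefixes stay attached to the name.
CHEM_TOKEN = re.compile(r"\w(?:[\w,'()\[\]-]*\w)?|[^\w\s]")

def chem_tokenize(text):
    return CHEM_TOKEN.findall(text)
```

Here `generic_tokenize("1,2-dichloroethane")` yields five fragments, while `chem_tokenize` returns the name as a single token, which in turn lets an NER model assign it a single entity label instead of reassembling it from pieces.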

Related work
Our empirical methodology
Dataset
Tokenizers
Models
Pre-trained word embeddings
Character-level representation
Implementation details
Main Results
Error Analysis
Conclusions

