Abstract

Neural language representation models such as BERT have recently shown state-of-the-art performance on downstream NLP tasks, and the biomedical domain adaptation of BERT (Bio-BERT) has shown similar gains on biomedical text mining tasks. However, due to their large model size and the resulting computational cost, practical application of models such as BERT is challenging, making smaller models with comparable performance desirable for real-world applications. Recently, a new transformer-based language representation model named ELECTRA was introduced; it makes efficient use of training data in a generative-discriminative neural model setting and shows performance gains over BERT, gains that are especially impressive for smaller models. Here, we introduce two small ELECTRA-based models named Bio-ELECTRA and Bio-ELECTRA++ that are eight times smaller than BERT Base and Bio-BERT and achieve comparable or better performance on biomedical question answering, yes/no question answer classification, question answer candidate ranking, and relation extraction tasks. Bio-ELECTRA is pre-trained from scratch on PubMed abstracts using a consumer-grade GPU with only 8 GB of memory. Bio-ELECTRA++ is a further pre-trained version of Bio-ELECTRA, trained on a corpus of open-access full papers from PubMed Central. While the larger BERT Base model outperforms Bio-ELECTRA++, Bio-ELECTRA, and ELECTRA-Small++ on biomedical named entity recognition, with hyperparameter tuning Bio-ELECTRA++ achieves results comparable to BERT.
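
The generative-discriminative setting mentioned above is ELECTRA's replaced token detection objective: a small generator proposes replacements for masked-out tokens, and the discriminator learns to label every token of the input as original or replaced. The minimal sketch below shows the discriminator side of that objective using the Hugging Face transformers library; the public google/electra-small-discriminator checkpoint is used only as a stand-in, since the Bio-ELECTRA weights are assumed to be available locally rather than on the model hub.

    # Hedged sketch: per-token "replaced vs. original" scoring with an
    # ELECTRA-style discriminator. The checkpoint name is a public stand-in,
    # not the Bio-ELECTRA weights described in the paper.
    import torch
    from transformers import ElectraForPreTraining, ElectraTokenizerFast

    name = "google/electra-small-discriminator"
    tokenizer = ElectraTokenizerFast.from_pretrained(name)
    discriminator = ElectraForPreTraining.from_pretrained(name)

    # Example sentence; during pre-training some tokens would have been
    # swapped in by the generator.
    text = "The patient showed elevated glucose levels after treatment."
    inputs = tokenizer(text, return_tensors="pt")

    with torch.no_grad():
        logits = discriminator(**inputs).logits  # one logit per token

    probs = torch.sigmoid(logits)[0]
    for token, p in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), probs):
        print(f"{token:>12s}  replaced-probability = {p.item():.2f}")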

Highlights

  • Transformer-based language representation learning methods such as Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al, 2019) are becoming increasingly popular for downstream biomedical NLP tasks due to their performance advantages (Lee et al, 2019)

  • We introduce two small and efficient ELECTRA-based domain-specific language representation models, trained with a domain-specific vocabulary on PubMed abstracts and on PubMed Central (PMC) open-access full papers, respectively, achieving comparable or better results on several biomedical text mining tasks than the BERT Base model, which has 8 times more parameters, with a corresponding 8-fold reduction in inference time

  • Bio-ELECTRA++ named entity recognition (NER) performance can be significantly improved by hyperparameter tuning, achieving performance comparable to BERT (see the sketch below)
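
As referenced in the last highlight, the sketch below illustrates the kind of hyperparameter sweep that can close the NER gap for a small model. It is a sketch under stated assumptions only: the checkpoint path, label count, search grid, and dummy data are illustrative, and the paper's actual grid, datasets, and entity-level F1 scorer are not reproduced here.

    # Hedged sketch: grid search over learning rate and epoch count for NER
    # fine-tuning of a small ELECTRA model. All names are illustrative;
    # "./bio-electra-plus-plus" is an assumed local path to pre-trained weights.
    import itertools
    import torch
    from torch.utils.data import DataLoader, TensorDataset
    from transformers import ElectraForTokenClassification

    CHECKPOINT = "./bio-electra-plus-plus"  # assumed local pre-trained weights
    NUM_LABELS = 3                          # e.g. a B/I/O tagging scheme

    def dummy_split(n, seq_len=32):
        # Stand-in for a real tokenized NER split (BC5CDR, NCBI-disease, ...).
        return TensorDataset(torch.randint(5, 1000, (n, seq_len)),
                             torch.randint(0, NUM_LABELS, (n, seq_len)))

    def fine_tune(lr, epochs, train_ds, dev_ds):
        model = ElectraForTokenClassification.from_pretrained(CHECKPOINT, num_labels=NUM_LABELS)
        optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
        model.train()
        for _ in range(epochs):
            for input_ids, labels in DataLoader(train_ds, batch_size=16, shuffle=True):
                loss = model(input_ids=input_ids, labels=labels).loss
                loss.backward()
                optimizer.step()
                optimizer.zero_grad()
        # Token-level accuracy as a simple stand-in for entity-level F1.
        model.eval()
        with torch.no_grad():
            input_ids, labels = dev_ds.tensors
            preds = model(input_ids=input_ids).logits.argmax(-1)
        return (preds == labels).float().mean().item()

    train_ds, dev_ds = dummy_split(64), dummy_split(16)
    best = max(((fine_tune(lr, ep, train_ds, dev_ds), lr, ep)
                for lr, ep in itertools.product([5e-5, 1e-4, 3e-4], [3, 5])),
               key=lambda t: t[0])
    print("best dev score %.3f at lr=%g, epochs=%d" % best)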


Summary

Introduction

Transformer-based language representation learning methods such as Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al, 2019) are becoming increasingly popular for downstream biomedical NLP tasks due to their performance advantages (Lee et al, 2019). This performance comes at a steep increase in computational cost, both at training and at inference time. A small and efficient model that avoids the trouble of first training a large model and then mimicking it with a smaller one is therefore preferable.

