Abstract

Deep learning-based language models (LMs) surpassed the gold standard (human baseline) on the SQuAD 1.1 and GLUE benchmarks in April and July 2019, respectively. As of 2022, the top five LMs on the SuperGLUE leaderboard have also exceeded the gold standard. Yet even people with good general knowledge struggle to solve problems in specialized fields such as medicine and artificial intelligence. Just as humans acquire specialized knowledge through bachelor's, master's, and doctoral programs, LMs likewise require a process for developing the ability to understand domain-specific knowledge. This study therefore proposes SciDeBERTa and SciDeBERTa (CS) as pretrained LMs (PLMs) specialized in the science and technology domain. We further pretrained DeBERTa, originally trained on a general corpus, on a science and technology domain corpus. Experiments verified that SciDeBERTa (CS), continually pretrained on a computer science corpus, achieved 3.53% and 2.17% higher accuracy than SciBERT and S2ORC-SciBERT, respectively, two PLMs specialized in the science and technology domain, on named entity recognition with the SciERC dataset. On the JRE task of the SciERC dataset, SciDeBERTa (CS) achieved 6.7% higher performance than the baseline SCIIE. On the GENIA dataset, SciDeBERTa achieved the best performance among S2ORC-SciBERT, SciBERT, BERT, DeBERTa, and SciDeBERTa (CS). Furthermore, re-initialization techniques and optimizers beyond Adam were explored during fine-tuning to verify the language understanding ability of PLMs.
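
As a rough illustration of the further-pretraining step described above, the sketch below continues masked language modeling on a domain corpus starting from a general-domain DeBERTa checkpoint, using the Hugging Face Transformers library. The checkpoint name, corpus path, and hyperparameters are illustrative assumptions rather than the authors' exact setup, and the paper's actual pretraining objective and configuration may differ.

    # Minimal sketch: continual (further) pretraining of a general-domain DeBERTa
    # checkpoint on a science/technology text corpus via masked language modeling.
    # Checkpoint name, corpus file, and hyperparameters are placeholder assumptions.
    from transformers import (
        AutoTokenizer,
        AutoModelForMaskedLM,
        DataCollatorForLanguageModeling,
        Trainer,
        TrainingArguments,
    )
    from datasets import load_dataset

    model_name = "microsoft/deberta-base"   # general-domain starting checkpoint (assumed)
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForMaskedLM.from_pretrained(model_name)

    # Domain corpus: plain text, one document per line (path is a placeholder).
    corpus = load_dataset("text", data_files={"train": "sci_tech_corpus.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

    # Dynamic masking, as in BERT-style masked language modeling.
    collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

    args = TrainingArguments(
        output_dir="scideberta-further-pretrain",
        per_device_train_batch_size=8,
        num_train_epochs=1,
        learning_rate=1e-4,
        save_steps=10_000,
    )

    Trainer(
        model=model,
        args=args,
        train_dataset=tokenized["train"],
        data_collator=collator,
    ).train()

The resulting checkpoint would then be fine-tuned on downstream tasks such as named entity recognition on SciERC or GENIA.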
