Abstract

Named entity recognition (NER) is a crucial step in extracting medical information from Chinese text, and fine-tuning large language models (LLMs) for this task is an effective approach. However, full-parameter fine-tuning can damage the model’s original parameters, resulting in catastrophic forgetting. To overcome this challenge, we introduce a novel adapter-based fine-tuning approach. Our adapter is integrated into the first and last transformer layers of the LLM, operating in parallel to the feed-forward network (FFN) that follows multi-head attention. It mirrors the FFN’s structure and uses the FFN’s weights for initialization. To further enhance performance, we also incorporate prefix embeddings into the first and last transformer layers. Our experiments on the Chinese medical NER benchmark demonstrate that our adapter, combined with prefix embeddings, achieves the highest F1-score of 65.90%, surpassing prompt templates (21.99%), in-context learning (18.65%), P-tuning (63.03%), and the benchmark for the Chinese medical NER task (62.40%). These results indicate that our adapter effectively fine-tunes the LLM for Chinese medical NER while preserving the original parameters.
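To make the described architecture concrete, the sketch below shows how such a parallel adapter might be wired up in PyTorch: a deep copy of the FFN runs alongside the frozen original in the first and last transformer layers, and its output is added to the FFN output. This is a minimal illustration under stated assumptions, not the authors' implementation; the names `ParallelAdapter` and `attach_adapters`, and the `.mlp` attribute for the FFN submodule, are hypothetical, and the prefix embeddings described in the abstract are omitted for brevity.

```python
import copy
import torch
import torch.nn as nn

class ParallelAdapter(nn.Module):
    """Adapter that mirrors a transformer FFN and runs in parallel to it (illustrative)."""
    def __init__(self, ffn: nn.Module):
        super().__init__()
        # Mirror the FFN's structure and initialize from its weights via a deep copy;
        # only this copy is updated during fine-tuning.
        self.adapter = copy.deepcopy(ffn)

    def forward(self, hidden_states: torch.Tensor, ffn_output: torch.Tensor) -> torch.Tensor:
        # The adapter consumes the same post-attention hidden states as the FFN,
        # and its output is added to the frozen FFN's output.
        return ffn_output + self.adapter(hidden_states)

def attach_adapters(model_layers: nn.ModuleList) -> nn.ModuleList:
    """Hypothetical wiring: attach parallel adapters to the first and last layers only."""
    adapters = nn.ModuleList()
    for idx in (0, len(model_layers) - 1):
        ffn = model_layers[idx].mlp        # FFN submodule name varies by model family
        adapter = ParallelAdapter(ffn)     # copy the FFN weights before freezing them
        for p in ffn.parameters():
            p.requires_grad = False        # keep the original parameters intact
        adapters.append(adapter)
    return adapters
```

During training, only the adapter copies (and, in the paper's full setup, the prefix embeddings) receive gradients, which is how the approach avoids overwriting the base model's parameters.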
