Abstract

With the successful application of question answering (QA) in human-computer interaction scenarios such as chatbots and search engines, medical QA systems have gradually attracted widespread attention: they can help professionals make decisions efficiently and also offer advice to non-professionals seeking useful information. However, because domain knowledge in medicine is highly specialized, existing medical QA systems still struggle to understand it, and as a result cannot generate fluent and accurate answers. The goal of this paper is to further train the language model on the basis of pretraining: a better language model yields a better medical QA model. Through the combination of domain-adaptive pretraining (DAP) and task-adaptive pretraining (TAP), the model acquires knowledge of both the medical domain and the task, which helps the QA model generate fluent and accurate answers and achieve good results.

Keywords: Question answering, DAP, TAP, Pretraining
