Abstract

Question answering (QA) Transformer-based models could prove effective in inclusive education. For example, such models can be tested and tuned on small closed-domain datasets before a new system is deployed in an inclusive organization. However, studies in the sociomedical domain show that such models can be unpredictable: they can mislead a user or evoke aversive emotional states. This paper addresses the problem of building safety-first QA models that generate user-friendly outputs. The study analyzes the performance of SOTA Transformer-based QA models on a custom dataset collected by the author, containing 1,134 question-answer pairs about autism spectrum disorders (ASD) in Russian. The study presents the validation and evaluation of extractive and generative QA models. The author used transfer learning techniques to investigate domain-specific QA properties and to suggest solutions that might make QA more effective in inclusion settings. The study shows that although generative QA models can misrepresent facts and generate false tokens, they can diversify system outputs and make automated QA more user-friendly for younger people. Extractive QA is more reliable according to the metric scores presented in this study, yet such models may be less efficient than generative ones. The principal conclusion is that combining generative and extractive approaches might lead to more effective QA systems for inclusion; however, the performance of such combined systems remains to be investigated.

Keywords: Question answering, Dialogue system, Transformer
