Abstract
Question-answering systems facilitate information access by providing fast and accurate answers to questions that users express in natural language. Today, advances in Natural Language Processing (NLP) techniques increase the effectiveness of such systems and improve the user experience. However, for these systems to work effectively, an accurate understanding of the structural properties of language is required. Traditional rule-based and knowledge retrieval-based systems cannot analyze the contextual meaning of questions and texts deeply enough and therefore fail to produce satisfactory answers to complex questions. For this reason, Transformer-based models that better capture the contextual and semantic integrity of language have been developed. In this study, the performances of the BERTurk, ELECTRA Turkish, and DistilBERTurk models on Turkish question-answering tasks were compared by fine-tuning them under the same hyperparameters, and the results were evaluated. According to the findings, models with case sensitivity achieved higher Exact Match (EM) and F1 scores; the best performance was obtained by the BERTurk (Cased, 128k) model, with an EM score of 63.99 and an F1 score of 80.84.
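The Exact Match and F1 scores cited above are standard extractive question-answering metrics. A minimal sketch of how they are typically computed (SQuAD-style token-level F1; the function names and whitespace normalization here are illustrative, not the exact evaluation script used in the study):

```python
from collections import Counter

def exact_match(prediction: str, reference: str) -> int:
    # 1 if the normalized prediction equals the reference exactly, else 0
    return int(prediction.strip().lower() == reference.strip().lower())

def token_f1(prediction: str, reference: str) -> float:
    # Token-level F1: harmonic mean of precision and recall over shared tokens
    pred_tokens = prediction.strip().lower().split()
    ref_tokens = reference.strip().lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, a predicted span "the capital Ankara" against the reference "Ankara" scores 0 on EM but 0.5 on token F1 (recall 1.0, precision 1/3), which is why F1 is reported alongside EM for partially correct answers.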