Abstract

The aim of this study is to obtain a more lightweight language model that is comparable, in terms of EM and F1, with the best modern language models on the task of finding the answer to a question in a Russian-language text. The results of this work can be used in various question answering systems for which response time is important. Since the lighter model has fewer parameters than the original, it can also run on less powerful computing devices, including mobile devices. The paper employs methods of natural language processing, machine learning, and the theory of artificial neural networks. The neural network is configured and trained using the PyTorch and Hugging Face machine learning libraries. In this work, the DistilBERT model was trained on the SberQuAD dataset both with and without distillation, and the performance of the resulting models is compared. The distilled DistilBERT model (EM 58.57, F1 78.42) outperformed the larger ruGPT-3-medium generative network (EM 57.60, F1 77.73), even though ruGPT-3-medium has 6.5 times more parameters. The distilled model also achieved better EM and F1 scores than the same model trained conventionally without distillation (EM 55.65, F1 76.51). However, the resulting model still lags behind the larger discriminative RoBERTa model (EM 66.83, F1 84.95), which has 3.2 times more parameters. The use of the DistilBERT model in Russian question answering systems is substantiated, and directions for further research are proposed.
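
The distillation setup described above can be illustrated with a minimal sketch in PyTorch: the student's span logits are trained against both the gold answer positions (hard cross-entropy loss) and the temperature-softened start/end distributions produced by a larger teacher model (soft KL loss). The function name, temperature `T`, and weighting `alpha` below are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_start, student_end,
                      teacher_start, teacher_end,
                      start_positions, end_positions,
                      T=2.0, alpha=0.5):
    """Combine hard-label cross-entropy with soft-label KL divergence
    over the start/end span logits (standard knowledge distillation)."""
    # Hard loss: cross-entropy against the gold answer span positions.
    hard = (F.cross_entropy(student_start, start_positions)
            + F.cross_entropy(student_end, end_positions)) / 2
    # Soft loss: KL divergence between temperature-softened student
    # and teacher distributions over token positions.
    soft = (F.kl_div(F.log_softmax(student_start / T, dim=-1),
                     F.softmax(teacher_start / T, dim=-1),
                     reduction="batchmean")
            + F.kl_div(F.log_softmax(student_end / T, dim=-1),
                       F.softmax(teacher_end / T, dim=-1),
                       reduction="batchmean")) / 2
    # The T**2 factor rescales the soft-term gradients, following
    # Hinton et al. (2015).
    return alpha * hard + (1 - alpha) * soft * T * T
```

In practice, the teacher and student would each be a Hugging Face `AutoModelForQuestionAnswering` applied to SberQuAD examples, with the teacher's logits computed under `torch.no_grad()` so that only the student is updated.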
