Abstract

In natural language processing (NLP) research, neural networks have become an essential tool and are widely used for the semantic analysis of texts in different languages. In connection with the growing need to process large volumes of text in the Kazakh language, a neural network for deep learning was built. The object of this study is the training process of a deep neural network that evaluates the algorithm for constructing an LDA model. One of the most problematic points is determining the correct arguments which, when the model is compiled, yield an estimate of the algorithm's performance. The research used the compile() method from the Keras library, whose main arguments are the loss function, the optimizer, and the metrics. The neural network is implemented in the Python programming language. The central task is therefore the selection of compiler arguments that give a correct evaluation of the algorithm of the constructed model during deep learning of the neural network. A corpus of Kazakh-language text of no more than 8000 words was used as the training data. Using the methods above, an experiment was carried out on selecting the compiler arguments when training on this Kazakh text corpus. As a result, the optimizer SGD, the loss function binary_crossentropy, and the evaluation metric cosine_proximity were chosen as the optimal arguments; during training the loss tended toward zero, reaching loss (error) = 0.1984 and cosine_proximity (training accuracy) = 0.2239, which are considered acceptable training measures. These results indicate that the compilation arguments were chosen correctly. The arguments can be applied to deep learning of a neural network in which the sample data are «topic and keywords» pairs.
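For illustration, a minimal sketch of the compilation step described above follows. It assumes the standalone Keras Sequential API and an older Keras release in which the metric is still registered as 'cosine_proximity' (later versions rename it to 'cosine_similarity'); the layer sizes and input dimension are placeholders, not values taken from this study.

# Minimal sketch: compiling a Keras model with the arguments selected in this
# study (optimizer SGD, loss binary_crossentropy, metric cosine_proximity).
# Layer sizes and the input dimension are illustrative placeholders.
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(64, activation='relu', input_shape=(1000,)),
    Dense(1, activation='sigmoid')
])

# In Keras >= 2.3 the metric 'cosine_proximity' was renamed to 'cosine_similarity'.
model.compile(optimizer='sgd',
              loss='binary_crossentropy',
              metrics=['cosine_proximity'])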

Highlights

  • The main action in deep learning is to adjust the weights to reduce error using a series of training examples, which in turn boils down to finding the correlation between the input and output layers

  • The object of research is the training process of a deep neural network that evaluates the algorithm for constructing an LDA model

  • The aim of research is to select the optimal parameters that yield a correct estimate of accuracy and loss (error) in deep learning of a neural network

Introduction

The main action in deep learning is to adjust the weights to reduce error using a series of training examples, which in turn boils down to finding the correlation between the input and output layers. The key technique of deep learning is to use this error estimate to adjust the weights so as to reduce the loss. This adjustment is carried out by an optimizer that implements the so-called backpropagation algorithm, the central algorithm of deep learning [2]. It is relevant to address the following issues:

  • studying the Keras library;
  • studying the application of the loss function in deep learning of a neural network;
  • researching optimization algorithms;
  • researching quality indicators (metrics).

The object of research is the training process of a deep neural network that evaluates the algorithm for constructing an LDA model. The aim of research is to select the optimal parameters that yield a correct estimate of accuracy and loss (error) in deep learning of a neural network.
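To illustrate the training loop described above (the optimizer adjusting the weights by backpropagation so as to reduce the loss), the following hedged sketch fits such a model on synthetic data; the array shapes, epoch count, and batch size are assumptions for illustration only and are not the Kazakh corpus or the settings used in the study.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Synthetic stand-in for vectorized «topic and keywords» pairs
# (shapes are illustrative assumptions, not the Kazakh corpus itself).
x_train = np.random.random((500, 1000))
y_train = np.random.randint(0, 2, size=(500, 1))

model = Sequential([
    Dense(64, activation='relu', input_shape=(1000,)),
    Dense(1, activation='sigmoid')
])

model.compile(optimizer='sgd',
              loss='binary_crossentropy',
              metrics=['cosine_proximity'])

# fit() runs backpropagation: after each batch the SGD optimizer adjusts the
# weights in the direction that reduces the loss on that batch.
history = model.fit(x_train, y_train, epochs=10, batch_size=32, verbose=1)

# The training history records loss and cosine_proximity per epoch; these are
# the quantities used to judge the choice of compilation arguments.
print(history.history['loss'][-1], history.history['cosine_proximity'][-1])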
