Novikova Olha,
 Suima Irina,
 Shevchyk K.
 Contemporary tendencies in development of machine translation from English into Ukrainian
 Background. Machine translation has become an everyday kind of human activity. It can greatly facilitate global communication by accelerating the translation process, even though the quality of the resulting text is often imperfect. Most often the output of online tools requires post-editing and can be used effectively only by those who already speak the target language to some extent. The need for competent translation grows every year, and the search for an algorithm that delivers this quality of translation remains one of the most important questions in computer science and linguistics, which informs the scientific relevance of this work.
 The purpose of this paper is to analyze different approaches to machine translation systems, their characteristics, their efficacy, and the quality of their output, using examples from Google Translate, Microsoft Translator and Yandex.
 To achieve this aim, the following tasks were set:
 
 to identify the most capable algorithms of MT in use today;
 to compare the results of translations made by online translators;
 to analyze typical stylistic, lexical and grammatical errors that appear in the translation;
 to identify the advantages and disadvantages of online translators;
 to provide recommendations for improving machine translation.
 
 To solve these tasks, the following methods were used over the course of this work: descriptive, comparative, analysis, experiment, and linguistic interpretation of the results obtained.
 Results. Yandex handled machine translation of belletristic texts exceptionally well, and Google's output was quite acceptable, barring numerous grammatical errors. The outlier is Microsoft Translator, whose mistranslation of realia, together with the same grammatical errors, makes its output much less readable than that of its competitors.
 The main problems with such translations stem from the systems' dependence on large amounts of high-quality data (i.e., corpora of texts for specific language pairs). The quality of these corpora directly influences the quality of the output, that is, of the target-language text. This can be seen by comparing the average translation quality of Google's and Microsoft's systems: the former makes fewer mistakes on average and has fewer issues identifying the contextual meaning of a polysemantic lexeme.
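As a rough illustration of how such quality comparisons can be quantified, one simple proxy is string similarity between each system's output and a human reference translation. The sketch below uses Python's standard-library `difflib`; the sentences and system names are invented for illustration and are not taken from the study's data, and a real evaluation would use an established metric such as BLEU computed over a full test set:

```python
from difflib import SequenceMatcher


def similarity(candidate: str, reference: str) -> float:
    """Ratio of matching characters between an MT output and a human
    reference translation, in the range 0.0 (no overlap) to 1.0 (identical)."""
    return SequenceMatcher(None, candidate, reference).ratio()


# Hypothetical outputs for one source sentence; not from the paper's corpus.
reference = "The old house stood at the edge of the village."
outputs = {
    "System A": "The old house stood at the edge of the village.",
    "System B": "The old house was standing on the village edge.",
}

for name, text in outputs.items():
    print(f"{name}: {similarity(text, reference):.2f}")
```

Averaging such scores over many sentences gives a crude ranking of systems; character-level similarity is only a stand-in here, since it penalizes legitimate paraphrases that a human evaluator or a corpus-level metric would accept.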
 We believe this issue can be mitigated in one of two ways: by hiring professional translators and linguists to compile the parallel corpora, or by making it possible for anyone to contribute to this process, even on a small scale. The first approach would be very time- and labor-consuming but would ultimately yield a higher-quality data set, which may lead to further improvements in MT. The second is already being deployed by all three major NMT systems but may lead to slower progress due to the lack of quality control and oversight.
 Another prospect of this research lies in widening the subject area of the texts chosen, to reflect the variety of writing styles in use on the Internet today. Including texts from confessional, business, and other styles may allow us to highlight more lacunae in the neural-network models and to suggest further means of improvement.
 Key words: machine translation, target language, source language, improvement, contextual meaning, communication.