The article considers the semantic, grammatical, stylistic and technical difficulties currently present in machine translation and compares its four main approaches: Rule-based (RBMT), Corpus-based (CBMT), Neural (NMT), and Hybrid (HMT). It also examines some "open systems", which allow users themselves to correct or augment content ("crowdsourced translation"). The authors of the article, native speakers representing different countries (Russia, Greece, Malaysia, Japan and Serbia), tested the translation quality of representative phrases from the English, Russian, Greek, Malay and Japanese languages using different machine translation systems: PROMT (RBMT), Yandex.Translate (HMT) and Google Translate (NMT). The test results presented by the authors show a low "comprehension level" of the semantic, linguistic and pragmatic contexts of the translated texts, mistranslations of rare and culture-specific words, unnecessary translation of proper names, and a low recognition rate for idiomatic phrases and metaphors. It is argued that the development of machine translation requires the incorporation of literal, conceptual, and content-and-contextual forms of meaning processing into text translation, the expansion of metaphor corpora and contextological dictionaries, and the implementation of different types and styles of translation that take into account gender peculiarities and the specific dialects and idiolects of users. The problem of the untranslatability ("linguistic relativity") of concepts unique to a particular culture is reviewed from the perspective of machine translation. It is also shown that the translation of booming Internet slang, in which national languages merge with English, is almost impossible without human correction.