The paper expands on the analysis of key projects adorning the machine translation (MT) hall of fame and their role in addressing practical tasks. The most successful initiatives suggest that the success of fledgling MT was contingent on the level of entropy, i.e., the degree of randomness of natural languages: the lower this indicator, the higher the predictability of the text and, by implication, the efficiency of the system. This accounts for the success of the first Georgetown-IBM experiment and Canada’s METEO-1. The latter grew into a full-fledged system that, for almost a quarter of a century, provided English-French-English translation of weather bulletins, a text type of high language predictability. Although the 1966 ALPAC report, issued in between, sowed a seed of doubt about the validity of MT, it never aimed to kill off the research area altogether. On the contrary, it highlighted techniques and applications where the technology had demonstrated promising results, including raw MT, post-edited MT, and machine-aided translation (M-AT). The authors note the cyclic nature of the development of MT-powered methods and technologies: today’s combination of resources and the ways they are used differ very little from those employed in the past century. What sets them apart is the maturity of MT technologies, which progressed through rule-based, direct, corpus-based, and knowledge-based translation to statistical MT (SMT) and eventually to neural MT (NMT). It has been established that the improved performance comes at the cost of larger and more elaborate data sets, tagged, marked up, and annotated for automated use in language models. Taking advantage of these resources as well as artificial intelligence (AI), the authors venture into modeling basic text-processing scenarios in a bilingual environment. This results in recommendations on future paths for improving MT technologies in the hands of professional translators: fine-tuning language models individually and pursuing post-editing (PEMT) and pre-editing practices that pave the way for more complex transformations and lower equivalence levels.
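For reference, the entropy indicator invoked at the opening of the abstract is conventionally the Shannon entropy of a language’s symbol distribution; the sketch below uses generic notation ($X$ for the source of characters or words, $p(x_i)$ for symbol probabilities), not the paper’s own formalism:

\[
H(X) = -\sum_{i=1}^{n} p(x_i)\,\log_2 p(x_i)
\]

The lower $H(X)$, the more predictable the text, which is the sense in which a restricted sublanguage such as that of weather bulletins favours MT.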