Natural Language Generation, a well-established research area within Natural Language Processing, has achieved significant milestones in tasks such as machine translation and paraphrasing. Large Language Models, while improving text quality, show varying effectiveness across linguistic registers and cultural contexts, raising concerns due to (i) the difficulty of understanding the models' internal workings; (ii) the opacity of their generative processes; (iii) the limited scope for human intervention in the generative function; (iv) a propensity for hallucination and factually incorrect content; (v) the potentially irresponsible use of extensive resources from unidentified sources; and (vi) the risk of misuse, among the most obvious hurdles. This article has three main goals: (1) to suggest a linguistic approach to Natural Language Generation; (2) to explore the broad language spectrum, including formal to informal styles and objective to subjective language, among other dimensions; and (3) to propose language models inspired by the Logos Model for enhanced transparency, traceability, and customization. The Logos Model is at the core of Logos, the best-documented pioneering commercial machine translation system, which prefigures the history of Generative Artificial Intelligence. Its open-source version, OpenLogos, offers valuable resources for machine translation, paraphrasing, abstractive summarization, and various other Natural Language Generation tasks, and serves as an excellent resource for training researchers across multiple fields associated with Artificial Intelligence. The OpenLogos resources were made publicly available through the Multi3Generation COST Action (CA18231).