The rapid growth and impact of fake news have drawn public attention and threatened public safety in recent years, making its detection a prominent research field. A wide range of approaches has been developed to detect fake content, both human-based and machine-based; however, both have shown inadequacies and limitations, especially fully automatic approaches. The purpose of this analytic study of media news language is to investigate and identify linguistic features and their contribution to detecting, filtering, and differentiating between fake and authentic news texts. The study outlines promising uses of linguistic indicators and adds a rather unconventional perspective to prior literature. It combines qualitative and quantitative data analysis to identify systematic differences between fake and factual news, detecting and comparing 16 attributes under three main categories of linguistic features (lexical, grammatical, and syntactic) assigned manually to the news texts. The datasets consist of publicly available documents from the fact-checking website PolitiFact and a raw (test) dataset collected randomly from news posts on Facebook pages. The results show that linguistic features, especially grammatical features, help identify untrustworthy texts and indicate that most of the test news articles tend to be unreliable.
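The study assigned its 16 linguistic attributes to texts manually; as a purely illustrative sketch, the snippet below shows how a few simple lexical and grammatical indicators of the kind discussed (e.g., first-person pronouns, modal verbs, exclamations) could be counted automatically. The attribute names and word lists here are hypothetical examples, not the paper's actual feature set.

```python
import re
from collections import Counter

# Hypothetical indicator word lists (not the paper's 16 attributes).
FIRST_PERSON = {"i", "we", "me", "us", "my", "our"}
MODALS = {"might", "could", "may", "must", "should", "would"}

def linguistic_profile(text: str) -> dict:
    """Count a few candidate lexical/grammatical indicators in a text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    return {
        "n_tokens": len(tokens),
        "first_person": sum(counts[w] for w in FIRST_PERSON),
        "modal_verbs": sum(counts[w] for w in MODALS),
        "exclamations": text.count("!"),
        "avg_word_len": sum(map(len, tokens)) / max(len(tokens), 1),
    }

profile = linguistic_profile("We must act now! I believe they could be lying.")
```

Profiles like this one could then be compared between fact-checked and unverified texts, mirroring the manual comparison of feature frequencies described in the study.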