ABSTRACT Algorithmic explainability has become one of the key topics of the last decade in the discourse on automated decision-making (ADM, machine-made decisions). Within this discourse, an important subfield deals with the explainability of machine-made decisions or outputs that affect a person’s legal position or have legal implications in general – in short, algorithmic legal decisions. These may be decisions or recommendations produced by software that supports judges, governmental agencies, or private actors. Examples include the automatic refusal of an online credit application, e-recruiting practices without any human intervention, or a prediction about a person’s likelihood of recidivism. This article contributes to this discourse, and its first claim is that, as explainability has become a prominent issue in hundreds of ethical codes, policy papers, and scholarly writings, it has also become a ‘semantically overloaded’ concept. It has acquired such a broad meaning, overlapping with so many other ethical issues and values, that it is worth narrowing down and clarifying. This study suggests that the concept should be reserved for individual automated decisions, especially those made by software based on machine learning, i.e. ‘black box-like’ systems. If the term explainability is applied only to this area, it allows us to draw parallels between legal decisions and machine decisions, thus recognising the subject as a problem of legal reasoning and, in part, of linguistics. The second claim of this article is that algorithmic legal decisions should follow the pattern of legal reasoning, translating the machine outputs into a form in which the decision is explained as the application of norms to a factual situation. Therefore, just as the norms and the facts must be translated into data for the algorithm, so the data outputs must be back-translated into a proper legal justification.