The normality assumption postulates that empirical data are drawn from a normal (Gaussian) population. It is a pillar of inferential statistics, underpinning the derivation of sampling distributions and the computation of p-values from them. Violating this assumption may not impose a formal mathematical constraint on the computation of inferential outputs (e.g., p-values), but it can render them uninterpretable and may lead to the unethical waste of laboratory animals. Various methods, including statistical tests and qualitative visual inspection, can reveal incompatibility with normality, and the choice of procedure should not be trivialized. This minireview provides a brief overview of graphical methods and statistical tests commonly employed to evaluate agreement with normality, with special attention to the pitfalls associated with their application. Normality is an unattainable ideal that practically never describes natural variables exactly, and the detrimental consequences of non-normality can be mitigated by using large samples. Therefore, the very concept of preliminary normality testing is also, perhaps provocatively, questioned.
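To make the two families of methods mentioned above concrete, the following is a minimal sketch, not part of the original review, contrasting a formal statistical test (Shapiro-Wilk) with the data behind a graphical check (a Q-Q plot). The samples, seed, and sizes are hypothetical choices for illustration only.

```python
import numpy as np
from scipy import stats

# Hypothetical data: one sample drawn from a normal distribution and one
# from a clearly right-skewed (exponential) distribution.
rng = np.random.default_rng(seed=0)
normal_sample = rng.normal(loc=10.0, scale=2.0, size=100)
skewed_sample = rng.exponential(scale=2.0, size=100)

# Formal test: Shapiro-Wilk returns a W statistic and a p-value; a small
# p-value indicates incompatibility with the normality assumption.
w_norm, p_norm = stats.shapiro(normal_sample)
w_skew, p_skew = stats.shapiro(skewed_sample)
print(f"normal sample: W={w_norm:.3f}, p={p_norm:.4f}")
print(f"skewed sample: W={w_skew:.3f}, p={p_skew:.4f}")

# Graphical check: probplot computes the theoretical vs. sample quantiles
# that a Q-Q plot would display, plus the correlation r of their fit line.
(osm, osr), (slope, intercept, r) = stats.probplot(normal_sample)
print(f"Q-Q fit correlation for normal sample: r={r:.3f}")
```

Note that the test and the plot answer subtly different questions: the p-value summarizes evidence against normality at the given sample size, while the Q-Q points let the analyst judge the nature and severity of any departure.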