Despite the inflated number of publications appearing in an ever-increasing number of scientific journals in the broad field of evolutionary biology, the half-life of most recent studies (and thus their long-term impact) does not live up to the sheer quantity of papers produced. Phylogenetic systematics, i.e., the study of the interrelationships and the evolutionary history of organisms, constitutes a particularly drastic example, with a vast number of articles constantly appearing that aim to reveal organismal relationships at almost all phyletic levels. This is undoubtedly a consequence of the shift in the raw data used to obtain these phylogenies from morphological characters to (gene) sequences, with the amount of the latter still growing exponentially. Accordingly, whereas producing a data matrix for a phylogenetic analysis in the Hennigian sense (Hennig 1966) once required taxonomic and morphological expertise as well as a sound and thorough application of basic (albeit, of course, hypothesis-laden) tools such as Remane's criteria of homology (Remane 1952, 1955), the processing of the sequence data used today is largely left to computer programs and specifically chosen algorithms. Thus, with the increasing complexity of the respective in silico applications, it becomes more and more difficult for many users to critically review the calculation processes that produce a given phylogenetic tree. In other words: How certain can we be that, e.g., orthology assessments of the gene sequences used in a phylogenetic analysis are correct? How certain can we be that the (obviously true) phylogenetic signal hidden in the data is correctly interpreted by all the in silico steps downstream of the actual data (i.e., sequence) acquisition? Not to mention the problems with sequencing errors that may occur prior to any computerized analysis.
That such concerns are not merely hypothetical has been shown by several examples. For instance, different phylogenetic trees based on (near-)identical molecular datasets have been produced that largely depended on the in silico tools used (e.g., the substitution models applied in the analyses) and/or the outgroups chosen (see, e.g., the discussion and reanalysis of studies by Ryan et al. 2013 and Moroz et al. 2014 in Pisani et al. 2015). While an a posteriori test of whether or not the obtained molecular phylogenies make "biological sense" has been suggested as a potential solution to this problem (see Wägele 2005), the subjectivity of such a test is obvious and somewhat contradicts the claim to objectivity that is often considered one of the major advantages of molecule-based phylogenies. Apart from such problems and inconsistencies intrinsic to in silico-based analyses, another, and so far often ignored, issue is likely to become more evident the larger the molecular datasets underlying a given phylogenetic tree become: the decreasing possibility of verification and falsification, a basic requirement of any scientific discipline. We have already entered an era in which thousands of genes from dozens of species are used in phylogenetic analyses (e.g., Misof et al. 2014), and only a limited number of people currently have access to the computational power required to perform such often weeks- or months-long calculations. Accordingly, it seems as if we are indeed reaching the limits of a basic scientific principle, namely that colleagues can freely redo analyses published by their peers. There is a true danger that, for many, seeing a published tree

Editorial note: This is the introductory chapter to the Special Issue "The new animal phylogeny: The first 20 years".