DOI: 10.5935/1678-9741.20120053 RBCCV 44205-1390

It is commonplace to say that the doctor should never stop learning. We only partly agree. There is no use in reading and rereading hundreds of scientific articles and textbooks based on a model full of imperfections, misconceptions and pitfalls, which is Evidence-Based Medicine (EBM), without a critical sense of this information. Previously believed to be a safe route to follow, it is now a road full of imperfections that can lead to unpredictable destinations if used by someone who is not fully aware not only of the conceptual errors and ethical dilemmas to which it is always exposed, but mainly of the methodological and statistical artifacts of this model, now almost ubiquitous in medical papers.

In 1998, when we graduated in Medicine at the Pontifical Catholic University of Parana, we would joyfully take part in any scientific discussion, especially if we knew of a meta-analysis or randomized controlled trial that addressed the topic under discussion, which we would reveal only during the argument. This trick was quite effective: the highest levels of evidence of the emerging EBM were rarely questioned. The years of experience came, and with them what the writer Malcolm Gladwell [1], building on studies by the psychologist K. A. Ericsson [2], defined as deliberate practice, which is essential for us to become unique in what we are accustomed to call the profession. After ten years of practice, together with considerable reading on the subject, we came to understand the major flaws and imperfections that underpin EBM as a model for practice and for teaching physicians. And the in loco observation of constant failures of therapies considered by EBM to be the gold standard provided us with the objective support needed for the hypothesis that these deficiencies have obvious and direct consequences for the evolution of our patients.

EBM standardized a number of rules, disseminated in books, textbooks and courses all around the world. But it seems that it often forgets to follow them. First, the P value, usually called the probability, which was given the noble task of always having to be smaller or larger than 0.05, so that the arguments it supports may or may not have any scientific validity. Two conceptual errors allow us to unravel the so-called fallacy of P. Not even its disseminator, the English statistician R. A. Fisher (the concept of P was developed by the Briton Karl Pearson), ever emphatically ruled that this should be the value for statistical significance [3]. Worse than that, the value of P, however small, does not refer to the null hypothesis (H0), but to the data [4]. For example: imagine that, regardless of the statistical test we use, correct or not, we obtain a P of 0.001. The correct conclusion: given that H0 is true, the probability of our data having occurred is 1 in 1,000. In other words, H0 is always false. And, no matter how small P is, Fisher never envisaged the existence of an alternative hypothesis (H1). At first sight this appears to be mere conceptual preciosity, but it only adds to the unreliability of the P proposed by Fisher: in some situations, depending on the pre-test probability, even when P has the value of 0.05, the chance of erroneously confirming a (nonexistent) H1 can reach 50% [5]. Tossing a coin and trusting its result would seem to be equally "accurate" and certainly more economical. This is so confusing that it made the famous intellectual Jacob Cohen wonder why it would be relevant to test H0 at all if it is always false [6].
We return to Hume's problem of induction, which not even Popper could solve [7].
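As a rough illustration of the 50% figure cited above [5], the post-study chance that a "significant" result is in fact a false positive can be sketched from the pre-test probability, the significance threshold, and the study's statistical power. The numbers below (a 10% pre-test probability, 80% or 45% power) are assumptions chosen purely for illustration, not values taken from this editorial or from reference [5].

```python
# Sketch: how a modest pre-test probability turns P < 0.05 "significance"
# into a surprisingly high false-positive risk. All numbers are
# illustrative assumptions.

def false_positive_risk(pre_test_prob, alpha=0.05, power=0.80):
    """Probability that a result significant at `alpha` is a false positive,
    given the pre-test probability that H1 is true and the study's power."""
    true_positives = power * pre_test_prob            # H1 true, test significant
    false_positives = alpha * (1.0 - pre_test_prob)   # H1 false, test significant anyway
    return false_positives / (true_positives + false_positives)

# With a 10% pre-test probability and 80% power, roughly 36% of "significant"
# findings are false positives; with lower power (45%) the risk reaches ~50%.
print(round(false_positive_risk(0.10, power=0.80), 2))  # 0.36
print(round(false_positive_risk(0.10, power=0.45), 2))  # 0.5
```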