The CREST trial comparing carotid artery stenting (CAS) and carotid endarterectomy (CEA) for the treatment of high-grade carotid stenosis has been interpreted by many to demonstrate the equivalence of the two procedures [1]. This equivalence of CAS and CEA is supported by the recent American Heart Association (AHA) guideline for treating carotid stenosis, which has been endorsed by 13 other prestigious US organizations [2]. This article will examine this ‘equivalence’ and the nature and validity of the level one evidence purported to support it.

Level one evidence and the randomized controlled trials (RCTs) that comprise it are widely considered the best basis for determining medical practice. This is particularly true when the RCTs are published in leading journals such as the New England Journal of Medicine or Lancet. Such trials are viewed by many as the ‘holy grail’ of medicine. However, RCTs can have many flaws that render them obsolete, inapplicable or overtly misleading. More importantly, RCTs can be spun or misinterpreted by their authors or others so that they exert an influence on practice trends or standards quite unjustified by their data.

Possible flaws in RCTs are of two types. First are timeliness flaws, which can occur when progress is made in the treatment arm under evaluation or in the control arm of an RCT. Examples are the early trials of CAS versus CEA: if progress occurs in CAS technology or patient selection, a trial such as EVA-3S, showing CAS inferiority, becomes invalid [3]. Conversely, the landmark trials showing CEA to be superior to medical treatment in preventing strokes have become obsolete because dramatic progress has been made in medical treatment since patients were entered into these trials [4–6]. Second are the many design flaws that can also impair the validity of RCTs.
These include patient selection flaws (e.g., in the SAPPHIRE trial, patients were selected for randomization only if they were at high risk for CEA) [7]. SAPPHIRE also included 71% asymptomatic patients, in whom the 30-day periprocedural stroke, death and myocardial infarction (MI) rates (~5% for CAS and ~10% for CEA) were so high that no invasive procedure was justified [7]; good medical treatment would have served these patients better. CREST also had patient selection flaws. It was originally designed to compare CAS and CEA only in symptomatic patients. However, when adequate numbers of symptomatic patients could not be recruited, asymptomatic patients were added, thereby diluting the power of the study and impairing the statistical significance of some of its results (Table 1) [1].

Other design flaws include: questionable competence of operators in a trial (e.g., the CAS operators in the EVA-3S [3] and ICSS [8] trials); problems with randomization (e.g., SAPPHIRE, in which only 10% of eligible patients were randomized [7]); and questionable applicability of RCT results to real-world practice (e.g., CAS operators in CREST were highly vetted and more skilled than many others performing the procedure) [1]. There are also idiosyncratic flaws, as in the EVAR 2 trial in patients unfit for open abdominal aortic aneurysm repair [9]. Although this trial, published in Lancet, showed endovascular aneurysm repair to have mortality similar to that of no treatment, half the deaths in the group randomized to endovascular aneurysm repair occurred from rupture during a lengthy (average 57 days) waiting period before treatment. Had these deaths been prevented by a more timely