8 May 2008

Dear Editor,

CRITERIA FOR EVALUATING BEHAVIOURAL INTERVENTIONS FOR NEURODEVELOPMENTAL DISORDERS

Reynolds and Nicolson state that they are accustomed to criticisms of their study; it is important to be clear why the criticisms continue. There are two reasons: first, their intervention study was methodologically deficient on many counts (see Table 1); second, the stakes are high, because their findings are used to promote the Dore approach to treatment of learning disabilities. Thus, on the Dore website (http://www.dore.co.uk), this research is cited as showing that for those completing the intervention, ‘Reading progress improved by 300%, SATS Comprehension improved by 500%, SATS Writing improved by 1700%, dyslexia risk reduced to no risk or borderline risk in all pupils, and attention symptoms improved by over 80%’. Clearly, claims such as these will encourage many parents to sign up for this expensive treatment. The research by Reynolds and colleagues is also used to support the view that the root cause of learning disabilities is an underdeveloped cerebellum.

Parents who buy into the Dore programme with the reassurance of a money-back guarantee need to be aware that there is no refund if their child's dyslexia, dyspraxia or Attention-Deficit Hyperactivity Disorder does not improve; a refund is given only if they fail to show gains on vestibular tests used to index cerebellar function. Since vestibular function matures with age,5 and balance and eye movements are explicitly trained in the programme, most children will improve on these tests over an intervention period of 1–2 years.

Nicolson and Reynolds6 argued that one should ‘fight against the attempt to impose the drug-trial methodology on education’ (p. 107), apparently concluding that it is not necessary to include a control group in an intervention study.
They regard the possibility of practice effects on their measures as ‘absurd’ (cf. 7), and do not accept the need to take into account possible effects of maturation, placebo effects and regression to the mean when interpreting improvements. In their rejoinder, they repeatedly single out results obtained from the uncontrolled part of their study (National Foundation for Educational Research Reading Test, Standard Attainment Tests and attention ratings) when arguing for efficacy.

In 2004, an editorial in Nature Neuroscience noted the discrepancy between the strict levels of scientific evidence that are required before the introduction of a new pharmacological intervention and the lack of regulation of non-drug interventions. Torgerson and Torgerson1 argued that randomised controlled trials are the best way of distinguishing true effects from artefactual improvements in educational interventions. The guidelines listed in Table 1, based on their recommendations and the Revised CONSORT statement,2 are proposed as a benchmark. Of course, there is scope for preliminary studies of a new intervention that adopt a less rigorous methodology; if, however, research is cited as evidence of efficacy of a commercial product, then it should conform to agreed scientific standards.