Abstract
Each year, the members of the editorial board of Cephalalgia nominate the best publication of the year. The paper by Koppen et al. from Leiden University in The Netherlands will be very high on my list of nominations (1). The authors studied a large group of patients with migraine and compared them to healthy controls and to patients with familial hemiplegic migraine. They tested the hypothesis that migraine patients have mild cerebellar dysfunction, which had been suggested by a number of small studies in patients recruited from hospitals or headache centres. Such studies are biased by their nature: patients will only visit a headache centre if their treatment is ineffective or has adverse effects. The only way to get around this problem is to perform population-based studies. In addition, investigators are usually not blinded as to whether a particular participant has migraine or is a control. Furthermore, most studies investigate only one aspect of cerebellar function, such as eye movements or postural control. The study from Leiden avoided all of these shortcomings. The migraine patients were recruited from a population-based sample, and the sample size was large enough to support a proper statistical analysis. The authors tested five domains of cerebellar function with validated instruments, namely motor skills, visuospatial abilities, learning-dependent timing, limb learning and balance control. These tests were performed in a blinded fashion. In addition, all participants underwent magnetic resonance imaging, and the imaging analysis was likewise conducted in a blinded fashion. Considering the outstanding methodology of this study, it is not surprising that patients with migraine did not differ from controls on the cerebellar function tests. Patients with hemiplegic migraine, however, showed mild cerebellar dysfunction. This study contradicts 11 other studies (including one that we performed). A frequently asked question in science is: which results should we trust? Should we simply count the number of studies? In this case the vote would be 11:1, so no, we should not count studies. We should rely on proper science. This study avoided all of the shortcomings of the 11 other studies and should therefore be trusted. An important lesson to be learned is that small studies from headache centres are, at best, hypothesis generating. They need to be validated with a stringent methodology in population-based samples.