Bergin and colleagues challenge the academic epilepsy community to move outside our box of how we conduct clinical studies in order to address common, important clinical problems for which we lack data: go to the net. It is an awesome idea. The nemesis, the devil, lies in the details. No one can doubt the power of the Internet: its reach, its exponential capacities, its speed, its potential to answer questions. Academic medicine must embrace it. But how? Bergin and colleagues suggest that randomized prospective research protocols could be successfully executed to study problems such as the comparative efficacy and toxicity of two different antiepileptic drug therapies, as well as different therapies for specific etiologies of epilepsy.

As a resident (OD), I was struck by a distinguished epileptologist, a basic scientist, who lectured on "Rational Polytherapy." This doctor published articles in leading scientific journals and understood the burden of scientific proof, yet gave an hour-long lecture extolling the merits of some antiepileptic drug combinations while trashing others as toxic and illogical. To summarize: if the presumed mechanisms of action were different, he liked the combinations; if they were the same, he disliked them. But what about clinically proven breakthroughs in cancer therapy that targeted multiple sites in a specific pathway, all acting by the same general "mechanism of action"? That was 1986; two decades later, the evidence for "rational polytherapy" remains a cacophony of experts opining in the dark.

The Internet could waken us from our sleep, not only by giving us a new tool, but by allowing us to ask new questions and by providing new partners: not only academicians, but patients. Sites such as http://www.patientslikeme.com allow patients and families with ALS to partner in sharing information and to examine subtypes, progression, and mainstream and alternative therapies. Databases and tracking devices can be mined by patients to help them make decisions.
Discussions are underway to bring this to epilepsy. It is time for doctors and patients to work together to advance knowledge. For both professional and lay efforts, one challenge will be the quality of the data (who checks that it was entered correctly? who checks that it is correct?). There are endless questions of methodology: accuracy of classification of seizures and seizure counts, inclusion of nonepileptic patients, compliance, classification of lesions, and heterogeneity of lesions (cavernomas may be more grey- or white-matter based, may have bled minimally or moderately, may occur in sensory, limbic, frontal, or temporal regions, may be single or multiple, and may occur with or without other structural problems).

Large numbers may address these issues of data quality. As a Wikipedia entry on signal noise puts it, "In science… noise is fluctuations in (and the addition of external factors to) the stream of target information (signal) being received at a detector." As the number of data points increases, the probability of discerning signal through noise increases. The opportunity to gather large numbers is a core strength of the Internet. Still, only random noise eliminates itself with numbers. Accuracy and consistency of data and classification will still matter if errors show a consistent bias. Large numbers will help address small sample sizes and loss of power in subgroup analyses, but they will not address all problems with the accuracy and consistency of the data that are entered. And therein lies the rub, the challenge to making this work.

Bergin and colleagues' suggestion should be pursued. However, a central oversight and monitoring committee that guides its evolution will be critical to success in this endeavor, as such oversight has been to the success of Wikipedia itself. We must start with version 1.0, although the fruits will likely lie in subsequent versions.
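The point that large numbers cancel random noise but not a consistent bias can be sketched with a toy simulation. All numbers here are hypothetical illustrations, not clinical data: we posit a "true" seizure-freedom rate and compare unbiased noisy reports with reports that carry a systematic over-report.

```python
import random

random.seed(0)

TRUE_RATE = 0.30   # hypothetical true seizure-freedom rate
N = 100_000        # a large sample, as an Internet registry might gather

# Unbiased reporting: each report is the true rate plus zero-mean random error.
noisy = [TRUE_RATE + random.gauss(0, 0.2) for _ in range(N)]

# Biased reporting: the same random error plus a consistent +0.10 over-report.
biased = [x + 0.10 for x in noisy]

mean_noisy = sum(noisy) / N
mean_biased = sum(biased) / N

print(f"true rate:   {TRUE_RATE:.3f}")
print(f"noisy mean:  {mean_noisy:.3f}")   # converges toward the true rate
print(f"biased mean: {mean_biased:.3f}")  # converges toward the wrong value
```

However large N grows, the biased mean settles near 0.40 rather than 0.30: averaging shrinks the random scatter (the standard error falls as 1/√N) but leaves the systematic offset untouched, which is why data accuracy and consistent classification still matter.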