Abstract

For a review journal like The Lancet Neurology, it is rather worrying that the results of a cohort analysis published recently in the BMJ suggest that review articles are not as accurate and reliable as most clinicians think (BMJ 2003; 327: 266). The authors analysed reviews of treatment for type II diabetes and concluded that doctors who rely on reviews may be misled. Indeed, the author of an accompanying commentary wrote: “We should perhaps question why these expert reviews continue to be published, given both the lack of rigour and their apparent lack of influence”.

In an ideal world, clinicians would read every relevant primary research paper, critically appraise the data, and form their own opinions. However, with the amount of published primary research data increasing by the week, there is little doubt that it is impossible for even the most conscientious of students, researchers, or clinicians to keep up. Review journals, with their highly cited reviews aimed at a general audience and written by experts in the field, are an invaluable resource for helping the busy clinician to keep up-to-date with advances in clinical practice and provide a general overview of developments outside their area of specialist interest.

But how can we ensure that review articles are accurate, have included all the relevant studies, and are not biased by the personal opinions of the author(s)? The Lancet Neurology insists that all authors provide a search strategy and selection criteria describing how they identified references for inclusion in their review. We also ask our authors for a conflict of interest statement so that readers are aware of any potential bias. But most of all, we rely on expert peer reviewers to help us identify imbalance and inconsistency in the articles that are submitted to the journal. Editors cannot possibly have comprehensive knowledge in all areas and the comments of referees are therefore an invaluable guide.

The practice of peer review first started around 300 years ago, but it is only relatively recently (since the late 1980s) that the process has been questioned. As was once pointed out in the correspondence pages of The Lancet, “How do you know that [your] system of peer review would be any better than no review at all?” In other words, why use a system that has not been scientifically proven to be effective? Earlier this year, the Cochrane Collaboration published a systematic review investigating the role of editorial peer review for improving the quality of reports of biomedical studies (Cochrane Library, Issue 2, 2003. Oxford: Update Software). The authors identified 135 reports of studies, 21 of which fulfilled their inclusion criteria. By analysing the results of these studies, the authors concluded that blinding (of referees or authors), training of referees, and electronic communication media had no effect on the quality of the peer-review process. However, they did find that editorial peer review improved the general quality and readability of the final product.

Hot on the heels of this report, The Royal Society in the UK has recently set up a working group—which includes researchers, publishers, and journalists—to investigate how best to communicate the results of scientific studies to the public. As part of this investigation, the working group will address alternatives to peer review, “open” peer review (ie, the authors know who has refereed their paper), and the public understanding of peer review.
The group plans to release its findings early next year as two reports: a set of best practice guidelines for researchers and publishers and a “Science Brief” for the public.

Although the role of peer review in the selection of primary research papers for publication has been questioned, little, if anything, is known about whether peer review improves the quality of review articles. Despite this lack of evidence, all review articles submitted to The Lancet Neurology, commissioned and spontaneous alike, are sent out to between two and four referees with a set of instructions to guide them on what to look out for. We realise that this system is not perfect, but we firmly believe that peer review improves the review articles we publish.

Perhaps unsurprisingly, unlike the authors of the recent BMJ article, we consider review articles to be an invaluable resource for the communication of advances in research and treatment to clinicians. However, review articles can only adequately fulfil this purpose if they are well-written, adequately peer reviewed, and carefully edited to ensure that they are accurate and reliable sources of information.
