Abstract

This editorial summarizes recent changes in the Journal’s structured abstract and related requirements for authors and reviewers. It is intended, in part, to apprise our readers of what goes on behind the scenes to ensure the highest possible quality and scientific validity of manuscripts published by the Journal. It also aims to reinforce our readers’ appreciation of the efforts of authors, reviewers, and our Editorial Board toward providing a continuing supply of evidence-based clinical information.

Ophthalmology began requiring an abstract after transitioning to a peer-reviewed journal in 1978 and instituted a four-part structured abstract, with sections on purpose, methods, results, and conclusions, in January 1992 [1]. Transition to a seven-part structured abstract, with additional sections on study design, participants/controls, and main outcome measures, began in late 1995 (Table 1). By May 1998, approximately 85% of Journal abstracts were in compliance. The new requirements were based on consensus recommendations of the Journal of the American Medical Association (JAMA) and on my agreement that an expanded abstract format would substantially improve readers’ ability to rapidly assess the nature and quality of the study being described [2]. The new requirement to identify the study design, using standard terms or phrases, initially drew remarkably varied responses from authors and led to the obvious conclusion that we needed to define an acceptable study design terminology for use in our abstracts. The limited types of manuscripts published by Ophthalmology (Table 2) made this seemingly daunting task quite feasible.

Table 1. Ophthalmology’s Required Structured Abstract Sections (maximum 350 words total; see the Instructions for Authors in the July 1998 issue or at www.eyenet.org/ophthalmology for additional explanation and definitions)

Objective/Purpose: states the goals or reasons for performing the study.
Design: designates the type of study in a few words or a phrase, such as randomized controlled trial or nonrandomized comparative trial (see the Study Design Scheme), with modifiers such as prospective, retrospective, or multicenter as appropriate.
Participants/Controls: indicates the numbers of participants or eyes and of control subjects.
Intervention/Methods/Testing: describes the principal surgical or nonsurgical treatments, tests, or procedures used during the study.
Main Outcome Measures: indicates the primary and/or secondary outcome measurements assessed during the study, such as visual acuity, intraocular pressure, or infection rate, using single words or phrases.
Results: summarizes the data accumulated during the study.
Conclusions: states and interprets the most important conclusions derived from the study data.

Table 2. Ophthalmology’s Study Design Scheme (# corresponds to the available worksheet)

I. Clinical Interventional Studies (Clinical Trials)
   A. Comparative Trials
      1. Randomized controlled trial (#1)
      2. Nonrandomized comparative trial (#2)∗
   B. Noncomparative Case Series (#3)
   C. Interventional Case Report (#4)
II. Observational Studies
   A. Case-control Study (#5)∗∗
   B. Cross-sectional Study (#6)∗∗∗
   C. Cohort Study (#7)∗∗∗∗
   D. Case Series (#8)
   E. Observational Case Report (#9)
III. Other Study Types Published by Ophthalmology
   A. Systematic Literature Review and Meta-analysis (#10)
   B. Experimental Study (#11)
   C. Reviews♣
   D. Historical Manuscript♣

∗May include: (1) a prospective study with a concurrent control group; (2) a prospective study with a nonconcurrent control group; (3) a retrospective study with a concurrent control group; (4) a retrospective study with a nonconcurrent control group.
∗∗An observational (noninterventional, usually retrospective) study that begins by identifying individuals with a disease (cases) for comparison with individuals without the disease (controls); the research typically proceeds from effect to cause.
∗∗∗An observational study that identifies individuals with and without the condition being studied in a defined population at the same point in time (synonymous with prevalence study); may or may not be population-based.
∗∗∗∗An observational study that begins by identifying individuals with (study group) and without (control group) a factor being investigated; study and control groups may be concurrent or nonconcurrent; data collection is almost always prospective and longitudinal; may or may not be population-based.
♣Worksheets not necessary.
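Purely as an illustration, and not as part of the Journal’s editorial workflow, the requirements summarized in Tables 1 and 2 can be treated as a small controlled vocabulary: a fixed list of abstract sections, a 350-word limit, and a fixed set of study design terms. The sketch below makes that concrete; the data layout, function name, and checks are hypothetical.

```python
# Hypothetical sketch: Table 1's required sections and Table 2's design
# terms expressed as plain data, with a basic consistency check for a
# structured abstract. Not the Journal's actual submission software.

REQUIRED_SECTIONS = [
    "Objective/Purpose", "Design", "Participants/Controls",
    "Intervention/Methods/Testing", "Main Outcome Measures",
    "Results", "Conclusions",
]

# Worksheet numbers (#1-#11) follow Table 2; Reviews and Historical
# Manuscripts have no worksheet, so they map to None.
STUDY_DESIGNS = {
    "randomized controlled trial": 1,
    "nonrandomized comparative trial": 2,
    "noncomparative case series": 3,
    "interventional case report": 4,
    "case-control study": 5,
    "cross-sectional study": 6,
    "cohort study": 7,
    "case series": 8,
    "observational case report": 9,
    "systematic literature review and meta-analysis": 10,
    "experimental study": 11,
    "review": None,
    "historical manuscript": None,
}

MAX_WORDS = 350  # Table 1's limit for the whole structured abstract


def check_abstract(sections):
    """Return a list of problems found in a structured abstract.

    `sections` maps section names (as in REQUIRED_SECTIONS) to their text.
    """
    problems = []
    for name in REQUIRED_SECTIONS:
        if not sections.get(name, "").strip():
            problems.append("missing or empty section: " + name)

    total_words = sum(len(text.split()) for text in sections.values())
    if total_words > MAX_WORDS:
        problems.append(f"abstract is {total_words} words (limit {MAX_WORDS})")

    # The Design entry should contain one of the standard terms; modifiers
    # such as "prospective" or "multicenter" may precede it.
    design = sections.get("Design", "").lower()
    if design and not any(term in design for term in STUDY_DESIGNS):
        problems.append("Design does not use a standard study design term")

    return problems
```

For example, an abstract whose Design line reads “Prospective, multicenter, randomized controlled trial” passes the design-term check, whereas one labeled simply “clinical study” would be flagged.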
To facilitate standardization of study design terms for Ophthalmology, the Editorial Board has developed a study design scheme (Table 2) and a related glossary, available on the Home Page (www.eyenet.org/ophthalmology) and by mail or fax from the editorial office. The scheme allows the majority of papers we receive to be categorized reasonably accurately. Obviously, when manuscripts include mixed study types, more than one category may apply. In any case, the scheme includes the categories of interventional or observational case series and experimental studies, classifications flexible enough to cover most manuscripts not otherwise easily labeled as to study design.

As a test of the scheme and of how it might apply to papers published over the past decade in Ophthalmology, all 541 papers appearing in the Journal during four 6-month periods (January through June of 1987, 1990, 1994, and 1998) were reviewed and categorized. Analysis of the study designs used in this sample indicates that increasing percentages of comparative studies (randomized and nonrandomized comparative trials, case-control studies, and cohort studies), as compared with noncomparative ones (noncomparative and observational case series and case reports), have appeared. As a test of reproducibility, the 28 articles in the May 1998 issue of the Journal were categorized on two separate occasions by the Editor, resulting in virtually identical classifications of the study designs employed.

During the years 1995–1997, for which detailed coding of all manuscript types is available, 66%, 73%, and 78%, respectively, of Ophthalmology’s regular manuscripts described medium (n = 11 to 30) or large (n > 30) studies, including those in which data were collected “retrospectively” (data collected or analyzed after all measurements, observations, interventions, or events) and “prospectively” (data collected during measurements, observations, interventions, or events according to predetermined protocols) (see the Journal Report Card section of the Home Page).
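For concreteness, here is a minimal sketch of the bookkeeping behind the two preceding paragraphs: grouping each paper’s design label as comparative or noncomparative and sizing a study by its n. The medium (n = 11 to 30) and large (n > 30) thresholds come from the text; the “small” label for n of 10 or fewer, the function names, and the Counter-based summary are assumptions of mine, offered only as an illustration.

```python
# Hypothetical tallying sketch, not the Journal's actual coding procedure.
from collections import Counter

# Groupings as described in the text.
COMPARATIVE = {
    "randomized controlled trial",
    "nonrandomized comparative trial",
    "case-control study",
    "cohort study",
}
NONCOMPARATIVE = {
    "noncomparative case series",
    "observational case series",
    "case series",
    "interventional case report",
    "observational case report",
}


def size_category(n):
    """Classify a study by its number of participants or eyes."""
    if n > 30:
        return "large"
    if n >= 11:
        return "medium"
    return "small"  # assumed label for n of 10 or fewer


def summarize(papers):
    """papers: iterable of (design label, n) pairs; returns category counts."""
    tally = Counter()
    for design, n in papers:
        if design in COMPARATIVE:
            tally["comparative"] += 1
        elif design in NONCOMPARATIVE:
            tally["noncomparative"] += 1
        else:
            tally["other"] += 1
        tally[size_category(n)] += 1
    return tally


# summarize([("cohort study", 120), ("case series", 8)]) would count one
# comparative large study and one noncomparative small study.
```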
Also in 1996, the Journal began requiring authors to submit a Consolidated Standards of Reporting Trials (CONSORT) worksheet to accompany manuscripts describing randomized controlled trials (RCTs) [3]. The customized CONSORT worksheet for Ophthalmology currently lists 36 items to be identified, by page number, as present or not present, including, for example, descriptions of the inclusion and exclusion criteria for study patients, how the sample size was determined, and details of the randomization process, all customarily expected in RCT manuscripts. From the institution of the CONSORT requirement through May 1998, the Journal received 86 manuscripts describing RCTs with accompanying worksheets, which were subsequently included in the packets sent to reviewers. Authors have not objected to the extra work involved in completing the worksheets, and the great majority of reviewers who have returned comments have found the worksheets helpful in the review process. In effect, the worksheet provides a handy checklist for authors and reviewers to use in organizing manuscripts and review comments.
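The worksheet itself is easy to picture as a simple record: each of the 36 items is either present, with the page number where it appears, or absent. The short sketch below is hypothetical; the class, the helper, and the three example items (taken from the examples mentioned above) do not reproduce the Journal’s actual worksheet.

```python
# Hypothetical sketch of a CONSORT-style worksheet entry; not the
# Journal's actual 36-item form.
from dataclasses import dataclass
from typing import Optional


@dataclass
class WorksheetItem:
    description: str
    present: bool = False
    page: Optional[int] = None  # manuscript page where the item is addressed


def missing_items(items):
    """Return the descriptions of items not found in the manuscript."""
    return [item.description for item in items if not item.present]


example_worksheet = [
    WorksheetItem("Inclusion and exclusion criteria for study patients", True, 4),
    WorksheetItem("How sample size was determined", False),
    WorksheetItem("Details of the randomization process", True, 5),
]
# missing_items(example_worksheet) -> ["How sample size was determined"]
```

Used this way, the record doubles as the reviewer’s checklist: whatever missing_items returns is what the review should ask the authors to address.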
My subjective impression is that the RCT worksheets have, as intended, stimulated more thorough and more constructive reviews. I am especially encouraged by our experience to date because review requests for RCTs are usually sent to individuals with expertise in study design, including biostatisticians and epidemiologists, the reviewers least likely to need a reminder of which issues should be discussed in this type of manuscript [4]. RCT manuscripts, of course, tend to come from large institutions or groups, and such studies have usually been organized, conducted, and analyzed with input from experts in clinical research and study design, rendering completion of the worksheet a relatively simple task.

Intrigued by the possibility that worksheets similar to those for RCTs would help authors and reviewers of other types of studies and ultimately improve the quality of published papers, our Editorial Board agreed to develop additional worksheets. To date, 11 worksheets have evolved (Table 2). It is assumed that manuscripts describing historical subjects and topic reviews would not benefit from worksheets, because they are customarily less rigidly structured than manuscripts describing data from current clinical or animal research. The study design scheme and related worksheets are intended to aid authors in organizing their manuscripts and reviewers in evaluating the scientific validity of the research described. The scheme is intended to apply only to the study types received by the Journal, and its hierarchical organization is not meant to imply an order of scientific importance among study types. By its organization, however, the scheme does emphasize the importance of “controls,” by definition a necessary aspect of RCTs, comparative trials, case-control studies, and cohort studies. Please see the Appendix to the Instructions for Authors in this issue, which includes all of our current study design worksheets. Individual study design worksheets may also be downloaded from the Home Page or obtained from the Journal office.

Completion of the new worksheets by authors of studies other than RCTs has begun on a voluntary basis and will be carefully monitored to determine whether the process is constructive. It is expected that author-reviewer feedback will result in continual evolution of the worksheets and their application [5,6]. Only time, and comparisons of large numbers of papers published before and after institution of this process, will demonstrate whether it has positively affected the overall quality of our published papers. At the very least, the worksheets will expose authors and reviewers to a scientifically rigorous methodology for organizing and evaluating manuscripts, one that incorporates current expert opinion about what should be included to meet desirable standards for the various study designs. I anticipate that manuscripts describing noncomparative interventional or observational case series (Table 2), which together constitute 30% to 50% of our published papers, will benefit particularly from more standardized organization.

In summary, the ongoing changes in the Journal’s format and peer review processes are intended to improve the veracity, readability, and utility of the final product: valid scientific information. The continually increasing sophistication of our science demands parallel improvement in how it is communicated. It is hoped that Ophthalmology’s authors, reviewers, and readers will acknowledge and appreciate the benefits of closer attention to study design. Feedback from authors, reviewers, and readers is welcome.

References

1. Lichter PR. Structured abstracts now required for all submissions to the Journal [editorial]. Ophthalmology 1991;98:1611-1612.
2. Haynes RB, Mulrow CD, Huth EJ, et al. More informative abstracts revisited. Ann Intern Med 1990;113:69-76.
3. Begg C, Cho M, Eastwood S, et al. Improving the quality of reporting of randomized controlled trials: the CONSORT statement. JAMA 1996;276:637-639.
4. Meinert CL. Clinical trials: the gold standard for evaluation of therapy [editorial]. Ophthalmology 1996;103:869-870.
5. Meinert CL. Beyond CONSORT: need for improved reporting standards for clinical trials [editorial]. JAMA 1998;279:1487-1489.
6. Moher D. CONSORT: an evolving tool to help improve the quality of reports of randomized controlled trials [editorial]. JAMA 1998;279:1489-1491.
