Abstract

By the end of the twentieth century, Bogardus, Concato, and Feinstein [1] published an impressive article that confronted the genetic research community with an inconvenient truth. They showed that, despite major laboratory advances in molecular genetic analysis, reported applications in clinical journals often had troubling omissions, deficiencies, and a lack of attention to necessary principles of clinical epidemiological science. They concluded that without suitable attention to fundamental methodological standards, the expected benefits of molecular genetic testing may not be achieved [1]. Some years later, in this journal, Attia et al [2] showed, on the basis of meta-analyses, that too little attention had also been paid to methodology in population-based molecular association studies, and they emphasized the need for greater communication between epidemiologists and geneticists to develop methods appropriate to this area. These observations and messages have been taken seriously by leading representatives of the research disciplines working in the field. For example, in 2005, McShane et al [3], on behalf of the Statistics Subcommittee of the NCI-EORTC Working Group on Cancer Diagnostics, published the REporting recommendations for tumour MARKer prognostic studies (REMARK), with the goal ‘to encourage transparent and complete reporting of tumor marker studies so that the relevant information will be available to others to help them to judge the usefulness of the data and understand the context in which the conclusions apply.’ Four years ago, Ransohoff [4] analyzed the state of affairs of research on molecular markers and recommended better addressing issues of study design, reliability, and efficiency. In 2009, the STREGA statement (Strengthening the Reporting of Genetic Association studies), an extension of the STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) statement, was published as a reference for authors [5].
In this issue, Janssens et al present the GRIPS statement on strengthening the reporting of Genetic Risk Prediction Studies, together with, as a web supplement, an extensive explanation and elaboration document to support the statement. The aim of the GRIPS statement, which, like previous reporting statements, was prepared by a multidisciplinary group of experts, is ‘to enhance the transparency of study reporting, and thereby to improve the synthesis and application of information of multiple studies that might differ in designs, conduct, or analysis.’ The fact that it builds on the principles established by previously published guidelines adds to the necessary coherence within, and overview of, the increasing number of reporting guidelines that are relevant for clinical research [6].

Of course, reporting is not the same as having really done all that was in fact needed, and, as the authors also state, reporting guidelines do not prescribe how studies should be designed, conducted, and analyzed. But transparency, public accountability, and better justification of design choices, in combination with related editorial policies of journals, may also be expected to have a preventive effect, so that studies themselves will improve indirectly as a result of reporting guidelines.

In an accompanying commentary, Souren and Zeegers emphasize the importance of the GRIPS statement, expecting that it will help researchers to write clearer papers. They also discuss how we should deal nowadays with the Hardy-Weinberg equilibrium (HWE) ‘law’ and call for reporting more detailed information on HWE from individual studies.
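For readers less familiar with the HWE ‘law’, the underlying check is simple: under HWE, genotype frequencies follow p², 2pq, and q² from the allele frequencies, and observed genotype counts can be compared with these expectations. The sketch below is only a minimal illustration of that idea, using hypothetical counts rather than data from any study discussed in this issue:

    # Minimal illustrative sketch (hypothetical genotype counts): compare observed
    # genotype counts with the counts expected under Hardy-Weinberg equilibrium,
    # using a 1-degree-of-freedom chi-square statistic.

    def hwe_chi_square(n_aa, n_ab, n_bb):
        n = n_aa + n_ab + n_bb
        p = (2 * n_aa + n_ab) / (2 * n)          # frequency of allele A
        q = 1 - p
        expected = (n * p * p, 2 * n * p * q, n * q * q)
        observed = (n_aa, n_ab, n_bb)
        return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

    chi2 = hwe_chi_square(n_aa=298, n_ab=489, n_bb=213)
    # Values above 3.84 suggest departure from HWE at the conventional 5% level (1 df).
    print(f"chi-square for departure from HWE: {chi2:.2f}")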
The importance and actual practice of using reporting guidelines are also demonstrated from two other perspectives. First, in their review, Benchimal et al, using a checklist derived from the Standards for Reporting of Diagnostic Accuracy (STARD), show that the quality of studies validating health states in administrative data varies, and they advise that use of a reporting checklist could improve the quality of reporting of such validation studies. Second, in a research letter to the editors, Sarangi and Medhi point to the progress of clinical research in India, both qualitatively and quantitatively, presenting compact information on adherence to the CONSORT guidelines and from the Clinical Trials Registry of India.

Prediction research is also addressed more extensively. Keogh and co-workers, aiming to establish a web-based register of primary care clinical prediction rules (CPRs) to be made publicly available through the Cochrane Primary Health Care field, focus here on optimizing the retrieval of such rules from MEDLINE. They compare five search strategies, including one that combines elements of the others. They conclude that, given the importance of high sensitivity for this purpose, the novel use of text word searching with inclusion terms was the most appropriate approach for updating a register of primary care CPRs. In view of the various challenges and pitfalls in CPR development [7], indexing registered CPRs according to, for example, clinical domain, methodological quality, level of evidence, and setting will, as the authors emphasize, also be an important step in the future.
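The trade-off behind such comparisons can be made concrete: a strategy’s sensitivity is the proportion of a gold-standard set of CPR reports that it retrieves, and its precision is the proportion of retrieved records that are relevant. The following minimal sketch uses hypothetical record identifiers and strategy labels, not the actual MEDLINE searches of Keogh and co-workers:

    # Minimal sketch (hypothetical record IDs and strategy labels): sensitivity and
    # precision of search strategies against a hand-identified gold standard of CPR reports.

    gold_standard = {101, 102, 103, 104, 105}
    strategies = {
        "text words + inclusion terms": {101, 102, 103, 104, 105, 201, 202},
        "index terms only": {101, 103, 301},
    }

    for name, retrieved in strategies.items():
        hits = retrieved & gold_standard
        sensitivity = len(hits) / len(gold_standard)
        precision = len(hits) / len(retrieved)
        print(f"{name}: sensitivity = {sensitivity:.2f}, precision = {precision:.2f}")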
Rizos et al make an important contribution to strengthening comparative effectiveness research and to improving the possibility of comparing all relevant clinical regimens instead of only two. In a review based on the MEDLINE and Cochrane Library databases, they investigated the trial networks for antifungals and showed that specific comparisons are preferred and others avoided. This finding points to a potentially biased research agenda, yielding evidence that may be misleading even if the results of the conducted trials are accurate.

In observational research on drug use, which is sometimes also used as an indicator of health status, it is important to check the validity of reported medication. Therefore, in the context of the Norwegian Mother and Child Cohort Study, Furu et al compared maternal reports of children’s use of anti-asthmatics and of the presence of asthma with data on dispensed anti-asthmatics. They concluded that mother-reported use of anti-asthmatics was highly accurate and that, on the other hand, dispensed anti-asthmatics would be a useful proxy for the presence of current asthma. This outcome is helpful in planning epidemiologic studies on asthma.
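Such validation exercises essentially reduce to a two-by-two comparison of the reported exposure against the reference source. The sketch below, with made-up counts rather than the Norwegian cohort data, shows how sensitivity, specificity, and positive predictive value follow from such a table:

    # Minimal sketch (made-up counts): validity of mother-reported anti-asthmatic use,
    # taking dispensing records as the reference standard.

    reported_and_dispensed = 180       # true positives
    reported_not_dispensed = 20        # false positives
    dispensed_not_reported = 15        # false negatives
    neither = 785                      # true negatives

    sensitivity = reported_and_dispensed / (reported_and_dispensed + dispensed_not_reported)
    specificity = neither / (neither + reported_not_dispensed)
    ppv = reported_and_dispensed / (reported_and_dispensed + reported_not_dispensed)
    print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}, PPV = {ppv:.2f}")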
Classification of study designs is an important step in reviewing and synthesizing the relevant scientific literature as a basis for decision making. To support this process, Hartling and co-authors identified available tools for making such classifications and developed and tested a new tool after modifying an earlier instrument of the Cochrane Collaboration. Testing showed moderate reliability and low accuracy. The authors present explanations of this result and advise on how to use and improve the tool.
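Interrater reliability of this kind is commonly summarized with Cohen’s kappa, which corrects the observed agreement between two reviewers for the agreement expected by chance. The sketch below uses hypothetical design classifications, not the data of Hartling and co-authors:

    # Minimal sketch (hypothetical classifications): Cohen's kappa for two reviewers
    # assigning study-design labels with a classification tool.

    from collections import Counter

    rater_1 = ["RCT", "cohort", "cohort", "case-control", "RCT", "cohort", "case-control", "RCT"]
    rater_2 = ["RCT", "cohort", "case-control", "case-control", "cohort", "cohort", "case-control", "RCT"]

    n = len(rater_1)
    observed = sum(a == b for a, b in zip(rater_1, rater_2)) / n

    counts_1, counts_2 = Counter(rater_1), Counter(rater_2)
    labels = set(rater_1) | set(rater_2)
    expected = sum((counts_1[lab] / n) * (counts_2[lab] / n) for lab in labels)

    kappa = (observed - expected) / (1 - expected)
    print(f"observed agreement = {observed:.2f}, kappa = {kappa:.2f}")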
In connection with the study by Hartling et al, it is interesting to read the research letter by Bouchard et al, who used the available appraisal tools to compare the methodological quality of quantitative reviews and mixed-methods reviews. Based on their findings, they make a plea for consensus on the standards required for reporting on the quality of mixed-methods reviews.

Irrespective of study design, improving the participation of study subjects in clinical research remains challenging, and there is always room for improvement. Dunlop et al focused on a group that is disproportionately affected by chronic illness but is underrepresented in clinical research: African Americans. They found that preconsent education improved their willingness to participate in hypothetical clinical studies, but this should be further examined in the context of a clinical trial that is actively enrolling patients.

As has been advocated before in this journal, improving graphical representation is an issue of great interest. This has been addressed by Becker and his team, who apply graphical modeling to describe associations between different areas of functioning in head and neck cancer (HNC) patients. They infer that this could be the basis for a better understanding of functioning and for improving the rehabilitation of HNC patients.
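One common reading of graphical modeling in this setting is a Gaussian graphical model, in which partial correlations, obtained from the inverse of the covariance matrix, describe the conditional association between each pair of functioning domains given all the others. The sketch below uses simulated scores and assumes this formulation for illustration only; it is not necessarily the approach of Becker and his team:

    # Minimal sketch (simulated scores for four hypothetical functioning domains):
    # partial correlations from the inverse covariance (precision) matrix.

    import numpy as np

    rng = np.random.default_rng(0)
    cov = [[1.0, 0.5, 0.3, 0.1],
           [0.5, 1.0, 0.4, 0.2],
           [0.3, 0.4, 1.0, 0.3],
           [0.1, 0.2, 0.3, 1.0]]
    scores = rng.multivariate_normal(mean=[0, 0, 0, 0], cov=cov, size=200)

    precision = np.linalg.inv(np.cov(scores, rowvar=False))
    d = np.sqrt(np.diag(precision))
    partial_corr = -precision / np.outer(d, d)    # off-diagonal entries: partial correlations
    np.fill_diagonal(partial_corr, 1.0)
    print(np.round(partial_corr, 2))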
Speaking about better understanding, it is always good that attention is being paid to education in evidence-based medicine. This topic is addressed in a letter by Zwolsman et al, who developed and examined a Dutch version of the Berlin questionnaire for measuring EBM knowledge and skills [8].

References

[1] Bogardus ST Jr, Concato J, Feinstein AR. Clinical epidemiological quality in molecular genetic research: the need for methodological standards. JAMA 1999;281:1919-1926.
[2] Attia J, Thakkinstian A, D’Este C. Meta-analyses of molecular association studies: methodologic lessons for genetic epidemiology. J Clin Epidemiol 2003;56:297-303.
[3] McShane LM, Altman DG, Sauerbrei W, Taube SE, Gion M, Clark GM, for the Statistics Subcommittee of the NCI-EORTC Working Group on Cancer Diagnostics. REporting recommendations for tumour MARKer prognostic studies (REMARK). Eur J Cancer 2005;41:1690-1696.
[4] Ransohoff DF. How to improve reliability and efficiency of research about molecular markers: roles of phases, guidelines, and study design. J Clin Epidemiol 2007;60:1205-1219.
[5] Little J, Higgins JP, Ioannidis JP, et al. Strengthening the reporting of genetic association studies (STREGA): an extension of the strengthening the reporting of observational studies in epidemiology (STROBE) statement. J Clin Epidemiol 2009;62:597-608.
[6] Vandenbroucke JP. STREGA, STROBE, STARD, SQUIRE, MOOSE, PRISMA, GNOSIS, TREND, ORION, COREQ, QUOROM, REMARK... and CONSORT: for whom does the guideline toll? J Clin Epidemiol 2009;62:594-596.
[7] Knottnerus JA. Diagnostic prediction rules: principles, requirements and pitfalls. Prim Care 1995;22:341-363.
[8] Fritsche L, Greenhalgh T, Falck-Ytter Y, et al. Do short courses in evidence based medicine improve knowledge and skills? Validation of Berlin questionnaire and before and after study of courses in evidence based medicine. BMJ 2002;325:1338-1341.
