Novel tools and methods for designing and wrangling multifunctional, machine-readable evidence synthesis databases
One of the most important steps in the process of conducting a systematic review or map is data extraction and the production of a database of coding, metadata and study data. There are many ways to structure these data, but to date, no guidelines or standards have been produced for the evidence synthesis community to support their production. Furthermore, there is little adoption of easily machine-readable, readily reusable and adaptable databases: these databases would be easier to translate into different formats by review authors, for example for tabulation, visualisation and analysis, and also by readers of the review/map. As a result, it is common for systematic review and map authors to produce bespoke, complex data structures that, although typically provided digitally, require considerable efforts to understand, verify and reuse. Here, we report on an analysis of systematic reviews and maps published by the Collaboration for Environmental Evidence, and discuss major issues that hamper machine readability and data reuse or verification. We highlight different justifications for the alternative data formats found: condensed databases; long databases; and wide databases. We describe these challenges in the context of data science principles that can support curation and publication of machine-readable, Open Data. We then go on to make recommendations to review and map authors on how to plan and structure their data, and we provide a suite of novel R-based functions to support efficient and reliable translation of databases between formats that are useful for presentation (condensed, human readable tables), filtering and visualisation (wide databases), and analysis (long databases). We hope that our recommendations for adoption of standard practices in database formatting, and the tools necessary to rapidly move between formats will provide a step-change in transparency and replicability of Open Data in evidence synthesis.
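The wide-to-long translation this abstract describes can be sketched in a few lines. The paper's own tools are R functions; the snippet below is a hypothetical pure-Python illustration of the underlying idea (one column per variable in the wide form, one `(id, variable, value)` record per cell in the long form). The column names and `study_id` identifier are invented for the example, not the authors' actual schema.

```python
# Illustrative sketch only: the paper provides R-based functions; this
# re-expresses the wide <-> long translation idea in plain Python.
# "study_id", "taxon" and "outcome" are hypothetical column names.

def wide_to_long(rows, id_col):
    """Unpivot a wide table (one column per variable) into a long table
    with one (id, variable, value) record per data cell."""
    long_rows = []
    for row in rows:
        for key, value in row.items():
            if key != id_col:
                long_rows.append({id_col: row[id_col],
                                  "variable": key,
                                  "value": value})
    return long_rows

def long_to_wide(long_rows, id_col):
    """Reverse translation: gather (variable, value) pairs back into
    one row per identifier."""
    wide = {}
    for rec in long_rows:
        row = wide.setdefault(rec[id_col], {id_col: rec[id_col]})
        row[rec["variable"]] = rec["value"]
    return list(wide.values())

wide = [{"study_id": "S1", "taxon": "bird", "outcome": "abundance"}]
long_rows = wide_to_long(wide, "study_id")
# round-trips back to the original wide table
assert long_to_wide(long_rows, "study_id") == wide
```

The long form suits analysis (one observation per row); the wide form suits filtering and visualisation; a condensed, human-readable table can then be rendered from either.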
- Peer Review Report
- 10.7554/elife.85679.sa1
- Mar 3, 2023
Decision letter: An umbrella review of systematic reviews on the impact of the COVID-19 pandemic on cancer prevention and management, and patient needs
- Research Article
- 10.1186/2047-2382-3-18
- Jan 1, 2014
- Environmental Evidence
Unlike primary research, all reviews, whether systematic or not, have a limited 'shelf-life'. As new findings of primary research are reported, dated review findings become less reliable as an assessment of the best available evidence. This can be a particular issue when reviews report a quantitative synthesis that provides combined means of effects of interventions or impacts of exposures. Combined means can be changed substantially by the addition of findings from a new, large, well designed study. This is particularly true when the evidence base is weak, as is often the case in environmental management. Equally, new findings may enable a quantitative synthesis, when the previously published review may have concluded that no quantitative synthesis was possible. For these reasons a key component of systematic review (SR) methodology is the commitment to keep SRs updated on an appropriate timescale so that they track the development of the evidence base. Collaboration for Environmental Evidence (CEE) guidelines note that out of date SRs can be misleading and suggest an average time period of five years for updates [1]. However, science in different areas of environmental management advances at different rates, and so five years is very much a guideline and not a rule. The guidelines go on to suggest that:
- if a review is five or more years out of date, the CEE editorial team will contact the authors inviting them to update the review.
- if the authors are unable to take up this invitation, the review will be marked as 'update sought' and updates will be open to any interested party.
- in the case that a new review team is formed to update a review, they will be expected to liaise as much as possible with the original team, who may also be named as authors in the updated review to reflect the intellectual input into the review as a whole.
Registration of an update, as with a new SR, is through the submission of a protocol.
The protocol should cite the original protocol and be clear about how the new one differs from (and possibly improves on) the old. Updating is also an opportunity to learn from previous efforts and to improve methodology. It is therefore not expected that an update will be a faithful repetition of the original. However, changes should be highlighted and explained. The CEE Library of completed reviews (www.environmentalevidence.org/Reviews.html) includes over 20 SRs that are more than five years old and none have so far been updated. The majority were conducted as part of the process of developing systematic review methodology for environmental management and could no doubt be improved in many ways. Updates can thus be in the form of methodology, as reflected in the development of CEE Guidelines (now at version 4.2), as well as adding new research findings. Some reviews can be even more dated than their publication date suggests, as the searches may have been conducted years earlier. The first example of an update in progress has recently been published as a protocol [2] and relates to a CEE SR originally published in 2010, in which the search was conducted in 2008 [3]. Besides updating CEE SRs, an exciting opportunity exists to update other reviews and meta-analyses to meet CEE standards. In general, published reviews and meta-analyses are of very variable standard [4], and the raising of these standards is a key objective of CEE. It seems sensible then to have dual objectives of updating old reviews by adding new findings and updating the methodology and conduct of the review to CEE standards. We call on authors of reviews and meta-analyses to consider if it is the right time to update their review and to register their protocol with CEE.
Again, registration would be the same process except that the protocol would likely be entirely new and refer only to the former review article as a basis for the CEE SR. Of course many original authors may not be motivated to periodically update a review, and so I would like to encourage anyone interested in conducting a SR and
- Research Article
- 10.1007/s11606-012-2053-1
- May 31, 2012
- Journal of General Internal Medicine
Methods Guide for Authors of Systematic Reviews of Medical Tests: A Collaboration Between the Agency for Healthcare Research and Quality (AHRQ) and the Journal of General Internal Medicine
- Research Article
- 10.1111/dmcn.14949
- Jun 6, 2021
- Developmental medicine and child neurology
To evaluate the methodological quality of recent systematic reviews of interventions for children with cerebral palsy (CP) in order to determine the level of confidence in the reviews' conclusions. A comprehensive search of 22 databases identified eligible systematic reviews with and without meta-analysis published worldwide from 2015 to 2019. We independently extracted data and used A Measurement Tool to Assess Systematic Reviews-2 (AMSTAR-2) to appraise methodological quality. Eighty-three systematic reviews met strict eligibility criteria. Most were from Europe and Latin America and reported on rehabilitative interventions. AMSTAR-2 appraisal found critically low confidence in 88% (n=73) because of multiple and varied deficiencies. Only 7% (n=6) had no AMSTAR-2 critical domain deficiency. The number of systematic reviews increased fivefold from 2015 to 2019; however, quality did not improve over time. Most of these systematic reviews are considered unreliable according to AMSTAR-2. Current recommendations for treating children with CP based on these flawed systematic reviews need re-evaluation. Findings are comparable to reports from other areas of medicine, despite the general perception that systematic reviews are high-level evidence. The required use of current widely accepted guidance for conducting and reporting systematic reviews by authors, peer reviewers, and editors is critical to ensure reliable, unbiased, and transparent systematic reviews.
What this paper adds: Confidence was critically low in the conclusions of 88% of systematic reviews about interventions for children with cerebral palsy. Quality issues in the sample were not limited to systematic reviews of non-randomized trials, or to those about certain populations of CP or interventions. The inclusion of meta-analysis did not improve the level of confidence in these systematic reviews.
Numbers of systematic reviews on this topic increased over the 5 search years but their methodological quality did not improve.
- Research Article
- 10.1111/jebm.12505
- Dec 1, 2022
- Journal of Evidence-Based Medicine
Study within a review (SWAR)
- Research Article
- 10.1111/j.1525-1497.2004.41001.x
- Dec 1, 2004
- Journal of General Internal Medicine
A call for systematic reviews
- Research Article
- 10.1186/s13750-024-00329-2
- Mar 29, 2024
- Environmental Evidence
The Environmental Evidence for the Future (EEF) Initiative emerged in response to the challenges and opportunities presented by the UK’s decision to leave the European Union and its associated Environmental Frameworks. The Natural Environment Research Council (NERC), working closely with the Collaboration for Environmental Evidence (CEE) and UK stakeholders, developed the initiative to identify and address crucial evidence gaps, offering a long-term vision for environmental policy and sustainability. The EEF Initiative progressed through three stages: strategic priority identification, NERC panel award selection, and the production of Systematic Maps of existing evidence. The first stage involved collaborative workshops across the UK to identify key knowledge gaps in environmental science. The subsequent prioritisation resulted in 65 challenges across 10 thematic areas. The second stage saw NERC initiating, with CEE support, an open call for research proposals emphasising the use of evidence synthesis methodology. The selection process, balancing topic importance and applicant expertise, led to funding for five projects. The final stage involved the production of Systematic Maps of existing evidence based on the CEE Guidelines and Standards, providing a structured overview of existing literature on specific topics. The EEF Initiative demonstrated effective collaboration between UKRI (NERC), an independent non-profit (CEE), academia, and government agencies, addressing critical environmental challenges through rigorous evidence synthesis methodologies. The programme enhanced understanding and utilisation of these methodologies within the research community. Key lessons include the importance of inclusive priority-setting, differentiation between broad policy questions and specific Systematic Map questions, recognition of the value of Systematic Maps, and the role of experience in evidence synthesis teams. 
As policymakers and researchers navigate environmental policies in a resource-constrained environment, the EEF Initiative highlights the cost-effectiveness and efficiency of systematic mapping and review processes for evidence-based decision-making. The success of funding through NERC sets a precedent for future thematic, evidence-focused programmes, emphasising the need for continued support in developing synthesis skills among researchers and encouraging direct government commissions for targeted and responsive evidence. The EEF Initiative serves as a model for effective collaboration, providing valuable insights into addressing evidence gaps and shaping evidence-based policymaking in an ever-evolving environmental landscape.
- Research Article
- 10.1542/peds.2021-053852b
- May 1, 2022
- Pediatrics
The age at which children enter school represents a transitional period between early childhood and adolescence that involves increasing autonomy, interaction with peers, and exposure to environments outside the home. Although mortality is generally much lower in the 5 to 9 age group compared with infancy and early childhood, there are many preventable causes of mortality, morbidity, and disability that emerge in this age group, including injuries, noncommunicable diseases, and vaccine-preventable and highly treatable infections.1 Partly because of relatively low mortality rates and less frequent contacts with the health system, school-age children and younger adolescents ages 5 to 14 have been referred to as the “missing middle,” in that there is a dearth of robust data on key health indicators, morbidity burden, and cause-specific mortality in this group.2 Many health issues that have a high burden in early childhood can persist in older children, especially in low- and middle-income countries (LMIC), resource-constrained settings, and marginalized communities worldwide. Undernutrition and infections occurring in the context of poverty remain leading causes of morbidity and mortality in school-age children living in LMIC,3 whereas those children in higher-income settings are more likely to die due to injuries or noncommunicable disease (NCD). In addition, the prevalence of overweight and obesity in children and adolescents has increased steadily over the last few decades,4 though the rate of these increases varies widely among countries.5 New risk factors relating to diet, lifestyle, mental health, injuries, and NCDs also become more prominent as children approach and enter adolescence, many of which can contribute to the development of chronic NCDs over the life course.
Within this period, school-age children begin to establish healthy lifestyle habits (eg, diet, physical activity, avoidance of substance use), and are learning about sexual and reproductive health and rights, as well as the measures they can take to protect themselves and others. This represents a window of opportunity for educational interventions to support good health, optimal development, and well-being. A growing body of evidence suggests that school-based and digital platforms and delivery strategies are promising tools that aid in the delivery of health interventions to older children. The methodology and reviews described herein contributed to the portion of the upcoming 2022 Lancet Optimizing Child and Adolescent Health and Development Series6 related to school-age child and adolescent health interventions. This Lancet Series is the product of an ongoing academic collaboration involving global child health researchers worldwide, including many who are authors on articles within this supplement. The aim of the specific Lancet Series article citing this supplement is to provide a comprehensive overview of systematic reviews describing the most recent evidence for effective interventions to support maternal, newborn, child, and adolescent health and development from preconception through to 20 years of age. Figure 1 provides an overview of the key child health domains, and a breakdown of the intervention review topics addressing key risk factors covered by the articles included in this journal supplement. On the basis of work done in previous comprehensive overviews of interventions for child and adolescent health (eg, Disease Control Priorities, 3rd edition7; Lancet Adolescent Health Commission8), we identified a comprehensive set of key child health domains that represented priority areas for interventions to address modifiable risks for the major causes of child mortality and morbidity.
The factors that informed which domains were covered in this supplement included: conditions with a high global burden of disease, conditions with disproportionate impacts on vulnerable and marginalized populations, potential to support improved human capital development across the life course, and pragmatic considerations including whether the topic had recently been covered elsewhere. In cases where the child health domain was deemed too broad in scope for a single review (eg, infectious diseases), the subtopics for individual reviews were also chosen on the basis of these factors. The age group of specific interest for these reviews was older school-age children (ages 5–9.9), though the period of early adolescence (ages 10–14.9) was also recognized as an important area of overlap and transition. The general outcomes of interest aligned with those chosen through consensus by the Lancet Series working group. These included, but were not limited to, mortality, severe morbidity, disability, growth and development, knowledge and behavior, and indicators of improved human capital development such as academic achievement. The methodological approaches taken, and the child health domains covered in this supplement of reviews, were informed by a broad initial literature-scoping and evidence-mapping process to identify key health interventions and associated evidence for their effectiveness in the form of systematic reviews. This was done across all domains, from preconception and pregnancy to ages 0 to 20, to inform the 2022 Lancet Optimizing Child and Adolescent Health and Development Series.6 This involved leveraging existing large-scale intervention overviews (eg, Disease Control Priorities 3rd edition, Lancet Series) that had already highlighted existing effective interventions and the most recent systematic reviews detailing the evidence for their effectiveness. Additional targeted searches for newer interventions and systematic reviews in each domain were also conducted.
Through this evidence-mapping process, we explored coverage and extent of LMIC-specific evidence across all child health domains to identify areas where school-age evidence was lacking and determined that there were significant gaps in existing evidence for intervention effectiveness in school-age children. We funneled the reviews identified during this initial scoping process that contained studies covering school-age children and adolescents into the individual reviews for each domain of child health covered in this supplement. We elected to conduct targeted overviews of systematic reviews if there was deemed to be a large body of existing evidence syntheses. In cases where there was a lack of evidence syntheses of intervention effectiveness for a given domain of school-age child health, conventional systematic reviews of primary literature (ie, experimental studies) were conducted. The general methodology for these 2 approaches is described below. See Table 1 and Fig 1 for a summary of the review methods used for each child health domain, and Fig 2 for a breakdown of the main methodology followed in each type of review. For those child health domains that encompassed a variety of intervention types addressing a wide range of risk factors and health conditions, and for which the initial scoping process identified a variety of existing systematic reviews of intervention effectiveness, an overview of systematic reviews was undertaken. This approach was taken to ensure comprehensiveness, reduce duplication of review efforts, and make the review process feasible. In addition to incorporating those relevant reviews previously identified in the initial literature-scoping and evidence-mapping exercise, tailored searches were executed in several databases (eg, Medline, Cochrane Database of Systematic Reviews, Campbell Library) to identify literature published up until the end of 2020.
Evidence derived from Cochrane reviews and other high-quality systematic reviews that synthesized evidence from randomized controlled trials and quasi-experimental studies examining the effectiveness of interventions was prioritized for inclusion. A first pass of title and abstract screening for relevance was conducted, followed by a full text screening that was done by at least 2 reviewers against inclusion criteria. Two reviewers independently filled a standardized data abstraction form to capture review characteristics, the characteristics of included studies and interventions (eg, age coverage, country representation, delivery platform), and pooled-effect estimates (eg, risk ratios, odds ratios, mean differences, 95% confidence intervals) derived from meta-analyses where they were reported. The main outcomes of interest across the reviews included measures of child morbidity, mortality, development, academic achievement, and mental and physical well-being. The extracted data were then matched among reviewers to check for errors and ensure consistency, and then consolidated into a single table for inclusion in the article. The AMSTAR 2 tool12 was used for review quality assessment, which was also conducted in duplicate, with any disagreements in ratings resolved by consensus or the involvement of a third reviewer. If for a given domain the initial evidence-mapping exercise revealed that the existing evidence-synthesis literature was lacking for the school-age group, we proceeded with a conventional systematic review of primary literature. All systematic reviews were reported in accordance with the reporting guidance provided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses criteria.13 Search strategies were developed using the population, intervention, control, and outcomes methodology, relevant medical subject headings terms, and keywords derived from the scoping search.
The search terms were adapted for use in other bibliographic databases in combination with database-specific filters for controlled trials, where these were available. Searches for the individual, domain-specific reviews were conducted in a variety of databases, including but not limited to: PubMed, Embase, Medline, PsycINFO, Ovid SP, The Cochrane Library, Cochrane Central Register of Controlled Trials, Cochrane Methodology Register, and the World Health Organization regional databases. Evidence derived from LMIC was prioritized for synthesis, though evidence from high-income country (HIC) settings was leveraged to highlight whether effective interventions exist in cases where LMIC evidence was sparse. Gray literature searches and additional hand searching were conducted in Google Scholar and reference lists of relevant articles, book chapters, and reviews. After removal of duplicate studies, a multistage screening process was performed to select studies that met the eligibility criteria. Each title and abstract was assessed by at least 1 reviewer, who excluded those that were deemed irrelevant. At the full-text review stage, at least 2 reviewers assessed all full texts. Any disagreements in inclusion decisions were resolved by discussion and, where necessary, by consulting a third reviewer. At this stage, reasons for exclusion were documented. The methods section of each individual review in this supplement describes their selection and eligibility criteria, which differed depending on the child health domain being assessed. Data from included studies were independently extracted and coded by 2 review authors using standardized, previously piloted data extraction forms, which sought general study characteristics, details of the population, intervention, comparison groups, and quantitative outcome data.
Data extraction forms were matched and checked, and if necessary, a third review author was consulted in the event of any disagreements to establish consensus. Assessment of risk of bias for included studies was conducted according to criteria and tools outlined in the Cochrane Effective Practice and Organization of Care guidelines14 for randomized trials, nonrandomized trials, controlled before–after studies, and interrupted time series, and the Cochrane Handbook for Systematic Reviews of Interventions.15 Assessments were conducted independently by 2 review authors; scores were compared, and a final risk of bias judgement was reported for the included studies of each systematic review. Randomized trials were assessed using the Cochrane Risk of Bias tool15 across the following domains: randomization process, deviations from the intended interventions (blinding of personnel, participants, and outcome assessment), missing outcome data, outcome measurement, the selection of the reported result, disclosure of funding, and conflicts of interest. Studies were assigned an overall risk of bias judgement accordingly (low risk, high risk, or some concerns/medium risk). Quasi-experimental study designs were assessed using the Risk of Bias Tool for Nonrandomized Studies of Interventions (ROBINS-I).15,16 Studies were assessed according to the following domains: bias because of confounding, bias in selection of study participants, bias in classification of interventions, bias because of deviations from intended interventions, bias because of missing data, bias in measurement of outcomes, and bias in selection of the reported result. Each study was assigned an overall risk of bias judgement (low, moderate, serious, and critical risk). Meta-analyses were conducted where possible using Review Manager 5.4 software.17 Randomized controlled trials and cluster-randomized controlled trials were analyzed separately from quasi-experimental study designs.
To mitigate heterogeneity within included studies, a random-effects meta-analysis was used for pooled outcomes. For those situations where meta-analysis was not possible, data on the effect of interventions from individual studies were tabulated and reported, and a narrative synthesis was conducted for each key intervention domain. Where there was a sufficient number of comparable studies (in both interventions and outcome), a summary of the intervention effect and a measure of quality for key outcomes were produced using the Grading of Recommendations Assessment, Development and Evaluation approach.18 The Grading of Recommendations Assessment, Development and Evaluation approach considers 5 domains (study limitations, consistency of effect, imprecision, indirectness, and publication bias) to assess the quality of the body of evidence for each outcome. The evidence was downgraded from “high quality” by 1 level for serious (or by 2 levels for very serious) limitations, depending on assessments for risk of bias, indirectness of evidence, serious inconsistency, imprecision of effect estimates, or potential publication bias. The aim of the authors of this supplement of reviews is to comprehensively assess the available evidence for the effectiveness of interventions to improve health and well-being in school-age children and adolescents. The initial literature-scoping and evidence-mapping process, followed by the different review approaches taken, has helped to maximize the scope covered across this set of reviews, and has allowed us to provide the most comprehensive assessment of the state of the published literature covering interventions for school-age children and adolescents.
The individual reviews in this supplement have also highlighted child health domain-specific gaps in the evidence for both primary literature in the school-age group, and gaps in existing evidence syntheses. It is important to note that, for the reviews within this supplement, the descriptions of intervention effects are meant to provide an overview of what is currently known in terms of evidence for effectiveness, and do not imply that other interventions were ineffective simply because there was an evidence gap. Given the limited space and large scope, it was only possible to provide the highlights of specific comparisons and outcomes in each of the results sections. Comprehensive tables of study characteristics, outcomes, and effect estimates are provided in both the main articles and appendices. Although we were specifically interested in focusing on LMIC research, this was only feasible for a few review topics (eg, sexual and reproductive health and rights, neglected tropical diseases) because of a dearth of literature. Instead of being used to attempt to generalize their effectiveness to LMIC settings, evidence of intervention effectiveness in HIC settings is included and described to establish that effective interventions do indeed exist and may differ in their impact between settings. This approach has previously been used in the context of adolescent health interventions.19 This evidence from HIC could act as a starting point for future research and implementation in various LMIC settings, with program components tailored to local contexts. In the case of those reviews taking the overview of systematic reviews approach, we were limited to including only those primary studies already included in systematic reviews and could not cover each subdomain in depth.
Thus, we were unable to identify and include those primary studies that may not have been included in systematic reviews because of studies not being identified in review authors’ database searches, not meeting their inclusion criteria, or falling out of the time frame of the review. Furthermore, some systematic reviews of primary literature were unable to perform meta-analyses because of high heterogeneity or a lack of high-quality evidence from randomized trials, which makes synthesizing the existing evidence more difficult.
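The random-effects pooling mentioned in the methods above can be illustrated with a short worked sketch. The supplement's analyses were run in Review Manager 5.4; the function below is a hypothetical pure-Python implementation of the standard DerSimonian-Laird estimator (one common random-effects method), shown only to make the pooling step concrete. Inputs are per-study effect estimates and their variances.

```python
import math

def dersimonian_laird(effects, variances):
    """Illustrative DerSimonian-Laird random-effects pooling.
    Not the supplement's actual analysis code (that used Review
    Manager 5.4); a sketch of one standard estimator."""
    k = len(effects)
    # inverse-variance (fixed-effect) weights
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sw
    # Cochran's Q heterogeneity statistic
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    # between-study variance, truncated at zero
    tau2 = max(0.0, (q - (k - 1)) / c)
    # random-effects weights add tau^2 to each study's variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2

# hypothetical mean differences and variances from three trials
pooled, se, tau2 = dersimonian_laird([0.2, 0.5, 0.3], [0.04, 0.09, 0.05])
ci_95 = (pooled - 1.96 * se, pooled + 1.96 * se)
```

When the studies are homogeneous (Q below its degrees of freedom), tau-squared truncates to zero and the result coincides with the fixed-effect inverse-variance pool, which is why heterogeneous evidence bases widen the confidence interval here.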
- Research Article
- 10.1186/s13750-017-0102-2
- Sep 20, 2017
- Environmental Evidence
The eligibility screening step of a systematic review or systematic map (sometimes referred to as ‘study selection’, ‘evidence selection’ or ‘inclusion screening’) determines the scope of the evidence that may answer the review or map question. Eligibility screening involves the development, testing and application of eligibility criteria (inclusion and exclusion criteria) by an evidence synthesis review team, based on methods pre-specified in the review or map protocol. Some parts of the process require judgement, meaning that consistent and transparent reporting of the eligibility criteria and the process for applying them are essential in order to reduce the risk of introducing errors or bias. The existing Collaboration for Environmental Evidence (CEE) Guidelines for Systematic Reviews in Environmental Management (version 4.2, March 2013) give relatively limited guidance on how to conduct eligibility screening. In this paper we provide more in-depth information on good practice methods for this step of evidence synthesis, based on a critical consideration of existing guidance and current practice. Our aim is to provide recommendations to support those conducting CEE systematic reviews or systematic maps for environmental management questions; however, the methods we describe are generic and should be broadly applicable across a wide range of environmental research topics.
- Research Article
- 10.1016/j.envsci.2021.12.019
- Jan 13, 2022
- Environmental Science & Policy
Do environmental systematic reviews impact policy and practice? Author perspectives on the application of their work
- Front Matter
- 10.1186/s13750-017-0092-0
- May 31, 2017
- Environmental Evidence
The first international Collaboration for Environmental Evidence (CEE) conference took place in August 2016 at the Swedish Museum of Natural History in Stockholm with nearly 100 participants from 14 countries. This conference reflected and contributed to the growth of a global network of people interested in the production and use of evidence syntheses in environmental management. The conference also provided an opportunity to identify emerging themes and reflect on those ideas and perspectives to help direct future activities of the CEE and the broader community. An increasingly engaged community of practice was evident but there is uneven distribution of experience, resources, capacity, and commitment to evidence synthesis in different sectors and regions. There is much opportunity to bring academics, practitioners, and other partners together which will help to further demonstrate impact of evidence synthesis activities and enhance relevance. As the discipline evolves there is growing interest in rapid evidence synthesis but the benefits and risks of that approach remain unclear. There was also a recognition that improvements in empirical science will enhance the likelihood that more studies can be fully exploited as part of evidence synthesis. There are opportunities for capacity building, engaging the next generation (e.g., students), and enhancing connections within and beyond the CEE community to advance evidence-based environmental management. It is our desire that this paper will serve as a template for future CEE activities (i.e., where to invest resources) but also as an invitation to those that were unable to attend to participate in CEE and the evidence-based environmental management movement in whichever ways resonate with them.
- Research Article
- 10.1002/cl2.96
- Jan 1, 2012
- Campbell Systematic Reviews
PROTOCOL: Later School Start Times for Supporting the Education, Health and Well‐being of High School Students
- Front Matter
- 10.1016/j.jclinepi.2018.06.001
- Jun 15, 2018
- Journal of Clinical Epidemiology
The need for consensus on consensus methods
- Research Article
- 10.1186/s13643-018-0893-4
- Jan 14, 2019
- Systematic Reviews
Background: Systematic reviews of research evidence have become an expected basis for decisions about practice guidelines and policy in the health and welfare sectors. Review authors define inclusion criteria to help them determine which studies to search for and include in their reviews. However, these studies may still vary in the extent to which they reflect the context of interest in the review question. While most review authors would agree that systematic reviews should be relevant and useful for decision makers, there appear to be few, if any, well-known, established methods for supporting review authors in assessing the transferability of review findings to the context of interest in the review. With this systematic mapping and content analysis, we aim to identify whether checklists exist to support review authors in considering transferability early in the systematic review process. The secondary aim was to develop a comprehensive list of factors that influence transferability, as discussed in existing checklists.
Methods: We conducted a systematic mapping of checklists and performed a content analysis of the criteria included in the identified checklists. In June 2016, we conducted a systematic search of eight databases to identify checklists for assessing the transferability of findings from primary or secondary research, without limitations on publication type, status, language, or date. We also conducted a gray literature search and searched the EQUATOR repository of checklists for any relevant documents. We used search terms such as modified versions of "transferability," "applicability," "generalizability," etc. and "checklist," "guideline," "tool," "criteria," etc. We did not include papers that discussed transferability at a theoretical level, or checklists for assessing the transferability of guidelines to local contexts.
Results: Our search resulted in 11,752 titles, which were screened independently by two review authors. The 101 articles considered potentially relevant were subsequently read in full text by two authors independently and assessed for inclusion. We identified 31 relevant checklists. Six of these examined the transferability of economic evaluations, and 25 examined the transferability of primary or secondary research findings in health (n = 23) or social welfare (n = 2). The content analysis is based on the 25 health and social welfare checklists. We identified seven themes under which we grouped categories of checklist criteria: population, intervention, implementation context (immediate), comparison intervention, outcomes, environmental context, and researcher conduct.
Conclusions: We identified a variety of checklists intended to support end users (researchers, review authors, practitioners, etc.) in assessing transferability or related concepts. While four of these checklists are intended for use in systematic reviews of effectiveness, we found no checklists for qualitative evidence syntheses or for the field of social welfare practice or policy. Furthermore, none of the identified checklists for review authors included guidance on how to assess transferability, or how to present assessments, in a systematic review. The results of the content analysis can serve as the basis for developing a comprehensive list of factors to be used in an approach that supports review authors in systematically and transparently considering transferability from the beginning of the review process.
- Research Article
- 10.1002/cl2.1284
- Oct 17, 2022
- Campbell Systematic Reviews
Systematic reviews are increasingly used to inform decision-making in health, education, social care and environmental protection. However, decision makers still experience barriers to using reviews, including not knowing how findings might translate to their own contexts, and a lack of collaboration with systematic review authors. The TRANSFER approach is a novel method that aims to support review authors in systematically and transparently collaborating with stakeholders to consider context and the transferability of review findings from the beginning of the review process. Such collaboration is intended to improve the usefulness and relevance of review findings for decision makers. We aim to explore the user experience of the TRANSFER approach conversation guide, and in doing so gain a better understanding of the role and perceived value of stakeholder engagement in systematic reviews for informed decision-making. We conducted four user tests of groups using the guide, organized around simulated meetings between review authors and stakeholders. Review authors led the meetings using the TRANSFER approach conversation guide. We audio-recorded and observed the meetings, collected feedback forms and conducted semi-structured interviews with review authors following each meeting. We analysed the data using framework analysis to examine the user experience of the TRANSFER approach conversation guide and of stakeholder engagement more generally. Seventeen participants in four user groups took part in the user tests. Most participants were generally positive toward the structured approach using the conversation guide, and felt it would be useful in systematic review projects. We observed examples of misunderstanding of the terminology included in the guide, and received multiple suggestions for how to make the conversation guide more user-friendly. We also observed numerous challenges related to the hypothetical nature of a user test, including participants' lack of familiarity with the review question/topic and lack of preparation for the meeting. Review authors and stakeholders are positive toward using a structured approach to guide collaboration within the context of a systematic review. The TRANSFER conversation guide helps participants to discuss the review question and context in a structured way. Such structured collaboration could help to improve the usefulness and relevance of systematic reviews for decision-making by improving the review question, the inclusion criteria and the consideration of transferability of review findings. The conversation guide needs to be modified to improve the user experience. Further research is needed to explore stakeholder collaboration and the use of the TRANSFER conversation guide in systematic review processes.