Still Searching for Understanding: The Importance of Diverse Research Designs, Methods, and Perspectives
Evidence-based medicine and evidence hierarchies have been widely adopted and have strongly influenced decision making across many fields, including clinical aphasiology. However, questions remain about the creation, usefulness, and validity of current evidence hierarchies. Building on ideas about scientific approaches and evidence originally shared by Elman (1995, 1998, 2006), this article reviews the history of evidence hierarchies and argues that greater diversity of research designs, methods, and perspectives will improve understanding of the numerous and complex variables associated with aphasia intervention. Researchers and clinicians are encouraged to synthesize diverse types of scientific evidence. Concepts from a wide variety of fields, including philosophy of science, research design and methodology, and precision medicine, are brought together in an attempt to focus research on the scientific understanding of aphasia treatment effects. It is hoped that this article will stimulate thought and foster discussion, encouraging high-caliber research of all types, and that by incorporating diverse research designs, methods, and perspectives, clinical aphasiologists will become better able to provide effective, personalized treatments, ensuring that each person with aphasia is able to improve their communication ability and quality of life.
- Front Matter
11
- 10.1016/j.jhsa.2005.08.003
- Sep 1, 2005
- The Journal of Hand Surgery
Levels of Evidence and the Journal of Hand Surgery
- Front Matter
3
- 10.1016/j.xnsj.2020.100019
- Aug 5, 2020
- North American Spine Society Journal (NASSJ)
Evidence-based medicine and clinical decision-making in spine surgery
- Research Article
200
- 10.1097/prs.0b013e3182195826
- Jul 1, 2011
- Plastic and Reconstructive Surgery
The Level of Evidence Pyramid: Indicating Levels of Evidence in Plastic and Reconstructive Surgery Articles
- Research Article
5
- 10.1542/peds.2020-049403
- Jul 1, 2021
- Pediatrics
Family Caregiver Partnerships in Palliative Care Research Design and Implementation.
- Research Article
4
- 10.1097/brs.0b013e318134eb03
- Sep 1, 2007
- Spine
Evidence-Based Medicine Summary Statement
- Research Article
28
- 10.1097/ta.0b013e318256dc4d
- Jun 1, 2012
- Journal of Trauma and Acute Care Surgery
Evidence-based medicine is "the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients."1 This becomes complicated when busy health care providers are faced with the task of summarizing "the current best evidence." Systematic reviews, such as those published by the Cochrane Collaboration,2 serve this purpose by appraising and distilling the daunting amount of information available. These reviews are commonly assigned a level of evidence, which gauges the confidence of the estimates reported by existing studies. Thus, levels of evidence pertain to the knowledge generated by the summative collection of research on a specific topic. Although the evidence base can come from a single study, more often it is the final step of a long scientific journey, in which experts collect, appraise, and summarize the findings of several individual studies using a specific, standard methodology.1 Different systems to define the hierarchy of evidence have been proposed by renowned groups, including the pioneering Cochrane Collaboration,2 the Oxford Centre for Evidence-Based Medicine (OCEBM),3 the US Preventive Services Task Force,4 and the Evidence-Based Practice Center (EPC) program of the US Agency for Healthcare Research and Quality.5 The recently launched Grades of Recommendation, Assessment, Development, and Evaluation (GRADE)6 system follows a detailed stepwise process to rate evidence and to determine the strength of recommendations in systematic reviews, health technology assessments, and clinical practice guidelines. According to the GRADE group Web site, more than 50 organizations have endorsed their system, including the World Health Organization, the American College of Physicians, the American College of Chest Physicians, the American Endocrine Society, the American Thoracic Society, the Canadian Agency for Drugs and Technology in Health, and the UK's National Institute for Health and Clinical Excellence. 
The British Medical Journal encourages authors of clinical guidelines to use the GRADE system.7 The Cochrane Collaboration has also adopted the principles of the GRADE system for evaluating the quality of evidence for outcomes reported in systematic reviews.8 Yet, a systematic review is not always available; how, then, will busy health care providers manage the formidable volume of information that becomes available every day? "How does the article I read today change (or not) what I will recommend to my patients tomorrow?" To address this imperative, several scientific journals have recently adapted grading systems to assess the level of evidence of individual articles in an effort to provide guidance to their readers.9–12 This year, The Journal of Trauma joined the discourse by requiring authors to assign levels of evidence to their own clinically oriented studies. As detailed previously, the existing grading systems (e.g., GRADE) were originally designed to rate the summative body of evidence, not the level of evidence of individual articles. The grading of evidence in individual studies is a middle step in determining the hierarchy of evidence and comprises a judgment regarding the confidence and uncertainty emanating from a particular study. Several aspects of the investigation are examined, including, but not limited to, appropriate design to address well-formulated research questions, appropriately measured outcomes, assessment of inferential error, risk of bias, and control of confounding. The results of this appraisal inform the reader about the level of uncertainty of the study's findings and how much it adds to the existing knowledge on the topic. 
As part of the process of assessing the overall level of evidence, the GRADE system rates the evidence from individual studies into one of four categories ranging from high to very low (Table 1).13 Study design is GRADE's critical measure for classifying the quality of the evidence: for therapeutic studies, randomized clinical trials (RCTs) always start as High and observational studies as Low. From this starting point, evidence may be downgraded or upgraded through the evaluation of several specific domains as follows: (1) risk of bias, (2) imprecision, (3) inconsistency, (4) indirectness, (5) publication bias, (6) effect size, (7) existence of a dose-response pattern, and (8) effect of plausible confounding on findings (Table 2). When the study addresses diagnostic accuracy, however, a slightly different classification applies.14
TABLE 1: GRADE Quality Assessment Criteria
TABLE 2: Factors That May Decrease or Increase the Quality of Evidence
GRADE is a somewhat complicated system which, as its own authors recognize, involves an element of subjectivity.15 In addition, there is the negative connotation created by classifying a study as low quality, a term that could be construed as a lack of scientific rigor on the part of the authors. As the PRISMA authors wisely put it, "quality is often the best the authors have been able to do," and they recommended the term risk of bias instead.16 Quality should be assessed when accepting or rejecting a manuscript for publication, whereas the evidence level of individual studies involves judging a study's level of uncertainty and risk of bias. Rather than introducing a new term (such as risk of bias) with which readers and authors may not be familiar, we propose to use the established nomenclature "evidence level of individual studies" (ELIS). Consensus statements such as the GRADE system and the OCEBM guidelines can serve as the basis upon which to build a standard to assess ELIS. 
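To make the starting-point-and-adjustment logic concrete, the GRADE scheme described above can be sketched as a small function. This is a hypothetical simplification for illustration only: real GRADE assessments weigh each domain through structured judgment, not by counting concerns.

```python
# Simplified, hypothetical sketch of GRADE's rating logic for therapeutic
# studies: RCTs start "High", observational studies start "Low", and the
# rating moves down or up one step per serious concern or strength.

LEVELS = ["Very low", "Low", "Moderate", "High"]

def grade_rating(is_rct, downgrades=0, upgrades=0):
    """Return a GRADE-style quality level for a single therapeutic study.

    downgrades: count of serious concerns (risk of bias, imprecision,
                inconsistency, indirectness, publication bias).
    upgrades:   count of strengths (large effect, dose-response pattern,
                plausible confounding that strengthens the finding).
    """
    start = 3 if is_rct else 1          # High for RCTs, Low for observational
    level = start - downgrades + upgrades
    level = max(0, min(3, level))       # clamp to the four-level scale
    return LEVELS[level]

print(grade_rating(is_rct=True, downgrades=2))   # RCT, two serious concerns -> Low
print(grade_rating(is_rct=False, upgrades=1))    # observational, large effect -> Moderate
```

In the real system a single very serious concern can move the rating by two levels, which this one-step-per-domain sketch deliberately ignores.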
The proposed ELIS system retains study design as a major factor in the classification but recognizes that each type of clinical question (therapeutic, diagnostic accuracy, etc.) demands different types of study designs. The proposed ELIS framework (Table 3) is heavily based on the previous groundbreaking work of the GRADE workgroup, the OCEBM 2009 and 2011 guidelines, and the Journal of Bone and Joint Surgery's adaptation of OCEBM's materials, which have been well accepted by the scientific community and shown to have acceptable reliability.17,18 The determination of ELIS involves three steps.
TABLE 3: Proposed Evidence Level of Individual Studies (ELIS)
ELIS STEP 1: DEFINE STUDY TYPE
Therapeutic and care management studies evaluate a treatment's efficacy, effectiveness, and/or potential harm, including comparative effectiveness research and investigations focusing on adherence to standard protocols, recommendations, guidelines, and/or algorithms. Prognostic and epidemiologic19 studies assess the influence of selected predictive variables or risk factors on the outcome of a condition. These predictors are not under the control of the investigator(s). Epidemiologic investigations describe the incidence or prevalence of disease or other clinical phenomena, risk factors, diagnosis, prognosis or prediction of specific clinical outcomes, and the quality of health care. Diagnostic test or criteria20 studies describe the validity and applicability of diagnostic tests/procedures or of sets of diagnostic criteria used to define certain conditions (e.g., the definitions of adult respiratory distress syndrome, multiple organ failure, or postinjury coagulopathy). Economic and value-based evaluations focus on which type of care management can provide the highest quality or greatest benefit for the least cost. Several types of economic evaluation studies exist, including cost-benefit, cost-effectiveness, and cost-utility analyses. 
More recently, Porter21,22 proposed value-based health care evaluations, in which value is defined as the health outcomes achieved per dollar spent. Systematic reviews and meta-analyses (SR/MA) evaluate the body of evidence on a topic; meta-analyses specifically include the quantitative pooling of data. Guidelines are systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances.23
ELIS STEP 2: DEFINE THE RESEARCH DESIGN
Table 3 reflects a different hierarchy of designs for each of the previously mentioned study types. For therapeutic studies, RCTs remain the paragon of biomedical research, and other treatment study designs still rank lower than level I evidence. This is because the processes used to conduct RCTs minimize the risk of confounding factors influencing the results. As a result, the findings generated by RCTs are likely to be closer to the true effect than the findings generated by other research methods. Prognostic studies allow for more flexibility in study design, with prospective cohort studies with preestablished hypotheses generating less uncertainty, and consequently stronger evidence, than case-control designs. It is important to clearly define case-series versus comparative, cohort versus case-control, and prospective versus retrospective studies. Case-series studies evaluate a group of patients submitted to a type of care/procedure/test without a suitable comparison group. A comparison group can be a group of patients with similar characteristics who received a different type of care/procedure/test; alternatively, the investigator can compare the same group of patients before and after an intervention. It is not difficult to see that the lack of a comparator makes us less confident in the evidence and less likely to adopt the new procedure. 
Of course, if this is an innovative treatment of a lethal disease for which there is no available treatment, we may adopt it even with low confidence for lack of better options. The urgency to adopt the new treatment, however, does not change the fact that our confidence is still low and that further research will be crucial to increase our confidence level. Once we determine that there is a comparator group, we can define whether this is a cohort or case-control study. The fundamental difference between these two designs lies in when the investigators determine the exposure/risk factor and the outcome.3 In case-control studies, the outcome is determined first and the exposure/risk factor/intervention later. For example, Wu et al.24 used a case-control design to compare the bone mineral density of 87 elderly patients with hip fractures to that of 87 elderly patients without hip fractures and found it to be significantly lower in the first group. In cohort studies, investigators first define the exposure/risk factor and then assess their outcome of interest. For example, Lin et al. enrolled a cohort of 217 elderly patients with hip fractures and evaluated a risk factor defined as the body mass index ratio between the greater trochanter and the femoral neck. In case-control studies, a group with the outcome (the "cases") is compared with a group without the outcome but otherwise similar (the "controls") regarding something that happened to them before they experienced the outcome. In cohort studies, a group of patients with at least one common characteristic (the "cohort") is assessed for the development of outcome(s). This distinction can become confusing when the authors compare, for example, survivors to nonsurvivors regarding a specific risk factor. In this case, readers will know that the study is a cohort if both survivors and nonsurvivors were consecutive patients with a common risk factor (e.g., trauma). 
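The cohort versus case-control distinction also determines which effect measure a study can legitimately report. The sketch below, using made-up 2x2 counts, contrasts the relative risk a cohort design supports with the odds ratio that case-control sampling supports; because case-control sampling fixes the numbers of cases and controls, true incidence (and hence a relative risk) is not observable in that design.

```python
# Hypothetical 2x2 counts, for illustration only.
#                outcome+   outcome-
exposed   = dict(cases=30, noncases=70)
unexposed = dict(cases=10, noncases=90)

def relative_risk(exp, unexp):
    """Risk ratio: valid for cohort designs, where incidence is observable."""
    risk_exp = exp["cases"] / (exp["cases"] + exp["noncases"])
    risk_unexp = unexp["cases"] / (unexp["cases"] + unexp["noncases"])
    return risk_exp / risk_unexp

def odds_ratio(exp, unexp):
    """Cross-product ratio: the measure available from case-control sampling."""
    return (exp["cases"] * unexp["noncases"]) / (exp["noncases"] * unexp["cases"])

print(f"RR = {relative_risk(exposed, unexposed):.2f}")  # 3.00 (cohort reading)
print(f"OR = {odds_ratio(exposed, unexposed):.2f}")     # 3.86 (case-control reading)
```

For rare outcomes the odds ratio approximates the relative risk, which is why case-control studies of rare diseases remain informative despite ranking lower in the hierarchy.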
To make things more confusing, a case-control study is sometimes a later offspring of a well-planned cohort study, as in the case-control study by Shaz et al.25 on postinjury coagulopathy. Although some of the most important medical discoveries were made through case-control studies,26 this design has several limitations that place it lower on the evidence hierarchy. These include, but are not limited to, potential for bias in the selection of the control group and uncontrolled confounding in the assessment of the risk factor/exposure. In this proposed ELIS, the terms prospective and retrospective refer to the intention underlying data collection, rather than to when the data were actually retrieved. If the data were compiled to answer a predefined set of research questions, then the study is prospective, regardless of whether data were accrued concurrently with care or after the fact through records review. Conversely, the use of data to answer a question unrelated to the original question for which the data were gathered is a retrospective analysis. Some clinical databases are prospectively planned to answer a broad set of predetermined questions (e.g., the Denver MOF database constructed to assess early risk factors for postinjury multiple organ failure).27 Information recorded for other purposes (e.g., medical records, operating room registries, claims data) can only produce a retrospective analysis. Disease registries (e.g., trauma registries) are a point of contention because data collection occurs concurrently with care (commonly reported as "data were prospectively collected" or "patients were prospectively included into a registry"). The major strength of these registries lies in the quality of the data collected, because factors such as recall bias and missing data are less likely. Yet they can generate both prospective studies of preestablished outcomes and retrospective studies when used for outcomes that were not preestablished. 
Why is this distinction (retrospective vs. prospective) important in establishing the ELIS? It is important because retrospective studies are more subject to biases (e.g., relevant variables may not have been included or were measured using different methods) that decrease our confidence in their results. Furthermore, retrospective multiple unplanned comparisons increase the potential for a type I error, as explained in greater detail later in this article.
ELIS STEP 3: ASSESS THE STRENGTHS AND LIMITATIONS OF THE STUDY THAT WILL AFFECT THE UNCERTAINTY OF THE RESULTS
The next step in determining the ELIS recognizes that all research designs, even RCTs, are more or less limited by confounding, bias, inadequate sample size and statistical power, heterogeneity of included populations, differences between control and study groups, missing data, loss to follow-up, and so on. All these factors affect the uncertainty around study outcomes. We combined some of the GRADE-defined factors (Table 2) with the earlier OCEBM table (Table 4) to modify the ELIS.
TABLE 4: Oxford Centre for Evidence-Based Medicine Levels of Evidence (March 2009)
To define the magnitude of effect, we assessed the size of the relative risk (RR) within the context of disease severity. Thus, for a moderately severe condition (with low-to-moderate morbidity/mortality), a large effect was defined as a high RR (>5 or <0.2), whereas for more severe diseases, only a moderate-to-large RR (2–5 or 0.2–0.5) was required. The statistical power of the study is a critical aspect in determining the ELIS. It is usually easier to first define the situations where statistical power is not important: once the study detects a significant difference for an a priori stated hypothesis, the issue of statistical power is irrelevant. When investigators conduct multiple unplanned comparisons, the potential for a type I error (the error of finding a difference when in fact there is not one, usually set at <0.05) increases. 
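Both ideas above, the severity-dependent effect-magnitude rule and the inflation of type I error under multiple unplanned comparisons, can be illustrated with a short sketch. The RR thresholds come directly from the text; the classifying function and the independence assumption in the family-wise error calculation are simplifications for illustration.

```python
# Sketch of the effect-magnitude rule: whether a relative risk (RR) counts
# as "large" depends on disease severity (thresholds taken from the text).

def effect_magnitude(rr, severe_disease):
    """Classify an RR per the severity-dependent rule described in the text."""
    if rr > 5 or rr < 0.2:
        return "large"                  # high RR: large for any severity
    if 2 <= rr <= 5 or 0.2 <= rr <= 0.5:
        # For more severe diseases, a moderate-to-large RR suffices.
        return "large" if severe_disease else "moderate"
    return "small"

print(effect_magnitude(6.0, severe_disease=False))   # large
print(effect_magnitude(3.0, severe_disease=True))    # large (severe disease)
print(effect_magnitude(3.0, severe_disease=False))   # moderate

# Type I error inflation under multiple unplanned comparisons, assuming
# k independent tests each at alpha = 0.05 (an idealized assumption):
alpha, k = 0.05, 10
fwer = 1 - (1 - alpha) ** k
print(f"family-wise error rate across {k} tests: {fwer:.2f}")
```

Even with only ten unplanned comparisons, the chance of at least one spurious "significant" finding approaches 40%, which is why such comparisons are penalized in the ELIS.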
Statistical power becomes relevant when no significant difference is detected and we want to gauge the type II error (the error of not finding a difference when in fact there is one). When this happens, authors may declare "failure to detect a significant difference" and should provide the statistical power for detecting the observed (or predetermined) difference (generally accepted as adequate when >80%). This is often the case when assessing whether randomization was successful and the two RCT groups show no statistical differences, or for secondary outcomes for which the study was not powered. In an alternative scenario, which is becoming more common with the popularity of comparative effectiveness studies, the authors may aim to declare bioequivalence or noninferiority. In this case, power must be determined with as much rigor as we usually determine significance, thus requiring levels greater than 90%. Of course, akin to the well-known p < 0.05, statistical power levels are arbitrary and should reflect the specific topic of the study. Studies of lethal conditions without a known treatment may require lower confidence levels (e.g., p < 0.10 or p < 0.15) to establish a significant difference, or lower statistical power to declare noninferiority, whereas investigations of low morbidity/mortality conditions with established treatments may require higher power or confidence levels. Furthermore, differences can be statistically significant yet clinically meaningless. In sum, statistical power and confidence are functions of the clinical question being answered by the study, not after-the-fact considerations. Other ELIS modifiers were included in two sets of "negative criteria" at the bottom of Table 3. One set applies to general types of studies and includes confounding, bias, loss to follow-up, missing data, and heterogeneity of the populations. 
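A rough sense of how power behaves can be given with a normal-approximation calculation for comparing two proportions. The proportions and sample sizes below are hypothetical, and a real trial should use dedicated power-analysis software; this is only a sketch of the point that a "negative" result is reassuring only when power was adequate.

```python
# Approximate power for a two-sided test comparing two proportions with
# equal group sizes, using the standard normal approximation. All numbers
# are hypothetical illustrations, not recommendations.
import math

def normal_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def power_two_proportions(p1, p2, n_per_group, z_alpha=1.959964):
    """Approximate power (1 - beta) for detecting p1 vs p2, alpha = 0.05 two-sided."""
    p_bar = (p1 + p2) / 2
    se0 = math.sqrt(2 * p_bar * (1 - p_bar) / n_per_group)              # under H0
    se1 = math.sqrt(p1 * (1 - p1) / n_per_group
                    + p2 * (1 - p2) / n_per_group)                      # under H1
    z = (abs(p1 - p2) - z_alpha * se0) / se1
    return normal_cdf(z)

# The same hypothetical effect (30% vs 15% event rate) at two sample sizes:
print(f"n=50/group:  power = {power_two_proportions(0.30, 0.15, 50):.2f}")
print(f"n=150/group: power = {power_two_proportions(0.30, 0.15, 150):.2f}")
```

With 50 patients per group the study is badly underpowered for this effect, so "no significant difference" would say little; tripling the sample size brings power above the conventionally adequate 80% threshold.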
We contemplated using dose-response as a factor, as recommended by the GRADE group, but decided that this was a difficult element to define in the instructions for authors. Especially in trauma and acute care, assessment of dose-response patterns can be complicated by survivorship bias.28 Instead, we encourage our reviewers to take dose-response gradients into consideration on a case-by-case basis. As mentioned previously, heterogeneity of populations must be taken into consideration when appraising a study, particularly multi-institutional studies (even when an RCT is the design), studies including conditions caused by different pathogenic mechanisms (e.g., patients with sepsis, patients with critical illness), and national and international disease registries. Heterogeneity is also a major concern in SR/MA; thus, this element was incorporated into the second set of negative criteria, specific to SR/MA. A final note refers to procedures to ensure the quality, integrity, and internal validity of collected data. Especially when studies deal with large data sets recorded by multiple abstractors, we strongly encourage authors to describe, albeit briefly, these procedures (e.g., 10% of the records were reabstracted, and intrarater reliability was assessed by the κ statistic). For diagnostic studies, the consistent use of a "gold" standard is typically the defining factor. When all patients with a specified condition are submitted both to the test (or set of diagnostic criteria) under investigation and to the "gold" standard, the result is a powerful design. Uncertainty arises when only a subgroup of patients with the specified condition (e.g., patients who are more severely injured, "at the attending physician's discretion") is submitted to the "gold" standard test. The quality of the "gold" standard is, of course, a pivotal issue. We are all aware that, often, there are no ideal standards capable of precise discrimination between outcomes. 
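The κ-based reabstraction check mentioned above can be illustrated with a minimal Cohen's kappa computation. The record labels and counts below are invented for the example.

```python
# Hypothetical illustration of the described data-quality procedure: a sample
# of records is reabstracted and intrarater reliability is assessed with
# Cohen's kappa (chance-corrected agreement between the two passes).
from collections import Counter

def cohens_kappa(first_pass, second_pass):
    """Cohen's kappa for two categorical codings of the same records."""
    assert len(first_pass) == len(second_pass) and first_pass
    n = len(first_pass)
    observed = sum(a == b for a, b in zip(first_pass, second_pass)) / n
    freq1, freq2 = Counter(first_pass), Counter(second_pass)
    expected = sum(freq1[c] * freq2[c] for c in freq1) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented example: injury-mechanism codes from an original abstraction and
# a reabstraction of the same six records.
abstraction_1 = ["blunt", "blunt", "penetrating", "blunt", "penetrating", "blunt"]
abstraction_2 = ["blunt", "blunt", "penetrating", "penetrating", "penetrating", "blunt"]
print(f"kappa = {cohens_kappa(abstraction_1, abstraction_2):.2f}")  # 0.67
```

Raw agreement here is 5/6, but kappa discounts the agreement expected by chance alone, which is why it is preferred over simple percent agreement for reliability reporting.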
In addition, the consistent application of the "gold" standard is sometimes neither ethical (e.g., submitting all patients with abdominal pain to endoscopy and biopsy) nor possible (e.g., autopsy for all fatalities). Yet, although unavoidable, this is a limitation that affects our confidence in the results; thus, it must be reflected in the ELIS.
ELIS AND STANDARDIZED REPORTING
The ELIS can only be assessed if all essential elements are included in the report. The article must contain all the information necessary for the study to be replicated by others, including sampling, refusal and attrition rates, randomization methods (in the case of RCTs), confounding control, risk adjustment, potential for bias, and statistical analysis. For that purpose, following standardized reporting guidelines (CONSORT, PRISMA, etc.) is pivotal. The Enhancing the Quality and Transparency Of health Research (EQUATOR) group's Web site29 is an excellent source of standardized reporting guidelines, and we recommend it to authors submitting articles to the Journal of Trauma. This ELIS classification does not reflect the scientific rigor or research integrity of the study. A study that is essentially flawed because it did not follow rigorous scientific methodology does not bring new knowledge, and consequently, its likelihood of publication should be very low. With ELIS, we propose to gauge some of the uncertainty of a study's results, which will frame its application to current practice. In addition, the ELIS must be used in conjunction with the PICO framework's determination of similarity, that is, whether the patients [P], interventions [I], comparators [C], and outcomes [O] in the trial are similar enough to justify application of the trial results to the provider's patient population.30 It is understandable that an author would not like to define his/her study as "low evidence." Presenting level II or III evidence, however, should not be regarded as demeaning. 
Specific areas in health and health care, such as trauma, impose immense difficulties and moral dilemmas on the implementation of RCTs. Study designs reflect the realities of diverse settings and ethical imperatives.1 In a recent editorial, Vincent comments on the limitations of RCTs in the intensive care unit population and highlights the importance of considering other study designs in the challenging intensive care unit environment.31 Yet, recognizing these difficulties does not change the level of uncertainty associated with specific study designs, limited risk adjustment, and high risk of bias. Thus, our proposed ELIS system does not intend to define the merit of a study but rather to convey its level of uncertainty. In addition, we propose a new framework for the Discussion section, in which the authors place their results in context and explain the ELIS of their study. We invite authors to use the Discussion section to assist health care providers in answering the question stated at the beginning of this article: "How does the article I read today change (or not) what I will recommend to my patients tomorrow?" We encourage authors to describe how the study contributes to existing knowledge about the topic and to provide guidance, using the PICO framework, on how the results should be applied in readers' current clinical practice. As appropriate, investigators should also propose new studies likely to increase the level of evidence of existing research. In sum, we have proposed a system to appraise the level of uncertainty of individual studies, tailored to the needs of surgical studies, especially those dealing with emergent care. We look forward to feedback from our readership.
DISCLOSURE
The authors declare no conflicts of interest.
- Research Article
1297
- 10.1016/j.healun.2012.09.013
- Jan 24, 2013
- The Journal of Heart and Lung Transplantation
The 2013 International Society for Heart and Lung Transplantation Guidelines for mechanical circulatory support: Executive summary
- Research Article
11
- 10.1016/j.ijom.2016.10.018
- Mar 31, 2017
- International journal of oral and maxillofacial surgery
Recurrent dislocation: scientific evidence and management following a systematic review
- Research Article
455
- 10.1161/cir.0000000000000266
- Oct 14, 2015
- Circulation
Of late there has been a debate on whether the green revolution has reduced absolute poverty among farm families in India. Most of the studies examining the issue relate to the all-India rural sector. But since the green revolution has not spread evenly across all regions, the changes in the level of poverty reported in these studies do not strictly relate to the phenomenon. Haryana is one of the few regions where the new agricultural technology has spread more widely than elsewhere, and the experience of its farmers should therefore provide a better picture of how poverty among farmers changes with the spread of new farming technology.
- Front Matter
29
- 10.1016/j.ajodo.2017.03.020
- Jun 24, 2017
- American Journal of Orthodontics and Dentofacial Orthopedics
Evidence-based practice and the evidence pyramid: A 21st century orthodontic odyssey.
- Research Article
2
- 10.1093/mtp/miu020
- Jan 1, 2014
- Music Therapy Perspectives
The purpose of this study was to systematically examine the levels of evidence of articles published in the Journal of Music Therapy (JMT) from 2000–2009 using the classification taxonomy established by Melnyk and Fineout-Overholt (2005). Most JMT studies were Level VI (single descriptive or qualitative study, n = 83, 45.36%) or Level II (randomized controlled trial, n = 32, 17.49%). The populations most studied were other (n = 31, 16.94%), nondisabled persons (n = 24, 13.12%), medical/surgical (n = 16, 8.74%), Alzheimer’s/dementia (n = 12, 6.56%), and school-age populations (n = 12, 6.56%). As many systematic reviews only include Level II evidence, there is a need for additional randomized controlled trials. The variety of research designs and clinical populations is a testament to the breadth of JMT and the profession. Limitations, implications, and suggestions for future research are provided.
- Front Matter
- 10.1097/01.prs.0000794864.89776.57
- Oct 26, 2021
- Plastic & Reconstructive Surgery
So You Want to Be an Evidence-Based Plastic Surgeon? A Lifelong Journey.
- Dissertation
1
- 10.14264/uql.2017.61
- Dec 21, 2016
Aphasia treatment research lacks a consistent approach to outcome measurement. There is heterogeneity in the outcome measures used across treatment trials and a lack of research evidence exploring the outcome constructs which are most important to key stakeholders. The efficiency, relevancy, transparency, and overall quality of aphasia treatment research could be increased through the development of a core outcome set (COS)—an agreed standardised set of outcomes for use in treatment trials. The overarching aim of this research was to generate evidence-based recommendations for outcome constructs and outcome measures for a COS for aphasia treatment research. The thesis is comprised of a review of the literature (chapter 2) and two phases of research: (1) a trilogy of stakeholder consensus studies and a synthesis of the results; and (2) a scoping systematic review of studies reporting the measurement properties of standardised outcome instruments validated with people with aphasia. The World Health Organization International Classification of Functioning Disability and Health (ICF) was used across all studies to provide a common framework for the analysis of results. Study 1 aimed to gain consensus on important aphasia treatment outcomes from the perspective of people with aphasia and their families. A total of 39 people with aphasia and 29 family members participated in one of 16 nominal groups across seven countries. Qualitative content analysis revealed six themes describing: (1) Improved communication; (2) Increased life participation; (3) Changed attitudes through increased awareness and education about aphasia; (4) Recovered normality; (5) Improved physical and emotional well-being; and (6) Improved health services (people with aphasia) and Improved health and support services (family members). 
Prioritised outcomes for both participant groups linked to all ICF components; primarily Activity/Participation (39%) and Body Functions (36%) for people with aphasia, and Activity/Participation (49%) and Environmental Factors (28%) for family members. Outcomes prioritised by family members relating to the person with aphasia primarily linked to Body Functions (60%). Study 2 aimed to gain consensus on important aphasia treatment outcomes from the perspective of aphasia treatment researchers. Purposively sampled researchers were invited to participate in a three-round e-Delphi exercise. Eighty researchers commenced round 1, with 72 completing the entire survey. High response rates (≥85%) were achieved in subsequent rounds. Researchers reached consensus that it is essential to measure language function and specific patient-reported outcomes (impact of treatment; communication-related quality of life; satisfaction with intervention; satisfaction with ability to communicate; and satisfaction with participation) in all aphasia treatment research. Outcomes reaching consensus linked to all ICF components. Study 3 aimed to gain consensus on important treatment outcomes from the perspective of aphasia clinicians and managers, again using a three-round e-Delphi exercise. In total, 265 clinicians and 53 managers (n=318) from 25 countries participated in round 1. A total of 51 outcomes reached consensus after the third round. The two outcomes with the highest levels of consensus both related to communication in the dyad. Outcomes relating to people with aphasia most frequently linked to the ICF Activity/Participation component (52%), whilst outcomes relating to significant others were evenly divided between the Activity/Participation component (36%) and Environmental Factors (36%). The results of studies 1-3 were synthesised through a comparison of ICF coding (study 4). Results revealed that important outcomes from aphasia treatment occur at all levels of the ICF. 
Within these components, congruence across three or more stakeholder groups was evident for outcomes which related to Mental functions (Emotional functions, Mental functions of language, Energy and drive functions); Communication (Communicating by language, signs and symbols, receiving and producing messages, conversations, and using communication devices and techniques); Services, systems, and policies (Health services, systems and policies); and quality of life. Study 5 was a scoping systematic review of studies reporting the measurement properties of standardised outcome instruments which have been validated with people with aphasia. In total, 184 references for 79 outcome instruments were included in the review. The vast majority of outcome instruments related to Body Functions (n=49). No outcome instruments were reported to primarily measure constructs relating to Environmental Factors. A number of outcome instruments measured constructs which did not fall within the ICF; these included measures of quality of life (n=7), life satisfaction (n=1), and knowledge about aphasia and stroke (n=1). This program of research identified that important aphasia treatment outcomes span the ICF and also extend beyond it, encompassing quality of life. Stakeholders reported that outcomes relating to language, emotional wellbeing, communication, health services, and quality of life should be measured routinely. This research has highlighted the large number of outcome instruments available for use with people with aphasia, which predominantly measure Body Functions. Targeted development of appropriate instruments is required in some construct areas. Outcome constructs identified in phase 1 of this research were paired with outcome instruments identified in phase 2, to provide recommendations for an international COS consensus meeting.
- Research Article
1078
- 10.1161/str.0000000000000407
- May 17, 2022
- Stroke
2022 Guideline for the Management of Patients With Spontaneous Intracerebral Hemorrhage: A Guideline From the American Heart Association/American Stroke Association.
- Research Article
4
- 10.5204/mcj.2797
- Aug 20, 2021
- M/C Journal
“What's the brief?” is an everyday question within the graphic design process. Moreover, the concept and importance of a design brief are widely understood well beyond design practice itself—especially among stakeholders who work with designers and clients who commission design services. Indeed, a design brief is often an assumed and expected physical or metaphoric artefact for guiding the creative process. When a brief is lacking, incomplete or unclear, it can render an already ambiguous graphic design process and discipline even more fraught with misinterpretation. Nevertheless, even in wider design discourse, there appears to be little research on design briefs and the briefing process (Jones and Askland; Paton and Dorst). It seems astonishing that, even in Peter Phillips’s 2014 edition of Creating the Perfect Design Brief, he feels compelled to comment that “there are still no books available about design briefs” and that the topic is only “vaguely” covered within design education (21). While Phillips’s assertion is debatable if one draws purely from online vernacular sources or professional guides, it is supported by the lack of scholarly attention paid to the design brief. Graphic design briefs are often mentioned within design books, journals, and online sources. However, this article argues that the format, function and use of such briefs are largely assumed and rarely identified and studied. Even within the broader field of design research, the tendency appears to be to default to “the design brief” as an assumed shorthand, supporting Phillips’s argument about the nebulous nature of the topic. As this article contextualises, this is further problematised by insufficient attention cast on graphic design itself as a specific discipline. 
This article emerges from a wider, multi-stage creative practice study into graphic design practice that used experimental performative design research methods to investigate graphic designers’ professional relationships with stakeholders (Meron, Strangely). The article engages with specific outcomes from that study that relate to the design brief. The article also explores existing literature and research and argues for academics, the design industry, and educationalists to focus closer attention on the design brief. It concludes by suggesting that experimental and collaborative design methods offer potential for future research into the design brief.

Contextualising the Design Brief

It is critical to differentiate the graphic design brief from the operational briefs of architectural design (Blyth and Worthington; Khan) or those used in technical practices such as software development or IT systems design, which have extensive industry-formalised briefing practices and models such as the waterfall system (Petersen et al.) or more modern processes such as Agile (Martin). Software development and other technical design briefs are necessarily more formulaically structured than graphic design briefs. Their requirements are generally empirically and mechanistically located, and often mission-critical. In contrast, the conceptual nature of creative briefs in graphic design creates the potential for them to be arbitrarily interpreted. Even in wider design discourse, there appears to be little consistency about the form that a brief takes. Some sources indicate that a brief only requires one page (Elebute; Nov and Jones) or even a single line of text (Jones and Askland). At other times briefs are described as complex, high-level documents embedded within processes which designers respond to with the aim of producing end products to satisfy clients’ requirements (Ambrose; Patterson and Saville). 
Ashby and Johnson (40) refer to the design brief as a “solution neutral” statement, the aim being to avoid preconceptions or the narrowing of the creative possibilities of a project. Others describe a consultative (Walsh), collaborative and stakeholder-inclusive process (Phillips).

The Scholarly Brief

Within scholarly design research, briefs inevitably manifest as an assumed artefact or process within each project, but the reasons for their use or the antecedents of chosen formats are rarely addressed. For example, in “Creativity in the Design Process” (Dorst and Cross), some elements of the design brief are described. The authors also describe at what stage of the investigation the brief is introduced and present a partial example of the brief. However, there is no explanation of the form of the brief or the reasons behind it. They simply describe it as being typical for the design medium, adding that its use was considered a critical part of addressing the design problem. In a separate study within advertising (Johar et al.), researchers even admit that the omission of crucial elements from the brief—normally present in professional practice—had a detrimental effect on their results. Such examples indicate the importance of briefs for the design process, yet they further illustrate the omission of direct engagement with the brief within research design, methodology, and methods. One exception comes from a study amongst business students (Sadowska and Laffy) that used the design brief as a pedagogical tool and indicates that interaction with, and changes to, elements of a design brief impact the overall learning process of participants, with the brief functioning as a trigger for that process. Such acknowledgement of the agency of a design brief affirms its importance for professional designers (Koslow et al.; Phillips). 
This use of a brief as a research device informed my use of it as a reflective and motivational conduit when studying graphic designers’ perceptions of stakeholders, as discussed shortly.

The Professional Brief

Professionally, the brief is a key method of communication between designers and stakeholders, serving numerous functions: outlining creative requirements, audience, and project scope; confirming project requirements; and assigning and documenting roles, procedures, methods, and approval processes. The format of design briefs varies from complex multi-page procedural documents (Patterson and Saville; Ambrose) produced by marketing departments and sent to graphic design agencies, to simple statements (Jones and Askland; Elebute) from small to medium-sized businesses. These can be described as the initial proposition of the design brief, with the following interactions comprising the ongoing briefing process. However, research points to many concerns about the lack of adequate briefing information (Koslow, Sasser and Riordan). It has been noted (Murray) that, despite its centrality to graphic design, the briefing process rarely lives up to designers’ expectations or requirements, with the approach itself often haphazard. This reinforces the necessarily adaptive, flexible, and compromise-requiring nature of professional graphic design practice noted by design researchers (Cross; Paton and Dorst). However, rather than these adaptive and flexible abilities being lauded as design attributes, such traits are often perceived by professional practitioners as unequal (Benson and Dresdow), having evolved through imposition by stakeholders rather than being embraced by graphic designers as positive skill-sets. 
The Indeterminate Brief

With insufficient attention cast on graphic design as a specific scholarly discipline (Walker; Jacobs; Heller, Education), there is even less research on the briefing process within graphic design practice (Cumming). Literature from professional practice on the creation and function of graphic design briefs is often formulaic (Phillips) and fractured. It spans guidance from professional design bodies, templates from mass-market printers (Kwik Kopy), marketing-driven and brand-development approaches, in-house style guides, and instructional YouTube videos (David). A particularly clear summary comes from Britain’s Design Council. This example describes the importance of a good design brief and its requirements, and carries a broad checklist that includes the company background, project aims, and target audience. It even includes stylistic tips such as “don’t be afraid to use emotive language in a brief if you think it will generate a shared passion about the project” (Design Council). From a subjective perspective, these sources appear to contain sensible professional advice. However, with little scholarly research on the topic, how can we know that, for example, using emotive language best informs the design process? Why might this be helpful and desirable (or otherwise) for designers? These varied approaches highlight the indeterminate treatment of the design brief. Nevertheless, the very existence of such diverse methods signals broad acknowledgement of the criticality of the brief, as well as the desire of professional bodies, commentators, and suppliers to ensure that both designers and stakeholders engage effectively with the briefing process. Thus, with such a pedagogic gap in graphic design discourse, scholarly research into the design brief has the potential to inform vernacular and formal educational resources. 
Researching the Design Brief

The research study from which this article emerges (Meron, Strangely) yielded outcomes from face-to-face interviews with eleven (deidentified) graphic designers about their perceptions of design practice, with particular regard to their professional relationships with other creative stakeholders. The study also surveyed online discussions from graphic design forums and blog posts. This first stage of research uncovered feelings of lacking organisational gravitas, creative ownership, professional confidence, and design legitimacy among the designers in relation to stakeholders. A significant causal factor pointed to practitioners’ perceptions of lacking direct access to and involvement with key sources of creative inspiration and information; one