Abstract

Conceptual understanding of individualized learning and adaptive teaching varies broadly, encompassing a multitude of instructional strategies, approaches, and activities. It stretches from accounts as narrow and specific as scaffolding adaptive feedback in computer-based instruction (e.g., Atkinson, Renkl & Merrill, 2003) to more general conceptions, such as cooperative and collaborative learning (e.g., Johnson & Johnson, 2002). It also includes educational concepts derived from elements of constructivism, such as discovery learning, inquiry-based learning, experiential learning, problem-based learning and other forms of student-centered education. These instructional forms, which have been described broadly since the 1960s (e.g., Summerhill), were criticized by Kirschner, Sweller & Clark (2006), with a response published by Tobias & Duffy (2009). The learning sciences have further contributed to the distinction between social constructivism and individual constructivism (i.e., instructional system designs), providing a theoretical grounding for teacher- vs. learner-based strategies (Kolodner, 2004). Current and developing applications, informed by pedagogical principles espoused by case-based learning (e.g., Kolodner et al., 2008), exemplify the transformation of learning environments that apply Bruner's concept of not just discovery for the student, but co-discovery on the part of the teacher.

Examples of various attempts to make teaching and learning more adaptive can be found in both the early and current research literature. They include, but are not limited to, mastery learning (e.g., Bloom, 1968), Personalized Systems of Instruction or PSI (e.g., Keller, 1968; Gifford & Vicks, 1982; Davies, 1981), assorted forms of peer instruction (e.g., Mazur, 1997), various reciprocal reading/writing activities (e.g., Huang & Yang, 2015; MacArthur, Schwartz & Graham, 1991), adaptive hypermedia (Brusilovsky, 2001), accommodation for individual learning styles (e.g., Özyurt & Özyurt, 2015) and more recent Intelligent Tutoring Systems or ITS (e.g., Huang & Shiu, 2012; VanLehn, 2011).

To some extent, findings of primary research on these and related instructional practices have been summarized in two rather sparse collections of meta-analyses separated in time by almost three decades. Starting in the late 1970s and early 1980s, several relevant meta-analyses were published. First, Lysakowski and Walberg (1982), Guskey and Gates (1986), Slavin (1986) and Kulik, Kulik and Bangert-Drowns (1990) each performed successive meta-analyses (Slavin's was a best-evidence synthesis) on the efficacy of mastery learning; these produced equivocal findings. Also, Kulik, Kulik & Cohen (1979) reviewed 75 individual comparative studies of Keller's PSI (a spin-off of mastery learning), a college teaching method. In comparison with conventional instruction, PSI was demonstrated to have a positive effect on student achievement and course perception (mean effect size of nearly 0.70 for both). Aiello and Wolfle (1980) summarized research on individualized instruction in science compared with traditional lectures and found that individualized instruction was more effective. Horak's (1981) study of self-paced modular instruction in elementary and secondary school mathematics produced a wide variety of both positive and negative effect sizes. Bangert and Kulik (1982) examined the effectiveness of Individualized Systems of Instruction (ISI) for secondary school students.
They broadened the list of outcomes to account not only for student achievement (e.g., final exam scores), but also for critical thinking, attitudes toward subject matter, and student self-concept. For all outcome types the findings were unsettled. For example, for the achievement data only eight out of 49 studies demonstrated statistically significant results in favour of ISI (four studies favoured more conventional teaching methods and the rest were inconclusive). Finally, in 1984, Kulik attempted a wider research synthesis (encompassing over 500 individual studies) of the effectiveness of programmed instruction and ISI, paying special attention to the moderator variables of study date and grade level. Among the most promising findings, the author indicated that more recent studies showed higher effects than earlier ones and that college-level students benefited significantly more from using ISI than elementary and secondary school students.

In summary, these meta-analyses produced inconclusive results. Moreover, they are rather outdated: practically none of the above-mentioned instructional methods exists now in its original form (e.g., Eyre, 2008 was able to identify fewer than 50 studies of PSI for the period between 1990 and 2006 in the PsycInfo database). All this suggests the need for a more refined (both methodologically and substantively) update of systematic reviews in the field, especially taking into account how much the methodology of meta-analysis itself has evolved since then.

Several more recent meta-analyses have addressed the topic of individualized instruction, though in very specific, narrowly focused forms. Cole (2014) examined the effectiveness of cooperative, collaborative, and peer tutoring for English language learners. A low-to-moderate average effect size of 0.49 was found in favour of peer tutoring over individualized or teacher-centred comparison instructional conditions. The effect size tended to be relatively small for middle school students, but higher at the elementary and high school levels. More in line with the already mentioned earlier meta-analyses of various forms of computer-assisted instruction, Ma, Adesope, Nesbit and Liu (2014) meta-analysed studies of Intelligent Tutoring Systems (ITS) in a variety of subject matters, from reading and math to law and medical education. The list of moderator variables included the type of both experimental and comparison treatments, as well as outcome type, student academic level, discipline studied, etc. The highest achievement effects of using ITS were found in comparison with non-ITS computer-based instruction (0.57) and teacher-centred, large-group instruction (0.42), whereas in comparison with human tutoring the effect was even negative (-0.11), though not statistically significant. ITS-based practices were similarly effective when used either alone or in combination with various forms of teacher-led instruction in many subject domains.

In summary, research evidence concerning the effects of adaptive teaching and individualized learning remains relatively inconclusive, while there is an obvious need for better understanding of how K-12 formal education may be more successful in addressing students' personal needs and interests, accounting for their diverse abilities, with the main goal of advancing their learning.
The proposed systematic review will not only rely on the most rigorous and comprehensive methodology of meta-analysis, but should also be conceptually sound (i.e., thoroughly exploring educational practices to find consistent links among a multitude of individual pedagogical approaches) and timely (i.e., accounting for the most recent developments in education). The main research question of the proposed meta-analysis is: Can more Student-Centred (SC) (i.e., more adaptive and individualized) approaches to K-12 instruction be distinguished from more Teacher-Centred (TC) approaches in terms of their effect on student achievement, and what substantive and demographic factors moderate these effects?

For better understanding and more successful practical application, the educational practices subsumed under this generic pedagogical idea of adaptive teaching and individualized learning deserve a valid conceptual working model, both inclusive enough to account for various forms of personalized/individualized instruction and sufficiently sensitive to fluctuations due not only to the influence of numerous moderator variables, but also to nuanced qualities of particular instructional approaches themselves. SC instructional strategies could, in our view, serve as such an overarching conceptual framework with adequate explanatory power, but only if operationalized properly to avoid an oversimplified dichotomy of inductive vs. deductive education (constructivism vs. direct instruction). Indeed, we are less interested in deciding between these two extremes and more interested in understanding the circumstances, or combinations of circumstances, that optimize teaching and learning. Gresalfi and Lester (2009) for mathematics teaching, and Klahr (2009) for science teaching, argue that the goal of instruction should be to achieve curricular and process objectives by choosing the most appropriate method based on student age, ability, prior knowledge, level of content, etc. In this regard, we would like to avoid the conceptual error of falsely dichotomizing pedagogical environments as either TC or SC, since neither instructional practice likely exists in its pure form. As Gersten et al. (2008) observed in their systematic review of mathematics teaching practices: “[We] found no examples of studies in which students were teaching themselves or each other without any teacher guidance; nor did the Task Group find studies in which teachers conveyed … content directly to students without any attention to their understanding or response. The fact that these terms, in practice, are neither clearly nor uniformly defined, nor are they true opposites, complicates the challenge of providing a review and synthesis of the literature …” (p. 12).

Since SC pedagogical practices tend to emphasize guidance over direct instruction, the question becomes how much and what kind of guidance is offered to students, and who takes responsibility for the design and implementation of various components of the learning experience to make them truly adaptive/individualized and, hence, more effective. To define the key quality of instruction as “adaptive” and “individualized” for the purposes of the proposed systematic review, we suggest deconstructing teaching and learning according to the events associated with them (e.g., setting objectives, implementing instructional methods, assessing learning). Accordingly, a more SC (more adaptive) classroom is one in which students play a more central role in the conduct of these instructional events.
If these events can be isolated in reports of primary classroom research, they can be rated individually on a TC-to-SC continuum. Each event could then be: 1) examined separately to determine its individual strength; 2) examined in clusters, as combinations of events; or 3) collapsed into a multi-dimensional composite that would yield a “greater-than to lesser-than” distinction between two different instructional settings. This approach avoids the problems associated with either subjectively defining instructional conditions as SC vs. TC or vaguely labeling them (e.g., PSI, mastery learning). It also has the advantage of allowing us to examine instructional events in isolation and in various combinations in the search for optimal instructional practices.

Most of the significant effects from the meta-analyses described in the first section of this protocol cluster around 0.40 SD, but the data also reflect a wide range of effects, depending on the whole spectrum of moderator variables. In other words, the picture painted by these meta-analyses remains in large part as inconclusive as it was several decades ago in the 1980s. Of special concern to us is the fact that both the earlier and the more recent meta-analyses are rather limited in scope and focus, addressing very specific instructional practices and technological tools. There have been no serious attempts to find and conceptualize pedagogical commonalities among the interventions in question that would allow treating them as the same class of phenomena, broadly depicted as individualized learning and adaptive teaching. Thus, the need for a review that is broad in scope, summarizes research evidence to date, and has a conceptually sound foundation is pressing.

The main objective of the proposed review is to summarize research data on the effectiveness (in terms of learning achievement outcomes) of adaptive and individualized instructional interventions, operationally defined here as more SC pedagogical approaches. The overall weighted average effect size will be an indication of that. Additionally, and no less important, the review aims to better understand under what circumstances (e.g., with what populations of learners, for what subject matters) the effects of adaptive and individualized instruction reach their highest potential, and what conditions may depress them. To explore the latter, a set of substantive and demographic study features will be coded and subjected to moderator variable analyses. The review outcomes will inform education practitioners and the research community of the best instructional practices, the preconditions for their successful implementation and potential pitfalls, as well as of directions for further empirical research in the area.

The review will include studies that are experimental (i.e., RCT) or high-quality quasi-experimental (i.e., with statistically verified group equivalence or adjustment) in design, that address group comparisons adequate to the research question, contain legitimate measures of academic achievement (i.e., teacher-made or standardized), and report sufficient statistical information for effect size extraction. Participants are students in K-12 formal educational settings (approximate ages 5-18), i.e., in programs eventually leading to a certificate, diploma, degree, or promotion to a higher level. Educational interventions may take place either in the classroom (F2F), via distance education (DE), or as a blended (various combinations of F2F and DE) intervention.
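To illustrate how event-level ratings on a TC-to-SC continuum might be totaled and used to label one study condition as the Intervention and the other as the Comparison (anticipating the comparison rule described next), the following is a minimal sketch. The event names, the 0-2 rating scale, and the helper functions are illustrative assumptions, not part of the actual coding instrument.

```python
# Hypothetical sketch: rating instructional events on a TC-to-SC continuum.
# Event names and the 0-2 scale are assumptions for illustration only.
from typing import Dict

# 0 = teacher-centred, 1 = shared, 2 = student-centred (assumed scale)
EVENTS = ["setting_objectives", "implementing_methods", "assessing_learning"]

def total_sc_rating(ratings: Dict[str, int]) -> int:
    """Collapse event-level ratings into a single SC composite."""
    return sum(ratings[event] for event in EVENTS)

def classify_conditions(group_a: Dict[str, int], group_b: Dict[str, int]) -> dict:
    """Label the group with the higher composite as the Intervention condition.

    Event-level differentials are kept so that clusters of events that
    consistently favour one condition can be examined later.
    """
    differentials = {e: group_a[e] - group_b[e] for e in EVENTS}
    if total_sc_rating(group_a) >= total_sc_rating(group_b):
        return {"intervention": "A", "comparison": "B", "differentials": differentials}
    return {"intervention": "B", "comparison": "A",
            "differentials": {e: -d for e, d in differentials.items()}}

# Example: group A is more student-centred overall, even though group B
# rates higher on one event.
group_a = {"setting_objectives": 2, "implementing_methods": 1, "assessing_learning": 2}
group_b = {"setting_objectives": 1, "implementing_methods": 2, "assessing_learning": 0}
print(classify_conditions(group_a, group_b))
```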
The highest total rating across these dimensions will determine the Intervention condition, to be compared on achievement outcomes with the Comparison condition, which is lowest in total rating. That is, the determination rests on the differential total across all dimensions, as some dimensions may be rated higher for one group and some for the other. At the same time, keeping the individual positive and negative differentials will allow identifying consistent clusters of instructional events that are more or less likely to work to the advantage of students' learning.

All types of objective measures of academic achievement are to be considered. Their psychometric features (e.g., standardized vs. non-standardized teacher/researcher-made assessment tools) and type of representativeness (e.g., cumulative final examinations or averages of several performance tasks covering various components of the course/unit content) will be documented and used in subsequent moderator variable analyses. Self-assessments are to be excluded, as are attitudinal and behavioural measures. Data on their prevalence in the reviewed primary literature will be collected to inform further reviews in the area with a potential focus on those types of outcomes. To maximize coverage of primary research that is fully compatible in terms of outcome measures, only immediate post-test results (that is, assessments administered at the end of treatment implementation) will be considered. Various forms of delayed post-tests will be documented and their time lags categorized to inform further reviews.

K-12 formal educational settings (approximate ages 5-18), in educational programs eventually leading to a certificate, diploma, degree, or advancement to the next academic level/grade, are of interest to the current meta-analysis. Other settings (i.e., home schooling, auxiliary programs, summer camps, etc.) are to be excluded. Addressing studies from 2000 onward seems to strike a reasonable balance between covering the various approaches to individualized/adaptive learning prominent throughout several decades (studies published in the early 2000s would still reflect the most interesting pedagogies of the 1990s) and focusing primarily on those that retain relevance in the most recent educational practices.

In order to retrieve a broad base of studies to review, we will begin by having an experienced Information Specialist search across an array of bibliographic databases, both in the subject area and in related disciplines. The following databases will be targeted:

ABI/Inform Global (ProQuest)
Academic Search Complete (EBSCO)
ERIC (EBSCO)
PsycINFO (EBSCO)
CBCA Education (ProQuest)
Australian Education Index
British Education Index
Education Source (EBSCO)
Web of Knowledge
Scopus
Engineering Village
Francis (EBSCO)
Medline
ProQuest Dissertations & Theses Global
ProQuest Education Database
Linguistics and Language Behavior Abstracts (ProQuest)

Database searching will be supplemented by searches of the Web using Google and Bing to locate additional articles, but also grey literature (research reports, conference papers, theses and research published outside the conventional journals). We will also search the OpenGrey.eu and the Learn Tech Lib online collections for grey literature, and will consult the Campbell guide (Hammerstrøm, Wade, & Jørgensen, 2010) for other useful online resources. Finally, the reference lists of identified literature reviews will be ‘branched’ for additional relevant studies using a citation search approach.
The most recent issues of the top journals (based on inclusion rate) will be searched manually toward the end of the search process to catch any recent publications that match our screening criteria. When possible, we will contact noted experts in the field to ensure we have all of their relevant research.

Although the search strategy will be tailored to the features of the various databases, i.e., making use of database-specific controlled vocabulary and search filters, the following is representative of what the overall search statement would look like:

(“student cent*” OR “learner cent*” OR “learner control” OR constructivi* OR “individualized instruction” OR “discovery learning” OR “active learning” OR scaffold* OR “experiential learning” OR “teacher guid*” OR “self-direct*” OR “problem based learning” OR inquiry OR “humanistic education” OR “democratic education” OR “progressive education” OR “adaptive learning” OR “adaptive education” OR “adaptive class*” OR “adaptive teach*” OR differentiation)
AND (“creative teaching” OR “instructional innovation” OR “instructional effectiveness” OR “teaching methods” OR “program effectiveness” OR “program evaluation”)
AND (compar* OR contrast* OR “control group” OR experiment* OR “matched group*” OR quasiexperiment* OR posttest OR “post test” OR “comparative case study”)

True experimental and quasi-experimental studies are to be included as long as they feature two educational interventions covering the same content (required knowledge acquisition and/or skill development) as assessed on compatible outcome measures, where one group (experimental) is higher in student-centred qualities (as described earlier) than the other (control) group. Reporting quantitative data sufficient for effect size extraction is a necessary condition for study inclusion.

There are several major threats to the independence of findings: 1) repeated use of data coming from the same participants; 2) reporting multiple outcomes of the same type; and 3) aggregating outcomes of different types representing the same sample of participants (this last threat does not apply to the proposed review, as it is limited to learning achievement outcomes only). The means that we will use for ensuring data independence are presented in the Within Study Synthesis sub-section below.

In addition to coding the dimensions of SC pedagogical qualities that determine the proper comparisons for effect size extraction, the following groups of study coding categories will be used in the proposed review. First, study methodological quality will be assessed for features such as design type, fidelity of treatment implementation, attrition, and the unit of assignment/analysis (Cooper, Hedges, & Valentine, 2009). Within the same category, we will code for outcome source and the psychometric quality of the assessment tools, as well as for the precision of procedures used for effect size extraction and for the equivalence of instructors and study materials. Jointly, these methodological study features, used in moderator variable analyses, will inform us of any potential threats to all types of study validity (Cooper et al., 2009). Substantive study features will further clarify the description of SC pedagogical qualities by specifying the theoretical models underlying the instructional practices under review, treatment duration, instructor's experience, and the provision of professional development for teachers and training for students, whenever required by a specific instructional intervention.
Demographic study features will encompass learners' age, educational background and ability level, as well as the subject matter studied. All these study features will subsequently be analysed as moderators for their potential impact on treatment effects.

All coding activities (i.e., abstract screening, full-text review, study feature coding, as well as effect size extraction) will be carried out by two reviewers working independently, who will discuss and resolve disagreements, when necessary eliciting a third opinion from the project P.I. Reliability rates (initial judgement), i.e., Pearson's r and Cohen's kappa for continuous and ordinal data, respectively, will be calculated and reported.

Effect Size Computation: Our review aims to include and summarize quantifiable achievement outcomes of primary empirical studies of high methodological quality that compare the effectiveness of more SC (i.e., more adaptive and individualized) versus more TC (i.e., more conventional, undifferentiated) instructional interventions. The following are the primary metrics and procedures that will be used for effect size extraction and subsequent analyses. For studies that report descriptive statistics for continuous measures of student achievement outcomes, the post-intervention mean of the control group will be subtracted from the post-intervention mean of the intervention group and the resulting difference will be divided by the pooled standard deviation of both groups (Cohen's d). Outcome data are likely to be reported in a variety of formats. For studies that report only inferential statistics such as t, F, or p-values, the appropriate conversion formula will be applied to calculate the d-index as the effect size estimate (Lipsey & Wilson, 2001; Hedges, Shymansky, & Woodworth, 1989; Hedges & Olkin, 1985). To introduce the proper correction for small-sample bias, all d-indices will be converted into unbiased Hedges' g statistics using the standard correction factor, g = d × [1 − 3 / (4(n_E + n_C) − 9)], where n_E and n_C are the experimental and control group sample sizes. To test for the statistical significance of calculated effect sizes we will use the standard error of g and the 95% confidence intervals. In all calculations, to aggregate effect sizes across studies and for moderator variable analyses, we will use the Comprehensive Meta-Analysis 3.0 (Borenstein, Hedges, Higgins, & Rothstein, 2005) software package.

Within Study Synthesis: This review will focus on learning achievement outcomes. There is a possibility, however, of finding several measures representing the same outcome type in the same study, based on the same sample of participants. When this happens we will either decide in favour of the most representative measure (typically, cumulative final exams or post-tests) or, when no single outcome fully reflects learning performance throughout the unit of instruction, we will average effects deriving from different (complementary or equally representative) achievement measures. Also, the same set of participants is never to be used to calculate multiple effect sizes of the same type. Whenever the same group of participants is used repeatedly (for example, when the same control group is compared to two different treatment groups, each unique in its SC qualities, for the purpose of retaining as much of the explanatory capacity of the review as possible), its sample size will be reduced proportionally in order to avoid any overrepresentation (i.e., disproportionately high weights for the respective effects) of the same participants in the final data set.
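As a concrete illustration of the effect size computations and of the proportional sample-size reduction for shared control groups described above, here is a minimal sketch in Python. It is not the CMA 3.0 software the review will actually use, and the function names (and the simple integer split for a shared control group) are assumptions made for illustration.

```python
# Illustrative sketch of the planned effect size computations (assumed helper
# names; the actual analyses will be run in Comprehensive Meta-Analysis 3.0).
import math

def cohens_d(mean_e, sd_e, n_e, mean_c, sd_c, n_c):
    """Standardized mean difference from post-test descriptive statistics."""
    pooled_sd = math.sqrt(((n_e - 1) * sd_e**2 + (n_c - 1) * sd_c**2) / (n_e + n_c - 2))
    return (mean_e - mean_c) / pooled_sd

def hedges_g(d, n_e, n_c):
    """Small-sample bias correction converting d into Hedges' g."""
    correction = 1 - 3 / (4 * (n_e + n_c) - 9)
    return d * correction

def se_g(g, n_e, n_c):
    """Approximate standard error of g (used for significance tests and CIs)."""
    return math.sqrt((n_e + n_c) / (n_e * n_c) + g**2 / (2 * (n_e + n_c)))

def confidence_interval(g, se, z=1.96):
    """95% confidence interval around g."""
    return (g - z * se, g + z * se)

def split_shared_control(n_control, n_comparisons):
    """Reduce the control group's n proportionally when it serves in several
    comparisons, to avoid over-weighting the same participants."""
    return n_control // n_comparisons

# Example: intervention group (n=32) vs. control group (n=30)
d = cohens_d(78.4, 10.2, 32, 72.1, 11.0, 30)
g = hedges_g(d, 32, 30)
se = se_g(g, 32, 30)
print(round(g, 3), [round(x, 3) for x in confidence_interval(g, se)])
```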
Across Study Synthesis: Independent effect sizes will be aggregated across studies using the random effects model (Borenstein, Hedges, Higgins, & Rothstein, 2010), as neither of the assumptions for applying the fixed effect model (a conceptually grounded uniformity of interventions and access to the entire population of relevant studies) is met. Reflecting the nature of our review as a random (though comprehensive) sampling from various populations of empirical studies in education, the random effects model provides a more accurate measure of treatment effectiveness. The results obtained from a random effects model analysis will represent the overall effect of a diverse collection of SC instructional interventions on student learning across age groups, subject matters, etc. The fixed effect model will be used to assess the heterogeneity of the distribution of effect sizes, in which the Q-statistic determines the collective extent to which studies deviate from the fixed effect average for the collection. Also, the I² statistic, derived from Q, indicates the proportion of true heterogeneity (i.e., variability exceeding what would be expected based on the sampling error estimate) associated with each distribution of effect sizes. If the observed heterogeneity is above sampling error, coded study features, as potential sources of systematic variation, will be further explored through moderator variable analysis under the mixed effects model.

It is also necessary to assess potential bias that may be associated with out-of-range individual effect sizes, which could distort the overall interpretation of the findings. Sensitivity analysis (Hedges & Olkin, 1985) is intended to determine whether the removal of a certain effect size increases the fit of the remaining effect sizes to a homogeneous distribution while not substantially affecting the interpretation of the recalculated mean effect size. Various approaches to identifying potential outliers will be used, including visual examination of data organized into forest plots and performing the “one study removed” CMA routine. Identified outliers will be examined with the potential to remove them from the final dataset. Potential sources of bias, such as study design, type of treatment, publication source, missing data, sample size, or attrition, will be carefully examined through the corresponding moderator variable analyses.

There is a widely recognized concern that relying on published studies alone may substantially distort (misrepresent) the overall intervention effect. To assess potential publication bias in this review, we will visually inspect the resulting funnel plot and run Duval & Tweedie's (2000) trim and fill routine in CMA (Borenstein et al., 2005). Also, the classical Fail-Safe N test will help determine the number of null-effect studies needed to raise the p-value associated with the average effect above a specified level of α. Orwin's (1983) Fail-Safe N will also be used to determine how many missing studies, when added to the analysis, would bring the combined Hedges' g below a specified threshold.
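To make the across-study aggregation, heterogeneity assessment, and Orwin's Fail-Safe N concrete, the following is a minimal hand-rolled sketch of a DerSimonian-Laird random-effects mean with the Q and I² statistics. The function names and the example values are assumptions for illustration; the review itself will rely on CMA 3.0 for these computations.

```python
# Illustrative sketch of random-effects aggregation, heterogeneity statistics,
# and Orwin's fail-safe N. The review will use CMA 3.0; this is only a
# hand-rolled approximation with assumed function names.
from typing import List, Tuple

def fixed_effect_mean(gs: List[float], variances: List[float]) -> Tuple[float, float]:
    """Inverse-variance weighted (fixed effect) mean and the Q statistic."""
    weights = [1 / v for v in variances]
    mean = sum(w * g for w, g in zip(weights, gs)) / sum(weights)
    q = sum(w * (g - mean) ** 2 for w, g in zip(weights, gs))
    return mean, q

def i_squared(q: float, k: int) -> float:
    """Proportion of observed variability beyond sampling error (in %)."""
    return max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0

def random_effects_mean(gs: List[float], variances: List[float]) -> float:
    """DerSimonian-Laird random-effects weighted mean."""
    _, q = fixed_effect_mean(gs, variances)
    k = len(gs)
    weights = [1 / v for v in variances]
    c = sum(weights) - sum(w ** 2 for w in weights) / sum(weights)
    tau2 = max(0.0, (q - (k - 1)) / c)          # between-study variance estimate
    re_weights = [1 / (v + tau2) for v in variances]
    return sum(w * g for w, g in zip(re_weights, gs)) / sum(re_weights)

def orwin_fail_safe_n(mean_g: float, k: int, criterion: float = 0.10) -> float:
    """Number of null-effect studies needed to bring the mean g down to `criterion`."""
    return k * (mean_g - criterion) / criterion

# Example with three hypothetical effect sizes and their variances
gs = [0.80, 0.10, 0.45]
vs = [0.02, 0.03, 0.025]
fe, q = fixed_effect_mean(gs, vs)
print(round(random_effects_mean(gs, vs), 3), round(q, 2), round(i_squared(q, len(gs)), 1))
print(round(orwin_fail_safe_n(random_effects_mean(gs, vs), len(gs)), 1))
```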
For studies that do not report complete outcome data (that is, in exceptional cases when missing information from otherwise perfectly suitable studies is minimal, e.g., sample size), the first author of the study will be contacted to retrieve the missing information. If the needed data are unavailable, data imputation may be conducted, where appropriate. In such cases, a sensitivity analysis will be conducted to assess the impact of the imputed data on the overall analysis and synthesis of the results. No qualitative research will be reviewed within the framework of this project.

Methodological Study Features:

Study research design:
1 = RCT
2 = Quasi-experimental

Instructor equivalence:
1 = Same
2 = Different
999 = Missing information

Content (study materials) equivalence:
1 = Same (highly compatible)
2 = Different (marginally compatible)
999 = Missing information

Psychometric quality of the outcome assessment tool:
1 = Standardized test
2 = Modified standardized/Piloted (validated) original measure
3 = Teacher/Researcher-made test
4 = Average of two of the above (when the ES is calculated by averaging several outcome measures)

Source of outcome data:
1 = One-time cumulative measure (e.g., final exam)
2 = Composite measure reported in the study (e.g., course grades)
3 = Average of equally representative non-cumulative measures (e.g., series of projects/assignments)
4 = A single selected (most representative of a number reported) measure

Effect size extraction precision:
0 = Calculated from descriptive statistics
1 = Calculated from inferential statistics
2 = Estimated from exact p-values (i.e., no assumptions)
3 = Estimated with reasonable assumptions (e.g., sample size equivalence)
4 = Reported in the study (without an option of verifying/recalculating)

Substantive (Instructional) Study Features:

Instruction delivery mode (coded separately for experimental and control groups):
1 = F2F (Classroom Instruction)
2 = DE (Distance Education)
3 = BL (Blended Learning)
4 = Computer automated program: in lab, class or on campus, with or without the presence of a lab assistant
5 = Computer automated program (at a distance)
999 = Missing information

Conceptual (pedagogical) framework: An open entry reflecting an explicitly stated theoretical framework which the treatment is based upon (modelled after); to be categorized for subsequent moderator variable analyses when all data are collected

Instructor's experience with (training for) implementing the corresponding instructional intervention (coded separately for experimental and control groups):
1 = Yes
2 = No
999 = Missing information

Treatment duration: Specify the number of weeks of treatment implementation.
Demographic Study Features:

Academic level (learners' age):
1 = Kindergarten
2 = Elementary school (Grades 1-5)
3 = Secondary/Middle school (Grades 6-8)
4 = High school (Grades 9-12)
999 = Missing information

Subject matter (discipline): An open entry reflecting an explicitly named course; to be categorized in various ways (e.g., STEM/non-STEM, natural/social sciences) for subsequent moderator variable analyses when all data are collected

Learners' ability (as defined in the study by splitting the sample into sub-groups, e.g., by the results of a pre-test):
1 = High achievers
2 = Average achievers
3 = Low achievers
4 = No split (in the vast majority of studies)

Learners' profile (characteristic of the entire sample in the study):
1 = Gifted (talented) students
2 = General (average, “garden variety”) population; assume when not specified otherwise
3 = Special needs students; specify which type (e.g., 3: Learning disability or 3: Autistic children)

Learners' SES (similarly to the above):
1 = Privileged category
2 = General population
3 = Underprivileged; specify (if available)

Study settings:
1 = Urban
2 = Rural
999 = Missing information

Study geographic region: An open entry; specify the country.

* The final composition of the list is a matter of the actual frequencies of included studies published in the corresponding journals, as established through the review process.

Lead review author: The lead author is the person who develops and co-ordinates the review team, discusses and assigns roles for individual members of the review team, liaises with the editorial base and takes responsibility for the on-going updates of the review.

Affiliation: Centre for the Study of Learning and Performance, Concordia University
Affiliation: Centre for the Study of Learning and Performance, Concordia University
Affiliation: Centre for the Study of Learning and Performance, Concordia University
Affiliation: Centre for the Study of Learning and Performance, Concordia University
Address: Room GA 2.133; 1211 St. Mathieu
Email: dpickup@education.concordia.ca

Please give a brief description of content and methodological expertise within the review team. The recommended optimal review team composition includes at least one person on the review team who has content expertise, at least one person who has methodological expertise and at least one person who has statistical expertise. It is also recommended to have one person with information retrieval expertise.

Richard F. Schmid has expertise in, and has published on, topics such as: the application of technologies to improve pedagogy and training in the workplace and schools; the analysis of learning strategies and collaborative techniques in in-class and distance education contexts; and cognitive information processing using technologies, especially with young learners.

The Centre for the Study of Learning and Performance (CSLP) Systematic Review Team (Leader: Robert M. Bernard) has been active since 2001 and has a long list of accomplishments. We have published five major meta-analyses in Review of Educational Research, AERA's premier review journal (1st/219 in Educational Research with an Impact Factor of 5.00). The team has also published seven other meta-analyses and systematic reviews in other journals and has presented papers in many scholarly venues, with at least one presentation per year at AERA's annual meeting. Members of the team have given workshops (some for the Campbell Collaboration), short courses, and invited methodological presentations
in the U.S., Canada, Great Britain, Dubai (UAE), and several European countries, and have published articles about meta-analysis methodology (e.g., Bernard et al., 2014; Abrami & Bernard, 2012) in prominent research journals.

David Pickup is an Information Specialist with 7 years' experience working on systematic review projects. He previously served (2009-2010) as the Education Trials Search Adviser for the Campbell Collaboration, providing consultations and peer review of search strategies. He continues to provide peer review services for Campbell protocols and reviews on an ad hoc basis.

Bernard, R. M. [PI], Borokhovski, E., Schmid, R. F., Waddington, D. I., & Pickup, D. Jacobs Foundation and the Campbell Collaboration. “A Meta-Analysis of 21st Century Adaptive Teaching and Individualized Learning Operationalized as Specific Blends of Student-Centered Instructional Events.” ≈ $50,000 USD

Abrami, P. C. [PI], Bernard, R. M., with other CSLP members. Fonds Québécois de la Recherche sur la Société et la Culture (FQRSC). “Instruments du savoir pour l'apprentissage. Soutien aux équipes de recherche.” Infrastructure Support: $708,000.

There are no conflicts of interest. We expect to fully complete the review by December of 2017.

By completing this form, you accept responsibility for preparing, maintaining and updating the review in accordance with Campbell Collaboration policy. The Campbell Collaboration will provide as much support as possible to assist with the preparation of the review. A draft review must be submitted to the relevant Coordinating Group within two years of protocol publication. If drafts are not submitted before the agreed deadlines, or if we are unable to contact you for an extended period, the relevant Coordinating Group has the right to de-register the title or transfer the title to alternative authors. The Coordinating Group also has the right to de-register or transfer the title if it does not meet the standards of the Coordinating Group and/or the Campbell Collaboration.

You accept responsibility for maintaining the review in light of new evidence, comments and criticisms, and other developments, and updating the review at least once every five years, or, if requested, transferring responsibility for maintaining the review to others as agreed with the Coordinating Group.

The support of the Coordinating Group in preparing your review is conditional upon your agreement to publish the protocol, finished review, and subsequent updates in the Campbell Library. The Campbell Collaboration places no restrictions on publication of the findings of a Campbell systematic review in a more abbreviated form as a journal article either before or after the publication of the monograph version in Campbell Systematic Reviews. Some journals, however, have restrictions that preclude publication of findings that have been, or will be, reported elsewhere, and authors considering publication in such a journal should be aware of a possible conflict with publication of the monograph version in Campbell Systematic Reviews. Publication in a journal after publication or in-press status in Campbell Systematic Reviews should acknowledge the Campbell version and include a citation to it. Note that systematic reviews published in Campbell Systematic Reviews and co-registered with the Cochrane Collaboration may have additional requirements or restrictions for co-publication. Review authors accept responsibility for meeting any co-publication requirements.
I understand the commitment required to undertake a Campbell review, and agree to publish in the Campbell Library. Signed on behalf of the authors:
