Becoming a teacher when teacher labor markets expand: are there adverse long-term effects on students’ learning outcomes?
ABSTRACT Do expanding teacher labor markets attract individuals of different quality to the teaching profession? I study this in the context of Germany's large secondary-school expansions since the 1970s, which created substantial swings in teacher hiring. Using rich survey data on students in 2010/11, I show that rapid expansion periods attracted teachers who negatively impact their students’ test scores. These teacher cohorts also had weaker academic records and different work motivations than teachers hired in more typical years. The findings add rare evidence on how education policies shape teacher quality.
- Research Article
- 10.1002/cl2.127
- Jan 1, 2014
- Campbell Systematic Reviews
Education is internationally understood to be a fundamental human right that offers individuals the opportunity to live healthy and meaningful lives. Evidence from around the world also indicates that education is vital for economic and social development, as it contributes to economic growth and poverty reduction, sustains health and well-being, and lays the foundations for open and cohesive societies (UNESCO, 2014). In recognition of the vital importance of education, governments across the globe have made a substantial effort to expand and improve their education systems, as they strive to meet the Education for All goals, adopted by the international community in 1990. These efforts have borne remarkable results; it is estimated that the number of out-of-school children has halved over the last decade (ibid., p. 53). However, there are still serious barriers to overcome, particularly in terms of access, completion and learning (Krishnaratne, White, & Carpenter, 2013). Access to education - particularly for girls, poor children and children in conflict-affected areas - remains a crucial issue. The 2013 Global Monitoring Report claims that an estimated 57 million children are still out of school, over half of whom are in sub-Saharan Africa (UNESCO, 2014, p. 53). Furthermore, despite increases in enrolment numbers, there has been almost no change since 1999 in the percentage of students dropping out before the end of the primary cycle. The evidence also indicates that many children enrolled in school are not learning. Recent estimates suggest that around 130 million children who have completed at least four years of school still cannot read, write or perform basic calculations (UNESCO, 2014, p. 191). Many governments have attempted to address this worrying situation, while also improving efficiency and reducing costs within the education sector, by decentralising decision-making processes.
Decisions about curricula, finance, management, and teachers can all be taken at one or more of several administrative levels: centrally at the national or federal state level, by provinces/regions within a country, by districts or by schools. The devolution of decision-making authority to schools has been widely adopted as the preferred model by many international agencies, including the World Bank, the U.S. Agency for International Development (USAID) and the UK Department for International Development (DFID), as it is assumed that locating decision-making authority within schools will increase accountability, efficiency and responsiveness to local needs (Gertler, Patrinos, & Rubio-Codina, 2008). Often described as 'school-based' or 'community-based' management, the devolution of decision-making authority to schools includes a wide variety of models and mechanisms. These differ in terms of which decisions are devolved (and how many), to whom decision-making authority is given, and how the decentralisation process is implemented (i.e., through 'top-down' or 'bottom-up' processes). School-based decision-making can be used to describe models in which decisions are taken by an individual principal or head teacher, by a professional management committee within a school, or by a management committee involving local community members. This last model may simply imply an increased role for parents in the management and activities of the school, or it may result in more active provision of training and materials to empower broader community involvement (Krishnaratne et al., 2013). The devolved decisions can be financial (e.g. decisions about how resources should be allocated within a school; decisions about raising funds for particular activities within a school; etc.), managerial (e.g. 
human resource decisions, such as the monitoring of teacher performance and the power to hire and fire teachers; decisions relating to the management of school buildings and other infrastructure; etc.) or related to the curriculum and/or pedagogy (e.g. decisions related to the articulation of a school's curriculum; decisions about how elements of a national curriculum will be taught and assessed within a given school; etc.). In order to support the process of decision-making, many models involve some means of providing information to community members on the performance of an individual school (or school district) relative to other schools (Barrera-Osorio & Linden, 2009). All of these models and mechanisms are considered to potentially increase accountability and responsiveness to local needs by bringing local community members into more direct contact with schools, and to increase efficiency by making financial decisions more transparent to communities, thereby reducing corruption and incentivising investment in high quality teachers and materials. For the purposes of this review, 'school-based decision-making' includes any model in which at least some of the responsibility for making decisions about planning, management and/or the raising or allocation of resources is located within schools and their proximal institutions (e.g. community organisations), as opposed to government authorities at the central, regional or district level. The 'intervention' considered within this review, therefore, is any reform in which decision-making authority is devolved to the level of the school. 
Within this broad definition, we anticipate that the available evidence will relate to the three main mechanisms outlined above: (1) devolving decision-making around management to the school level; (2) devolving decision-making around funding to the school level; and (3) devolving decision-making around curriculum, pedagogy and other aspects of the classroom environment to the school level. School-based decision-making is widely promoted by donors in lower-income countries as a means for improving educational quality and is often taken up enthusiastically by national governments. Both generally articulate the ultimate outcome of school-based decision-making models as being a positive change in student outcomes (including but not restricted to learning outcomes). In addition to learning outcomes (most often measured through standardised tests of cognitive skills), there are many other possible student learning outcomes which may be valued by schools, donors and governments, such as improved student ability to demonstrate psychosocial and 'non-cognitive' skills. Changes in student aspirations, attitudes (such as increased appreciation of diverse perspectives) and behaviours (such as the adoption of safe sex practices) could also be considered important educational outcomes. However, it is clear that devolving decision-making to the level of the school does not lead directly to such outcomes. Rather, school-based decision-making is likely to impact on outcomes via a number of causal pathways. Reforms that increase accountability and responsiveness to local needs are assumed to lead to positive stakeholder perceptions of (and engagement in) educational provision, which, in turn, is expected to increase enrolment, attendance and retention and to reduce corruption within schools. 
It is also presumed that increased accountability will encourage schools to make recruitment decisions on the basis of teacher performance, rather than mechanically relying on qualifications or allowing for nepotism to interfere. Such personnel practices, in turn, are seen to lead to reduced teacher absenteeism, increased teacher motivation and, ultimately, improvements in the quality of teaching within schools. It is also assumed that local communities will encourage schools to adopt more locally relevant curricula, which can then have a positive impact on the quality of teaching and student opportunities to learn. At the same time, decentralised funding mechanisms and other reforms aimed at increasing efficiency within schools, particularly when combined with efforts to increase community participation, are presumed to result in more resources being available to schools, another important factor in improving educational quality (Krishnaratne et al., 2013). Increased efficiency is, in turn, assumed to affect the cost of educational provision, a proximal outcome highly valued by governments in less well-resourced settings. School-based decision-making mechanisms, therefore, result in a number of proximal (or intermediate) outcomes, in addition to the final outcomes mentioned above. These proximal outcomes include increased enrolment, improved equality of access, improved attendance, improved retention, improved progression, and higher quality educational provision. Furthermore, there is growing evidence that decentralisation reforms may actually have unintended and sometimes negative effects in certain political and economic circumstances (Banerjee et al., 2008; Bardhan & Mookherjee, 2000, 2005; Carr-Hill, Hopkins, Lintott, & Riddell, 1999; Condy, 1998; Glassman, Naidoo & Wood, 2007; Pherali, Smith & Vaux, 2011; Rocha Menocal & Sharma, 2008; Rose, 2003; Unterhalter, 2012). 
Decentralising decision-making may lead to elite capture at the local level and/or further corruption within school systems, for example, or may limit educational opportunity for marginalised ethnic groups. There is some consensus in this literature that decentralisation is only likely to have a positive impact on outcomes when (a) there is clear government policy and/or regulations about the powers and role played by different agencies and stakeholders; (b) there are sufficient financial resources available within the system; and (c) there is some form of democratic culture (see De Grauwe et al., 2005; Lugaz et al., 2010; Pherali et al., 2011). This body of evidence highlights the contingency of the effects of decentralisation, linked to important interactions between formal structures of decision-making and informal structures of power and authority within bureaucracies, communities and schools. In addition to the ways in which enabling or constraining conditions and circumstances can alter the outcomes of school-based decision-making reforms, it is clear that differences in implementation can also affect outcomes. Those vested with the authority to make decisions on behalf of the school must have the capacity and knowledge to make such decisions, or their decisions are unlikely to have a positive impact on outcomes (World Bank, 2004). Furthermore, each link in the causal chain rests on certain assumptions which must be met in order for a change in the location of decision-making to have the desired effect(s). For instance, the assertion that involving parents and community members in the hiring and firing of teachers (an 'accountability' mechanism employed in many contexts) will improve quality of teaching rests on the assumption that (a) parents and community members will be able to identify high quality teachers who should be retained and/or rewarded and (b) the incentives provided will positively impact student learning. This is not always achieved. 
In some contexts, teacher incentive schemes have been found to have a negative impact on overall student learning, if, for instance, they create perverse incentives for teachers to block the enrolment of low-performing students in order to maintain high average test scores within their classrooms (Glewwe, Ilias, & Kremer, 2003). The impact of school-based decision-making models is, therefore, likely to differ depending on a wide variety of implementation factors, relating to the objective of the reform, the particular decisions that are devolved, the individuals given decision-making authority and the nature of the decision-making process. Figure 1 (below) is a visual depiction of our understanding of the causal pathways, contributing factors and underlying processes that appear to affect the impact of school-based decision-making on educational outcomes. Our conceptual framework is not presented here as a definitive map of the existing evidence. Rather, it is proposed as a 'working hypothesis' to help guide the implementation of this review (Oliver, Dickson & Newman, 2012, p. 68). As such, we have used the framework to generate specific review questions and define our review methodology (as recommended by Anderson et al., 2011). We plan to significantly revise, modify and potentially simplify (or disaggregate) the framework during the review process, in order to more accurately reflect the current body of evidence related to school-based decision-making in lower-income contexts. This may include articulating separate theories of change for some of the individual mechanisms, depending on the evidence available.

Figure 1. Conceptual framework (source: original).

Although the rhetoric around decentralisation suggests that school-based management has a positive effect on educational outcomes, there is limited evidence from low-income countries of this general relationship.
In reality, much of the decentralisation literature focuses exclusively on the proximal outcomes of school-based decision-making (described above). This is likely due to the relative ease of measuring such outcomes, as well as the shorter time period generally required to identify impact on intermediate outcomes. Evidence from the U.S. suggests that there can be a time lag of up to 8 years between the implementation of a school-based management model and any observable impact on student test scores, although intermediate effects may be more rapidly identifiable (World Bank, 2007, p. 13). This may explain why studies with different time scales have found mixed evidence around the impact of school-based management models on student learning outcomes (Barrera-Osorio & Linden, 2009; Jimenez & Sawada, 2003; Sawada & Ragatz, 2005). As a result of these trends within the empirical literature, existing reviews on school-based decision-making have also tended to focus on proximal outcomes (e.g. Guerrero, Leon, Zapata, Sugimaru, & Cueto, 2012, on teacher absenteeism; Petrosino, Morgan, Fronius, Tanner-Smith, & Boruch, 2012, on student enrolment). There are very few that consider the full range of relevant outcomes, including student learning. Those that do have tended to focus exclusively on one particular mechanism (e.g. Bruns, Filmer & Patrinos, 2012, on accountability reforms), rather than considering the full range of school-based decision-making models. The comprehensive reviews that do exist (e.g. Santibanez, 2007; World Bank, 2007) need updating, as they (a) rely on literature that is now nearly ten years out of date, (b) focus almost exclusively on Central America, referencing almost no evidence from other low- or middle-income countries, and (c) do not report the use of systematic searches, critical appraisal and statistical synthesis of study effect sizes. 
There is, therefore, a need for a current, globally comprehensive systematic review of the impact of school-based decision-making on a wide range of educational outcomes. Furthermore, existing reviews on this topic tell us almost nothing about why school-based decision-making has positive or negative effects in different circumstances. The exclusive focus on evidence collected through impact evaluations and quasi-experimental designs has significantly limited the policy relevance of these reviews, as this approach has (a) resulted in a very small number of included studies (fewer than 60) and (b) prevented any analysis of the conditions and circumstances under which school-based decision-making models can have a positive impact. We anticipate that the outcomes of this review will be useful for a wide range of stakeholders. In particular, policy-makers, at both the national and supranational levels, will benefit from the evidence linking decentralised decision-making processes to a wide range of potential outcomes and the analysis of underlying conditions that affect impact. School-based management is a key component of education reform across the world, and it is a particular focus of education activities sponsored by many of the core development agencies, including the World Bank, USAID and DFID. It is, therefore, crucial that we gain a deeper understanding of how school-based decision-making affects a broad range of educational outcomes in both positive and negative ways, and how such models can be strengthened and improved. The timing of this review will help to increase the potential impact of the results, as it coincides with ongoing conversations within the development community around the most appropriate focus (and strategies) for the next round of international development goals post-2015 (see http://post2015.org/; http://www.beyond2015.org). 
This review aims to answer the following overarching review question: What is the evidence around how decentralising decision-making to the school level affects educational outcomes in low- and middle-income contexts (LMICs)? The primary objective of the study, therefore, is to gather, assess and synthesise the existing evidence around how the decentralisation of decision-making to schools affects a broad range of educational outcomes in LMICs (question 1 above). This objective will be accomplished by examining the results of causal studies (e.g. those with an appropriate counterfactual) that consider the impact of at least one model of school-based decision-making on any of the proximal or final outcomes depicted in the conceptual framework above. Such analysis will allow us to report on all relevant quantitative measures of educational outcomes. Although we recognise that focusing on quantitative studies may preclude our ability to discuss outcomes usually considered harder to measure, we anticipate that the results will be useful, both for illuminating the ways in which school-based decision-making models do impact outcomes and for highlighting the current gaps in the evidence base. We also aim to draw conclusions about why particular models of school-based management work in some lower-income country contexts (and not in others), in order to make determinations about the particular contextual and implementation factors which act as barriers to – or enablers of – effective outcomes (question 2 above). This objective will be accomplished by examining evidence collected through a broader range of studies, including but not limited to that obtained from the included studies referenced in response to question 1. Given the broader scope of this second review question, studies do not need to be causal in nature in order to be included. 
In addition to examining the overall (positive and negative) effects of decentralisation processes on outcomes, we aim within this review to examine how changes in decision-making processes might impact differentially on diverse groups within societies. We are particularly concerned with groups which have historically experienced poor service delivery and/or demonstrated poor educational outcomes (e.g. marginalised or low-performing students). This will be accomplished by examining: (1) whether the interventions outlined in the included studies specifically target particular populations and (2) whether the included studies report any sub-group analysis for such populations. These objectives will be accomplished through the implementation of a high quality systematic review, relying on existing methodological guidance from the Campbell Collaboration and the EPPI-Centre at the Institute of Education (e.g. Becker et al., undated; Gough, Oliver & Thomas, 2012; Hammerstrom, 2009; Shadish & Myers, 2004). As this review aims to both aggregate the demonstrated effects of school-based decision-making on educational outcomes and draw conclusions around the conditions and circumstances that can affect outcomes, we have elected to conduct a mixed methods review, following the guidelines developed by Snilstveit (2012) for 'effectiveness plus' systematic reviews in international development. As such, we will use our conceptual framework throughout the review to guide the search strategy, decisions regarding the inclusion and exclusion of studies, coding, and synthesis. In keeping with 'effectiveness plus' review methodology, we will also consider different kinds of evidence in relation to our two review sub-questions. 
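The statistical synthesis of study effect sizes that an 'effectiveness' review of this kind performs can be illustrated in code. The sketch below is a minimal random-effects (DerSimonian-Laird) pooling of standardised mean differences; the effect sizes, variances and the function name are hypothetical illustrations, not figures from this review.

```python
# Illustrative sketch (not from the review): pooling standardised mean
# differences under a DerSimonian-Laird random-effects model.
import math

def random_effects_pool(effects, variances):
    """Return (pooled effect, standard error, between-study variance tau^2)."""
    k = len(effects)
    w = [1.0 / v for v in variances]                         # inverse-variance weights
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    # Cochran's Q measures heterogeneity around the fixed-effect estimate
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                       # DL estimate of tau^2
    w_star = [1.0 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2

# Hypothetical effect sizes (e.g. SMDs on test scores) from three studies
effects = [0.12, 0.05, 0.30]
variances = [0.01, 0.02, 0.015]
pooled, se, tau2 = random_effects_pool(effects, variances)
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
print(f"pooled SMD = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

A random-effects model is the natural default here because the protocol anticipates heterogeneous interventions and contexts; the tau-squared term absorbs true between-study variation rather than attributing all dispersion to sampling error.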
As the first review question is an 'effectiveness' question, studies included for synthesis will need to have an appropriate comparator or control group (or to have employed an appropriate method of constructing a counterfactual or control for confounding during analysis). However, a broader range of evidence, including studies based on qualitative data, will be reviewed in response to the second sub-question, as other methods are likely to be particularly useful for clarifying which external conditions and/or implementation factors may substantially affect outcomes. Studies will be included in the review if they meet the following selection criteria. We will be looking exclusively at evidence related to primary and secondary schools in LMICs. Studies of both public and private sector provision will be included. In order to be included, studies must be based in at least one context classified as a low- or middle-income country, according to the World Bank; we will consider evidence collected in LMICs across all regions. We have defined 'school-based decision-making' broadly for the purposes of this review. There are two reasons for this: (1) as the term has been applied inconsistently in the literature, we consider it important to use a broad definition in order to capture the full range of literature relevant to the review; and (2) were we to constrain our search to only particular models of school-based decision-making, we consider it likely that we would miss potentially relevant evidence across models which may be found to have a differential impact on particular outcomes. 
Given the need for breadth, we have elected to include any study that evaluates an intervention involving at least one of the three school-based decision-making mechanisms outlined in the conceptual framework: school management reforms, funding reforms, or reforms devolving decisions about curriculum and pedagogy. This is likely to include a variety of particular models, such as school management committees and school grant schemes; as a definitive typology of such models has not been developed, a broad definition allows for the widest possible range of potentially relevant evidence. In relation to the first review question, we are likely to compare between groups in which no school-based decision-making reform has been attempted and groups in which some school-based decision-making reform has been implemented. We may also compare between groups in which different school-based decision-making reforms have been attempted (e.g. funding reforms versus school management reforms). Both types of comparison will be included, although they will be distinguished from one another during synthesis. Comparisons must be contemporaneous: comparisons between interventions must relate to reforms implemented during the same time period and, likewise, comparisons between a reform group and a control group must reflect the same time period. Comparison groups are not a requirement for inclusion in relation to the second review question. As school-based models of decision-making can affect a wide range of outcomes, both positive and negative, we will not be excluding studies on the basis of a predefined set of outcomes. However, for inclusion in relation to both review questions, studies must address the relationship between school-based decision-making and at least one educational outcome (e.g. equality of access, increased enrolment, or student learning as measured by test scores or assessments of psychosocial and non-cognitive skills). Studies are eligible which report outcomes at the level of the student, or at community or administrative (e.g. district) level, as well as at the level of the school; studies based on these different methods and units of analysis will be distinguished in the synthesis (see below). Studies will be excluded in relation to this question if they do not report quantitative information on proximal or final outcomes, or if they compare groups only at the country level. Given the wide range of studies likely to be included in the review, we will assess the quality of all included studies prior to synthesis (see below); studies included in relation to the second review question will need to meet the quality criteria set out in the review methodology in order to be included for synthesis. 
Studies of any scale and with any duration of follow-up will be included; however, during coding, these specifics will be recorded for each included study, so that we can consider differences that are likely to affect synthesis. As members of our review team are fluent in several languages, we intend to include studies published in any of these languages; studies in other languages will be included where translations are available. We will include published and unpublished literature, including grey literature and process evaluations; empirical studies reporting null and/or negative results will also be included. The first four search steps will be undertaken at the outset of the review process. Existing systematic reviews will first be identified through the libraries of the EPPI-Centre at the Institute of Education and the Campbell Collaboration, and the reference lists of any potentially relevant reviews will be screened for potentially relevant studies. We will then conduct database searches, with the support of our information specialist, in a range of bibliographic databases and repositories. These resources have been selected because they are likely to index evidence that is relevant to the review questions while also covering a wide range of disciplines and regions; we have also made an effort to include resources that are likely to help us identify grey literature and literature produced within LMIC contexts. In addition, we will search for potentially relevant grey literature on the websites of relevant international and development organisations. We will also reach out to a small number of researchers who are known to have published widely on school-based management, in order to identify potentially relevant studies on education decentralisation that have been completed but not yet published. We will limit the main database searches to a defined publication period; however, we will search the reference lists of existing literature reviews (e.g. Santibanez, 2007, and World Bank, 2007) and systematic reviews to identify relevant literature, including studies published before that period. Once the search has been completed, all potential titles and abstracts will be imported into reference management software and duplicates will be removed. We will then begin the process of screening and selecting studies (described in more detail below). Once we have settled on our final set of studies for quality appraisal, we will supplement our final search by back-referencing the bibliographies of all included studies – and by forward citation tracking of our included studies – in order to identify any key references that we might have missed during the search; if any such references are identified, they will be included subject to quality appraisal. On the basis of the conceptual framework, we have developed a set of terms to be used in the main search. The set of search terms in Table 1 has been developed through an iterative process: members of the review team proposed a set of mechanisms and models which have appeared in the literature on school-based management; test searches then combined this set with terms for decentralisation, education and LMICs; and the results of these test searches were reviewed by the review team to generate further search terms for inclusion in the final search strategy. Our final search will combine the resulting search categories: in order to be retrieved by the search, studies must match at least one term from each category in the title or the abstract. In databases allowing for controlled-vocabulary searches, relevant terms will also be included as subject terms in the search strategy.
- Research Article
- 10.1177/09612033251317557
- Jan 27, 2025
- Lupus
Background: Hydroxychloroquine is recommended for all patients with systemic lupus erythematosus (SLE) because of its efficacy and safety. Previous studies of antimalarial toxicity under non-experimental conditions have often grouped hydroxychloroquine and chloroquine. This study focuses on the long-term toxicity of antimalarial drugs in SLE patients at a single reference centre. The research seeks to identify trends in antimalarial toxicity and determine risk factors contributing to long-term adverse effects. Materials and Methods: Retrospective data were collected from electronic medical records of consecutively diagnosed SLE patients, followed for at least 5 years, from 1998 to 2017. The outcome variable "antimalarial long-term adverse effect" was considered if the adverse effect occurred after at least 5 years of continuous antimalarial treatment. Hazard regression analysis was used to identify independent factors associated with long-term antimalarial adverse effects. Results: Three hundred and twenty-two patients followed for a median of 15 years were analysed. The mean age at SLE diagnosis was 33.4 years, and 91.3% were women. Antimalarial drugs were started in 314 (97.5%) patients. Adverse effects were observed in 55 (17.5%) patients, mainly macular toxicity (11.5%). The incidence of all types of toxicity was higher in long-term users than in short-term users (12.1% vs 5.4%; p < .001). Previous use of chloroquine (HR 3; 95% CI 1-9), anti-β2-glycoprotein antibody positivity (HR 2; 95% CI 1-4) and hydroxychloroquine doses higher than 5 mg/kg/day (HR 2.6; 95% CI 1-7) were identified as independent factors associated with long-term antimalarial toxicity. Conclusions: In our experience over the past 20 years, almost all SLE patients were treated with antimalarials. Macular toxicity was the most common long-term adverse effect. 
Patients with previous use of chloroquine, higher-than-recommended doses of hydroxychloroquine, and positive anti-β2-glycoprotein antibodies were more likely to develop long-term antimalarial toxicity.
- Research Article
- 10.4073/csr.2012.19
- Jan 1, 2012
- Campbell Systematic Reviews
Interventions in Developing Nations for Improving Primary and Secondary School Enrollment of Children: A Systematic Review
- Research Article
- 10.1080/08923647.2014.924697
- Jul 3, 2014
- American Journal of Distance Education
This study assessed the effect of instructional video designed according to the Cognitive Theory of Multimedia Learning, applying segmentation and signaling, on the learning outcomes of students in an online technology integration course. The study also assessed the correlation between students’ personal preferences (preferred learning styles and area of specialization) and their learning outcomes. A three-group pretest–posttest design was employed to assess whether there were significant differences in students’ test scores after they watched an instructional video. The results of the analysis of covariance (ANCOVA) indicate that instructional design had a significant effect on students’ learning outcomes. This effect was demonstrated by the statistically significant differences in students’ learning outcomes, with the highest scores achieved by students in the segmented and signaled video group and the lowest scores in the no-segmentation and no-signaling group. Moreover, results indicate that students’ learning preferences and area of specialization related significantly and positively to their learning outcomes. These findings suggest that the use of educational video in online courses has the potential to effectively improve students’ learning outcomes; however, it requires design manipulation. The results also emphasize the importance of rethinking the one-size-fits-all approach in developing online course content and of considering students’ learning preferences and area of specialization to optimize their learning.
- Research Article
- 10.3389/fpsyg.2022.943838
- Jul 22, 2022
- Frontiers in Psychology
Teachers have a very important role in determining the quality of the teaching-learning process and students’ learning outcomes. Learning outcomes will be optimally achieved if they are supported by qualified teachers. One way to enhance teachers’ performance is through instructional supervision, which can be divided into two techniques, namely group and individual supervision. Therefore, this study aims to determine the influence of instructional supervision techniques on the work motivation and performance of elementary school teachers. This study was conducted in East Java, Indonesia, and an explanatory research design was used. The sample was taken from 80 elementary school teachers in Malang and Blitar using a multi-stage random sampling technique. Data were collected through questionnaires and documentation, and then analysed using the structural equation modeling technique. The results of this study showed that group supervision has a significant effect on teachers’ performance, whereas individual supervision influenced teachers’ work motivation, which in turn affected their performance.
- Research Article
- 10.59231/edumania/8997
- Oct 1, 2023
- Edumania-An International Multidisciplinary Journal
Scores in tertiary institutions serve as a benchmark for determining students’ learning outcomes, placement, and graduation. This study assessed the test-scoring knowledge of tertiary-institution lecturers in Zamfara State. The study determined the proportions of lecturers with high, moderate, and low scoring knowledge, and examined whether significant differences exist in lecturers’ test-scoring knowledge by field of knowledge. The study answered one research question and tested one hypothesis at the 0.05 level of significance. The population consisted of 300 lecturers in Zamfara State, sampled through multi-stage sampling. A survey design was employed. An adapted instrument, the Teachers Test Scoring Scale (TTSS), was used for data collection. Of the 300 instruments administered to lecturers, 289 were retrieved, yielding a 96.3% response rate. ANOVA was used for data analysis. The results revealed that 13% of lecturers had high knowledge of test scoring, 66% had moderate knowledge, and the remaining 21% had low knowledge. The findings also revealed no significant difference in lecturers’ test-scoring knowledge by field of knowledge. Based on the findings, it was recommended that in-service training, workshops, monitoring of assessment from test construction to test scoring, and assessment policies such as prepared marking schemes can improve credibility in test scoring and thus enhance accurate and consistent students’ test scores.
- Research Article
- 10.15294/jbe.v7i1.22505
- Apr 30, 2018
- Journal of Biology Education
The purpose of this research was to develop interactive multimedia based on PBL suitable for learning the human excretory system and to analyze its effectiveness in increasing learning outcomes and student activity. The research design was Research and Development (R&D). The research was carried out at SMP N 1 Batang, with class VIII C as the research subject, selected using a purposive sampling technique. The results showed that the interactive PBL-based multimedia developed was valid according to the expert validators. The interactive PBL-based multimedia was effective for learning the human excretory system, and students improved both their activity and their test scores. It can be concluded that interactive PBL-based multimedia is an effective learning medium for the human excretory system and increases learning outcomes.
- Research Article
- 10.31332/ai.v13i1.895
- May 30, 2018
- Al-Izzah: Jurnal Hasil-Hasil Penelitian
The aim of this research is to investigate teachers’ competence and work motivation as well as students’ achievement, specifically for Islamic education (PAI) teachers: PAI teachers’ competence and work motivation, and students’ learning outcomes at Madrasah Tsanawiyah according to the KKM of MTs Negeri 1 Serang. The research instruments used were a questionnaire, a test, and documentation. The collected data were analyzed quantitatively with descriptive statistics and multiple regression analysis. Hypothesis testing yielded the following conclusions. First, teachers’ competence, teachers’ work motivation, and students’ achievement are all classified in the medium category. Second, PAI teachers’ competence has a significant influence on students’ achievement at Madrasah Tsanawiyah based on the KKM of MTs Negeri 1 Kabupaten Serang, accounting for 5.3%. Third, PAI teachers’ work motivation has a significant influence on students’ achievement at Madrasah Tsanawiyah based on the KKM of MTs Negeri 1 Serang, accounting for 15.5%. Fourth, PAI teachers’ competence and work motivation together have a significant influence on students’ achievement at Madrasah Tsanawiyah based on the KKM of MTs Negeri 1 Serang, at 5.5%. Finally, it can be concluded that the combined influence of PAI teachers’ competence and work motivation on students’ achievement at Madrasah Tsanawiyah based on the KKM of MTs Negeri 1 Serang is only 5.5%.
- Research Article
- 10.1186/s12909-025-07826-z
- Oct 2, 2025
- BMC Medical Education
Background: Artificial intelligence-assisted teaching, as an innovative model that combines intelligent technology and personalized education, is increasingly being emphasized in higher medical education. Methods: This study included 523 participants, with a valid response rate of 87.2%. An integrated model based on the ARCS motivation model and constructivist theory was developed to explore the factors influencing medical students’ learning outcomes in the context of AI-assisted instruction. Descriptive statistics were conducted using SPSS 23.0, a structural equation model was constructed and validated using Amos 23.0, and mediation analysis was performed with Process (version 3.3.1). Results: The study confirmed that teaching quality had a positive effect on learning motivation (β = 0.645, P < 0.001) and learning outcomes (β = 0.128, P = 0.032). Learning motivation positively influenced learning attitude (β = 0.822, P < 0.001) and learning satisfaction (β = 0.350, P < 0.001). Learning attitude had a positive impact on both learning satisfaction (β = 0.530, P < 0.001) and learning outcomes (β = 0.232, P = 0.020). Learning satisfaction was also positively associated with learning outcomes (β = 0.415, P < 0.001). The external environment had a positive effect on learning motivation (β = 0.449, P < 0.001) and learning outcomes (β = 0.101, P = 0.033). Moreover, learning motivation played a significant mediating role in the relationships between teaching quality and learning outcomes (β_inmedia = 0.343, 95% CI [0.273, 0.414]), as well as between the external environment and learning outcomes (β_inmedia = 0.287, 95% CI [0.218, 0.355]). Conclusion: Teaching quality and the external environment indirectly enhance medical learning outcomes by strengthening learning motivation. Learning motivation plays a key role in shaping learning attitude, satisfaction, and outcomes, confirming the positive value of AI-assisted teaching in optimizing learning mechanisms.
This study contributes to the application of AI-assisted teaching in medical education and provides empirical support for improving medical students’ learning performance.
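The mediation result in this abstract (an indirect effect of teaching quality on outcomes via motivation) was obtained with SPSS, Amos, and the Process macro. As a rough illustration of the underlying idea only, the sketch below estimates an a×b indirect effect with a percentile bootstrap on synthetic data; the single-predictor slopes are a deliberate simplification (a full mediation model regresses the outcome on both mediator and predictor), and every number is fabricated for illustration, not taken from the study.

```python
import random
import statistics

def ols_slope(x, y):
    """OLS slope of y on a single predictor x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def indirect_effect(x, m, y):
    # a-path (x -> m) times b-path (m -> y). A full mediation model would
    # regress y on both m and x; this single-predictor version is simplified.
    return ols_slope(x, m) * ols_slope(m, y)

def bootstrap_ci(x, m, y, reps=2000, alpha=0.05, seed=7):
    """Percentile-bootstrap confidence interval for the indirect effect."""
    rng = random.Random(seed)
    n = len(x)
    draws = []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        draws.append(indirect_effect([x[i] for i in idx],
                                     [m[i] for i in idx],
                                     [y[i] for i in idx]))
    draws.sort()
    return draws[int(reps * alpha / 2)], draws[int(reps * (1 - alpha / 2)) - 1]

# Synthetic data: quality -> motivation -> outcome; the true indirect effect
# is 0.6 * 0.5 = 0.3. All numbers are fabricated for illustration.
rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(200)]
m = [0.6 * xi + rng.gauss(0, 0.5) for xi in x]
y = [0.5 * mi + rng.gauss(0, 0.5) for mi in m]
lo, hi = bootstrap_ci(x, m, y)  # a CI excluding zero supports mediation
```

A bootstrap interval that excludes zero is the usual evidence for an indirect effect, which is the same logic the Process macro applies.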
- Research Article
- 10.61132/pragmatik.v2i4.1055
- Sep 6, 2024
- Pragmatik : Jurnal Rumpun Ilmu Bahasa dan Pendidikan
This study aims to improve students' learning outcomes in analyzing the intrinsic elements of short stories through the application of Team Games Tournament (TGT) learning model in class XI-D3 SMAN 9 Surabaya. This research was conducted in two cycles, with each cycle including four stages: planning, action, observation, and reflection. Data were collected through classroom observation, field notes, learning outcome tests, and analysis of intrinsic elements of short stories by students. The results showed that the application of the TGT learning model was effective in improving students' learning outcomes in analyzing the intrinsic elements of short stories. This improvement can be seen from the increase in students' test scores from cycle 1 to cycle 2. This study suggests that the TGT learning model can be used as an effective alternative learning strategy to improve students' learning outcomes in analyzing the intrinsic elements of short stories. In addition, this study also shows that short stories are the right learning media to develop students' analytical skills.
- Research Article
- 10.21833/ijaas.2024.06.015
- Jun 1, 2024
- International Journal of ADVANCED AND APPLIED SCIENCES
Linear and quadratic functions are crucial topics in math education, and there is a significant focus on using activity-based learning (ABL) to teach these subjects. However, previous research has shown gaps, especially in how this approach affects high school students' learning outcomes. Most studies focus only on test scores and do not consider students' satisfaction or apply these methods to other math topics. This study aimed to assess the impact of ABL on the academic performance and satisfaction of 11th-grade students with linear and quadratic functions. The research was conducted in a Thai public school with 38 participants, using various tools like an ABL curriculum, skills assessments, achievement tests, and a satisfaction survey. The findings clearly showed that ABL improves students' understanding and problem-solving skills in complex math topics like linear and quadratic functions. This study provides solid evidence that ABL is effective in high school math, suggesting it could improve students' overall learning experiences and outcomes.
- Research Article
- 10.1002/cl2.186
- Jan 1, 2017
- Campbell Systematic Reviews
The primary purpose of assessment in an instructional setting is twofold: to determine whether learners have achieved the stated objectives and learner outcomes described in the curriculum and to determine whether educators meet those learning objectives in the classroom. Assessment, quite often, is administered in the form of a written or computerized test or exam. An exam is a measurement instrument designed to measure knowledge and understanding of defined content (Gaberson, 2008). Testing is an important activity for the learner as the learner needs to identify how they performed on the test, and whether test results allow them to progress in their program (Sainsbury & Walker, 2008). Testing is also important for the educator. Testing is a means to identify whether teaching is effective and how well the student comprehends the material. Because testing is a “high-stakes” (Hoachlander, 1998) activity in which students must offer their best performance, it can be anxiety provoking and can negatively affect their performance. There are many reasons why students might underperform on tests, including poor performance related to test anxiety, poor study skills, improper test preparation, language difficulties, and the presence of cultural bias through poorly written questions (Lusk & Conklin, 2003). There is little empirical evidence available on the validity of various testing formats, yet educators still rely heavily on traditional individual testing as a method of evaluation or assessment (Halstead, 2007). An additional complicating factor is that all tests have error; without conducting a proper item analysis, poorly constructed or unvalidated tests can inaccurately reflect student knowledge (Gaberson, 2008). Because of the many disadvantages inherent in traditional testing, alternate testing methods have been explored. One of these is collaborative testing, a relatively recent modality that is becoming more frequently utilized as an alternative. 
The intervention that will be examined in this review is collaborative testing. Collaborative testing is a method in which students work together while taking a written evaluative exam. Collaborative, or cooperative, testing is an umbrella term used to describe a test method different from traditional or individualized testing. Collaborative testing has been utilized in undergraduate, graduate, and postgraduate settings. Collaborative testing, which is a student-centered, active learning approach, is also referred to as group testing, double testing, paired testing, cooperative testing, and dyad testing (Centrella-Nigro, 2012). Group testing is the term utilized when a test is administered to more than two students. Generally, in group testing the group consists of three to six students (Centrella-Nigro, 2012; Leight, Saunders, Calkins, & Withers, 2012; Wiggs, 2011). The terms dyad or paired testing are utilized when students are paired with a partner to take an exam (Centrella-Nigro, 2012; Haberyan & Barnett, 2010; Mitchell & Melton, 2003). Double-testing is another procedural method for administering a group test. In double-testing, a student first takes an exam as an individual and then re-takes the same exam either with a partner or in a group. Within each of these methods of collaborative test administration, partner and group selection, as well as time allowed to take the test and methods for grading, all vary. This variation of methods and procedures for implementing collaborative testing makes comparisons of collaborative testing outcomes difficult. Collaborative testing is a fairly new concept, with little research to clearly demonstrate that this testing method improves student learner outcomes and test-taking performance in higher education (Haberyan & Barnett, 2010; Mitchell & Melton, 2003; Sandahl, 2009; Sandahl, 2010). Active learning is a teaching concept that promotes student engagement.
It is often reported that a collaborative learning environment provides more effective learning. Collaborative testing can be a component of active learning in which a small group of students actively engage in learning through testing. Cooperative and collaborative learning are subsets of active learning in which students work in groups, teams, or pairs to perform complex tasks. This collaboration encourages active engagement in the learning process as group members discuss test questions, debate, and problem-solve to determine the best answers and communicate supporting reasons for their particular answer. Because collaborative testing is accomplished by groups of students, the literature often refers to collaborative or cooperative testing as group testing (Jensen, Moore & Hatch, 2002; Sandahl, 2010; Vockell, 1994; Wilder, Hamner & Ellison, 2007). Collaborative testing may provide a mechanism that more accurately evaluates students’ knowledge and understanding of course material. Testing, especially high-stakes testing, often causes test anxiety for the student. Test anxiety tends to weaken students’ test-taking ability and ultimately their overall grade in a course. Anxiety interferes with students’ ability to recall knowledge about the material being tested, consequently leading to poor test performance (Markman, Balik, Bercovitz & Ehrenfeld, 2010). Collaborative testing, where students can look to their peers for answers or for verification of their own answers, can decrease pressure and anxiety, allowing improved recall of material and improved test scores (Mitchell & Melton, 2003). Through active peer collaboration during testing, both high- and low-performing students can complement each other's knowledge, allowing the combined effort to improve the group's understanding and application of course material (Lusk & Conklin, 2003).
Collaboration during testing also alters the concept of competition among the students, encouraging them to work as a team supporting each other's success. The idea that a higher-performing student depends on a lower-performing student to contribute to their shared success may positively impact the lower-performing student's attitude about his or her own abilities (Bransford, Brown & Cocking, 2000). Positive self-concepts and test scores ultimately lead a student to improved knowledge and a desire to learn and be successful (Phillips, 1988). Analysis of collaborative testing research is difficult for many reasons. Collaborative testing research focuses on a variety of outcomes, including student satisfaction, anxiety, course and end-of-semester grades, retention of short-term and long-term knowledge, and effects on teamwork. Furthermore, studies reveal multiple methods and procedures for administering and scoring tests that are taken collaboratively. Examples of these variations include differences in group size and group makeup, timing of tests with regard to administration procedure and placement within the course, and determination of grades. Collaborative testing research methods also vary widely in the number of students used in the studies, student grade level, type of exam administered (multiple choice, essay, etc.), and type of class or course in which the testing is used. There are debates regarding the advantages and disadvantages of collaborative testing. It has been shown that students perceive that they learn better in collaborative testing and exhibit improved individual test scores (Leight et al., 2012; Sandahl, 2009). Other reported advantages of collaborative testing include, but are not limited to, better critical thinking skills, improved collaboration and teamwork among peers, enhanced learning and test-taking skills, motivation to study material, reduced test anxiety, and improved test-taking performance and confidence.
Conversely, researchers report that collaborative testing may artificially raise grades for students who otherwise would not pass a course (Jensen et al., 2002; Mitchell & Melton, 2003). Investigators have also demonstrated that collaborative testing does not result in increased learning and retention of course material (Giuliodori, Lugan, & DiCarlo, 2008; Harton, Richardson, Barreras, Rockloff & Latane, 2002; Leight et al., 2012). Educators describe disadvantages of collaborative testing such as poor effects on student study time and true comprehension of material. Students may not spend the necessary time in individual study if they know that their group may help them to pass the exam. Other disadvantages include lack of student preparation for the group, making the overall group score decrease, as well as student reports of knowledge insecurity. This insecurity arises from students' lack of preparation for the exam, yet passing it without truly knowing the material. Internal group conflict is another disadvantage that may negatively impact exam scores, as students may be pressured to change originally correct answers. Lastly, long-term retention of material was identified as a disadvantage, in that students who tested collaboratively may not ultimately retain the material for a comprehensive individual exam (Lusk & Conklin, 2003; Sandahl, 2009). Several studies have demonstrated that collaborative testing enhanced student learning, clarified misconceptions of testing material among peers, and improved test scores. Mitchell and Melton (2003) used collaborative testing for a unit exam on fluids and electrolytes in which students in an Associate Degree nursing program were randomly assigned to groups. The students who volunteered to participate took the exam individually and were allotted 50 minutes for a 50-item multiple-choice exam.
After submitting the individual test, a second test form was given to student groups, who were allowed 10 minutes to discuss the questions on the exam and permitted to change their original answers. The second test form permitted an assessment of whether collaboration resulted in a change in the student's original exam grade. The average of the individual and group grades was used to determine the final test score. Study authors found through student surveys that collaborative testing helped to validate knowledge and clarify misconceptions. However, they found that collaboration gave unprepared students the opportunity to receive a higher grade than the one they received individually. Haberyan and Barnett (2010) examined the efficacy of collaborative testing in an Educational Psychology class. Students completed the first and final exam individually and had the option of completing the third exam collaboratively. The researchers found that those testing with a partner scored significantly higher (M = 9.63, SD = 2.20) than those testing alone (M = 8.15, SD = 2.60). A second part of the study examined exam performance conditional on whether students studied with a partner. Study authors found that exam performance was improved with collaboration, regardless of whether the student studied alone or with a partner. Sainsbury and Walker (2008) studied group testing with students in a Bachelor of Pharmacy program in which thirty-five percent of the course grade was based on quizzes. In the study, quizzes were administered individually, then to a group. After discussion with their peer partners, students were permitted either to keep their initial answers or to change them. Immediately after the quiz was submitted, course faculty allowed students to continue to engage in discussion about the quiz to enhance learning of the material.
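Haberyan and Barnett's partner-versus-individual comparison above reports only means and standard deviations. A Welch t statistic can be recomputed from such summary statistics, as sketched below; the group sizes are not given in this excerpt, so the n = 40 per group used here is a hypothetical placeholder.

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic and approximate degrees of freedom
    computed from summary statistics (means, SDs, group sizes)."""
    v1, v2 = s1**2 / n1, s2**2 / n2      # squared standard errors
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t, df

# Means and SDs quoted from Haberyan & Barnett (2010) above;
# n = 40 per group is a hypothetical placeholder (sizes not reported here).
t, df = welch_t(9.63, 2.20, 40, 8.15, 2.60, 40)
```

With these placeholder sample sizes the difference of 1.48 points yields a t between 2 and 3, which is consistent with the "significantly higher" finding but not a reproduction of the study's actual test.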
Giuliodori, Lugan and DiCarlo (2008) studied students in a Veterinary Physiology course who took individual tests and then answered the same questions in teams of two. Students who tested in a group answered questions correctly 70.2% of the time, while students who tested individually responded correctly 58.7% of the time [t = 11.4, df = 137, p < .001, 95% CI: 1.88-2.67]. In a study by Zimbardo, Butler and Wolfe (2003), students in a psychology course were given the option of taking their midterm or final examinations with a partner of their choice. The final grade obtained by the team was considered the common grade. The researchers found that students’ performance on the team exams was better than that of those who took individual exams, yielding means of 73.1 versus 64.6 (t = 7.25, df = 202.74, p < .0001). When the study was repeated with students in a comparable course the following term, the authors found nearly identical results. Students also reported that they experienced reduced test anxiety, that the collaboration gave them practice with negotiating differences over answers, and that the experience enhanced learning and comprehension of material. Several studies have demonstrated that collaborative testing helps to enhance all students' test scores within a course. Wink (2004) studied collaborative testing for two exams in a Health Care Policy course in which students were placed in groups of six to ten. Students took the exam individually, and then again as a group. Scores were computed using an average of the individual and the group results, and demonstrated that student scores on both exams increased along with their final course grades. Researchers discuss grade inflation as a potential barrier to the use of collaborative testing (Wink, 2003; Wilder et al., 2007). To prevent grade inflation for poor-performing students, different approaches have been reported by various researchers. Wilder et al.
(2007) used an average of the individual and group grades to obtain the student's final grade. Students who did not receive a passing grade on the individual exam were permitted to participate in the group test activity but could not elevate their grade based on the group grade. Jensen et al. (2002), for a second quiz in an Anatomy and Physiology course, had students meet in their groups, where they were given one copy of the questions and one answer sheet to turn in. The researchers recommended a scoring rubric with points assigned for ranges of correct answers; as an example, a group grade of 8 to 10 out of 10 possible points resulted in 5 points added to the final test score. Overall mean scores were higher for students taking quizzes cooperatively than for those taking individual quizzes (t = 20.3, df = 405, p < .0001). Overall, both studies found that cooperative testing produced better performance on exams than individualized exams but did not greatly impact students’ final grades. Another barrier to the use of collaborative testing is the concern that performance on group exams does not translate to similar performance on comprehensive exams and retention of material. Woody, Woody and Bromley (2008) studied students who participated in an individual retest approximately three weeks after their initial group exam. The students formed groups of three or four in advance of the group examination. They found that students scored higher in the group format (M = 84.1%, SD = 8.22) than in the individual format (M = 75.5%, SD = 12.56); however, collaborative testing did not result in later retention of course material. Leight et al. (2012) determined that the vast majority of students scored higher on group exams than on individual exams. In this study, students were semi-randomly divided into two groups based on their performance on the first exam.
The second exam, which included class content from the first exam, was taken individually and then again in small groups of two to four. The research revealed that the positive effect of collaborative testing did not result in increased retention of content. Lastly, Sandahl (2010) randomly assigned students to groups of three or four, in which two exams were administered in groups and two individually. Students in the group exam were not required to reach consensus on the answers. The researcher found that students who took the exams collaboratively scored significantly higher than those who took the exams individually (p < .05). However, there was no significant effect on retention of material (Lambda (12, 262) = .921, p = .526). This review intends to determine whether collaborative testing in the classroom produces positive learner outcomes in higher education. Not only will this review identify whether learner performance on collaborative tests improves students’ class grades when measured against their individual grades, but it will also assess students' perceptions of anxiety during test taking in either individual or collaborative format. The review will also evaluate students’ evaluations of the collaborative process, their perceptions of confidence in test-taking skills, and their overall perceptions of comprehension and retention of material. It is important that classroom teaching and learning continue to evolve. A greater emphasis has been placed on teamwork in the workplace, and collaborative learning may be one mechanism that can enhance the concept of teamwork. This review intends to contribute to the growing literature on collaborative testing by demonstrating its impact on teamwork. As the classroom expands and the learners change, new and creative formats of teaching and learning are also required.
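The grading rules described in the studies above (averaging individual and group scores, blocking grade lifts for failing students, and rubric-based bonus points) can be sketched as small functions. Only the 8-10-out-of-10 → 5 points band of Jensen et al.'s rubric is quoted in the text; the lower bands and the passing threshold below are hypothetical placeholders.

```python
def final_score_average(individual, group):
    """Mitchell & Melton (2003) / Wink (2004) style: final score is the
    average of the individual and group scores."""
    return (individual + group) / 2

def final_score_no_lift(individual, group, passing=75):
    """Wilder et al. (2007) style: a failing individual score cannot be
    raised by the group result. The passing threshold of 75 is hypothetical."""
    if individual < passing:
        return individual
    return final_score_average(individual, group)

def rubric_bonus(group_correct, total=10):
    """Jensen et al. (2002) style bonus points. Only the 8-10 out of 10 ->
    5 points band is quoted in the text; the lower bands are hypothetical."""
    if group_correct >= 0.8 * total:
        return 5
    if group_correct >= 0.5 * total:
        return 3  # hypothetical band
    return 0      # hypothetical band
```

For example, `final_score_average(80, 90)` yields 85, while `final_score_no_lift(70, 95)` stays at 70, which is how the no-lift rule limits grade inflation for students who fail the individual exam.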
Although collaborative testing is a fairly new method of learning, this review will contribute to the growing body of literature and provide relevant information for both faculty practice and student learning. The aim of this review is to assess the effect of collaborative, group, or double testing on learning outcomes for students in higher education settings. Specifically, this systematic review asks the following research question: What is the effect of collaborative testing on learning outcomes for higher education students? This review will look at the evidence on collaborative testing, defined as instances where two or more students work together to complete an evaluative exam. This review will only include adult higher education students, defined as students aged 18 years or older attending post-secondary institutions. We will not exclude studies on the basis of their participants’ socioeconomic status, gender, race or any other demographic variable. All comparative quantitative study designs with at least two time points for measurement of the outcomes—including but not limited to randomized controlled trials and quasi-experimental studies, pre-post and time series designs with control groups, as well as observational studies with control groups, such as longitudinal prospective or retrospective cohort studies and case-control studies—will be included in this review. The use of these study designs in systematic reviews of comparative effectiveness is an acceptable strategy per the IOM (2011) and Campbell (2013) guidelines, particularly in the context of the GRADE (Guyatt, Oxman, Schünemann, Tugwell, & Knottnerus, 2011) approach for evaluating the quality of the evidence. Quantitative study designs that do not include a comparator/control group will be excluded, such as correlational designs, descriptive studies, or single-cohort prospective or retrospective studies.
Before-after studies in which participants serve as their own control will not be excluded from the analyses. The manner in which the baseline equivalence of the comparator group was established, or whether baseline equivalence was established at all, will not be a consideration in the selection of studies for inclusion.

Strategy Development

The search strategy will be developed in three stages. First, keywords derived from the study's inclusion criteria will be combined using Boolean operators and used to identify potentially relevant studies in the PsycINFO database (see Appendix I for this list). Citation chaining will also be used to identify additional relevant studies as well as those citing studies comprising the initial subset. Second, the indexing of the identified citations will be examined for subject headings and relevant keywords capable of being added to the search terms. Finally, the revised search strategy will be executed in the individual databases using controlled vocabulary, including database-specific subject headings derived from database thesauruses and the revised list of keywords. The assistance of a librarian with experience supporting systematic reviews will be sought in searching published, peer-reviewed literature using validated search strategies for the following electronic databases: MEDLINE, Embase, Academic Search Premier, CINAHL, PsycINFO, ERIC and VET-Bib. In addition, the ProQuest Dissertations & Theses database and a general internet search engine (i.e., Google) will be used to identify literature not formally published in sources such as books or journal articles (i.e., grey literature) using a combination of keywords and outcomes of interest. PubMed will also be searched for publications ahead of print. Databases will be searched from inception to the present date. Search results will be merged using appropriate systematic review software and duplicate records of the same report will be removed.
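Combining keywords with Boolean operators, as the first stage of the search strategy describes, amounts to OR-ing synonyms within each concept and AND-ing the concept blocks together. The sketch below illustrates that construction; the terms are examples drawn from the review topic, not the actual Appendix I list.

```python
def boolean_query(population, intervention, outcomes):
    """OR synonyms within each concept block, then AND the blocks together."""
    def block(terms):
        return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"
    return " AND ".join(block(terms) for terms in (population, intervention, outcomes))

# Example terms drawn from this review's topic, not the actual Appendix I list.
query = boolean_query(
    ["higher education", "college students"],
    ["collaborative testing", "group testing", "dyad testing"],
    ["test scores", "test anxiety", "learning outcomes"],
)
```

The resulting string follows the (population) AND (intervention) AND (outcome) pattern that database search interfaces such as PsycINFO accept.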
Our search will not employ hand searching of individual journals. We will however contact investigators to for which only are We will also contact within the with regarding other potentially relevant studies that they may be Finally, the search in the review will be to for to that the recent is and will the and of the studies that are identified from of the search and will determine whether these studies meet the inclusion criteria of this review. these will be first consensus among them with by of a third in The of studies that pass the initial by and will then be examined by two and in to that they meet the inclusion these two authors in this process will be first by or if in with a third The critical provide means of whether or not the study investigators to with these inherent in observational designs, such as the use of as a strategy to for or testing on demographic to identify potentially significant differences the comparator groups. The presence of these strategies will be identified and examined as part of the of the studies that meet the inclusion We will use the GRADE et al., 2011) approach to or the evidence from studies or observational studies, We will not exclude any studies based on on individual of bias or study of of the two will be first through then if in with a third will be using a in which will be tested on of the included studies the two who will be performing the and will be consensus or in with a third The that will will include information Study study post-secondary of type of collaborative testing example, then group then group testing timing or group testing without an individual of the comparator type of of how the is measured type of instrument was used to measure the if information on the effect size for each of and for mean for size in each group, measure of in the study confidence or and studies will be for We will outcomes mean differences in mean grade in anxiety scores measured using validated as or mean differences to allow of 
studies that use different outcome measures. Dichotomous outcomes, such as grade inflation (the proportion of students who pass the course who otherwise would not), will be expressed as odds ratios or risk ratios. The precision of all outcome estimates will be reported using the 95% confidence interval. We do not intend to pool students' perceptions of the collaborative process or perceptions of comprehension, given the heterogeneous instruments used within individual studies; where these outcomes are reported, we will describe the results narratively. We will also include studies of related methods of testing, such as double testing, dyad testing, and group testing. Finally, results will be summarized narratively where pooling is not appropriate and for outcomes that are not related to student performance. Heterogeneity will be assessed to identify variation among the included studies; studies evaluating outcomes using methods too dissimilar to be combined will either be synthesized narratively or will not be pooled. In the event that the majority of studies are judged to present a high risk of bias on a study domain that would compromise their findings, this will be noted in the synthesis. Pooling will be performed using a random-effects model. We chose random effects for multiple reasons. First, the literature on collaborative testing reports a range of effects on student performance, suggesting that the intervention effect described by these studies is not identical but varies along a distribution. Second, heterogeneity (i.e., individual study estimates of the effect are expected to vary by study characteristics such as selection, method of collaborative testing, group composition, and subject) suggests that a more conservative approach is warranted (i.e., a random-effects model). Finally, in the absence of true heterogeneity, the random-effects model reduces to the fixed-effect model. We will perform sensitivity analyses based on the risk-of-bias criteria: for each domain of bias, we will exclude studies found to be at high risk of that systematic bias and will re-run the analysis without these studies in order to determine whether that type of bias has an impact on the estimate of effect. Funnel plots will be used to assess publication bias if there are at least 10 studies available to include in the analysis; we will assess this bias visually, in keeping with best practice (2011), and will not formally assess it using statistical tests. Where data are missing, we will attempt to contact study authors to obtain them. We will note whether the missing data appear to be missing at random, or whether they are not missing at random and are therefore likely to bias the results; studies will be excluded from the analysis on the basis of missing data relevant to the outcomes of interest.
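The standardized mean differences described above are computed per study from reported summary statistics. A minimal sketch of one such calculation (Hedges' g with its small-sample correction and 95% confidence interval) is shown below; all group sizes, means, and standard deviations are purely illustrative, not values from any included study:

```python
import math

# Hypothetical summary statistics for one study comparing mean exam
# grades of a collaborative-testing group and an individual-testing
# group; all numbers are illustrative only.
n1, mean1, sd1 = 40, 78.0, 9.0   # collaborative testing
n2, mean2, sd2 = 38, 72.0, 10.0  # individual testing

# Pooled standard deviation and Cohen's d
sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
d = (mean1 - mean2) / sp

# Hedges' small-sample correction factor J, corrected effect g,
# and the usual large-sample approximation of its variance
J = 1 - 3 / (4 * (n1 + n2) - 9)
g = J * d
var_g = J**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))

# 95% confidence interval for g
se = math.sqrt(var_g)
ci = (g - 1.96 * se, g + 1.96 * se)
```

Because the correction factor J is below 1, g is always slightly smaller in magnitude than the uncorrected d, which matters for the small samples typical of classroom studies.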
Additional sensitivity analyses will be performed in which studies with imputed data are excluded, in order to assess the effects on the pooled estimate of effect; imputed data themselves will not be pooled. We will employ subgroup analyses to explore the potential impact on the effect size of the following moderators: the mean age of the participants, gender, study setting, collaborative testing group size, and methods of testing (multiple choice, short answers, etc.). We will also use subgroup analyses to examine the impact of each type of systematic bias identified in the studies on the effect size. Moderators will be examined individually. It has been suggested that maturation effects may contribute to collaborative testing performance as learners' strategies improve (2009). It follows that older learners may have had more opportunity to develop learning strategies than younger learners, so in higher education the impact of learner age on the effect may vary. A 2007 study reported that individual students were more likely to perceive testing with peers as producing lower anxiety and attributed higher test scores to it; exploration of the impact of anxiety on the effect would therefore also be informative. Some types of questions may be more amenable to student collaboration than others and consequently impact both learning and student performance. Individual studies typically examine the impact of collaborative testing on only a single type of examination, whereas a synthesis would allow comparison of the effect across different examination formats within the same framework. It also stands to reason that course content may be an important moderator and as such will be examined. Search terms include: adult, anxiety, perception, performance, grade, test scores, average, and inflation. Work on this review will follow the guidelines of the Campbell Collaboration.
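The random-effects pooling described above can be sketched with the widely used DerSimonian-Laird estimator for the between-study variance; the per-study effect sizes and variances below are purely illustrative, not data from the review:

```python
import numpy as np

# Hypothetical per-study standardized mean differences and their
# variances; all numbers are illustrative only.
effects = np.array([0.10, 0.60, 0.25, 0.75])
variances = np.array([0.02, 0.03, 0.05, 0.04])

# Fixed-effect (inverse-variance) weights and pooled estimate
w = 1.0 / variances
fixed = np.sum(w * effects) / np.sum(w)

# Cochran's Q and the DerSimonian-Laird between-study variance tau^2
Q = np.sum(w * (effects - fixed) ** 2)
df = len(effects) - 1
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)

# Random-effects weights, pooled estimate, and 95% confidence interval
w_re = 1.0 / (variances + tau2)
pooled = np.sum(w_re * effects) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
```

Note the behavior described in the protocol: when Q does not exceed its degrees of freedom, tau^2 is truncated at zero and the random-effects estimate reduces to the fixed-effect one.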
- Research Article
1
- 10.31316/j.derivat.v5i2.136
- Mar 14, 2019
- Jurnal Derivat: Jurnal Matematika dan Pendidikan Matematika
This study aims to determine the effectiveness of a learning model as influenced by students' initial ability level, and the combined influence (interaction effect) of the learning model and students' initial ability level, on students' mathematics learning outcomes. The learning model has a central function in learning, namely as a tool and a way to achieve learning goals. The research design is quasi-experimental. The trial subjects were state and private middle-school students in Yogyakarta. The objects of this research are students' initial abilities and mathematics learning outcomes under the Think Talk Write and conventional learning models. Data were collected using test and documentation techniques, and were analyzed with a two-factor ANOVA on both the initial-ability test scores and the learning-outcome test scores, followed by a post-ANOVA Scheffé test. At α = 5%, the results indicate that: (1) the Think Talk Write learning model is more effective than the conventional learning model; (2) the learning outcomes of students with high initial ability are better than those of students with moderate or low initial ability; and (3) there is no interaction between the learning model and students' initial ability level with respect to learning outcomes. Keywords: Effectiveness, Think Talk Write, ANOVA
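The two-factor ANOVA used in the study above (learning model × initial ability level) can be sketched for a balanced design as follows; the cell means, noise level, group sizes, and random seed are purely illustrative, not the study's data:

```python
import numpy as np

# Hypothetical balanced data: scores[i, j, k] is the k-th student's test
# score under learning model i (0 = Think Talk Write, 1 = conventional)
# and initial-ability level j (0 = high, 1 = moderate, 2 = low).
rng = np.random.default_rng(0)
cell_means = np.array([[85.0, 75.0, 65.0],
                       [80.0, 70.0, 60.0]])
n = 10  # students per cell
scores = cell_means[:, :, None] + rng.normal(0, 5, size=(2, 3, n))

a, b = scores.shape[0], scores.shape[1]
grand = scores.mean()

# Sums of squares for the two main effects and their interaction
ss_a = b * n * np.sum((scores.mean(axis=(1, 2)) - grand) ** 2)
ss_b = a * n * np.sum((scores.mean(axis=(0, 2)) - grand) ** 2)
cellm = scores.mean(axis=2)
ss_ab = n * np.sum((cellm
                    - scores.mean(axis=(1, 2))[:, None]
                    - scores.mean(axis=(0, 2))[None, :] + grand) ** 2)
ss_err = np.sum((scores - cellm[:, :, None]) ** 2)

# Mean squares and F ratios for model, ability, and interaction
ms_err = ss_err / (a * b * (n - 1))
f_model = (ss_a / (a - 1)) / ms_err
f_ability = (ss_b / (b - 1)) / ms_err
f_interaction = (ss_ab / ((a - 1) * (b - 1))) / ms_err
```

Comparing each F ratio against the appropriate F distribution (here with error degrees of freedom 2·3·9 = 54) yields the three tests reported in the abstract; a significant model effect, a significant ability effect, and a non-significant interaction would reproduce the study's pattern of findings.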
- Research Article
14
- 10.1080/00220388.2019.1605055
- May 7, 2019
- The Journal of Development Studies
This paper uses panel data from the Young Lives Survey to examine the effect of the world's largest public works program and India's flagship social protection program, the National Rural Employment Guarantee Scheme (NREGS), on children's learning outcomes such as grade progression, reading comprehension test scores, writing test scores, math test scores, and Peabody Picture Vocabulary Test (PPVT) scores. We find that the program has strong positive effects on these outcomes in both the short and medium run. Finally, the impact estimates reported here are robust to a number of econometric concerns, such as program placement, selective attrition, and type I error.
- Research Article
68
- 10.1007/s00345-010-0603-x
- Oct 20, 2010
- World Journal of Urology
Radiation for tumors arising in the pelvis has been utilized for over 100 years. Adverse effects (AEs) of radiotherapy (RT) continue to accumulate with time and are reported to appear decades after treatment. The benefit of RT for pelvic tumors is well described, as are its acute AEs; late AEs are less well described. The burden of treatment from late AEs is large given the high utilization of RT. For prostate cancer, 37% of patients will receive radiation during the first 6 months after diagnosis. Low- and high-grade AEs are reported to occur in 20-43% and 5-13% of patients, respectively, with a median follow-up of ~60 months. For bladder cancer, grade 2 and grade 3 late AEs occur in 18-27% and 6-17%, with a median follow-up of 29-76 months. For cervical cancer, the risk of low-grade AEs following radiation can be as high as 28%; high-grade AEs occur in about 8% at 3 years and 14.4% at 20 years, or ~0.34% per year. Radiation AEs appear to be less common, or at least less well studied, after radiation for rectal and endometrial cancers. Properly delineating the rate of long-term AEs after pelvic RT is instrumental to counseling patients about their options for cancer treatment. Further studies are needed that are powered to specifically evaluate long-term AEs.