Do Predictive Inferences Made from Admissions Test Scores Vary by Amount and Type of Test Preparation?
ABSTRACT Standardized test scores play a significant role in college admissions, so it is crucial to examine factors influencing their fairness. An underexamined issue is the association between participation in test preparation programs and test scores' predictive validity for academic performance. Very few studies have explored this issue, and none was conducted in a field setting using actual academic outcomes. To fill the gap, we analyzed how test preparation is related to the association between ACT scores and student performance in a sample of around 1,000 students in an introductory psychology class. Although our results showed that ACT scores were more predictive of two course elements for students who received more coaching, the effects were trivial in magnitude and practical significance. Overall, we conclude that involvement in admission test preparation programs is not strongly associated with the fairness of using these scores in admission decisions.
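The analysis this abstract describes is, in effect, a moderation test: does the slope of ACT scores on course performance vary with the amount of test preparation? Below is a minimal sketch of such a moderated regression, using simulated data and hypothetical variable names (the study's actual data and model specification are not given here):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000  # roughly the sample size reported in the abstract

# Simulated stand-ins for the real variables (hypothetical names).
act = rng.normal(24, 4, n)           # ACT composite score
prep = rng.exponential(10, n)        # hours of test preparation
course = 0.08 * act + 0.002 * act * prep + rng.normal(0, 1, n)

# Mean-center predictors so main effects are interpretable at the mean.
act_c = act - act.mean()
prep_c = prep - prep.mean()

# The coefficient on the product term estimates how the ACT slope
# changes with preparation; a near-zero estimate would match the
# "trivial in magnitude" conclusion above.
X = sm.add_constant(np.column_stack([act_c, prep_c, act_c * prep_c]))
fit = sm.OLS(course, X).fit()
print(fit.summary(xname=["const", "act", "prep", "act_x_prep"]))
```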
- Research Article
4
- 10.7709/jnegroeducation.83.1.0001
- Jan 1, 2014
- The Journal of Negro Education
Major changes are coming to the SAT. Both the SAT and the ACT are used to influence admissions and placement at colleges and universities in the U.S. In 2016, the SAT will return to a 1600-point scale from 2400, eliminate antiquated vocabulary words, and assess students' understanding of context rather than rote memorization. The essay section will also be optional. In addition, the test will no longer penalize students for wrong answers, and the reading-comprehension section will incorporate subjects that students typically learn in high school and middle school (College Board, 2014).

Throughout the history of the SAT and ACT, Black students' average scores have been the lowest among all racial groups. Currently, the national average for Black students on the ACT is 17 (ACT, 2012), compared with 22 for White students, and the national average for Black students on the SAT is 860 (Jaschik, 2013), compared with 1061 for White students. Black students' scores on the SAT and ACT have been relatively flat for the last 20 years, although significant gains have been made in Black students' graduation rates and college-degree attainment.

The disparity in those numbers raises questions about the significance of the SAT in predicting long-term college success for African Americans, or any student, for that matter. Reasons for lower standardized test scores among Black students have been debated in the academic literature as well as in public discourse. Some question the validity and reliability of the tests, while others assert that the systemic impact of racial oppression and poverty diminishes Black students' performance on the tests. Other more extreme explanations purport that Black students' performance is diminished because of natural cognitive deficits or corrupted cultural values. However, as Black families and the Black community have sought to reconcile low test scores, test manufacturers have been grappling with research suggesting that the ACT and SAT do predict college success.

The National Association for College Admission Counseling (NACAC) recently released research (Hiss & Franks, 2014) that revealed no significant differences in cumulative GPA or graduation rates between students who submit test scores for college admission and those who opt out of using scores for admission. In addition, the study found that high school GPAs correlated highly with college GPAs, regardless of SAT or ACT scores. In other words, students with low high school GPAs and high SAT or ACT scores generally performed poorly in college, and students with strong high school GPAs and low SAT or ACT scores generally performed well in college. The total sample of the study was almost 123,000 students across 33 diverse institutions.

Some of the proposed changes to the SAT are aimed at addressing a known achievement gap that could be a proxy for race or socioeconomic status: the gap between students who participate in test prep and those who don't. Currently, test-preparation materials begin at $25, and test-preparation courses and tutoring cost up to $6,600. More-affluent families spend more money to train their children to take the test, which often involves skills that have little to do with crystallizing the knowledge they should have gained in high school.
The significant gains in SAT and ACT scores achieved by students who participate in the more expensive test-preparation programs, as reported by the test-prep companies, call into question the integrity of the tests. Whether changes to the SAT will make scores more predictive of college performance and reduce affluent families' ability to game the test will not be known until years after the changes are implemented. However, the proposed changes will do little to mitigate the widespread use and misuse of the SAT or ACT as an admissions criterion. NACAC's Statement of Principles of Good Practice (NACAC, 2013) explicitly states that universities should not use minimum test scores as the sole criterion for admission, advising, or the awarding of financial aid. …
- Conference Article
1
- 10.18260/1-2--20296
- Sep 4, 2020
Developing the Academic Performance-Commitment Matrix: How Measures of Objective Academic Performance can do more than Predict College Success
- Research Article
43
- 10.1097/00001888-200010001-00013
- Oct 1, 2000
- Academic Medicine
RAJ A. THADANI, DAVID B. SWANSON, and ROBERT M. GALBRAITH. Prior to taking the United States Medical Licensing Examination (USMLE) Step 1, medical students commonly spend large amounts of time studying on their own or in groups. They also may participate in test-preparation activities offered by their schools. In addition, there is anecdotal evidence indicating that medical students increasingly purchase "board prep" publications and sign up for commercial coaching courses that may last several weeks and cost thousands of dollars. The effectiveness of alternate approaches to preparing for Step 1 is unknown, though research on other high-stakes exams suggests that exam performance may be improved by coaching courses.
- Research Article
17
- 10.1016/j.intell.2016.01.004
- Jan 26, 2016
- Intelligence
Do individual differences in test preparation compromise the measurement fairness of admission tests?
- Conference Article
3
- 10.2991/icemss-14.2014.14
- Jan 1, 2014
This study concentrates on international high school development and trends in Chongqing, China. It was discovered that the number of students studying abroad has been growing significantly and that the majority of students struggled with the language barrier during their first semester. Although students who took language proficiency assessments were accepted to colleges/universities, many still struggled in a foreign academic environment. This study contains data from urban and suburban areas of China. Specifically, students' Internet-Based Test of English as a Foreign Language® (TOEFL) and International English Language Testing System™ (IELTS) scores from urban schools were collected from the test preparation program, Apluz International Language Center (Apluz). The academic performance of students who were accepted into universities abroad was monitored during their first semester. TOEFL and GAC entry data were collected from two Global Assessment Certificate™ (GAC) centers: Chongqing Bachuan International High School (CBIHS), located in Tongliang, Chongqing, and a center in Jiangsu Province (Center A). ACT Education Solutions, Limited also offered data from a case study of 30 students' GAC scores and university final GPAs. The scores were used to compare English competencies from a test preparation program with the holistic approach offered in the GAC Program, which promotes language acquisition as the World-Class Instructional Design and Assessment™ (WIDA) standards suggest. Although many students choose to study for examinations, it was determined that students tend to perform better when they are in an international setting and the pedagogical focus is based on skills and competencies. Index Terms: International High School, Test Preparation, University Preparation, TOEFL, IELTS. The stated purposes of the study were (1) comparing an international test preparation system with an international university preparation system and (2) comparing the effectiveness of international high schools/departments versus traditional Chinese high schools with regard to success rates in international higher education.
- Research Article
87
- 10.1080/08957347.2013.765433
- Apr 1, 2013
- Applied Measurement in Education
Correlational evidence suggests that high school GPA is better than admission test scores in predicting first-year college GPA, although test scores have incremental predictive validity. The usefulness of a selection variable in making admission decisions depends in part on its predictive validity, but also on institutions’ selectivity and definition of success. Analyses of data from 192 institutions suggest that high school GPA is more useful than admission test scores in situations involving low selectivity in admissions and minimal to average academic performance in college. In contrast, test scores are more useful than high school GPA in situations involving high selectivity and high academic performance. In nearly all contexts, test scores have incremental usefulness beyond high school GPA. Moreover, high school GPA by test score interactions are important in predicting academic success.
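Incremental predictive validity of the sort described here is commonly quantified as the R² gained when test scores (and a GPA-by-score interaction) are added to a model that already contains high school GPA. A hedged sketch with simulated data, not the article's actual analysis:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000

# Simulated stand-ins: high school GPA, test score, first-year college GPA.
hsgpa = rng.normal(3.0, 0.5, n)
test = 0.5 * hsgpa + rng.normal(0, 1, n)            # correlated predictors
fygpa = 0.6 * hsgpa + 0.15 * test + rng.normal(0, 0.5, n)

# Baseline model: HSGPA alone.
m1 = sm.OLS(fygpa, sm.add_constant(hsgpa)).fit()

# Full model: add the test score and the HSGPA-by-score interaction.
X2 = sm.add_constant(np.column_stack([hsgpa, test, hsgpa * test]))
m2 = sm.OLS(fygpa, X2).fit()

# The R-squared gain is the test score's incremental validity.
print(f"R2, HSGPA only:            {m1.rsquared:.3f}")
print(f"R2, + test + interaction:  {m2.rsquared:.3f}")
print(f"incremental R2:            {m2.rsquared - m1.rsquared:.3f}")
```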
- Research Article
2
- 10.37134/jrpptte.vol10.2.1.2020
- Nov 9, 2020
- Journal Of Research, Policy & Practice of Teachers & Teacher Education
This predictive study explored the influence of three admission variables on the college grade point average (CGPA) and licensure examination ratings of the 2015 teacher education graduates in a state-run university in Northern Philippines. The admission variables were high school grade point average (HSGPA), admission test (IQ) scores, and standardized test (General Scholastic Aptitude - GSA) scores. The participants were from two degree programs – Bachelor in Elementary Education (BEE) and Bachelor in Secondary Education (BSE). The results showed that the graduates' overall HSGPAs were at the proficient level, while their admission and standardized test scores were average. Meanwhile, their mean licensure examination ratings were satisfactory, with high (BEE – 80.29%) and very high (BSE – 93.33%) passing rates. In both degree programs, all entry variables were significantly correlated and linearly associated with the CGPAs and licensure examination ratings of the participants. These entry variables were also linearly associated with the specific area GPAs and licensure ratings, except in the specialization area (for BSE). Finally, in both degrees, CGPA and licensure examination ratings were best predicted by HSGPA and standardized test scores, respectively. The implications of these findings on admission policies are herein discussed.
- Research Article
13
- 10.7764/pel.49.2.2012.3
- Oct 15, 2012
- Pensamiento Educativo: Revista de Investigación Educacional Latinoamericana
Although test scores are widely used in college admissions in the United States, their use is the subject of ongoing debate, partly because of the association between test performance and socioeconomic status (SES). Although test critics have argued that this association is due to the particular content of admissions tests or to the differential availability of coaching, large socioeconomic effects are also found in assessments that are tied to school achievement and for which coaching is not available, such as the National Assessment of Educational Progress and in other academic measures. Some commentators have argued, however, that high school grade-point average has a smaller correlation with SES than admissions test scores and is therefore a superior admissions criterion. In this paper I examine the association between SES and test scores, as well as the association between SES and high school grades, and discuss the relevance of this complex web of associations to college admissions research. While the perennial finding that socioeconomic inequities manifest themselves as educational inequities is disheartening, the analysis of performance differences can point the way toward possible remedies.
- Research Article
5
- 10.1111/emip.12199
- Apr 25, 2018
- Educational Measurement: Issues and Practice
The percentage of students retaking college admissions tests is rising. Researchers and college admissions offices currently use a variety of methods for summarizing these multiple scores. Testing organizations such as ACT and the College Board, interested in validity evidence like correlations with first‐year grade point average (FYGPA), often use the most recent test score available. In contrast, institutions report using a variety of composite scoring methods for applicants with multiple test records, including averaging and taking the maximum subtest score across test occasions ("superscoring"). We compare four scoring methods on two criteria. First, we compare correlations between scores and FYGPA by scoring method and find them to be similar. Second, we compare the extent to which test scores differentially predict FYGPA by scoring method and number of retakes. We find that retakes account for additional variance beyond standardized achievement and positively predict FYGPA across all scoring methods. Superscoring minimizes this differential prediction; although it may seem that superscoring should inflate scores across retakes, this inflation is "true" in that it accounts for the positive effects of retaking for predicting FYGPA. Future research should identify factors related to retesting and consider how they should be used in college admissions.
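For illustration, the composite-scoring methods compared in this study reduce a student's multiple test records to one number per student. Below is a rough pandas sketch of three of them (most recent, average, and subtest-maximum "superscoring"); the column names and scores are made up:

```python
import pandas as pd

# Hypothetical retest records: one row per student per test occasion,
# with two subtests (columns are illustrative, not ACT's actual labels).
records = pd.DataFrame({
    "student":  [1, 1, 2, 2, 2],
    "occasion": [1, 2, 1, 2, 3],
    "math":     [24, 26, 19, 22, 21],
    "english":  [27, 25, 20, 20, 23],
})

by_student = records.sort_values("occasion").groupby("student")

# Most recent: composite from the last test occasion only.
recent = by_student[["math", "english"]].last().sum(axis=1)
# Average: mean composite across all occasions.
average = by_student[["math", "english"]].mean().sum(axis=1)
# Superscore: best score on each subtest across occasions, then combine.
superscore = by_student[["math", "english"]].max().sum(axis=1)

print(pd.DataFrame({"recent": recent, "average": average,
                    "superscore": superscore}))
```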
- Research Article
- 10.1353/rhe.2019.0117
- Jan 1, 2019
- The Review of Higher Education
Reviewed by Ashley B. Clayton and Jenifer F. Godfrey: Jack Buckley, Lynn Letukas, and Ben Wildavsky (Editors). Measuring Success: Testing, Grades, and the Future of College Admissions. Baltimore, MD: Johns Hopkins University Press, 2018. 344 pp. Hardcover: $49.95. ISBN 9781421424965.

The value and proper role of standardized tests in American higher education admissions processes have been vigorously debated for decades. Proponents and critics have disagreed about the use and fairness of the two most popular college admissions tests: the ACT and SAT. To further the discussion of standardized admissions testing, Jack Buckley, Lynn Letukas, and Ben Wildavsky edited Measuring Success: Testing, Grades, and the Future of College Admissions as a result of their shared "frustration with the fragmented and incomplete state of the literature around the contemporary debate on college admissions testing" (p. 2). In the introductory chapter, "The Emergence of Standardized Testing and the Rise of Test-Optional Admissions," the editors give a clear overview of the history of standardized admissions testing, the arguments for and against testing, and the growing test-optional movement. The book consists of three parts made up of eleven chapters, authored primarily by researchers, though the practitioner point of view is represented as well. Part 1 (Chapters 1–5), "Making the Case for Standardized Testing," challenges many of the common arguments made in opposition to relying on standardized tests in college admissions. Part 2 (Chapters 6–8), "The Rise of Test-Optional Admissions," narrows the focus of the book to a small subpopulation of colleges utilizing test-optional admissions to illustrate the value of standardized tests in the college admissions process. Lastly, Part 3 (Chapters 9–11), "Contemporary Challenges for College Admissions," focuses primarily on whether test-optional policies yield larger applicant pools and more diverse classes or merely increase college rankings. With the contributions of leading experts in the field, Buckley et al. "sought to foster serious and robust empirical debate about the proper role of standardized admissions testing through rigorous methodological approaches" (p. 2). In Chapter 1, "Eight Myths about Standardized Admissions Testing," Paul R. Sackett and Nathan R. Kuncel present eight commonly cited criticisms of standardized testing that they label as "myths," along with arguments aimed at debunking each. It is evident from the intentional use of the word "myth" in the first chapter that, despite widely held public criticism of standardized testing, the authors believe these claims are not substantiated. The eight myths cover common critiques of standardized testing, ranging from claims of class, gender, and racial bias to the claim that standardized tests are highly correlated with socioeconomic status (parental education and family income). The authors dispute each of the eight myths and ultimately conclude that standardized tests are a valuable component of the admissions process. Specifically, they argue that the use of standardized tests "in conjunction with other valid predictors of academic achievement (e.g., high school grades) results in a clearer picture of academic preparation and improves the quality of admissions decisions" (p. 35). In Chapter 2, "The Core Case for Testing: The State of Our Research Knowledge," Emily J. Shaw unpacks the simple truth about admissions tests: they provide a common benchmark for comparing applicants from a vast sea of educational institutions of varying and largely unknown rigor. Shaw recounts numerous studies that unfailingly conclude that admissions tests are strong and reliable predictors of college performance, and that they become even stronger when coupled with high school grade point average (HSGPA). In addition, the author explains statistical missteps, such as relying on uncorrected correlations or failing to acknowledge multicollinearity, that often lead to contrary conclusions. The authors of Chapter 3, "Grade Inflation and the Role of Standardized Testing," move toward addressing a concern highlighted in Chapter 2: the lack of a standardized assessment of high school performance. Michael Hurwitz and Jason Lee present decades of data revealing widespread high school grade inflation and showing that HSGPA is higher than ever before. The authors explain that grade inflation is problematic because it reduces the variability in HSGPA, making it harder to distinguish between students' academic preparedness on...
- Research Article
5
- 10.1111/emip.12173
- Oct 30, 2017
- Educational Measurement: Issues and Practice
Most studies predicting college performance from high‐school grade point average (HSGPA) and college admissions test scores use single‐level regression models that conflate relationships within and between high schools. Because grading standards vary among high schools, these relationships are likely to differ within and between schools. We used two‐level regression models to predict freshman grade point average from HSGPA and scores on both college admissions and state tests. When HSGPA and scores are considered together, HSGPA predicts more strongly within high schools than between, as expected in the light of variations in grading standards. In contrast, test scores, particularly mathematics scores, predict more strongly between schools than within. Within‐school variation in mathematics scores has no net predictive value, but between‐school variation is substantially predictive. Whereas other studies have shown that adding test scores to HSGPA yields only a minor improvement in aggregate prediction, our findings suggest that a potentially more important effect of admissions tests is statistical moderation, that is, partially offsetting differences in grading standards across high schools.
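The within-/between-school decomposition used in two-level models like these can be made explicit by splitting each predictor into a school mean and a deviation from that mean. A schematic sketch with simulated data and a random-intercept model via statsmodels (variable names are illustrative, not the study's):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
school = np.repeat(np.arange(50), 40)       # 50 schools x 40 students
school_mean = rng.normal(0, 1, 50)[school]  # school-level test average
test = school_mean + rng.normal(0, 1, school.size)
fygpa = (0.5 * school_mean                  # strong between-school effect
         + 0.1 * (test - school_mean)       # weak within-school effect
         + rng.normal(0, 0.5, school.size))

df = pd.DataFrame({"school": school, "test": test, "fygpa": fygpa})
# Decompose the predictor into between- and within-school components.
df["test_between"] = df.groupby("school")["test"].transform("mean")
df["test_within"] = df["test"] - df["test_between"]

# Random-intercept model with separate within and between slopes.
fit = smf.mixedlm("fygpa ~ test_within + test_between",
                  df, groups=df["school"]).fit()
print(fit.summary())
```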
- Research Article
8
- 10.1037/1076-8971.6.1.56
- Mar 1, 2000
- Psychology, Public Policy, and Law
Selection tests, such as those used for college admissions, present multiple dilemmas for psychometricians, who grapple with intractable problems in measurement, and the lay public, whose lives are affected by test scores they often do not understand or trust. Criterion studies utilize convenient criteria that have little meaningful significance, such as grades, and the validity coefficients are necessarily low because of range restrictions and the low reliability of grades. Both supporters and detractors of college admissions tests are correct in their assessment: College admissions tests can account for only a small percentage of the variance in success in school and in life, but even a small reduction of variance substantially reduces uncertainty and improves admissions decisions. Admissions testing can and should be improved. Numerous suggestions for improving admissions testing are presented, including ways to reduce group differences without sacrificing construct validity. The effect of the suggested changes on predictive validity is an empirical question and a question about fairness and values.
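The "necessarily low" validity coefficients mentioned here are often adjusted with two standard psychometric corrections: Thorndike's Case II formula for direct range restriction and Spearman's correction for criterion unreliability. The article does not supply these formulas; the sketch below is textbook material with illustrative numbers:

```python
import math

def correct_range_restriction(r: float, sd_ratio: float) -> float:
    """Thorndike Case II: correct r for direct range restriction, where
    sd_ratio = SD(applicant pool) / SD(admitted sample) on the predictor."""
    u = sd_ratio
    return (r * u) / math.sqrt(1 + r**2 * (u**2 - 1))

def correct_attenuation(r: float, criterion_reliability: float) -> float:
    """Spearman: correct r for unreliability in the criterion (e.g. grades)."""
    return r / math.sqrt(criterion_reliability)

# Illustrative numbers only: an observed validity of .35 in an admitted
# sample, an applicant pool 1.5x as variable, and grades with
# reliability .70 imply a notably higher operational validity.
r_restricted = correct_range_restriction(0.35, sd_ratio=1.5)
print(f"corrected for range restriction: {r_restricted:.2f}")
print(f"also corrected for criterion unreliability: "
      f"{correct_attenuation(r_restricted, 0.70):.2f}")
```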
- Research Article
1
- 10.1111/eje.12993
- Jan 29, 2024
- European Journal of Dental Education
The purpose of this study was to explore students' perceptions and performance in a prosthodontics theory exam. A cross-sectional descriptive study was conducted on 560 (80.82%) students of different levels (third, fourth and fifth years) to explore their opinions and performance with regard to a number of issues on a prosthodontics theory exam (exam evaluation, exam preparation, exam material, exam timing). Demographic data were also collected. Descriptive statistics were generated, and the Chi-square test, independent sample t-test, ANOVA test and Pearson's correlation coefficient were used to examine the associations between different variables. The significance level was set at p < .05. Students' responses regarding exam evaluation were influenced by their gender, study level, high-school Grade Point Average (GPA) and undergraduate cumulative GPA. Perceived exam difficulty was significantly affected by gender (p = .03) and study level (p < .001), and negatively correlated with both high-school GPA (p < .001) and university GPA (p = .03). The vast majority (88.2%) depended on lecture hand-outs and lecture notes for study. Exam material and preparation were not significantly affected by any of the demographic variables, with most respondents (76.8%) thinking that lectures blended with prosthodontics laboratories/clinics would improve their understanding of the exam material. The suggested best time to conduct the exam was early afternoon (31.6%). Student performance was significantly affected by study level (p < .001), cumulative GPA (p < .001), and the amount of time students spent on exam preparation (p < .001), with a significant positive correlation between high-school GPA and the exam mark (r = .29, p < .001). Students who reported using textbooks to prepare for the exam got significantly higher marks (66.1 ± 8.7) compared to students who did not (62.8 ± 9.7) (p = .03). Course level, GPA and gender were identified as the most influential factors in different aspects of exam evaluation and students' performance. Regular study and use of textbooks were demonstrated to improve academic performance. Additional orientation and guidance relating to the exam (especially for third year students) would be welcomed, as would alternate teaching methods such as small group discussions or study groups.
- Research Article
4
- 10.1016/j.stueduc.2021.101015
- Apr 10, 2021
- Studies in Educational Evaluation
Processes and effects of test preparation for writing tasks in a high-stakes admission test in China: Implications for test takers