Abstract

A review of the literature concerned with validity data and policies for various methods of treating multiple scores is reported, as are analyses of data from the College Board Validity Study Service. The analyses evaluated the use of SAT-V alone, SAT-M alone, and both in combination. The methods of treating multiple scores that were studied were: the score from the administration with the highest V + M, the highest score, the most recent score, and the average. The best results, in terms of the highest average validity, were achieved using the average V + M. Also evaluated were weighted combinations in which all scores were used but the highest or most recent score received a unique, empirically determined weight; these empirically determined weights were not superior in cross-validation.

The combinations of variables, and the four methods of treating multiple scores that did not involve empirically determined weights, were evaluated in regression equations developed using one-time testers. All treatments of multiple scores resulted in underprediction of actual grades, with the highest score producing the least underprediction. However, the discrepancy between predicted and actual grades varied greatly across institutions.

Data from the Student Descriptive Questionnaire (SDQ) were cross-tabulated with the number of retests. First-test score level, income, and race were all related to the frequency of retesting.
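The four score-treatment methods compared in the study can be sketched in code. This is a minimal illustration, not the study's actual procedure: the function name, the tuple representation of an administration, and the sample scores are all hypothetical.

```python
# Hypothetical sketch of the four methods of treating multiple scores
# described in the abstract. Each administration is a (verbal, math)
# tuple, listed in test-date order; all names and values are illustrative.

def treat_scores(administrations):
    """Return each score treatment applied to a retester's record."""
    n = len(administrations)
    # Scores from the single administration with the highest V + M total.
    highest_vm = max(administrations, key=lambda a: a[0] + a[1])
    # Highest score on each section, possibly from different administrations.
    highest = (max(a[0] for a in administrations),
               max(a[1] for a in administrations))
    # Scores from the most recent administration.
    most_recent = administrations[-1]
    # Section averages across all administrations.
    average = (sum(a[0] for a in administrations) / n,
               sum(a[1] for a in administrations) / n)
    return {
        "highest V+M": highest_vm,
        "highest score": highest,
        "most recent": most_recent,
        "average": average,
    }

# A hypothetical two-time tester:
print(treat_scores([(480, 510), (500, 540)]))
```

Note that "highest score" can mix sections from different administrations, whereas "highest V+M" always takes both sections from the same sitting, which is why the two treatments can disagree.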
