Abstract

Developing and administering parallel test forms to students in higher education offsets the cost of assessment scores with low validity. This research demonstrated the validity and equivalence of parallel tests in a Basic Statistics course. Among other things, the study (1) established and compared the item specifications of the items on the different test forms developed, and (2) determined the extent of parallelism of the alternate test forms. Three carefully designed alternate forms of an achievement test (built from an item specification and a table of test specifications) were administered to 504 second-year students. In addition, an academic resilience scale was administered to the same students to help ascertain the criterion validity of the alternate forms. The study revealed some similarities in the statistical specifications of the alternate test forms. Further analysis showed that the three alternate forms exhibited a congeneric form of parallelism. The authors concluded that developing classically parallel test forms is not feasible, but that congeneric parallel test forms offset the cost of less valid scores that do not represent students’ attainment levels. Faculty members are encouraged to make use of parallel test forms in assessing students in higher education.
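
As a rough illustration of the kind of statistical item specifications compared in step (1), the sketch below computes classical item difficulty (proportion correct) and corrected point-biserial discrimination for the items on one form. The column layout, scoring scheme, and file name are illustrative assumptions, not the authors' actual instrument or code.

```python
# Illustrative sketch (not the authors' code): classical item statistics for
# one test form, assuming a 0/1-scored response matrix in a DataFrame whose
# columns are items and rows are students.
import numpy as np
import pandas as pd

def item_statistics(responses: pd.DataFrame) -> pd.DataFrame:
    """Return item difficulty (p) and corrected point-biserial discrimination."""
    total = responses.sum(axis=1)
    stats = []
    for item in responses.columns:
        p = responses[item].mean()                       # difficulty: proportion correct
        rest = total - responses[item]                   # total score excluding this item
        r_pb = np.corrcoef(responses[item], rest)[0, 1]  # corrected point-biserial
        stats.append({"item": item, "difficulty": p, "discrimination": r_pb})
    return pd.DataFrame(stats)

# Hypothetical usage: summarise these indices for Form A, Form B, and Form C
# and compare the summaries to judge whether the forms' statistical
# specifications are similar.
# form_a = pd.read_csv("form_a_scored.csv")
# print(item_statistics(form_a).describe())
```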

Highlights

  • Higher education institutions, in Ghana and beyond, have placed significant importance on designing policies and guiding principles to govern their assessment processes

  • This paper presents a systematic account of how the test forms were designed and administered, as well as the data analysis procedure

  • That is to say, the means, the variances, the covariances among the test forms, and the covariances of the test forms with the psychological test were not all equal. This suggests that the alternate forms we developed are a congeneric form of parallel test, a less restrictive form of parallelism that requires only the true scores and test content to be equivalent (see the sketch after this list)
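
A minimal sketch of the descriptive comparison described in the last highlight, assuming a DataFrame with three form-level total-score columns (form_a, form_b, form_c) and one academic-resilience score column; these names are assumptions, not the study's data. Strictly parallel forms would require (approximately) equal means, variances, and covariances, whereas the congeneric pattern reported here allows them to differ.

```python
# Illustrative sketch (assumed column names, not the study's data set):
# compare the moments that distinguish strictly parallel forms from
# congeneric forms.
import itertools
import pandas as pd

def compare_forms(scores: pd.DataFrame,
                  forms=("form_a", "form_b", "form_c"),
                  criterion="resilience") -> None:
    print("Means:\n", scores[list(forms)].mean(), "\n")
    print("Variances:\n", scores[list(forms)].var(ddof=1), "\n")
    for x, y in itertools.combinations(forms, 2):
        print(f"cov({x}, {y}) = {scores[x].cov(scores[y]):.2f}")
    for f in forms:
        print(f"cov({f}, {criterion}) = {scores[f].cov(scores[criterion]):.2f}")

# Classical (strict) parallelism would require all three means, all three
# variances, and all pairwise covariances to coincide; unequal values for
# forms that still measure the same construct are consistent with the
# congeneric model reported in the paper.
```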


Introduction

Higher education institutions, in Ghana and beyond, have placed significant importance on designing policies and guiding principles to govern their assessment processes. Validity evidence is drawn from five non-mutually exclusive dimensions to ensure the accuracy of the inferences made: data management, curriculum content, correlational analyses, statistical analyses of test data, and assessment effects (Downing & Haladyna, 2009; Kane, 2006). There is, therefore, a greater need to use several sources of data to support the soundness of the interpretation and use of assessment results, given the dynamic and complex nature of assessments and their increasing stakes (Kane, 2006).
