Abstract

The development and evaluation of eLearning approaches is a global trend in higher education today. This study aimed to develop a companion evaluation toolkit consisting of formative and summative assessment scales for evaluating academics’ experiences in designing, delivering, and evaluating eLearning. An instrument validation study was conducted to test the psychometric properties of the companion evaluation toolkit. Items were generated and then tested for content and face validity. A confirmatory factor analysis (n = 185) of the summative assessment scale examined its underlying structure, while reliability was assessed using Cronbach’s alpha coefficient. The results show that the examined model is consistent with a three-factor structure (33 items) explaining a total of 62% of the variance. The results also show a high level of reliability for both the formative and summative scales that comprise the companion evaluation toolkit. The findings should be of use to teachers and professionals involved in the development and use of learning management systems, as well as in the design, delivery, and evaluation of the eLearning process.
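To illustrate the reliability analysis referred to above, the sketch below computes Cronbach’s alpha from an item-response matrix using its standard formula. This is a minimal, hypothetical example on simulated data; it does not reproduce the study’s actual items or results.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Simulated responses: 185 respondents, 33 items driven by one shared trait
rng = np.random.default_rng(0)
latent = rng.normal(size=(185, 1))
responses = latent + rng.normal(scale=0.5, size=(185, 33))
print(round(cronbach_alpha(responses), 2))  # high alpha expected for this simulation
```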
