Abstract

Combinations of different item formats are common in large-scale assessments, and dimensionality analyses often indicate that tests are multidimensional with respect to task format. In ICILS 2013, three different item types (information-based response tasks, simulation tasks, and authoring tasks) were used to measure computer and information literacy in order to balance its technological and information-related aspects. The item types differ in the cognitive processes and types of knowledge they measure, as well as in the strands and aspects of the ICILS 2013 framework they address. In this article, we explored which factor models assuming item-type factors or type-of-knowledge factors fit the data. For the factors of the best-fitting models, regression analyses on socioeconomic status (SES), frequency of computer use, self-efficacy, and gender were computed to examine the distinct meanings of the factors and their convergent and discriminant validity. The results show that three-dimensional models with correlated factors for item type or type of knowledge fit best. The regression analyses reveal substantive implications of between-item and within-item models. The effects are discussed and an outlook is given.
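As a hedged illustration of the second analysis step, the sketch below shows how factor scores from a fitted three-dimensional item-type model might be regressed on the background variables named above. The column names, the input file, and the use of statsmodels are assumptions for illustration only; they do not reflect the authors' actual data or pipeline.

```python
# Hypothetical sketch: regress factor scores from a three-dimensional model
# on SES, frequency of computer use, ICT self-efficacy, and gender.
# Column names and the input file are illustrative assumptions, not ICILS data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("icils_factor_scores.csv")  # assumed: one row per student

results = {}
for factor in ["info_based", "simulation", "authoring"]:  # assumed item-type factors
    # Ordinary least squares with gender as a categorical predictor
    model = smf.ols(f"{factor} ~ ses + computer_use + self_efficacy + C(gender)", data=df)
    results[factor] = model.fit()
    print(results[factor].summary())
```

Comparing the coefficients across the three factor regressions would be one way to probe the convergent and discriminant validity discussed in the abstract.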
