Abstract

Background

Educational experts commonly agree that tailor-made guidance is the most efficient way to foster the learning and developmental process of learners. Diagnostic assessments using cognitive diagnostic models (CDMs) have the potential to provide individual profiles of learners' strengths and weaknesses on a fine-grained level that can enable educators to assess the current position of learners. However, to obtain this information, a strong connection has to be made between cognition (the intended competence), observation (learners' observed responses while solving the tasks), and interpretation (the inferences about learners' underlying competencies made on the basis of the observed responses). To secure this stringent evidence-based reasoning, a principled framework for designing a technology-based diagnostic assessment is required, such as evidence-centred game design (ECgD).

Aim

With regard to a diagnostic assessment, three aspects are of particular importance according to the ECgD approach: (I) the selection of a measurable set of competence facets (so-called skills) and their grain size, (II) the constructed pool of skill-based tasks, and (III) clearly and validly specified task-to-skill assignments expressed in the so-called Q matrix. The Q matrix represents the a priori assumption for running the statistical CDM procedure that identifies learners' individual competence/skill profiles. These three prerequisites are not simply set by researchers' definitions or by experts' common sense; rather, they require their own separate empirical studies. Hence, the focus of this paper is to evaluate the appropriateness and coherence of these three aspects (I: skills, II: tasks, and III: Q matrix). This study is a spin-off project based on the results of the governmental ASCOT research initiative on visualizing apprentices' work-related competencies for a large-scale assessment, in particular the intrapreneurship competence of industrial clerks. With the development of a CDM, I go beyond IRT scaling and provide the prerequisites for identifying individuals' skill profiles as a point of departure for informative individual feedback and guidance that enhance students' learning processes.

Methods

I used a triangulated approach to generate three empirically based Q matrix models from different sources (experts and target-group respondents), inquiry methods (expert ratings and think-aloud studies), and methods of analysis (frequency counts and a solver vs. non-solver comparison). The four single Q matrix models (the researchers' Q matrix generated within the task construction process and the three empirically based Q matrix models) were additionally matched at different degrees of overlap to balance the strengths and weaknesses of each source and method. By matching the patterns of the four single Q matrix models, the appropriateness of the set of intrapreneurship skills (I) and the pool of intrapreneurship tasks (II) was investigated. To identify and validate a reasonable proxy for the task-to-skill assignments and to select the best fitting Q matrix model (III), the single as well as the matched Q matrix models were empirically contrasted against the responses of N = 919 apprentices collected and scaled within the ASCOT project, using psychometric procedures of cognitive diagnosis within the DINA model (Haertel in J Educ Meas 26:301–323, 1989).

Results

The pattern matching resulted in a set of seven skills and 24 tasks.
The appropriateness of these results was supported by the model fit values of the different Q matrix models, which range from acceptable to good (SRMSR between .053 and .055). The best fitting model is a matched Q matrix whose degree of overlap is moderate, neither the strictest nor the most lenient match.

Conclusions

The study provides a principled design for a technology-based diagnostic assessment. The systematic and extensive validation process offers empirical evidence for (I) the relevance and importance of the specified intrapreneurship skills, (II) tasks that prompt the intended skills, and (III) a sophisticated proxy of real cognitive processes (in terms of the Q matrix), but it also gives hints for revision. This preliminary work within the diagnostic assessment aims at identifying the best fitting Q matrix, enabling the next step of depicting learners' individual strengths and weaknesses on a sound basis.
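
For orientation, the DINA model is a conjunctive CDM: a learner is expected to solve a task only if all skills the Q matrix assigns to that task are mastered. A minimal sketch of its standard item response function follows; the notation is the conventional one and is not quoted verbatim from the paper.

```latex
% Standard DINA model (Haertel, 1989), conventional notation.
% q_{jk} = 1 iff task j requires skill k (the Q matrix entry);
% alpha_{ik} = 1 iff learner i has mastered skill k.
\[
  \eta_{ij} = \prod_{k=1}^{K} \alpha_{ik}^{\,q_{jk}}
\]
\[
  P(X_{ij} = 1 \mid \boldsymbol{\alpha}_i)
    = (1 - s_j)^{\eta_{ij}}\; g_j^{\,1 - \eta_{ij}}
\]
% eta_{ij} = 1 iff learner i masters every skill that task j requires;
% s_j: slip parameter (probability that a "solver" fails task j);
% g_j: guessing parameter (probability that a "non-solver" succeeds).
```

Because the Q matrix enters the model directly through η_ij, a misspecified task-to-skill assignment propagates into the estimated skill profiles, which is why contrasting alternative Q matrix models against empirical responses matters.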

Highlights

  • Educational experts commonly agree that tailor-made guidance is the most efficient way to foster the learning and developmental process of learners

  • The reduction of the initial set of fourteen skills to seven skills was based on a balanced consideration of (i) how often a skill was represented by the tasks, (ii) how often a skill was part of a skill pair within tasks, and (iii) how important the skill was with regard to content validity (see the counting sketch below)

  • The results prompted a revision of skills using five different actions: (1) maintaining a skill as initially introduced, (2) eliminating a skill that is hard to depict, (3) constructing additional tasks to strengthen poorly represented skills, (4) integrating a skill into another if the particular skill is usually needed only together with the other skill, but not the other way around, or (5) merging skills that often require each other into one new skill. These revisions are in line with the suggestion of cognitive diagnostic model (CDM) experts to specify no more than 7–10 skills to yield the best results
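
The counting and matching steps behind these revisions can be pictured with a short sketch. The data, the majority-vote reading of "degrees of overlap", and all thresholds below are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

# Hypothetical stand-ins for the four binary Q matrix models
# (tasks x skills): the researchers' construction model plus the
# three empirically based models. 1 = "task j requires skill k".
rng = np.random.default_rng(seed=0)
q_models = [rng.integers(0, 2, size=(24, 7)) for _ in range(4)]
q = q_models[0]

# (i) How often is each skill represented across the task pool?
skill_counts = q.sum(axis=0)        # one count per skill (column sums)

# (ii) How often do two skills appear together in the same task?
pair_counts = q.T @ q               # entry (k, l) = co-occurrence count
np.fill_diagonal(pair_counts, 0)    # ignore a skill pairing with itself

# Matching by degree of overlap: keep a task-to-skill assignment only
# if at least `threshold` of the four source models agree on it.
stacked = np.stack(q_models)        # shape (4, tasks, skills)
for threshold in (2, 3, 4):         # lenient -> strict match
    matched_q = (stacked.sum(axis=0) >= threshold).astype(int)
    print(f"threshold {threshold}: {matched_q.sum()} assignments retained")
```

Stricter thresholds yield sparser matched Q matrices; that the best fitting matched model was neither the strictest nor the most lenient suggests an intermediate degree of overlap balanced the sources best.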

Introduction

Educational experts commonly agree that tailor-made guidance is the most efficient way to foster the learning and developmental process of learners. Diagnostic assessments using cognitive diagnostic models (CDMs) have the potential to provide individual profiles of learners' strengths and weaknesses on a fine-grained level that can enable educators to assess the current position of learners. To obtain this information, a strong connection has to be made between cognition (the intended competence), observation (learners' observed responses while solving the tasks), and interpretation (the inferences about learners' underlying competencies made on the basis of the observed responses). Other educators apply item response theory (IRT) to obtain information about learners' abilities in relation to item difficulty. This information enables the educator to continue teaching and training on the particular tasks identified as difficult. The educator can then work with clearly identified groups at particular achievement levels, using collections of tasks of corresponding difficulty, in order to strengthen the ability of the students at that achievement level and to guide them to the next performance level.
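
To make the contrast concrete, the IRT approach sketched above locates each learner and each task on a single continuum. A minimal formulation is the standard one-parameter (Rasch) model, given here for reference rather than quoted from the paper:

```latex
% Rasch (1PL) model: theta_i is learner i's ability,
% b_j is the difficulty of item j.
\[
  P(X_{ij} = 1 \mid \theta_i, b_j)
    = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)}
\]
```

Feedback derived from such a model is thus limited to a learner's position on one ability scale, whereas a CDM such as DINA replaces the single θ_i with a multidimensional profile of mastered and non-mastered skills.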
