Abstract
The purpose of our paper entitled Hierarchical Diagnostic Classification Models: A Family of Models for Estimating and Testing Attribute Hierarchies (Templin & Bradshaw, 2014) was twofold: to create a psychometric model and framework that would enable attribute hierarchies to be parameterized as dependent binary latent traits, and to formulate an empirically driven hypothesis test for falsifying proposed attribute hierarchies. The methodological contributions of the paper were motivated by a curious result in the analysis of a real data set using the log-linear cognitive diagnosis model, or LCDM (Henson, Templin, & Willse, 2009). In an analysis of the Examination for the Certificate of Proficiency in English (ECPE; Templin & Hoffman, 2013), results indicated that few, if any, examinees were classified into four of the eight possible attribute profiles hypothesized in the LCDM for a test of three binary latent attributes. Further, the pattern of the four profiles lacking examinees suggested that some attributes must be mastered before others, a structure commonly called an attribute hierarchy (e.g., Leighton, Gierl, & Hunka, 2004). Although the data analysis alerted us to the notion that such a structure might be present, we lacked the methodological tools to falsify the presence of such an attribute hierarchy. We therefore developed the Hierarchical Diagnostic Classification Model, or HDCM, to fill the need for such tools. We note that the driving force behind the HDCM is the search for a simpler, or more parsimonious, solution when model-data misfit is either evident from LCDM results or implied by the hypothesized theories underlying the assessed constructs. As a consequence of the ECPE results, we worked to develop a more broadly defined set of models that would allow for empirical evaluation of hypothesized attribute hierarchies. We felt our work was timely, as a number of methods, both new and old, now use implied attribute hierarchies to assess examinees in many large-scale analyses, from so-called intelligent tutoring systems (e.g., Cen, Koedinger, & Junker, 2006) to large-scale state assessment systems for alternative assessments using instructionally embedded items (e.g., the Dynamic Learning Maps Alternate Assessment System Consortium Grant, 2010–2015). Moreover, such large-scale analyses are based on tremendously large data sets, many of which simply cannot be fit with the types of (mainly unidimensional) models often used in current large-scale testing situations. Furthermore, newly developed standards in education have incorporated ideas of learning progressions, which indirectly imply the existence of hierarchically structured attributes (e.g., Progressions for the Common Core State Standards in Mathematics; Common Core State Standards Writing Team, 2012). In short, the current and
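To make the empty-profile result concrete, the following minimal Python sketch (not taken from the paper) enumerates the 2^3 = 8 attribute profiles posited by the LCDM for three binary attributes and assumes, purely for illustration, a linear hierarchy in which each attribute must be mastered before the next. Under that assumed hierarchy, exactly four profiles remain permissible, matching the count of occupied profiles observed in the ECPE analysis.

from itertools import product

# Illustrative sketch only, not the authors' code. With K = 3 binary latent
# attributes, the LCDM posits 2^3 = 8 attribute profiles. Assume, for
# illustration, a linear hierarchy: attribute 1 must be mastered before
# attribute 2, and attribute 2 before attribute 3.

def respects_linear_hierarchy(profile):
    """True if no attribute is mastered without its prerequisite."""
    return all(profile[k] >= profile[k + 1] for k in range(len(profile) - 1))

all_profiles = list(product([0, 1], repeat=3))  # all 8 possible profiles
permitted = [p for p in all_profiles if respects_linear_hierarchy(p)]

print(permitted)
# [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)] -- the hierarchy leaves 4 profiles

Any other hierarchy (e.g., one prerequisite relation rather than a full linear ordering) would leave a different subset of permissible profiles, which is what the HDCM framework is designed to parameterize and test empirically.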