On the Cover

Educational Measurement: Issues and Practice, Volume 37, Issue 1 (Spring 2018), p. 3
Special Issue: Strengthening the Connections between Classroom and Large-Scale Assessments
First published: 25 March 2018. https://doi.org/10.1111/emip.12187

Featured on the cover is the final winning submission from the 2017 EM:IP Cover Graphic/Data Visualization competition. Entitled Item Parameter Drift Plot for a Computer Adaptive Test, it was submitted by Yu-Feng Chang of the Minnesota Department of Education and her colleagues from Pearson: Changjiang Wang, Kevin J. Cappaert, and Gerald W. Griph.

An initial glance at this graphic makes it readily apparent whether a given item is experiencing drift, as evidenced by a gap between the observed and expected item characteristic curves. But this is not what sets the graphic apart and qualified it as a winning submission. Rather, it was the way it streamlines information regarding item parameter drift, demonstrating its potential utility as a tool for psychometricians and educational measurement specialists. Often, when tasked with creating a data visualization, we strive for a visual where less is more, while at the same time presenting the data in a manner that readily conveys the purpose of the visual to the audience. Good visuals also serve as useful tools for psychometricians (and other researchers), helping to inform their test-related decisions. Many researchers report that they understand the data more fully and are better able to make decisions when all the relevant information is laid out in front of them. This is exactly what this graphic does.

For a better understanding of this graphic, Chang and colleagues provided the following description:

Item parameter drift (IPD) analysis is a key process for ensuring that items in a testing program perform equivalently across different administrations. Psychometricians often use visual inspection of drift plots, in addition to various statistical criteria, to evaluate the amount of drift in a test item. In a computer adaptive test (CAT), items are administered adaptively according to students' abilities, so each item is administered to a more homogeneous group of students, and the number of administrations varies from item to item.

The plots for two 3-PL items are presented: one for an item with minimal drift (left) and the other for an item with considerable drift (right). Each plot shows the observed and expected item characteristic curves (ICCs). The observed ICC, in red, is obtained using the Stone (2000) pseudo-count approach, while the expected ICC, in green, is generated from the item parameters. In addition to the two ICCs, circles are plotted along the observed ICC to indicate the number of students who took the item at each ability level, which allows the psychometrician to evaluate the impact of the IPD on the students who are administered a given item.
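To make the construction concrete, here is a minimal R sketch of the core comparison the plot encodes: an expected 3-PL ICC computed from the item parameters, overlaid with an observed curve whose circles are sized by the number of examinees at each ability level. The item parameters, ability grid, and counts below are invented for illustration, and the "observed" proportions are simply simulated rather than derived from Stone's pseudo-count procedure; the authors' actual R code is available in the supplementary materials.

```r
# Expected ICC for a 3-PL item: P(theta) = c + (1 - c) / (1 + exp(-1.7 * a * (theta - b)))
# (use of the 1.7 scaling constant depends on the calibration; assumed here)
icc_3pl <- function(theta, a, b, c) {
  c + (1 - c) / (1 + exp(-1.7 * a * (theta - b)))
}

set.seed(123)
a <- 1.2; b <- 0.3; c <- 0.2            # illustrative item parameters
theta <- seq(-3, 3, by = 0.5)           # ability grid
n_at_theta <- round(500 * dnorm(theta)) # examinee counts, peaked near theta = 0

expected <- icc_3pl(theta, a, b, c)
# Fake "observed" proportions: the expected curve plus drift and sampling noise
observed <- pmin(pmax(expected - 0.08 + rnorm(length(theta), 0, 0.02), 0), 1)

plot(theta, expected, type = "l", col = "green3", lwd = 2,
     ylim = c(0, 1), xlab = "Ability (theta)", ylab = "P(correct)",
     main = "Observed vs. expected ICC")
lines(theta, observed, col = "red", lwd = 2)
# Circle area proportional to the number of students at each ability level
symbols(theta, observed, circles = sqrt(n_at_theta), inches = 0.15,
        add = TRUE, fg = "red")
legend("topleft", c("Expected ICC", "Observed ICC"),
       col = c("green3", "red"), lwd = 2, bty = "n")
```

A persistent gap between the two curves, weighted by where the circles are largest, is what signals consequential drift in the cover graphic.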
Multiple sources of auxiliary information are provided in the top panel. These include the item type (e.g., multiple choice), the year the item was first field tested (i.e., the age of the item), the item parameters, the number of students taking the item, and the historical item exposure rates. In the lower right corner of the plots, information from the statistical IPD analyses is presented, including the robust Z, d-squared, and chi-square values. Items flagged by the statistical criteria are marked with an asterisk (see the right plot). The plots, as presented here, are generated for the 3-PL IRT model, but similar plots can be generated for other IRT models.
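As a companion to the visual inspection, here is a minimal sketch of one of the flagging statistics named above, the robust Z. It follows the common formulation in which differences between current and baseline difficulty estimates are standardized by robust measures of center and spread (the median and 0.74 times the IQR). The piece does not give the authors' exact computation or cutoffs, so the data and the |z| > 1.645 threshold below are assumptions for illustration.

```r
# Robust Z for item parameter drift: standardize b-parameter differences
# using the median and a robust spread estimate (0.74 * IQR).
# The data and the |z| > 1.645 flagging cutoff are illustrative assumptions.
robust_z <- function(b_current, b_baseline) {
  d <- b_current - b_baseline
  (d - median(d)) / (0.74 * IQR(d))
}

set.seed(42)
b_base <- rnorm(20)                     # baseline difficulties for 20 items
b_curr <- b_base + rnorm(20, 0, 0.05)   # small random perturbations
b_curr[7] <- b_base[7] + 0.6            # one item with real drift

z <- robust_z(b_curr, b_base)
flagged <- which(abs(z) > 1.645)        # items to mark with an asterisk
print(flagged)
```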
If you are interested in learning more about this informative data visualization, contact the principal author, Yu-Feng Chang (yu-feng.chang@state.mn.us). We want to hear your feedback! Let us know what you think by emailing Ally Shay Thomas (allythomas@pitt.edu).

Acknowledgment

The authors would like to acknowledge George Henly and Johnny Denbleyker for their contribution to this project.

Supporting Information

Additional Supporting Information, including the R code, may be found in the online version of this article at the publisher's website: http://onlinelibrary.wiley.com/journal/10.1111/(ISSN)1745-3992 (emip12187-sup-0001-SuppMat.zip, 1.1 MB).

Reference

Stone, C. A. (2000). Monte Carlo-based null distribution for an alternative goodness-of-fit test statistic in IRT models. Journal of Educational Measurement, 37, 58-75.