Abstract

Rating scale instruments have been widely used in learning environment research for many decades. Arguments for their sustained use require provision of evidence commensurate with contemporary validity theory. The multiple-type conception of validity (content, criterion and construct) that persisted until the 1980s was subsumed into a unified view by Messick, who re-conceptualised types of validity as aspects of evidence for an overall judgment about construct validity. A validity argument relies on multiple forms of evidence: the content, substantive, structural, generalisability, external, and consequential aspects of validity evidence. The theoretical framework for the current study comprised these aspects of validity evidence with the addition of interpretability. The utility of this framework as a tool for examining validity issues in rating scale development and application was tested. An investigation into student engagement in classroom learning was examined to identify and assess aspects of validity evidence. The engagement investigation utilised a researcher-completed rating scale instrument comprising eleven items and a six-point scoring model. The Rasch Rating Scale model was used to scale data from 195 Western Australian secondary school students. Examples of most aspects of validity evidence were found, particularly in the statistical estimations and graphical displays generated by the Rasch model analysis, and these are explained in relation to the unified theory of validity. The study is significant in that it exemplifies contemporary validity theory in conjunction with modern measurement theory, and it will be of interest to learning environment researchers using, or considering using, rating scale instruments.
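For readers unfamiliar with the model named above, the Rasch Rating Scale model (Andrich's formulation) expresses the probability that a person responds in a given category of a polytomous item as a function of person ability, item difficulty, and a set of category thresholds shared across items. The sketch below computes these category probabilities directly from that formulation; the parameter values are illustrative only and are not drawn from the study's data.

```python
import math

def rating_scale_probs(theta, delta, taus):
    """Category probabilities under the Rasch Rating Scale model.

    theta: person ability; delta: item difficulty;
    taus: thresholds tau_1..tau_m shared across items (tau_0 = 0 implicitly).
    Returns a list of probabilities for categories 0..m.
    """
    # The log-numerator for category x is the cumulative sum of
    # (theta - delta - tau_k) for k = 1..x; category 0 uses the empty sum.
    logits = [0.0]
    running = 0.0
    for tau in taus:
        running += theta - delta - tau
        logits.append(running)
    exps = [math.exp(v) for v in logits]
    total = sum(exps)  # normalising constant over all categories
    return [e / total for e in exps]

# Illustrative values: a six-category (0-5) scoring model, as in the
# instrument described above, requires five thresholds.
probs = rating_scale_probs(theta=0.5, delta=0.0,
                           taus=[-2.0, -1.0, 0.0, 1.0, 2.0])
print([round(p, 3) for p in probs])  # six probabilities summing to 1
```

In practice the person, item, and threshold parameters are estimated jointly from the response data by dedicated software rather than supplied by hand; the sketch only shows the scoring model's probability structure.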
