Abstract

Reading comprehension is often treated as a multidimensional construct. In many reading tests, items are distributed over reading process categories to represent the subskills expected to constitute comprehension. This study explores (a) the extent to which specified subskills of reading comprehension tests are conceptually conceivable to teachers, who score and use national reading test results, and (b) the extent to which teachers agree on how to locate and define item difficulty in terms of expected text comprehension. Eleven teachers of Swedish were asked to classify items from a national reading test in Sweden by process categories similar to the categories used in the PIRLS reading test. They were also asked to describe the type of comprehension necessary for solving the items. Findings of the study suggest that the reliability of item classification is limited and that teachers’ perceptions of item difficulty are diverse. Although the data set in the study is limited, the findings indicate, in line with recent validity theory, that the division of reading comprehension into subskills by cognitive process level will require further validity evidence and should be treated with caution. Implications for the interpretation of test scores and for test development are discussed.

