Abstract

Undergraduate students completed a Reading Comprehension test in which each of a number of passages served as the basis for four to eight questions, half framed in multiple-choice format and half requiring that the examinee produce an appropriate answer. Questions within a passage were balanced according to the kind of information processing they were thought to require: Explicit questions dealt with information given explicitly in the passage; Inference questions required inference based on material presented; and Application and Evaluation questions required that the examinee go beyond what was presented to consider applications of ideas or to evaluate the logic or style of the passage. It was hypothesized that questions of the first type would show no systematic differences associated with the format of the question, while those of the second might, and those of the third would, draw on somewhat different cognitive abilities when examinees were required to produce rather than merely recognize an appropriate answer.

Correlational and factor analyses provided only minimal indications of systematic differences associated with response format. Estimated true correlations across formats averaged .91, a relation of the same magnitude as that found when multiple-choice Reading Comprehension items differing in their information-processing requirements are contrasted with one another. Weak evidence was found for a factor loaded by scores based on multiple-choice but not free-response items, but only one score had a loading as great as .3 on this factor.

Relations of test scores with other cognitive measures also failed to show systematic differences associated with format. Regardless of differences in item type and response format, all scores based on the Reading Comprehension test showed fairly substantial relations with tests of Reasoning and Vocabulary and with the ACT English achievement test, moderate relations with the ACT mathematics achievement score, and near-zero relations with a test of Divergent Thinking.

Despite this evidence of similarity in performance across formats, however, there is also evidence of systematic differences among individuals. The subjects-by-format interaction in the analysis of variance, while not of great strength, was significant at the five percent level. This effect indicates that some students showed larger differences in performance between the two response formats than did others.

It was speculated that two changes in test items might be required to elicit the differences in performance associated with response format that have been observed in some other testing situations: items may need to be structured so that they draw less on information given in or derivable from the test materials and more on information whose relevance is not specified for the examinee, and types of questions with which examinees are less well practiced may need to be employed.
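The abstract reports estimated true correlations across formats averaging .91 but does not describe the estimation procedure. A common approach for comparisons of this kind is the classical correction for attenuation, which divides the observed cross-format correlation by the square root of the product of the two score reliabilities. The sketch below is only an illustration of that standard formula under assumed, hypothetical values; the reliabilities and observed correlation shown are not taken from the study.

```python
import math

def disattenuated_correlation(r_xy: float, rel_x: float, rel_y: float) -> float:
    """Classical correction for attenuation: estimate the true-score correlation
    between two measures from their observed correlation and reliabilities."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Hypothetical inputs (illustrative only, not values reported in the study):
observed_r = 0.72       # observed correlation between MC and free-response scores
reliability_mc = 0.80   # assumed reliability of the multiple-choice score
reliability_fr = 0.78   # assumed reliability of the free-response score

true_r = disattenuated_correlation(observed_r, reliability_mc, reliability_fr)
print(f"Estimated true correlation: {true_r:.2f}")  # about 0.91 with these inputs
```

With these illustrative inputs the corrected value comes out near .91, showing how an observed correlation well below unity can still imply nearly identical true-score rank orderings once unreliability in each score is taken into account.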

