Abstract

Generating Explanations (GE) is a computer-delivered item type that presents a situation and asks the examinee to propose as many plausible reasons for it as possible. Previous research suggests that GE measures a divergent thinking ability largely independent of the convergent skills tapped by the GRE General Test. This study was conducted to determine whether prior GE validity results generalized to the GRE candidate population, how population groups performed, what effects partial-credit modeling might have on validity, and what problems were associated with operational administration. The validity results generally supported the earlier findings: GE was reliable but only marginally related to the General Test, and it made significant (though small) independent contributions to the explanation of relevant criteria. With respect to population groups, GE produced smaller gender and ethnic group differences than did the General Test and showed the same relations to outside criteria across groups, suggesting that it was measuring similar skills in each population. Attempts to model GE responses on a partial-credit IRT scale succeeded but produced no improvement in relations with external criteria over those obtained by summing raw item scores. Finally, interviews conducted with examinees to detect potential delivery problems suggested that the directions needed to be shortened.
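The abstract does not specify which partial-credit formulation was fit to the GE responses; a common choice for polytomously scored items of this kind is Masters' (1982) partial credit model, sketched below purely as an illustration of what such a scaling involves, not as the study's actual specification.

```latex
% Partial credit model (Masters, 1982): probability that an examinee with
% ability \theta receives score category k (of 0, ..., m_i) on item i,
% given step difficulties \delta_{i1}, ..., \delta_{i m_i}.
% Illustrative only; the report's exact model is an assumption here.
P(X_i = k \mid \theta) =
  \frac{\exp\!\left( \sum_{j=0}^{k} (\theta - \delta_{ij}) \right)}
       {\sum_{r=0}^{m_i} \exp\!\left( \sum_{j=0}^{r} (\theta - \delta_{ij}) \right)},
\qquad \text{with the convention } \sum_{j=0}^{0} (\theta - \delta_{ij}) \equiv 0 .
```

Under such a model, each GE response is placed on a latent ability scale rather than simply summed; the study found that this scaling did not improve relations with external criteria over raw summed scores.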
