Abstract
A study was conducted to examine two issues: (a) whether differences in writing quality might emerge when students wrote examinations by hand or on a computer and (b) whether raters differed in their evaluation of essays written by hand, written on a computer, or written by hand and then transcribed into typed form before scoring. A total of 480 students from a large Midwestern university were randomly assigned to one of three essay groups: (a) those who composed their responses in a traditional bluebook, (b) those who wrote in a bluebook and then had their essays transcribed into typed form on a computer, and (c) those who wrote their essays on the computer. A one-way ANOVA revealed no statistically significant differences in ratings among the three groups [F(2, 475) = 1.21, ns]. The discussion centers on the need for testing programs to examine the relationship between assessment and prior writing experiences, student preferences for testing medium, and rater training regarding the possible impact of technology on scores.
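For readers unfamiliar with the reported analysis, the following is a minimal sketch of a one-way ANOVA comparing ratings across three independent groups, of the kind described in the abstract. The group sizes, score values, and variable names are hypothetical illustrations, not the study's actual data or analysis code.

```python
# Minimal sketch of a one-way ANOVA across three essay groups.
# All data below are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical ratings for three groups of 160 students each (480 total).
bluebook = rng.normal(loc=4.0, scale=1.0, size=160)     # handwritten
transcribed = rng.normal(loc=4.0, scale=1.0, size=160)  # handwritten, then typed
computer = rng.normal(loc=4.0, scale=1.0, size=160)     # composed on computer

# scipy.stats.f_oneway returns the F statistic and its p-value; a large
# p-value, as in the study's result, indicates no significant group effect.
f_stat, p_value = stats.f_oneway(bluebook, transcribed, computer)

n_total = len(bluebook) + len(transcribed) + len(computer)
print(f"F(2, {n_total - 3}) = {f_stat:.2f}, p = {p_value:.3f}")
```

Note that the between-groups and within-groups degrees of freedom are the number of groups minus one and the total sample size minus the number of groups, respectively; the study's reported df of (2, 475) suggests a slightly smaller analyzed sample than 480.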