Abstract

Existing methods for assessing writing originality have been criticized for relying heavily on subjective scoring. This study investigated the use of topic analysis and semantic networks in assessing writing originality. Written material was collected from a Chinese language test administered to eighth-grade students. The analysis proceeded in two steps: (1) latent topics of the essays in each writing task were identified, and essays on the same topic were treated as a refined reference group within which each essay was evaluated; (2) a set of features in four categories (path distance, semantic difference, centrality, and network similarity) was computed from the semantic network drawn from each text response and used to quantify differences among essays. The results show that originality scoring depends not only on the intrinsic characteristics of a text but also on the reference group against which it is evaluated. The study demonstrates that computational linguistic features can predict originality in Chinese writing: each of the four feature categories predicted originality, although the effect varied across topics. Furthermore, the feature analysis provided evidence and insight to support human raters in originality scoring.
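A minimal sketch of how such a pipeline could be implemented (not the authors' code): it assumes an LDA topic model for step 1 and word co-occurrence networks built with networkx for step 2. The toy essays, the co-occurrence window, and the Jaccard edge-overlap used as a stand-in for the network-similarity feature are illustrative assumptions; the semantic-difference features are not reproduced here.

import networkx as nx
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus standing in for the eighth-grade essays (already tokenized, whitespace-separated).
essays = [
    "moon festival family reunion dinner lantern",
    "moon festival story rabbit legend lantern",
    "school sports day race friends victory",
    "school sports day rain race postponed",
]

# Step 1: identify latent topics and form refined reference groups
# (each essay's dominant topic defines the group it is evaluated within).
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(essays)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)
groups = doc_topics.argmax(axis=1)

# Step 2: build a word co-occurrence network per essay and extract features.
def essay_network(text, window=2):
    tokens = text.split()
    g = nx.Graph()
    g.add_nodes_from(tokens)
    for i, w in enumerate(tokens):
        for v in tokens[i + 1 : i + 1 + window]:
            g.add_edge(w, v)
    return g

def network_features(g):
    # Path-distance and centrality features, computed on the largest connected component.
    comp = g.subgraph(max(nx.connected_components(g), key=len))
    return {
        "avg_path_len": nx.average_shortest_path_length(comp),
        "mean_degree_centrality": sum(nx.degree_centrality(g).values()) / g.number_of_nodes(),
    }

def group_similarity(idx, networks, groups):
    # Jaccard overlap of edge sets with the other essays in the same reference group.
    peers = [j for j in range(len(networks)) if groups[j] == groups[idx] and j != idx]
    if not peers:
        return 0.0
    edges = {frozenset(e) for e in networks[idx].edges()}
    sims = []
    for j in peers:
        other = {frozenset(e) for e in networks[j].edges()}
        union = edges | other
        sims.append(len(edges & other) / len(union) if union else 0.0)
    return sum(sims) / len(sims)

networks = [essay_network(e) for e in essays]
for i in range(len(essays)):
    feats = network_features(networks[i])
    feats["group_similarity"] = group_similarity(i, networks, groups)
    print(f"essay {i} (topic {groups[i]}): {feats}")

Grouping essays by dominant topic before computing similarity mirrors the paper's point that an originality score depends on the reference group the essay is compared against, not only on the text itself.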
