Abstract

The purpose of this study is to determine whether corpus analysis tools can identify linguistic features in writing placement samples that differ significantly between levels of a higher education language program. Although commercial tests are widely used for placement decisions, local performance assessments have become common complements that better adhere to communicative language teaching. At the university where this study was conducted, raters use a holistic rubric to score students’ responses to one academic topic. The scoring process is fast when raters agree but time-consuming when raters search for information to resolve disagreements. Writing placement essays from 123 former students in an Intensive English Program were used to compile a corpus. I divided the writing samples into four folders corresponding to the program levels and analyzed the folders using syntactic, lexical, and essay complexity analyzers. I relied on the robustness of ANOVA to account for assumption violations. Variables that violated the normality assumption were first analyzed using the Kruskal-Wallis test; those showing significant differences between levels were then analyzed using ANOVA and the appropriate post-hoc tests. Results show significant between-group differences in lexical and word types and tokens, complex nominals, verb phrases, and ideas. I discuss the interpretation of these variables and show how administrators used this information to revise the rubric from Version I to Version II. A broader implication of this study is the use of corpus research tools to operationalize performance for model building.
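The two-stage screening described above (normality check, then Kruskal-Wallis for non-normal variables, then ANOVA with post-hoc tests for variables that show significant level differences) can be sketched in code. The following is a minimal illustration, not the authors' actual analysis: the abstract does not specify the software or data format, so the file name, column names, and alpha level are assumptions.

```python
# Hypothetical sketch of the screening workflow summarized in the abstract.
# Assumes one row per essay, a "level" column (four program levels), and one
# column per complexity feature (e.g., word_types, verb_phrases). All names
# and the 0.05 alpha level are illustrative, not from the study itself.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("complexity_features.csv")  # hypothetical analyzer output
feature_cols = [c for c in df.columns if c != "level"]

for col in feature_cols:
    data = df[["level", col]].dropna()
    groups = [g[col].values for _, g in data.groupby("level")]

    # Shapiro-Wilk per level as a simple normality check
    normal = all(stats.shapiro(g).pvalue > 0.05 for g in groups)

    if not normal:
        # First pass for non-normal variables: Kruskal-Wallis
        h, p = stats.kruskal(*groups)
        if p >= 0.05:
            continue  # no significant level differences; stop here

    # One-way ANOVA across the four program levels
    f, p = stats.f_oneway(*groups)
    if p < 0.05:
        # Tukey HSD as one possible post-hoc test
        print(col)
        print(pairwise_tukeyhsd(data[col], data["level"]))
```

Tukey HSD is used here as a placeholder; the abstract says only that "the appropriate post-hoc tests" were applied, which may have differed by variable.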
