Abstract

This study combines language assessment processes and interlanguage analysis techniques to examine rater agreement and disagreement in assessing English article acquisition. Native English-speaking and non-native English-speaking raters coded and scored picture-sequence narratives written by English as a Foreign Language (EFL) learners (n = 97) for suppliance-in-obligatory context (SOC) and target-like utterance (TLU). Although the kappa statistic revealed only fair agreement between raters (0.17–0.33), content analysis methods revealed much higher agreement (88.29%–94.07%). Language background effects between the raters could not be substantiated; however, the results demonstrated a discernible pattern of disagreement between them. The study therefore recommends including a foreign language teaching background as a factor in rater selection, to minimize language background effects on the rating of language assessments.
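The gap the abstract reports between a low kappa (0.17–0.33) and high raw percent agreement (88.29%–94.07%) is a well-known property of chance-corrected agreement statistics: when raters' codings are heavily skewed toward one category, expected chance agreement is high, so kappa can be small even though the raters almost always agree. The sketch below illustrates this with hypothetical binary codings (the data are invented for illustration, not taken from the study), assuming a simple two-rater Cohen's kappa.

```python
def percent_agreement(r1, r2):
    """Proportion of items on which the two raters gave the same code."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa: agreement corrected for chance, (po - pe) / (1 - pe)."""
    n = len(r1)
    po = percent_agreement(r1, r2)                      # observed agreement
    cats = set(r1) | set(r2)
    # Expected chance agreement from each rater's marginal proportions.
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe)

# Hypothetical codings for 100 items (1 = target-like, 0 = non-target-like).
# Both raters code almost everything as target-like, so marginals are skewed.
rater_a = [1] * 93 + [1] * 3 + [0] * 3 + [0] * 1
rater_b = [1] * 93 + [0] * 3 + [1] * 3 + [0] * 1

print(percent_agreement(rater_a, rater_b))  # 0.94  (94% raw agreement)
print(cohens_kappa(rater_a, rater_b))       # ~0.22 (only "fair" by kappa)
```

With 94% raw agreement, chance agreement from the skewed marginals is already about 0.92, leaving kappa near 0.22, which is squarely in the "fair" band the study reports.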
