Abstract

Although relatively little attention has been paid to the assessment of second language (L2) pragmatics competence in the fields of L2 pragmatics and language testing (Roever, 2011), a growing body of research on pragmatics assessment has emerged since Hudson, Detmer, and Brown (1992, 1995) developed a framework for assessing cross-cultural pragmatics. Hudson et al. developed six prototype pragmatics test instruments: (a) a multiple-choice discourse completion test (DCT), (b) an open-ended written DCT, (c) a language lab DCT, (d) a role play, (e) a self-assessment task, and (f) a role-play self-assessment. Each test measures written or spoken aspects of pragmatics competence, assessed either by raters or through self-assessment. Hudson et al. also investigated the reliability and validity of their instruments using six analytical rating criteria for raters: (a) ability to use the correct speech act, (b) formulaic expression, (c) amount of speech in a given situation, (d) formality level, (e) directness level, and (f) overall politeness level. These rating criteria reflect diverse factors within pragmatics competence, and each rater applied them to every test item. Since then, researchers have produced an increasing amount of research on L2 pragmatics assessment, either following Hudson et al.’s framework in various L2 contexts (e.g., Ahn, 2005; Brown, 2001; Hudson, 2001; Yamashita, 1996; Yoshitake, 1997; Youn, 2008) or developing their own test instruments (e.g., Grabowski, 2009; Liu, 2007; Roever, 2005, 2006; Tada, 2005). Studies that employed Hudson et al.’s framework in various L2 contexts have consistently reported reasonably high reliability and validity measures for all of the test types except the multiple-choice DCT.
