Abstract

Most acceptability judgments reported in the syntactic literature are obtained by linguists acting as their own informants. For well-represented languages like English, this method of data collection is best described as a process of community agreement, given that linguists typically discuss their judgments with colleagues. However, the process itself is comparatively opaque, and the reliability of its output has been questioned. Recent studies examining this criticism have shown that judgments reported in the literature for English can be replicated in quantitative experiments to a near-perfect degree. However, the focus of those studies has been on testing sentence pairs. We argue that replication of contrasts alone is not sufficient, because theory building necessarily includes comparison across pairs and across papers. Thus, we test items at large, i.e. independent of counterparts. We created a corpus of grammaticality judgments on sequences of American English reported in articles published in Linguistic Inquiry and then collected experimental ratings for a random subset of them. Overall, expert ratings and experimental ratings converge to a good degree, but there are numerous instances in which they do not. Based on this, we argue that for theory-critical data, the process of community agreement should be accompanied by quantitative methods whenever possible.

Highlights

  • Linguists acting as their own informants is one of the main means of data collection in syntactic theory.

  • While several researchers have voiced concerns about its reliability (Wasow and Arnold 2005; Featherston 2007; Gibson and Fedorenko 2010; Gibson and Fedorenko 2013; Gibson et al 2013), others have defended researcher introspection, arguing that it has proven reliable for most purposes and that there is no reason to assume that quantitative methods give better results (Phillips and Lasnik 2003; Bornkessel-Schlesewsky and Schlesewsky 2007; Grewendorf 2007; Phillips 2010; Sprouse and Almeida 2012; Sprouse and Almeida 2013; Sprouse et al 2013).

  • The present paper argues that in the case of American English and other well-represented languages, researcher introspection is best thought of as what we call a process of community agreement.


Summary

Introduction

Linguists acting as their own informants is one of the main means of data collection in syntactic theory. This practice is commonly referred to as researcher introspection. In the wake of Schütze’s seminal work (1996), there has been a debate about the adequacy and reliability of researcher introspection (among others Bard et al 1996; Schütze 1996; Edelman and Christiansen 2003; den Dikken et al 2007; Culicover and Jackendoff 2010). While several researchers have voiced concerns about its reliability (Wasow and Arnold 2005; Featherston 2007; Gibson and Fedorenko 2010; Gibson and Fedorenko 2013; Gibson et al 2013), others have defended researcher introspection, arguing that it has proven reliable for most purposes and that there is no reason to assume that quantitative methods give better results (Phillips and Lasnik 2003; Bornkessel-Schlesewsky and Schlesewsky 2007; Grewendorf 2007; Phillips 2010; Sprouse and Almeida 2012; Sprouse and Almeida 2013; Sprouse et al 2013).
