In the process of conducting a meta-analysis regarding the effectiveness of field instruction in social work (Holden, Barker, Rosenberg, Kuppens, & Ferrel, 2011), our research group encountered an issue that has not been widely discussed in the social work literature. This issue has interesting implications for the use of kappa (κ) in many areas of research beyond systematic reviews and meta-analyses. Our struggle with this issue resulted in the writing of this research note.

Early in the meta-analytic process, after searching 25 databases, we compiled a set of 1,680 records to review (Holden et al., 2011). Two team members independently reviewed this set of records, each making an exclude/include decision. Next, the level of agreement was determined (even though a vote for inclusion of a record by either team member resulted in the record going into the final set for full review). The results of this process are displayed in Table 1. Although the percentage of agreement was 95.5 (94.8 + 0.7), the associated κ value was .22. Recall that κ equals one when perfect agreement occurs, zero when observed agreement equals chance agreement, and less than zero when observed agreement is less than chance agreement. Therefore, κ is interpreted as agreement in excess of chance agreement (Orme & Gillespie, 1986, p. 166). How should one interpret these findings, which at first glance appear discrepant? That is, how does a researcher resolve the gratifying finding of agreement in nearly 96% of the coding decisions when that finding is accompanied by a low κ value? In the course of answering this question, we wondered if this issue was well documented among social work scholars. 
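The apparent discrepancy can be reproduced numerically. The sketch below computes Cohen's κ for a 2 × 2 agreement table consistent with the figures reported above: the diagonal counts are derived from the reported percentages (94.8% and 0.7% of 1,680 records), while the split of the 75 disagreements between the two off-diagonal cells is a hypothetical assumption, since the cell counts from Table 1 are not reproduced here. Because nearly all records fall in the exclude/exclude cell, chance agreement is already very high, which drives κ down despite high raw agreement.

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table.

    a = both coders exclude, d = both coders include,
    b, c = the two disagreement cells.
    """
    n = a + b + c + d
    p_observed = (a + d) / n
    # Chance agreement: product of the marginal proportions,
    # summed over the two categories.
    p_chance = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

# Diagonal cells from the reported percentages of 1,680 records;
# the 40/35 split of the 75 disagreements is hypothetical.
kappa = cohens_kappa(a=1593, b=40, c=35, d=12)
percent_agreement = (1593 + 12) / 1680 * 100
print(round(percent_agreement, 1), round(kappa, 2))  # 95.5 0.22
```

Under these assumed cell counts, 95.5% observed agreement yields κ ≈ .22, matching the values reported above: chance agreement alone is about 94.3%, so the observed agreement exceeds chance by only a small margin.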
A review of the indexes of a convenience sample of textbooks covering research methods, statistics, or both in social work failed to uncover a single discussion of this topic (Blanksby & Barber, 2006; Bloom, Fischer, & Orme, 2009; Drake & Jonson-Reid, 2008; Grinnell & Unrau, 2008; Montcalm & Royse, 2002; Royse, Thyer, & Padgett, 2010; Rubin, 2007, 2008; Rubin & Babbie, 2007; Unrau, Gabor, & Grinnell, 2007; Weinbach & Grinnell, 2004; York, 2009). A key word search of Social Work Abstracts using kappa produced 16 hits. Review of these titles revealed that all focused on the use of kappa (or weighted kappa) in specific assessments of interrater or intermeasure reliability, and only one briefly mentioned the issues discussed here (Fisher, Evans, Muller, & Lombard, 2004). This article was published by a group of authors that did not appear to contain social workers and appeared in a non-social work journal (Journal Citation Reports, 2008). Although we did not intend for this to be a comprehensive search, given Social Work Abstracts' performance deficits (for example, Holden, Barker, Covert-Vail, Rosenberg, & Cohen, 2008, 2009; Shek, 2008), two additional strategies were used. First, a scan of records obtained from a search of Social Services Abstracts did not produce any additional articles related to our focus. In addition, we approached both traditional (that is, other scholars working in the area) and electronic invisible colleges (that is, professional Listservs) to further explore the level of familiarity with this interpretation issue (Cooper, 2010). A mailing to the MSWEDUCATION Listserv resulted in one response that noted awareness among experienced qualitative researchers of alternative measures to kappa (personal communication with D. Fitch, assistant professor, School of Social Work, University of Missouri, Columbia, March 13, 2010). Queries directed to a group of research colleagues also lent support to the conclusion that this issue is not well known in social work. 
Two of these contacts (personal communication with S. Kirk, professor, Social Welfare, University of California, Berkeley, January 26, 2010; personal communication with J. …