Abstract

The Rorschach Performance Assessment System (R–PAS) aims to provide an evidence-based approach to administration, coding, and interpretation of the Rorschach Inkblot Method (RIM). R–PAS codes the individualized responses given by respondents to each card across a wide pool of possible variables. Because of the large number of possible codes that can be assigned to these responses, it is important to consider the concordance rates among different assessors. This study investigated interrater reliability for R–PAS protocols. Data were analyzed from a nonpatient convenience sample of 50 participants who were recruited through networking, local marketing, and advertising efforts from January 2013 through October 2014. Blind recoding was used, and discrepancies between the initial and blind coders' ratings were analyzed for each variable in SPSS, yielding percent agreement and intraclass correlation values. Data for Location, Space, Contents, Synthesis, Vague, Pairs, Form Quality, Populars, Determinants, and Cognitive and Thematic codes are presented. Rates of agreement for 1,168 responses were higher for simpler codes (e.g., Location), whereas agreement was lower for more complex codes (e.g., Cognitive and Thematic codes). Overall, concordance rates achieved good to excellent agreement. Results suggest that R–PAS is an effective method with high interrater reliability, supporting its empirical basis.
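The study reports its reliability statistics from SPSS; the abstract does not include the analysis syntax. As a rough illustration of the two statistics named above, the following minimal Python sketch computes percent agreement between two coders and a Shrout–Fleiss ICC(2,1) (two-way random effects, absolute agreement, single rater). The data shown are hypothetical toy values, not study data, and this is not the authors' procedure.

```python
import numpy as np

def percent_agreement(coder_a, coder_b):
    """Proportion of responses on which both coders assigned the same code."""
    a, b = np.asarray(coder_a), np.asarray(coder_b)
    return float(np.mean(a == b))

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is an (n_targets, k_raters) array of numeric scores."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)   # between targets
    ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)   # between raters
    ss_total = np.sum((x - grand) ** 2)
    ss_error = ss_total - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Hypothetical binary code (e.g., whether Space was coded) for 10 responses.
coder_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
coder_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print(percent_agreement(coder_a, coder_b))  # 0.8

# Hypothetical protocol-level counts of a code for 6 protocols, 2 coders.
counts = [[4, 5], [2, 2], [7, 6], [3, 3], [5, 5], [1, 2]]
print(icc_2_1(counts))
```

Percent agreement suits exact categorical matches on individual responses, while the ICC is typically applied to protocol-level counts or sums of a code across raters, which is why both are reported.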


