This meta-analysis synthesizes research on the interrater reliability of Criteria-Based Content Analysis (CBCA). CBCA is a core component of Statement Validity Assessment (SVA), a forensic procedure used in many countries to evaluate whether statements (e.g., allegations of sexual abuse) are based on experienced or fabricated events. CBCA comprises 19 verbal content criteria, which are frequently adapted for research on detecting deception. A total of k = 82 hypothesis tests revealed acceptable interrater reliabilities for most CBCA criteria across the various reliability indices examined, with the exception of Cohen's kappa. However, results were largely heterogeneous, necessitating moderator analyses. Blocking analyses and meta-regression analyses on Pearson's r identified research paradigm, intensity of rater training, type of rating scale, and frequency of occurrence (base rates) as significant moderators for some CBCA criteria. The use of CBCA summary scores is discouraged. Implications for research versus field settings, for future research, and for forensic practice in the United States and Europe are discussed.
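The finding that Cohen's kappa behaves differently from other reliability indices is closely tied to the base-rate moderator: kappa is known to penalize agreement on rarely occurring categories. The following minimal Python sketch, not drawn from the meta-analysis and using invented data purely for illustration, shows how two raters coding a low-base-rate CBCA criterion can reach 90% raw agreement yet obtain a kappa well below conventional "acceptable" thresholds.

```python
# Minimal sketch (invented data): why Cohen's kappa can look poor when a
# CBCA criterion has a skewed base rate, even though raw agreement
# between two raters is high.

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters' codings of the same statements."""
    n = len(labels_a)
    # Observed proportion of statements on which the raters agree.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement under independence, from each rater's marginals.
    expected = 0.0
    for category in set(labels_a) | set(labels_b):
        p_a = labels_a.count(category) / n
        p_b = labels_b.count(category) / n
        expected += p_a * p_b
    return (observed - expected) / (1 - expected)

# A rarely occurring criterion: each rater codes "present" (1) for only
# 2 of 20 statements; the raters disagree on just 2 statements.
rater_a = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
rater_b = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]

# Raw agreement is 18/20 = 0.90, but kappa is only about 0.44.
print(cohens_kappa(rater_a, rater_b))
```

Because the criterion is coded "absent" for most statements, chance agreement is already high (0.82 here), so kappa credits the raters with little beyond chance despite their high raw agreement.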