Robert McGrath and colleagues (McGrath, Mitchell, Kim, & Hough, 2010) squarely took aim at a sacred cow in personality assessment when they published a highly provocative meta-analysis in Psychological Bulletin that cast doubt on the “validity” of validity scales. Using strict selection criteria, which dramatically winnowed down the number of possibly relevant studies from over 4,000 to 40, they found surprisingly scant evidence supporting the utility of response bias indicators. They concluded that despite close to a century of research devoted to response bias, “the case remains open whether bias indicators are of sufficient utility to justify their use in applied settings to detect misrepresentation” (p. 466). In this and a subsequent article (McGrath, Kim, & Hough, 2011), they issued a challenge for new research that places response bias indicators on a more solid footing.

Alarming as these findings may have sounded to psychologists who routinely rely on validity scales in their daily forensic practice, no one called for an immediate moratorium on their use in the courtroom. Rohling et al. (2011) promptly published a critical response focusing on alleged inadequacies in the methodology of McGrath et al. (2010) and the soundness of their data analysis, particularly with respect to neuropsychological assessment. They argued that McGrath et al. had overlooked at least five studies showing that response bias indicators moderated predictive validity and had made inappropriately sweeping conclusions by treating positive and negative response bias indicators as though evidence concerning the former was relevant to the latter. It is also important to note that the final sample of McGrath et al. (2010) included only one forensic case (i.e., Edens & Ruiz, 2006).