Letters
Self-Assessed Fidelity: Proceed With Caution: In Reply
John H. McGrew, Ph.D., Laura M. White, M.S., and Laura G. Stull, Ph.D.
Published online: 1 April 2013 | https://doi.org/10.1176/appi.ps.640419

In Reply: We appreciate the opportunity to respond to Bond’s thoughtful commentary on our brief report. It is important to contextualize this discussion. The evidence-based practice “movement” is sweeping public mental health. Proponents largely agree that these practices should be widely disseminated, that fidelity assessment is needed to ensure high-quality implementation, and that assessments are most valid when conducted by independent assessors. But there is a problem. The “need” for independent assessment far outstrips the capacity to provide it. This problem is exacerbated by increases in the numbers of interventions classified as evidence-based practices (now exceeding 100 [1]) and of sites implementing them, and it is particularly acute for onsite assessment, which requires up to three days of assessor time. Thus there is a need to identify alternative, less burdensome, yet valid assessment methods, such as self-report.

Although we agree with Bond’s cautions concerning self-assessment for fidelity ratings as usually conducted, we believe that these concerns are most relevant for self-rated fidelity and that carefully collected self-reported data can serve as a valid sole source of data for independent fidelity raters. Moreover, because all fidelity assessment methods use some self-reported data, the differences among them are a matter of degree. Our approach assumes that the chief source of self-report invalidity is subjectivity in defining items and the data needed to make ratings, and that most people will report accurately when asked directly and clearly. To establish more objective procedures, we created a detailed protocol to gather the data used to score scale items, piloting and revising it over several years. For example, instead of asking, “Do you provide 24-hour coverage?” we ask, “What percentage of clients in crisis directly talk to staff after hours?” Instead of asking, “Are you involved with 95% of admissions?” we ask, “Describe team involvement with the past ten admissions.” In addition, we use independent raters to score the self-reported data and do not permit self-scoring of items. We believe that self-presentation biases are most problematic when self-scoring is used. In our study, for example, self-reported fidelity generally produced lower scores than phone-based fidelity.

As detailed in our report, self-report can be reliable and valid when this approach is used. Moreover, in contrast to Bond’s generalizability concerns, preliminary results from an ongoing study support the validity of our self-report approach both for teams naïve to fidelity assessment and for those with moderate experience. Also, we disagree with and are confused by Bond’s assertion that we endorse replacing onsite assessment with self-reported assessment. In fact, we proposed a stepped approach in which phone and self-reported assessment complement and supplement onsite assessment.
Nevertheless, we agree that self-report should be reserved for evidence-based practices with well-articulated fidelity scales, that auditing procedures are needed to ensure accuracy, and that, to date, the advantage of self-reported over phone assessment appears minimal (2). We also agree that integrating self-report fidelity data into electronic records is a useful next step. However, the current state of the science is preliminary, and further research is needed to more carefully examine each of these important questions.

References
1. Chambless DL, Ollendick TH: Empirically supported psychological interventions: controversies and evidence. Annual Review of Psychology 52:685–716, 2001
2. McGrew JH, Stull LG, Rollins AL, et al.: A comparison of phone-based and on-site assessment of fidelity for assertive community treatment in Indiana. Psychiatric Services 62:670–674, 2011