Abstract

A survey of American psychology training clinics was undertaken to determine the scope, nature, impact, and problems of evaluation research conducted in these settings. Survey questions explored evaluation of both clinical training and client treatment. Seventy-four usable responses (56%) were received, of which 68% reported current quantitative evaluation of client treatment and 61% reported current quantitative evaluation of clinical training. A wide variety of specific outcome measures were used with varying frequency. Most evaluation activities were exclusively supported by internal financing, with the clinic director the most likely collector of evaluation data and the clinic staff the most likely recipients of evaluation findings. Major obstacles to evaluation included resource constraints, staff resistance, pragmatic difficulties, and technological limitations. Forty-eight percent of the directors of clinics conducting treatment evaluation believed evaluation had a significant influence on policy, whereas 42% of those conducting training evaluation reported such influence. Several correlates of policy impact were also identified. Further plans to conduct evaluation were widespread, though not universal. The need for better measures, faculty resistance to evaluation, ways of improving policy impact, and the importance of increased communication across training sites are discussed.

Although federal support for evaluation of mental health services is currently in doubt, the long-term trend is toward increasing emphasis on the use of scientifically valid data in the administrative and clinical decision-making process (Aaronson & Wilner, 1983; Perloff & Perloff, 1977; Stevenson & Longabaugh, 1980). The technologies for providing such data are growing ever more sophisticated (Coursey, 1977; Perloff, Perloff, & Sussna, 1976; Stahler & Tash, 1982).
Professional psychologists have several critical roles in the evaluation enterprise: as evaluation researchers, as providers of data, and as responders to evaluation results. Recent calls for more policy-relevant studies evaluating psychotherapy in ordinary treatment settings (e.g., Greenspan & Sharfstein, 1981; Kazdin & Wilson, 1978; Kiesler, 1980; Strupp, 1981) underscore the need for clinicians to possess the skills, and to accept the challenge, of evaluation research. These concerns point to an urgent training need: Applied psychology graduate programs should prepare students to support and participate in evaluation activities associated with the provision of clinical services. Both didactic training and hands-on participation are needed. The present investigation was designed to examine one important context in which clinical students may be exposed to evaluation research. We were interested in the extent and nature of evaluation activity occurring in psychology training clinics around the country. There were several reasons for our focus on training clinics. First, we believe that a central influence on future orientation toward evaluation is actual participation in evaluation research while working with respected models of clinical skills (cf. Harway & Serafica, 1977; Norcross & Wogan, 1982; Shemberg & Keeley, 1979; Sobell & Key, 1982). We agree with Kiesler (1981), who argued,
