Abstract

The discipline of program evaluation demands careful examination, measurement, and consideration of alternative explanations for results. Focused evaluation requires distance, and “distance breeds suspicion” (Green & Lewis, 1986). Furthermore, an evaluation report is rarely viewed as a neutral document (Berk & Rossi, 1990). Thus, program evaluators, even when they work closely with program planners and implementers, often function in isolation: separated by focus, by a critical stance, and by the political climate generated by real or perceived judgmental assessments (Rossi & Freeman, 1993). This paper describes the events and activities that brought independent evaluators together to address common problems and issues, with the goal of solving problems and improving site-specific evaluative activities. The usual pattern of isolated work was broken when the evaluators of twelve independent AIDS-related programs came together and formed a working group that supported an ongoing dialogue and exchange of ideas among projects across the United States. The efforts of this working group significantly influenced the final evaluation strategies adopted by the individual programs. It provided opportunities for the exchange of information and encouraged cooperation in the development of data collection formats, instruments, and assessments across the twelve programs. The discussion here focuses on the development and accomplishments of a collaborative effort undertaken by a working group of evaluators of separate AIDS education and training programs funded by the National Institute of Mental Health.
