Abstract

It is gratifying to receive comments from such a distinguished panel of evaluation practitioners. I appreciate that the panelists were given limited space in which to respond, and I imagine that they might have wanted to say more. Nonetheless, it is remarkable that none of them rose to the occasion to outline a positive ethical stance. Rather, their remarks largely amount to reasons why current practices are logistically, methodologically, or socially desirable, or even inevitable. Their basic message seems to be that things aren't so bad.

Both Burt Barnow and Howard Rolston assert that the ethical dark cloud of artificial scarcity has the silver lining of recruiting a potentially more diverse subject pool. Peter Schochet, in a passing acknowledgment of the difficult issues of distributive justice raised by the targeting of social experiments, assures us that this shortcoming isn't his fault: it is the way things are done. And Rolston questions the relevance of my enterprise, because I did not sufficiently demonstrate that the Belmont principles apply to social program evaluation. But I did not assert that the Belmont principles are necessarily the right ones. Rather, my point, which none of the panelists disputes, is that there are currently no publicly accepted principles. If Belmont doesn't fit, what does?

Rolston's first point may be the most useful in getting beyond this impasse, toward a more positive framing of the issues. He notes that there is a fundamental difference between social programs and medical interventions with respect to the social externalities that they generate. Welfare benefits, for example, are designed to reduce material hardship (good for individual families), but not at the expense of parental responsibility (good for society). Good social programs achieve a favorable balance between these individual and social goods. He notes that there is no analogous imperative for balance in the medical arena, since improved individual health often generates positive social externalities. This is true, but it begs (or perhaps raises?) the questions that I thought I had posed: Given that research on social programs may impose hardship on individuals in the name of future benefits to members of their social class and/or society at large, by what ethical principles should researchers circumscribe those hardships? Should the line protecting subjects be drawn after public debate, or are private understandings enough? Assuming that a public consensus can be reached, should there be institutional mechanisms to ensure that principles are applied, or is the current honor system adequate?

Again I would emphasize that the Belmont principles, as interpreted by the medical research community, are not necessarily best suited to govern federal social program evaluations. There might be some other ethical principles or guidelines that the community would embrace. For instance, constructive, positive principles or practices might include something like these:
