In this issue of the journal we present two essays (Lavis et al. 2005, Greenhalgh and Russell 2005), two commentaries (Pope et al. 2005, Roger 2005) and responses from Lomas (Lomas 2005a) and Lavis (Lavis 2005) under the umbrella title Perspectives on Evidence, Synthesis and Decision-Making. Together with Lomas’s essay in our inaugural issue (Lomas 2005b), they provide sometimes complementary, sometimes competing, but invariably thoughtful perspectives on how (and to what extent) research and other evidence can be synthesized, presented and received to inform health policy and management decisions. At the risk of overkill, I now dip my oar in these already busy waters to consider the role of researchers in the decision-making process.

In their commentary, Pope and colleagues make a useful distinction between reviews or syntheses that provide knowledge support and those that offer decision support. The decision support mode engages researchers with policy makers, managers, stakeholders and “experts” in a consideration of policy options. Beyond summarizing and clarifying the relevant evidence, what are the appropriate roles of researchers in this inevitably messy process? They might reasonably be expected to advocate for the research evidence, challenging policies that ignore or defy evidence pointing to a high probability of public benefit or harm. But should the line be drawn there, or is it appropriate for researchers to join the policy fray as advocates for ideas, values or policies? Aren’t researchers’ ideas as good as anyone else’s – maybe better, if they’ve been immersed in the policy area under consideration, sometimes for an entire career?

On the other hand, researchers’ ideas and arguments reflect not just their expertise but their entire life experience, and they may contain varying degrees of personal or professional self-interest. Wearing the mantle of objectivity, researchers are well positioned to inject their values into policy discussions in the guise of evidence. By becoming advocates for ideas, researchers may debase their currency as purveyors of evidence. Clearly, these are questions that individual researchers, research synthesis teams and those who establish the rules of engagement for a decision support synthesis process will need to address and resolve.

However, as Greenhalgh and Russell (2005) so forcefully point out, conventional evidence – synthesized or not – usually plays a marginal role in policy decision-making. When considered at all, (highly selected) evidence usually enters the process through intermediaries such as in-house analysts, “experts,” advisers, lobbyists, interest groups, journalists or decision-makers themselves. At a recent workshop on conducting and commissioning syntheses for managers and policy makers,* Phil Davies, Deputy Director of the UK Government Social Research Unit, described the “evidence chain” of 55 top UK policy makers. Asked where they turned for guidance, they listed their sources in the following order: special advisers, “experts,” think tanks/opinion formers, lobbyists and pressure groups, professional associations, media and constituents/consumers/users. Academics were not even mentioned. In Davies’s words, academics are “at the level of plankton” in the evidence chain. The message seems clear.
If researchers want to change the world, they need to become “experts.” Leaving aside the question of how this metamorphosis occurs – whether by serendipity or design – the dilemma about the proper role of the researcher, described above in the context of the decision support synthesis, emerges once again. However, to the extent that the researcher “expert” provides policy advice informally or behind closed doors, the restraining effect of transparency on the free expression of value-based argument masquerading as evidence is lost. Whether a researcher can achieve and maintain the status of expert while remaining simply a purveyor of, and advocate for, research evidence (and whether such “objectivity” is even possible) remains in doubt. Having described policy making as the “messy unfolding of collective action, achieved mostly through dialogue, argument, influence and conflict,” Greenhalgh and Russell conclude that all’s fair in policy making as long as the participants are aware of and explicit about how they and others play the game. Accepting this view, the least that can be expected of researchers who actively engage in the policy-making process is explicitness about their interests, premises and values.