Aideen Dunne, Angela Scriven from Brunel University and Carole Furlong from NHS Harrow assess the thorny relationship between evaluation, evidence and funding for health promotion interventions.

There is an emerging, problematic and synergistic association between evidence of the effectiveness of health promotion interventions and the availability of funding. This was recently highlighted in a UK Government directive calling for greater accountability for public health funding, which has resulted in Directors of Public Health having to decide which health improvement initiatives should be sustained and which should be cut.1 From April 2013, councils will be given a ring-fenced budget, a share of around £5.2 billion based on 2012/13 funding, and will be able to choose how they spend it according to the public health needs of their population. More importantly, those who make the most improvements will be rewarded with a cash incentive: local authorities will be paid a new health premium for the progress they make against the public health indicators.2 Funding linked to evidence is now the norm.

Such economic expediency applied to health promotion funding highlights a fundamental requirement. In times of financial rationalisation, the field of health promotion has to embrace an evidence-informed approach, with the primary source of evidence derived from evaluating practice and from reporting and disseminating the results. For this to work, health promotion must be high on the political agenda, there must be effective evaluation systems in place to generate a body of evidence of effectiveness, and there must be funding available. If any one element in this three-way relationship is absent or weak, the relationship fails and the other elements become significantly challenged. This presents a Catch-22 scenario.
Funding is needed to pull evidence from practice through evaluation, before this evidence can be used to influence the political agenda and justify a continued funding stream.3

There are other obstacles to evaluating health promotion. There has long been a consensus that rigorous and appropriate evaluation makes a vital contribution to the development of practice and knowledge4 and reduces uncertainty in the planning and commissioning of future health promotion interventions.5 Despite this consensus, evaluation in practice has been variable and ad hoc. This is the result of several conundrums: a lack of universal agreement on what counts as evidence in health promotion; a lack of guidance on what constitutes rigorous evaluation; and, consequently, the use of inappropriate methods to derive evidence from practice.6,7

A key problem is that the principles underpinning the practice of health promotion are complex. Interventions often target the wider determinants of health and are developed specifically for the situations and communities within which they are implemented; they are not standardised procedures but often dynamic processes.6 Health promotion can be both a process and an outcome, and evaluation of the success of a health promotion intervention has to capture multiple effects. Within the context of this complexity, it becomes very difficult to prioritise or define what counts as evidence.

There are those who argue that it is inappropriate to take protocols from other fields of practice (such as the biomedical approach to evidence) and apply them directly to health promotion. The hierarchy of evidence used by other practitioners often does not lend itself to the complex nature and degree of uniqueness of some health promotion interventions.8 However, some form of parameters or guidance is required to ensure a degree of standardisation of approach and generalisability of evaluation findings across health promotion practice settings.
Evaluations may also fail to capture the explosion of effects (expected and unexpected) that can occur at an individual, organisational or community level. …