The three-paper series on guidance for evidence-informed decisions about health systems, published in PLoS Medicine and produced by members of the World Health Organization (WHO) Task Force on Developing Health Systems Guidance, offers important contributions to improving the quality of evidence-informed decision-making in health systems [1–3]. We recognize the importance of engendering greater structure and systematization in processes that collate and evaluate evidence and bring it to bear on policy. However, there are significant challenges in doing this for policies related to health systems, and we caution against the adoption of rigid approaches to the development of guidance and to the application of evidence to policy.

Recognizing the growing international consensus on the importance of strengthening health systems, particularly in low- and middle-income countries (LMICs), the first paper argues that better guidance is needed to support evidence-informed decisions about interventions in health systems, analogous to the methods that have been used to develop clinical guidelines, and to facilitate their implementation [1]. The second paper seeks to identify a series of practical processes and tools for policy development at international and national levels, and for developing guidance at the national level [2]. Many of the same authors have developed the Supporting Policy Relevant Trials (SUPPORT) tools [6], which provide the basis for a very systematic approach to organizing questions about health systems problems and decisions influenced by evidence (Tables 1–3 in [2]). The third paper attempts to adapt guidelines used in clinical evidence-based medicine to assess the quality of health systems evidence using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) criteria [3,7].

In the first paper, Xavier Bosch-Capblanch et al.
identify multiple uses of guidance on health systems from a review of national policies and plans in LMICs, but for some of the guidance identified—such as operational guidelines for procurement, human resource management, or planning and budgeting procedures—one wonders whether research evidence is critical. The authors offer little guidance as to where health systems guidance is most needed. Given the fairly resource-intensive approach proposed for producing health systems guidance, a clear sense of priorities is required, along with a recognition that sometimes adherence to "best practice" may be sufficient.

The papers pay relatively little attention to the well-known "policy-implementation gap", and sometimes appear to presume that getting policy right is sufficient. The WHO essential medicines program has achieved significant success in promoting the widespread adoption of an extensive set of guidelines (at global, national, and local levels) related to the development of evidence-based policies, institutions, and procedures [4]. Over 150 countries have adopted essential medicines lists based on thoughtful and evidence-informed guidance [5]. Yet, despite this significant success, the implementation of essential medicines policies continues to face enormous challenges, with widespread irrational medicines use and a growing threat of counterfeit medicines.

In practice, policy development is rarely a one-time event; rather, it is a continuous process. National-level policy decisions may provide the overarching framework for change, but commonly the details of policy change are worked out on the ground through implementation processes and reflected in more informal expressions of policy, such as ministerial memos and training manuals.
While the second paper in the series [2], by John Lavis et al., portrays a relatively clean process of interaction between global guidance and national guidance and policy, in practice there are likely to be multiple policy iterations as problems and issues emerge. Accordingly, while establishing structured processes to promote evidence use, we must not lose sight of the importance of building networks of researchers and policy makers to facilitate ongoing, dynamic interaction.

The authors of the third paper, Simon Lewin et al., note that systematic evidence is needed to address questions of the feasibility and acceptability of interventions, as well as their effectiveness, though much of the discussion in the paper addresses evidence regarding "what can work", a question for which GRADE criteria function well. But policy makers may be more interested in questions such as "what can work in our (non-research) environment?", "how can we make an intervention work well?", or "how can we overcome obstacles to implementation in our situation?" They are also likely to be more concerned about broader unintended consequences of an intervention (e.g., its political ramifications) [8], the type of results that are often not well examined by typical research on "what can work". These