Evaluation is a vital enterprise for social programs. Well-done evaluations are helpful for program administrators, program workers, clients, policymakers, and the public at large in whose name the program is conducted. Each stakeholding group can learn some things that are primarily of interest to itself, but the most significant contributions of a competent evaluation are important to all stakeholders. Good evaluations provide useful information. They illuminate program strengths and weaknesses. They can lead to improvements in individual programs. In the long run, they also lead to a more effective and efficient mix of programs offered to clients.

This promise, however, is largely unmet because decision makers make little use of evaluations (Braverman & Campbell, cited in Royce, 1992; Leviton & Hughes, 1981). Evaluations are of use only if they provide adequate information, but many are based on designs that lack the necessary rigor. A study of 2,231 outcome evaluations published during a 10-year period indicated that over two-thirds used no comparison group and that more than a third were posttest-only case studies (Goldstein, Surber, & Wilner, 1984). Although in some cases a nonrigorous design is as rigorous as the situation requires, authors of such reports cannot, in good conscience, make strong claims for the results they report. Thus, we should not wonder that decision makers often ignore these products.

Some of the blame for this situation can be laid at the feet of evaluators who do not know how to do a better job, but this part of the problem is probably fading. Most program administrators have access to competent evaluators through faculty of local universities or independent consultants. The question is whether these evaluators are used.

There are several reasons that program administrators do not necessarily want rigorous evaluations. Chief among these is the natural reluctance to seek out what might be bad news and to gather information potentially detrimental to their agency or career. A second reason is that more rigorous evaluations are more troublesome, more expensive, and more time-consuming than less rigorous designs. A final reason is that many program administrators do not have training in program evaluation and do not know how to commission a rigorous evaluation effort. As is commonly known, a large percentage of social program managers have a clinical background. Students with a clinical focus often have trouble seeing the relevance of research or evaluation classes and do not retain such information, even when it is presented in class.

This article offers a few rules of evaluation practice wisdom that have been honed through years of consulting and teaching courses about evaluation. Although presented in a lighthearted way, these rules, if followed, would result in more rigorous and useful evaluations. None of the ideas is totally original, but as a group they provide a handy checklist for practitioners when developing or assessing evaluation plans that cross their desks.

The Big Picture Rule

Evaluations provide information to address real questions. If you don't have any questions, don't conduct an evaluation.

The Big Picture Corollary

The more of your money that is involved, the more questions you have.

Almost every text on evaluation emphasizes that one of the main reasons for evaluations is to help administrators in program planning.
Royce (1992) stated it well: "Program evaluation is applied research used as part of the managerial process." Administrators can use the information from a proper program evaluation to make the program better, especially if formative evaluation questions are asked as well as summative ones. Still, if the administrator of a program has no plans to use the information gathered by an evaluation, there is no point in expending the resources. A lack of curiosity on the part of the program manager concerning how well the program operates and whether it makes a difference in the lives of clients is all too common. …