Decision modeling and cost-effectiveness analysis have become important tools to inform clinical decision making and policy development, playing roles in policy decisions for human immunodeficiency virus and for breast and colon cancer screening, as well as in clinical decisions such as the prevention of cardiac disease. The establishment and use of best practices for model development, validation, and reporting are key steps in ensuring that model users have information they can trust.

In this issue of Medical Decision Making, Eddy and others describe practices for model transparency and validation recently developed by the International Society for Pharmacoeconomics and Outcomes Research (ISPOR)–Society for Medical Decision Making (SMDM) Modeling Good Research Practices Task Force. The authors recommend that modelers 1) provide a freely accessible nontechnical description of the model (VII.1), including a description of the external validation methods and results (VII.1 and VII.6) and of the verification (internal validation) methods (VII.4); 2) make available (openly or under licensing agreements) technical documentation sufficiently detailed that expert readers can evaluate and potentially reproduce the model (VII.2); and 3) assess face validity (VII.3), internal validity (verification) (VII.4), and external validity (VII.5–VII.11).

Previous guidelines and recommendations have discussed the importance of transparency and validation, urging developers to clearly report model structure, data, equations, and assumptions, as well as methods to validate or check model consistency. However, these new ISPOR-SMDM recommendations set a new standard by encouraging the sharing of technical details, including code. The guidelines arrive at a time of heightened interest in transparency across research communities.
Earlier this year, in a report investigating difficulties in evaluating and reproducing key translational omics findings that were used for treatment choice in later clinical trials (and that were later retracted), the Institute of Medicine issued a call for greater transparency and sharing of data and code within translational omics and in the broader research community.

Eddy and colleagues appropriately note that concerns about intellectual property, model misuse, and costs may limit sharing of model details. The authors also suggest that if a model is accurate (validated), transparency may be of lesser importance: "Ultimately, what matters most is whether a model accurately predicts what occurs in reality." However, accuracy does not diminish the need for transparency, and there may be ways to address the identified challenges. First, an accurate model still needs to be transparent. What degree of accuracy is sufficient to eliminate the need for transparency? Against which data, exactly, should predictions be measured? Both are matters of judgment. Second, if prospective (predictive) validation is the highest standard of validation, as these good practice recommendations imply, that information may become available only years after publication, limiting its usefulness at the time of decision making. Finally, too narrow a focus on predictive accuracy may unnecessarily discount findings from a model that cannot be thoroughly validated in this manner. The primary purpose of decision

From VA Palo Alto Health Care System, Palo Alto, California, and Stanford School of Medicine, Stanford, California.