Policy processes evolve in highly institutionalized environments, and policy effects are generally shaped by these environments. Policy outcomes could therefore be expected to be patterned and even predictable. In practice, however, predicting policy outcomes is difficult. In part this lack of predictability relates to policy itself, as many policies aim at institutional change: new norms, new rules, new patterns of behavior. However, policy is not the only force influencing such institutions. Moreover, institutional change can take place in the absence of a policy directed at it. This can happen as the outcome of interaction among many different societal actors reacting to external change or internal tensions. These societal actors find themselves in different institutional environments, which means that in explaining the outcomes of a policy process no single institutional framework can be presupposed, but rather a multiplicity of more or less connected and overlapping institutionalized patterns.

Institutions ‘work’ if and when they appear meaningful to actors. Here the term institution actually refers to two parallel phenomena: formal institutions and stabilized practices or action routines. In our view, these two phenomena come together in actors’ repertoires, defined as stabilized ways of thinking and acting. But repertoires do not fully determine thinking and acting, as there are always circumstances to which existing repertoires do not fully fit. Therefore, new sense making and the generation of new patterns of action are needed, which implies a change of repertoires elicited by experience and/or reflection. We call this learning. The central question of our research, then, is how and when such learning takes place, especially by government and governmental agencies.

In this paper we elaborate these preliminary notions with respect to the impact of policy evaluations by the Dutch Court of Audit (CoA).
This focus combines a highly and explicitly institutionalized setting with a relatively explicit learning function. After a short introduction to the CoA, we elaborate our theoretical argument and illustrate it with examples from three case studies of the impact of CoA evaluations: State Museums, the Lynx Helicopter, and Government Information Campaigns.