Many in the international development community have embraced the randomized controlled field experiment, akin to a biomedical clinical trial for social interventions, as the new “gold evidential standard” in program impact evaluation. In response, critics have called upon the method’s advocates to consider the moral dimensions of randomization, leading to a debate about the method’s ethics. My research intervenes in this debate by empirically investigating how researchers manage the perception of randomization in the field. Without the possibility of a placebo, researchers rhetorically and materially frame the experiment differently for the control and treatment groups. Three technologies allow for this differential framing: geographic separation, temporal delay, and public randomization ceremonies. Geographic separation is a “technology of opacity” designed to obscure unequal resource distribution by disentangling the intervention and research components of the experiment for the control group. The latter two are technologies of transparency, designed to expose the element of randomization while downplaying conditions that may affect participant buy-in. All three technologies work to preclude collective definitions of fair resource allocation, yet they are not fully successful in preventing modes of confrontation and resistance that lie outside the experiment’s framing.