"Meaningful human control" (MHC) is a term that originated in the political and legal debate on autonomous weapons systems, but it is nowadays also used in many other contexts. It is supposed to specify the conditions under which an artificial system is under the right kind of control to avoid responsibility gaps: that is, situations in which no moral agent is responsible. Santoni de Sio and Van den Hoven have recently proposed a framework that system designers can use to operationalize this kind of control. The purpose of this paper is to facilitate further operationalization of "meaningful human control".

This paper consists of two parts. In the first part I resolve an ambiguity that plagues current operationalizations of MHC. One of the design conditions says that the system should track the reasons of the relevant agents. This condition is ambiguous between two kinds of reasons: on one interpretation the system should track motivating reasons, on the other it should track normative reasons. Current participants in the debate interpret the framework as being concerned with (something in the vicinity of) motivating reasons. I argue against this interpretation by showing that meaningful human control requires that a system track normative reasons. Moreover, I maintain that an operationalization of meaningful human control that fails to track the right kind of reasons is morally problematic.

Once this is properly understood, it can be shown that the framework of MHC is committed to the agent-relativity of reasons. More precisely, I argue in the second part of this paper that if the tracking condition of MHC plays an important role in responsibility attribution (as the proponents of the view maintain), then the framework is incompatible with first-order normative theories that hold that normative reasons are agent-neutral (such as many versions of consequentialism). In the final section I present three ways forward for the proponent of MHC as reason-responsiveness.