Dilemma situations involving the choice of which human life to save in the case of unavoidable accidents are expected to arise only rarely in the context of autonomous vehicles (AVs). Nonetheless, the scientific community has devoted significant attention to identifying appropriate and (socially) acceptable automated decisions should AVs, or their drivers, in fact face such situations. Awad and colleagues, in their now-famous paper “The Moral Machine Experiment”, used a “multilingual online ‘serious game’ for collecting large-scale data on how citizens would want AVs to solve moral dilemmas in the context of unavoidable accidents.” Awad and colleagues undoubtedly collected an impressive and philosophically useful data set of armchair intuitions. However, we argue that applying their findings to the development of “global, socially acceptable principles for machine ethics” would violate basic tenets of human rights law and fundamental principles of human dignity. To make these arguments, our paper draws on principles of tort law, relevant case law, provisions of the Universal Declaration of Human Rights, and rules from the German Ethics Code for Autonomous and Connected Driving.