Equipped with sophisticated, AI-based driver assistance systems, passenger cars are becoming increasingly intelligent. It seems that within a few years, fully autonomous vehicles will operate without any driver intervention. In this context, researchers are addressing the question of how fully automated vehicles should make decisions in critical situations. Should they spare the driver, children jumping into the road, or elderly people standing on the sidewalk? Projects such as MIT’s Moral Machine are investigating the preferences of people from different nations and cultures regarding ethical decision algorithms. However, evaluations of these automated decisions and of how they may impact consumer perception and well-being are still scarce. In our experimental study, participants experienced a simulator-based driving situation in a fully autonomous car, after which they were confronted with alternative scenarios requiring automated action by the car in a critical situation. We measured the emotional state and well-being of our participants (N=33) in those critical situations using facial expression recognition (FER), electroencephalography (EEG), and standardized questionnaires. The results show detectable differences between the scenarios with respect to emotions as well as subjective well-being and behavioral intentions in the participants’ questionnaire responses. Regarding FER and EEG, no statistically significant differences could be shown due to the small subsample.