- Addendum. Ingvild Bode + 3 more. Ethics and Information Technology, Dec 24, 2025. doi:10.1007/s10676-025-09887-6
- Research Article. Erkan Basar + 3 more. Ethics and Information Technology, Dec 22, 2025. doi:10.1007/s10676-025-09877-8
- Research Article. Joel Krueger + 1 more. Ethics and Information Technology, Dec 22, 2025. doi:10.1007/s10676-025-09871-0
  Abstract: Generative AI chatbots like OpenAI's ChatGPT and Google's Gemini routinely make things up. They "hallucinate" historical events and figures, legal cases, academic papers, non-existent tech products and features, biographies, and news articles. Recently, some have argued that these hallucinations are better understood as bullshit: chatbots produce streams of text that look truth-apt without concern for the truthfulness of what this text says. But can they also gossip? We argue that they can. After some definitions and scene-setting, we focus on a recent example to clarify what AI gossip looks like before considering some distinct harms — what we call "technosocial harms" — that follow from it.
- Research Article. Tae Wan Kim. Ethics and Information Technology, Dec 19, 2025. doi:10.1007/s10676-025-09885-8
- Research Article. Damin Yee. Ethics and Information Technology, Nov 25, 2025. doi:10.1007/s10676-025-09879-6
- Research Article. Elizabeth Stewart. Ethics and Information Technology, Nov 25, 2025. doi:10.1007/s10676-025-09880-z
- Research Article. Torben Swoboda + 6 more. Ethics and Information Technology, Nov 25, 2025. doi:10.1007/s10676-025-09881-y
- Research Article. Mitchell Roberts. Ethics and Information Technology, Nov 25, 2025. doi:10.1007/s10676-025-09884-9
- Research Article. Mathilda Marie Mulert. Ethics and Information Technology, Nov 25, 2025. doi:10.1007/s10676-025-09878-7
  Abstract: This paper draws a moral comparison between technologically facilitated rape simulations and rape simulations between humans. Specifically, it investigates a previously unexplored ethical puzzle: while many regard the use of ‘rapebots’ — sex robots designed specifically for rape simulations — as morally impermissible, the practice of consensual non-consent (CNC), i.e. consensual rape role-play between human partners, appears less troubling. Yet both are instances of rape simulations in which all individuals capable of granting or withholding consent do consent. Are rapebot use and CNC, therefore, morally equivalent? I argue that they are not. Although rapebot use and CNC share similar content, they differ structurally: the former involves a solitary individual enacting fantasies unilaterally, while the latter occurs within a relational framework, foregrounding consent, negotiation, and respect. To explain why this structural difference matters morally, I introduce the mechanism of contextual negation by moral opposition. This mechanism posits that simulations of wrongdoing can be morally mitigated when their context explicitly affirms the values the simulated act would violate. While this can apply to CNC, it necessarily fails for rapebot use. Therefore, although some cases of CNC are morally permissible, the use of rapebots is always impermissible. This argument has broader implications for the ethics of technologically facilitated simulations.
- Research Article. Aurélie Halsband. Ethics and Information Technology, Nov 25, 2025. doi:10.1007/s10676-025-09882-x
  Abstract: Socially disruptive technologies can induce normative disorientation. This occurs as they disrupt established concepts that have traditionally provided normative guidance. A notable example of such technology-induced conceptual disruption is the advent of ventilator technology. Patients who lost brain stem activity and autonomous ventilation, yet remained alive through ventilator support, created a state of uncertainty: they were considered “dead” in terms of (autonomous) ventilation and brain activity, but “alive” in terms of cardiac function. This descriptive ambiguity led to normative disorientation, particularly among clinicians and patients’ relatives. In response, conceptual engineering and the introduction of the new concept of “brain death” have been identified as critical steps toward re-establishing normative clarity in the wake of socially disruptive technologies. However, the capacity of conceptual engineering to resolve such disruptions is often overstated. For engineered concepts to effectively restore descriptive and normative orientation, they must engage with the underlying moral considerations that constitute the foundation of normative guidance. Through the case study of “brain death,” this paper examines methodological challenges at the intersection of engineered concepts and normative frameworks. It applies the method of reflective equilibrium as a bridge between conceptual engineering and moral reasoning, thereby enriching the discourse on resolving technology-induced moral disruptions.