  • Open Access
  • Addendum
  • 10.1007/s10676-025-09887-6
Correction to: Ensuring the exercise of human agency in AI-based military systems: concerns across the lifecycle
  • Dec 24, 2025
  • Ethics and Information Technology
  • Ingvild Bode + 3 more

  • Research Article
  • 10.1007/s10676-025-09877-8
Autonomy-supporting chatbots: Endorsing volitional behavior change
  • Dec 22, 2025
  • Ethics and Information Technology
  • Erkan Basar + 3 more

  • Open Access
  • Research Article
  • 10.1007/s10676-025-09871-0
AI gossip
  • Dec 22, 2025
  • Ethics and Information Technology
  • Joel Krueger + 1 more

Abstract: Generative AI chatbots like OpenAI's ChatGPT and Google's Gemini routinely make things up. They "hallucinate" historical events and figures, legal cases, academic papers, non-existent tech products and features, biographies, and news articles. Recently, some have argued that these hallucinations are better understood as bullshit. Chatbots produce streams of text that look truth-apt without concern for the truthfulness of what this text says. But can they also gossip? We argue that they can. After some definitions and scene-setting, we focus on a recent example to clarify what AI gossip looks like before considering some distinct harms — what we call "technosocial harms" — that follow from it.

  • Open Access
  • Research Article
  • 10.1007/s10676-025-09885-8
When work becomes a game: the moral costs of gamified labor
  • Dec 19, 2025
  • Ethics and Information Technology
  • Tae Wan Kim

  • Research Article
  • 10.1007/s10676-025-09879-6
Gamer’s de se imaginative resistance: a descriptive–philosophical resolution to the gamer’s dilemma
  • Nov 25, 2025
  • Ethics and Information Technology
  • Damin Yee

  • Research Article
  • 10.1007/s10676-025-09880-z
Gamification and the virtue of perspective
  • Nov 25, 2025
  • Ethics and Information Technology
  • Elizabeth Stewart

  • Research Article
  • 10.1007/s10676-025-09881-y
Examining popular arguments against AI existential risk: a philosophical analysis
  • Nov 25, 2025
  • Ethics and Information Technology
  • Torben Swoboda + 6 more

  • Research Article
  • 10.1007/s10676-025-09884-9
The Gamer’s Dilemma is not the Developer’s Dilemma
  • Nov 25, 2025
  • Ethics and Information Technology
  • Mitchell Roberts

  • Open Access
  • Research Article
  • 10.1007/s10676-025-09878-7
Contextual negation by moral opposition: rethinking the ethics of (rape) simulations
  • Nov 25, 2025
  • Ethics and Information Technology
  • Mathilda Marie Mulert

Abstract: This paper draws a moral comparison between technologically facilitated rape simulations and rape simulations between humans. Specifically, it investigates a previously unexplored ethical puzzle: while many regard the use of ‘rapebots’—sex robots designed specifically for rape simulations—as morally impermissible, the practice of consensual non-consent (CNC), i.e. consensual rape role-play between human partners, appears less troubling. Yet, both are instances of rape simulations where all individuals capable of granting or withholding consent do consent. Are rapebot use and CNC, therefore, morally equivalent? I argue that they are not. Although rapebot use and CNC share similar content, they differ structurally: the former involves a solitary individual enacting fantasies unilaterally, while the latter occurs within a relational framework, foregrounding consent, negotiation, and respect. To explain why this structural difference matters morally, I introduce the mechanism of contextual negation by moral opposition. This mechanism posits that simulations of wrongdoing can be morally mitigated when their context explicitly affirms the values the simulated act would violate. While this can apply to CNC, it necessarily fails for rapebot use. Therefore, although some cases of CNC are morally permissible, the use of rapebots is always impermissible. This argument has broader implications for the ethics of technologically facilitated simulations.

  • Open Access
  • Research Article
  • 10.1007/s10676-025-09882-x
Disruptive technologies, engineered concepts, and normative guidance
  • Nov 25, 2025
  • Ethics and Information Technology
  • Aurélie Halsband

Abstract: Socially disruptive technologies can induce normative disorientation. This occurs as they disrupt established concepts that have traditionally provided normative guidance. A notable example of such technology-induced conceptual disruption is the advent of ventilator technology. Patients who lost brain stem activity and autonomous ventilation, yet remained alive through ventilator support, created a state of uncertainty: they were considered “dead” in terms of (autonomous) ventilation and brain activity, but “alive” in terms of cardiac function. This descriptive ambiguity led to normative disorientation, particularly among clinicians and patients’ relatives. In response, conceptual engineering and the introduction of the new concept of “brain death” have been identified as critical steps toward re-establishing normative clarity in the wake of socially disruptive technologies. However, the capacity of conceptual engineering to resolve such disruptions is often overstated. For engineered concepts to effectively restore descriptive and normative orientation, they must engage with underlying moral considerations, which constitute the foundation of normative guidance. Through the case study of “brain death,” this paper examines methodological challenges at the intersection of engineered concepts and normative frameworks. It applies the method of reflective equilibrium as a bridge between conceptual engineering and moral reasoning, thereby enriching the discourse on resolving technology-induced moral disruptions.