Abstract
As large language models (LLMs) become more deeply integrated into various sectors, understanding how they make moral judgments has become crucial, particularly in the realm of autonomous driving. This study used the Moral Machine framework to investigate the ethical decision-making tendencies of prominent LLMs, including GPT-3.5, GPT-4, PaLM 2, and Llama 2, comparing their responses to human preferences. While the preferences of LLMs and humans are broadly aligned, such as prioritizing humans over pets and favoring saving more lives, PaLM 2 and Llama 2 in particular exhibit distinct deviations. Moreover, despite the qualitative similarity between LLM and human preferences, there are significant quantitative disparities, suggesting that LLMs lean toward more uncompromising decisions than the milder inclinations of humans. These insights elucidate the ethical frameworks of LLMs and their potential implications for autonomous driving.
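To make the study design concrete, below is a minimal sketch of how one might present a Moral Machine-style dilemma to an LLM and record its binary choice. This is an illustrative assumption, not the paper's actual protocol: the prompt wording, the scenario fields, and the use of the OpenAI chat-completions client are stand-ins for whatever interfaces the authors used for each model.

```python
# Hypothetical sketch: present a Moral Machine-style dilemma to an LLM
# and record its binary choice. Prompt wording and scenario fields are
# illustrative assumptions, not the paper's actual protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_dilemma(model: str, option_a: str, option_b: str) -> str:
    """Ask the model to choose between two crash outcomes; return 'A' or 'B'."""
    prompt = (
        "A self-driving car with sudden brake failure must choose:\n"
        f"Option A: {option_a}\n"
        f"Option B: {option_b}\n"
        "Answer with a single letter, A or B."
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # near-deterministic answers for repeatable tallies
    )
    return resp.choices[0].message.content.strip()[:1].upper()


# Example: the species dimension (humans vs. pets) of the Moral Machine.
choice = ask_dilemma(
    "gpt-4",
    option_a="swerve, killing two dogs crossing the street",
    option_b="continue straight, killing two pedestrians",
)
print(choice)
```

Repeating such queries over systematically varied scenarios (species, number of lives, age, and so on) and tallying the choices is, presumably, what enables the quantitative comparison with human Moral Machine preferences described above.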