Abstract

Ethics is inherently a multiagent concern. However, research on AI ethics today is dominated by work on individual agents: (1) how an autonomous robot or car may harm or (differentially) benefit people in hypothetical situations (the so-called trolley problems) and (2) how a machine learning algorithm may produce biased decisions or recommendations. The societal framework is largely omitted. To develop new foundations for ethics in AI, we adopt a sociotechnical stance in which agents (as technical entities) support autonomous social entities or principals (people and organizations). This multiagent conception of a sociotechnical system (STS) captures how ethical concerns arise in the mutual interactions of multiple stakeholders. These foundations would enable us to realize ethical STSs that incorporate social and technical controls to respect the stated ethical postures of the agents in those STSs. The envisioned foundations require new thinking, along two broad themes, on how to realize (1) an STS that reflects its stakeholders' values and (2) individual agents that function effectively in such an STS.
