Abstract

Autonomous vehicles promise to be safer than their manually driven counterparts, yet they are not completely safe: collisions are practically unavoidable. Autonomous vehicles therefore need algorithms that determine how they ought to respond when a collision is highly probable or inevitable. The accident scenarios autonomous vehicles might face have frequently been likened to the dilemmas of the trolley problem. In this review article, we critically examine this ubiquitous analogy. We observe three basic respects in which the ethics of accident algorithms for autonomous vehicles and the philosophy of the trolley problem differ: (a) whether the algorithmic design follows a stakeholder model or an agency model; (b) legal frameworks and moral responsibility; and (c) modelling low-latency decision-making under uncertainty and risk. Reviewing these three areas of disanalogy, we find that the trolley problem is an abstraction of little relevance to the real-life crash scenarios of autonomous vehicles. Every crash scenario is unique to the people it affects, passengers and pedestrians alike. Care ethics appears to be a more suitable approach for such situations, as its conclusions adapt to real-life context.
