Abstract

The role of ethical judgement in autonomous control systems is gaining prominence. In particular, there is growing concern about `killer robots', drones that can kill on their own, and intelligent autonomous vehicles such as driverless cars. Recent incidents in which autonomous vehicles have killed or injured humans have raised the question of whether such vehicles, no matter how advanced their embedded artificial intelligence and sensor technology, can have an ethical dimension to their behaviour, so that they know when it is right or wrong to take over control from a human driver or to hand control back. This paper describes a fuzzy control approach to machine ethics, showing how an ethics architecture can form part of a control system and calculate when taking over from a human driver is morally justified. A major advantage of the approach is that such an ethical reasoning architecture can generate its own data for learning moral rules, reducing the risk of absorbing human biases and prejudices.
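To make the idea concrete, the following is a minimal fuzzy-inference sketch of a takeover decision, not the paper's actual architecture or rule base: the inputs (collision risk and driver alertness), membership functions, rules, and output weights are all hypothetical, chosen only to illustrate how fuzzy rules could yield a graded "takeover justified" score.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def takeover_justification(risk, alertness):
    """Mamdani-style fuzzy inference with weighted-average defuzzification.

    risk, alertness in [0, 1]; returns a justification score in [0, 1],
    where 1.0 means taking over control is fully justified.
    All membership functions and rules below are illustrative assumptions.
    """
    # Fuzzify the inputs (ramps saturate at the ends of [0, 1]).
    risk_high  = tri(risk, 0.4, 1.0, 1.6)
    risk_low   = tri(risk, -0.6, 0.0, 0.6)
    alert_low  = tri(alertness, -0.6, 0.0, 0.6)
    alert_high = tri(alertness, 0.4, 1.0, 1.6)

    # Rule base: (firing strength = min of antecedents, output level).
    rules = [
        (min(risk_high, alert_low),  1.0),  # high risk, drowsy driver -> take over
        (min(risk_high, alert_high), 0.6),  # high risk, alert driver  -> lean toward takeover
        (min(risk_low,  alert_low),  0.4),  # low risk, drowsy driver  -> lean toward handing back
        (min(risk_low,  alert_high), 0.0),  # low risk, alert driver   -> leave control with driver
    ]

    total = sum(strength for strength, _ in rules)
    if total == 0:
        return 0.5  # no rule fires: stay neutral
    return sum(strength * out for strength, out in rules) / total
```

For example, a high collision risk combined with a drowsy driver drives the score toward 1 (takeover justified), while low risk and an alert driver drive it toward 0. A learning component, as the paper suggests, could tune such rules from system-generated data rather than human-labelled examples.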
