Abstract

Machine learning (ML) techniques have become pervasive across a range of different applications, and are now widely used in areas as disparate as recidivism prediction, consumer credit-risk analysis, and insurance pricing. Likewise, in the physical world, ML models are critical components in autonomous agents such as robotic surgeons and self-driving cars. Among the many ethical dimensions that arise in the use of ML technology in such applications, analyzing morally permissible actions is both immediate and profound. For example, there is the potential for learned algorithms to become biased against certain groups. More generally, insofar as the decisions of ML models impact society, both virtually (e.g., denying a loan) and physically (e.g., driving into a pedestrian), notions of accountability, blame and responsibility need to be carefully considered. In this article, we advocate for a two-pronged approach to ethical decision-making enabled by rich models of autonomous agency: on the one hand, we need to draw on philosophical notions such as beliefs, causes, effects and intentions, and look to formalise them, as attempted by the knowledge representation community; on the other, from a computational perspective, such theories need to also address the problems of tractable reasoning and (probabilistic) knowledge acquisition. As a concrete instance of this tradeoff, we report on a few preliminary results that apply (propositional) tractable probabilistic models to problems in fair ML and automated reasoning about moral principles. Such models are compilation targets for certain types of knowledge representation languages, can reason effectively in service of certain computational tasks, and can also be learned from data. Concretely, current evidence suggests that they are attractive structures for jointly addressing three fundamental challenges: reasoning about possible worlds, tractable computation, and knowledge acquisition. Thus, they seem like a good starting point for modelling reasoning robots as part of a larger ecosystem where accountability and responsibility are understood more broadly.
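To make the notion of a "tractable probabilistic model" concrete, the following is a minimal, self-contained sketch of one well-known instance, a sum-product network over two binary variables, showing why marginal and conditional queries take only a single linear-time bottom-up pass. The class names, toy structure, and parameters are purely illustrative and are not taken from the article itself.

```python
# A toy sum-product network (SPN): an example of a tractable probabilistic model.
# Marginal and conditional queries each take one pass, linear in network size.
# All parameters below are hypothetical, chosen only for illustration.

class Leaf:
    """Bernoulli leaf over one variable; evaluates to 1.0 if the variable is marginalized out."""
    def __init__(self, var, p_true):
        self.var, self.p_true = var, p_true

    def value(self, evidence):
        if self.var not in evidence:           # variable not observed: sum it out
            return 1.0
        return self.p_true if evidence[self.var] else 1.0 - self.p_true

class Product:
    """Product node: children range over disjoint variables (decomposability)."""
    def __init__(self, children):
        self.children = children

    def value(self, evidence):
        result = 1.0
        for child in self.children:
            result *= child.value(evidence)
        return result

class Sum:
    """Sum node: a weighted mixture of children over the same variables (smoothness)."""
    def __init__(self, weighted_children):
        self.weighted_children = weighted_children

    def value(self, evidence):
        return sum(w * child.value(evidence) for w, child in self.weighted_children)

# Toy SPN: 0.6 * Bern(x1; 0.9) * Bern(x2; 0.2)  +  0.4 * Bern(x1; 0.1) * Bern(x2; 0.7)
spn = Sum([
    (0.6, Product([Leaf("x1", 0.9), Leaf("x2", 0.2)])),
    (0.4, Product([Leaf("x1", 0.1), Leaf("x2", 0.7)])),
])

# Marginal P(x1 = True): one bottom-up evaluation. 0.6*0.9 + 0.4*0.1 = 0.58
print(spn.value({"x1": True}))
# Conditional P(x2 = True | x1 = True): two linear-time passes. 0.136 / 0.58 ≈ 0.234
print(spn.value({"x1": True, "x2": True}) / spn.value({"x1": True}))
```

The key design point, and the reason such models suit the abstract's three challenges, is that the decomposability and smoothness constraints let the same network answer many probabilistic queries exactly, without the exponential summation over possible worlds that general graphical models require.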
