Abstract


 
 
 This article examines whether bottom-up artificial moral agents are capable of making genuine moral judgements, specifically in light of David Hume's is-ought problem. The latter holds that evaluative assertions can never be derived from purely factual propositions. Bottom-up systems, by contrast, are those designed through evolutionary, developmental, or learning techniques. This paper examines the nature of such systems with the aim of preliminarily assessing whether there are good reasons to suspect that, at the foundational level, their moral reasoning capabilities are vulnerable to the no-ought-from-is thesis. The main hypothesis of the present work is that a conceptual analysis of the notion of bottom-up artificial moral agents reveals that their seeming moral judgements lack a proper philosophical basis. For one, such artifacts arrive at an understanding of ethically relevant ideas by culling data or facts from the environment. Thus, in relation to the is-ought problem, it may be argued that, even if bottom-up systems seem prima facie capable of generating apparent moral judgements, these judgements in fact lack sound moral grounding, if they are not empty of any ethical value altogether.
 
 
