Abstract

Most artificial intelligence technologies are dual-use: they are incorporated into both peaceful civilian applications and military weapons systems. Most existing codes of conduct and ethical principles on artificial intelligence address the former while largely ignoring the latter. Yet when these technologies are used to power systems specifically designed to cause harm, the question must be asked whether the ethics applied to military autonomous systems should also apply to all artificial intelligence technologies that could be used for such purposes. A freeze on research is neither possible nor desirable, but neither is maintaining the current status quo. A comparison of general-purpose and military ethical codes shows that most ethical principles apply to the human use of artificial intelligence systems provided two conditions are met: that the way the algorithms work is understood and that humans retain sufficient control. In this way, human agency is fully preserved and moral responsibility is retained regardless of the potential dual use of artificial intelligence technology.

