Abstract
The application of artificial intelligence (AI) technology for military use is growing rapidly. As a result, autonomous weapon systems are eroding humans’ decision-making power: once such weapons have been deployed, humans may be unable to change or abort their targets. Although autonomous weapons hold significant decision-making power, they are not currently able to make ethical choices. This article focuses on the ethical implications of AI integration in the military decision-making process and on how the characteristics of AI systems with machine learning (ML) capabilities might interact with human decision-making protocols. The authors suggest that, in the future, such machines might be able to make ethical decisions that resemble those made by humans. A detailed and precise classification of AI systems, based on strict technical, ethical, and cultural parameters, would be critical for identifying which weapon is suitable, and the most ethical, for a given mission.