Abstract

The introduction of new forms of artificial intelligence (AI) into military weaponry, specifically autonomous weapon systems (AWS) that can select and engage targets without human intervention, has revolutionized armed conflict. The principal concern about the military application of AI is that the use of force should remain in the hands of human soldiers. There is an urgent need to reinterpret the threshold for triggering an international armed conflict, because AI technology could unintentionally cause war during border-control or surveillance operations. This article focuses predominantly on fully autonomous weapon systems, in which human agents are removed from certain applications of force. The article addresses two main research questions. First, could an AWS, acting alone, spark an international armed conflict and thereby bring international humanitarian law into force? Second, can the criteria of organisation and intensity that give rise to a non-international armed conflict be met when AWS are controlled by non-state armed actors? The study examines these questions through the main areas of debate on AWS in international law: the compatibility of AI with the principles of humanitarian law, the determination of international responsibility, and ethical problems.


