Abstract
This article examines the principles that should govern the regulation of autonomous weapons, some of which have already been incorporated into International Humanitarian Law (IHL), while others remain merely theoretical. The distinction between civilians and combatants, the closing of accountability gaps, and proportionality are fundamental principles for regulating the military use of artificial intelligence (AI), to which meaningful human control over military AI must be added. Using the hypothetical-deductive method, with a qualitative approach and a bibliographic review, the study concludes that the criterion of distinction, value-sensitive design, the elimination of accountability gaps, meaningful human control, and IHL must underpin the regulation of autonomous weapon systems. However, distinguishing civilians from combatants and assessing proportionality are not yet technologically feasible, so compliance with IHL still depends on meaningful human control; moreover, the opacity of military AI algorithms would make legal accountability for their use difficult.