Abstract

Artificial intelligence, neural networks, speech and behavior recognition systems, drones, and autonomous robotic systems are all widely used by militaries to create a new type of lethal weapon programmed to decide independently whether to use military force. According to experts, the production of such weapons will be a revolution in military affairs comparable to the one brought about by the creation of nuclear weapons.
 
The adoption of fully autonomous combat systems raises a number of ethical and legal issues, the foremost of which is the destruction of a supposed enemy's personnel by a robot acting without a human command. This article focuses on the legal aspects of creating autonomous combat systems, their legal status, and the prospects for an international instrument prohibiting lethal robotic technologies.
 
As a result of the study, the authors conclude that there is no direct legal restriction on the use of fully autonomous combat systems; however, the use of such weapons contradicts the doctrinal norms of international law. The authors also believe that a comprehensive ban on the development, use, and distribution of robotic technologies is hardly possible in the foreseeable future. The most plausible scenario for solving the problem at the international level is a ban limited to the use of this type of military equipment directly in the combat operations of an armed conflict. At the same time, the authors consider it necessary to outline acceptable areas of application for robotic technologies: medical and logistical support of military operations, military construction, the use of mine-clearing robots, and similar humanistically justified measures.

Highlights

  • Military leaders of any state always seek to minimize losses among their own troops

  • Technology has come close to creating fully autonomous deadly systems (ADS): weapon systems of the future that will be able to operate without significant human control, aided by sensors and AI (Warham, 2016)

  • The third group includes companies involved in developing AI technologies, such as facial recognition and speech recognition algorithms, visual perception, and robotics, whose employees refused for ethical reasons to participate in the development of deadly robots


Introduction

Military leaders of any state always seek to minimize losses among their own troops. Digitalization comes to the aid of the military, changing the methods and means of controlling combat operations and increasing the efficiency, safety, and effectiveness of combat units. From the point of view of humanitarian law, AI in the military field can serve humanistic ends, sparing living soldiers actual participation in combat operations. Experts, however, point to many unsolved technical problems that today rule out complete autonomy, speed, survivability, accuracy, and safety in existing combat robots. This is precisely the case when humanity needs to solve the problem before reaching the point of no return.

