INTRODUCTION. Artificial intelligence (AI) can significantly strengthen the cybersecurity systems of States, but it can also serve as an additional technical means for malicious actions in so-called cyberspace. Recognizing this, States have begun investing in defensive and offensive autonomous cyber capabilities to protect their interests and deter potential adversaries; this has further fuelled the growth of inter-State cyber operations as nations seek a competitive edge in the digital realm. This paper focuses not only on the problems of applying existing norms of international law to situations of malicious use of AI by States, but also on the process by which different actors interpret these norms and, through this interpretation, crystallize a common (or at least converging) understanding of their applicability. More specifically, the paper examines the path towards an understanding of how the norms on the use of force apply to AI-enabled cyber operations.

MATERIALS AND METHODS. The study draws on the works of both Russian and foreign specialists on the law of international information security, as well as on an analysis of the documents and materials of the groups of governmental experts convened under the auspices of the United Nations and of the positions of States. In addition to general scientific methods (analysis, synthesis, induction and deduction), the study applies the theory of transnational legal process, which considers the ongoing discussions of relevant issues on various platforms and, more generally, the interaction of various actors in forming a pool of rules for the responsible use of AI by States through interaction, interpretation and the internalization of the interpreted legal ideas and practices into domestic legal systems.

RESEARCH RESULTS. International efforts to develop a universal set of rules for responsible State behaviour in cyberspace have enjoyed rather modest success. The history of the cyber-related debate analysed here suggests that, for the foreseeable future, the AI-focused discussion will progress outside the framework of a comprehensive treaty. Instead, the legal landscape of AI applications is likely to emerge from soft law instruments and private sector initiatives, which would lead to fragmentation of interpretation and State practice.

DISCUSSION AND CONCLUSIONS. The complication of inter-State cyber operations by AI technology raises additional questions about the application of international law, in particular its norms on the use of force, to AI-powered cyber incidents. The deployment of lethal autonomous weapons systems and the conduct of AI-powered cyber operations could potentially trigger another arms race, this time in AI, as nations seek to develop and acquire these systems to maintain strategic parity. This could destabilize global security and increase the risk of conflict escalation. These and other political and ethical considerations argue in favour of limiting the discretion of States in the use of AI. To date, however, the incentives for NATO States, China and Russia to agree on a binding international instrument limiting the use of AI for malicious purposes appear illusory. Given the historical debate surrounding the application of international law in cyberspace and the development of norms governing responsible State behaviour in the use of information and communication technologies, the corresponding discussions on AI will probably take place outside the development of an international treaty. Further analysis of this development will therefore require examining how transnational norms, such as those emerging from soft law instruments, customary practices and private sector initiatives, shape the international legal landscape of AI application.