The field of cybersecurity has changed dramatically since the European Commission and the High Representative of the Union for Foreign Affairs and Security Policy presented the Cybersecurity Strategy for the Digital Decade in December 2020. The Strategy highlights the potential of AI as an emerging technology, but also the need to secure AI technology itself. Indeed, since the Strategy was adopted, AI has demonstrated enormous potential for growth, but also the risks and vulnerabilities that this new technology brings with it. This paper analyses the shift and further development in the cybersecurity of digital products and services, of AI as a technology in its own right, and of products and services that will incorporate an AI component. In our view, the way to ensure that not only AI technology itself but also AI-featured products and services are cyber-secure is to achieve a high level of standardisation of best practices, an area in which many gaps remain. The adoption of technical standards would pave the way for conformity assessment and certification not only of AI systems but also of AI-featured digital products and services. However, the current regulatory trend is to adopt comprehensive legal regulation of AI even before such technical standards have been fully developed and adopted. We consider this approach risky. Despite the well-intentioned effort to define and regulate AI, the purpose set out in the AIA may not be achieved, as requirements adopted in this way can quickly become unnecessarily burdensome or even obsolete in the face of accelerating technological development. The recent rise of large ML models, known as foundation models, which has significantly changed the previous understanding of how AI systems are built, illustrates this point. It is the technological development of AI, AI-specific standardisation, and the subsequent certification of digital products and services that will govern future efforts to build Europe's cyber resilience.