Over the past decades, industries and governments have come to rely increasingly on space data-centric and data-dependent systems. This reliance has led to the emergence of malicious activities, also known as cyber threats, targeting such systems. To counter these threats, new technologies such as Artificial Intelligence (AI) have been implemented and deployed. Today, AI is highly capable of delivering fast, precise, and reliable command-and-control decision-making, as well as providing reliable vulnerability analysis using well-proven cutting-edge techniques, at least when applied to terrestrial applications; this may not yet be the case for space applications. AI can also play a transformative role in the future of space cybersecurity, which raises questions about what to expect in the near-term future.

The challenges and opportunities arising from the adoption of AI-based solutions to achieve cybersecurity, and later cyber-defence, objectives in both civil and military operations call for a rethinking of frameworks and ethical requirements. Most of these technologies were not designed to be used in space or to overcome the challenges that space poses. Because of the highly contested and congested environment, as well as the highly interdisciplinary nature of threats to AI and Machine Learning (ML) technologies, including cybersecurity issues, a solid and open understanding of the technology itself is required, together with an understanding of its multidimensional uses and approaches. This includes the definition of legal and technical frameworks, ethical dimensions, and other concerns such as mission safety, national security, and technology development for future uses. The continuous endeavour to create a framework and regulate the interdependent uses of combined technologies such as AI and cybersecurity to counter "new" threats requires the investigation and development of "living concepts" to determine in advance the vulnerabilities of networks and AI.

This paper defines a cybersecurity risk and vulnerability taxonomy to enable the future application of AI in the space security field. Moreover, it assesses to what extent a network digital-twin simulation can protect networks against relentless cyber-attacks in space targeting users and ground segments. Both concepts are applied to the case study of Earth Observation (EO) operations, which allows conclusions to be drawn based on the business impact (reputational, environmental, and social) of a malicious cyber activity. Since AI technologies are developing rapidly, a regulatory framework for this technology and its use in space is proposed, combining ethical and technical approaches.