Abstract

The aerospace community and industry have recently shown increasing interest in the use of Artificial Intelligence (AI) for space applications, driven in part by the rise of the NewSpace economy. AI is already in extensive use in spacecraft operations, for example to support the efficient operation of satellite constellations and system health management. However, since most critical infrastructures rely on space systems, the adoption of new technologies, such as AI algorithms or increased on-board autonomy, introduces further vulnerabilities at the system level. AI cybersecurity is therefore becoming an important aspect of ensuring space safety and operational security. Beyond identifying new vulnerabilities that AI systems may introduce to space assets, this paper surveys safety guidelines and technical standards developed for terrestrial applications that could be applied to AI systems in space. Existing policy guidance on cybersecurity and AI, particularly in the European context, is discussed. To promote the safe use of AI technologies in space, this work underlines the urgency for policymakers, governance bodies, and technical institutions to initiate or further support the development of a suitable framework to address the new cyber-vulnerabilities that AI technologies introduce when applied to space systems. The paper proposes a regulatory approach based on technical standardisation in the field of AI, built on multidisciplinary research into AI applications in non-space sectors where the level of autonomy is more advanced.
