Abstract

Artificial Intelligence (AI), the ability of a computer or a robot to perform tasks commonly associated with intelligent beings, represents both a new challenge and a significant opportunity for the future of space activities. Indeed, increasing connectivity and symbiotic interactions between humans and intelligent machines raise significant questions for the rule of law and contemporary ethics, including the rules applicable to liability for damage arising from advanced AI. AI also encompasses a series of complex issues that cut across social, economic, public policy, technological, legal, ethical and national security boundaries. The development of AI-based autonomous systems is equally relevant in the context of military operations and on the battlefield, particularly with the use of drones and, more controversially, Lethal Autonomous Weapons Systems. After outlining the legal and ethical challenges posed by this technology, this article focuses on AI systems for space operations, which raise questions about how such systems interact with existing legal concepts and technical standards. This article also describes how space law is relevant and applicable to the use of AI in the context of space missions. The specific attributes of autonomous space systems may also require reconsideration of the traditional application of the rules governing the authorization of space missions, the international responsibility of States and the liability regime in case of damage. As a precursor to more detailed research in the future, this article seeks to introduce some of the more significant legal issues that AI-driven automated processes might pose for space operations.
