Abstract
The space industry is growing rapidly and is no longer limited to traditional players such as the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA); it has expanded to medium-sized and small commercial organisations as well. Advances in both hardware and software technologies are driving this expansion. In parallel, the adoption of Artificial Intelligence (AI) and Machine Learning (ML) in the space industry has been surging. ML may be applied to diverse tasks in the space sector, such as assisting astronauts and removing debris from orbit. However, several studies have shown that ML methods, specifically deep learning, are vulnerable to adversarial attacks. These vulnerabilities have been studied mainly on classification tasks; only a few studies have examined adversarial attacks on regression models such as pose estimation. This paper, undertaken as part of the UK FAIR-SPACE Hub, aims to identify adversarial actions against learning methods and their impact in the space domain, taking pose estimation of a space object as an exemplar. Pose estimation is critical, and the consequences of an erroneous pose estimate can be expensive: for example, estimating a wrong pose during the docking of a spacecraft can result in a collision and damage to the assets. In this work, we first analyse the impact of adversarial attacks on pose estimation in space using various adversarial machine learning techniques. We then present the possible implications of existing and emerging defensive strategies for building resilient machine learning for pose estimation. The results show that the optimisation-based attack method generates adversarial examples more effectively than the Iterative Fast Gradient Sign Method (IT-FGSM) and the Generative Adversarial Network (GAN) based AdvGAN method.
In terms of defensive strategies, ML models remain vulnerable, and further work is needed to make them robust against adversarial attacks. The results of this work showcase potential attacks on current and future ML-based space missions and the necessity of making them resilient. We believe that incorporating resilience methods in the design phase may save time and cost, and avoid the potential embarrassment caused by mission failure.