Abstract
Given the constraints of remote communication and the unpredictability of the environment, autonomous planetary landing systems are expected to meet high standards of autonomy and provide optimal trajectories in future space exploration missions. As a result, applying Deep Reinforcement Learning (DRL) techniques to autonomous landing has produced encouraging findings. However, due to the black-box nature of deep learning algorithms, one of the main concerns regarding the robustness of DRL is its vulnerability to adversarial attacks. This limitation prevents the transfer of DRL-based autonomous landing schemes from simulation to real-world applications. In this article, we explore how DRL-based autonomous landing is impacted by adversarial attacks and how to protect the system effectively and efficiently. To this end, a Long Short-Term Memory (LSTM) based adversarial attack detector is proposed. The proposed method monitors an explainability measure of the target DRL scheme and flags adversarial attacks while the agent is acting. The proposed method is built and tested on a 3D digital terrain model of the candidate landing site for the Mars 2020 mission in Jezero Crater to simulate a landing scenario on Mars. The experimental results demonstrate that the proposed methodology can effectively detect adversarial attacks acting on the DRL agent with high detection accuracy.