Abstract

As the use of drones continues to grow, their capabilities pose a threat to airspace safety when they are misused. Deploying AI models to intercept these unwanted drones is therefore becoming crucial. However, such AI models, including deep learning models, often operate as “black boxes”, making it hard to trust their decision-making and undermining end-users’ confidence in these systems. In this paper, the explainability of deep reinforcement learning (DRL) is investigated, and a DRL method, a double deep Q-network with dueling architecture and prioritized experience replay, is applied to train the AI models. To make the model’s decisions more transparent and to understand the reasoning behind its choices in counter-drone systems, the Shapley Additive Explanations (SHAP) method is implemented. After training the DRL agent, the experience replay is visualized, and absolute SHAP values are calculated to explain the key factors that influence the agent’s choices. The integration of DRL with explainable AI methods such as SHAP demonstrates significant potential for the advancement of robust and efficient counter-drone systems.
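To make the pipeline concrete, below is a minimal, self-contained sketch (not the authors’ implementation) of the two components the abstract names: a dueling double-DQN value network and SHAP attribution of its decisions. The network sizes, state and action dimensions, and random data are hypothetical placeholders; the model-agnostic KernelExplainer is used for simplicity, the explained quantity is the Q-value of the greedy action, and prioritized experience replay is omitted.

```python
import numpy as np
import torch
import torch.nn as nn
import shap


class DuelingQNetwork(nn.Module):
    """Dueling architecture: a shared trunk feeds separate value and
    advantage streams, combined as Q(s,a) = V(s) + A(s,a) - mean_a' A(s,a')."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.trunk(state)
        v, a = self.value(h), self.advantage(h)
        # Subtracting the mean advantage keeps V and A identifiable.
        return v + a - a.mean(dim=1, keepdim=True)


def double_dqn_target(online, target, next_states, rewards, dones, gamma=0.99):
    """Double-DQN target: the online net selects the next action and the
    target net evaluates it, which reduces Q-value overestimation."""
    with torch.no_grad():
        best = online(next_states).argmax(dim=1, keepdim=True)
        q_next = target(next_states).gather(1, best).squeeze(1)
    return rewards + gamma * (1.0 - dones) * q_next


# Hypothetical setup: a 6-dim state (e.g. relative position/velocity of the
# target drone) and 5 discrete actions; random data stands in for a trained
# agent's experience purely for illustration.
net = DuelingQNetwork(state_dim=6, n_actions=5)
background = np.random.randn(50, 6).astype(np.float32)
states = np.random.randn(10, 6).astype(np.float32)


def q_greedy(x: np.ndarray) -> np.ndarray:
    """Scalar model output to explain: the Q-value of the greedy action."""
    with torch.no_grad():
        q = net(torch.as_tensor(x, dtype=torch.float32))
    return q.max(dim=1).values.numpy()


# Model-agnostic SHAP attribution over the state features.
explainer = shap.KernelExplainer(q_greedy, background)
shap_values = explainer.shap_values(states)  # shape (10, 6)

# Mean absolute SHAP value per feature, as in the paper's analysis, ranks
# which state inputs most influence the agent's choices.
print("feature importance:", np.abs(shap_values).mean(axis=0))
```

In practice the network would be trained first (with double-DQN targets and a prioritized replay buffer), and the background and explained states would be drawn from the agent’s replay buffer rather than generated randomly.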
