Abstract
Deep reinforcement learning is widely used in many fields. However, recent research has revealed vulnerabilities in agents trained by reinforcement learning algorithms, raising concerns about their deployment in the real world. By adding imperceptible adversarial perturbations to the agent's observed state, an attacker can trick the policy network into acting suboptimally. To address this issue, we introduce reinforcement learning under local constraints (RLUC), an approach aimed at bolstering the robustness of agents against potent adversarial attacks. Because the policy network is sensitive to the observed state, injecting adversarial perturbations into the observations causes large fluctuations in the network's final layer. To minimize the divergence between the policy's output distributions, our method attaches a constraint to each layer of the policy network, encouraging the agent to keep its original action under adversarial attack. Concretely, RLUC minimizes the total variation between the actor network's layer outputs on the perturbed state and on the clean state. RLUC was evaluated and analyzed from multiple angles against previous strong baselines: it was tested on MuJoCo benchmarks under six formidable adversarial attacks. Experimental results show that the proposed method outperforms existing state-of-the-art methods and significantly improves the robustness and smoothness of the agent's policy.
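The layer-wise constraint described in the abstract can be sketched as follows. This is a minimal illustrative NumPy sketch, not the paper's implementation: the tiny two-layer actor, the weight shapes, the FGSM-style perturbation, and the use of a summed L1 gap as the per-layer penalty are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer actor; dimensions and weights are illustrative only.
W1 = rng.normal(scale=0.1, size=(8, 4))   # observation dim 4 -> hidden dim 8
W2 = rng.normal(scale=0.1, size=(3, 8))   # hidden dim 8 -> 3 discrete actions

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def forward(obs):
    """Return the outputs of every layer, not just the final policy,
    so a constraint can be attached to each layer."""
    h = np.tanh(W1 @ obs)
    pi = softmax(W2 @ h)
    return [h, pi]

def layerwise_tv_penalty(clean_obs, adv_obs):
    """Sum of L1 gaps between corresponding layer outputs on the clean
    and perturbed observations; for the final softmax layer this L1 gap
    is twice the total variation between the two action distributions."""
    return sum(np.abs(a - b).sum()
               for a, b in zip(forward(clean_obs), forward(adv_obs)))

obs = rng.normal(size=4)
adv = obs + 0.01 * np.sign(rng.normal(size=4))  # FGSM-style perturbation (assumed)
penalty = layerwise_tv_penalty(obs, adv)
```

In training, such a penalty would be added to the actor's loss with some weighting coefficient, so that minimizing it pulls the perturbed-state activations back toward the clean-state ones at every layer.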