Abstract
Gait identification based on Deep Learning (DL) techniques has recently emerged as a biometric technology for surveillance. We probe the vulnerabilities and decision-making behavior of DL models in gait-based autonomous surveillance systems using a patch-based black-box adversarial attack driven by Reinforcement Learning (RL), under the assumption that attackers have no access to the underlying model's gradients or structure. Because these automated surveillance systems are secured and block such access, the attack is framed as an RL problem in which the agent's goal is to find the optimal image location that, when perturbed with random pixels, causes the model to misclassify. The proposed adversarial attack achieves encouraging results, with a maximum success rate of 77.59%. Researchers should evaluate such resilience scenarios (e.g., attackers with no access to system internals) before deploying these models in surveillance applications.
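To make the query-only attack setting concrete, below is a minimal sketch of a patch-based black-box attack. It is not the authors' implementation: `query_model` is a hypothetical stand-in for the black-box gait classifier (the attacker can only query predictions, never gradients), and a simple epsilon-greedy bandit over candidate patch locations is used in place of whatever RL agent the paper actually employs. The reward is the drop in the true class's confidence after pasting a random-pixel patch at the chosen location.

```python
import numpy as np

def query_model(image: np.ndarray) -> np.ndarray:
    """Hypothetical black-box gait classifier: returns a probability
    vector for the input image. Stands in for the surveillance model,
    to which the attacker has query-only access (no gradients/weights)."""
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    logits = rng.normal(size=10)
    return np.exp(logits) / np.exp(logits).sum()

def apply_patch(image, x, y, patch):
    """Overwrite a square region of the image with a random-pixel patch."""
    out = image.copy()
    h, w = patch.shape[:2]
    out[y:y + h, x:x + w] = patch
    return out

def patch_attack(image, true_label, patch_size=8, episodes=200,
                 eps=0.1, seed=0):
    """Epsilon-greedy bandit over patch locations (a simplification of
    the paper's RL agent). Reward = drop in true-class confidence."""
    rng = np.random.default_rng(seed)
    H, W = image.shape[:2]
    xs = np.arange(0, W - patch_size + 1, patch_size)
    ys = np.arange(0, H - patch_size + 1, patch_size)
    actions = [(x, y) for x in xs for y in ys]
    q = np.zeros(len(actions))   # estimated reward per location
    n = np.zeros(len(actions))   # visit counts per location
    base_conf = query_model(image)[true_label]

    for _ in range(episodes):
        # Epsilon-greedy action selection over candidate locations.
        a = (rng.integers(len(actions)) if rng.random() < eps
             else int(np.argmax(q)))
        x, y = actions[a]
        patch = rng.integers(0, 256, (patch_size, patch_size, 3),
                             dtype=np.uint8)
        adv = apply_patch(image, x, y, patch)
        probs = query_model(adv)
        reward = base_conf - probs[true_label]   # confidence drop
        n[a] += 1
        q[a] += (reward - q[a]) / n[a]           # incremental mean update
        if probs.argmax() != true_label:         # misclassification: success
            return adv, (x, y)
    return None, actions[int(np.argmax(q))]

if __name__ == "__main__":
    img = np.zeros((64, 64, 3), dtype=np.uint8)
    adv, loc = patch_attack(img, true_label=3)
    print("success" if adv is not None else "best location only", loc)
```

The key design point this sketch illustrates is that the agent learns *where* to place the perturbation purely from the model's output probabilities, which is what makes the attack viable against a secured system that exposes no internals.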