Abstract

Although deep neural networks (DNNs) have demonstrated superior performance in computer vision, recent works have shown that they are vulnerable to carefully crafted, human-imperceptible perturbations. We observe that most adversarial attack methods generate perturbations by rotating or otherwise modifying the original inputs. Because they disregard characteristics available from additional sources, such approaches offer only limited means of enriching the original inputs. This motivates us to exploit information beyond the original inputs to deliver adversarial features. To this end, we introduce a simple yet adaptable adversarial attack strategy, the Feature-Guided Method (FGM), for crafting adversarial examples (AEs) in the segmentation domain. FGM first derives multi-source patterns that are distinct from the original inputs. It then produces diverse features from these patterns and the original data to deliver the perturbed components. Finally, FGM blends the original inputs with the created features under a defined norm constraint to form the adversarial examples. In this way, the attack preserves the original class-general characteristics while enriching new class-specific diversity. Moreover, FGM applies an adaptive gradient-based strategy to the generated information, which lowers the risk of falling into local optima when searching for the decision boundaries of the source and target models in latent space. We conduct detailed experiments comparing the proposed method with baselines on public segmentation models. The results show that FGM fools both source and target segmentation systems more effectively, with margins of over 5% on mIoU, mRec, and mAcc. We also perform adversarial training with the proposed method and with PGD on widely used models. Our approach improves the robustness of adversarially trained FCN, PSPNet, and DeepLabv3 models with various backbones by significant margins, over 13% on mIoU and 12% on mRec, indicating the benefit of deploying such mechanisms for robust deep neural segmentation models.
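
As a rough illustration of the blend-then-refine pipeline sketched in the abstract (auxiliary features mixed into the input under a norm constraint, followed by gradient-based refinement), the following is a minimal PyTorch sketch. It is not the authors' implementation: the feature construction here (simple resizing of an auxiliary image), the function name `feature_guided_attack`, and the hyperparameters `eps`, `alpha`, `steps`, and `blend` are all illustrative assumptions.

```python
# Minimal sketch of a feature-blending, norm-constrained segmentation attack.
# All names and hyperparameters are illustrative assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def feature_guided_attack(model, x, y, feature_source,
                          eps=8 / 255, alpha=2 / 255, steps=10, blend=0.1):
    """Blend externally sourced features into the input, then refine the
    perturbation with iterative gradient steps inside an L-inf eps-ball."""
    # Derive auxiliary features from a source other than x (here just resized
    # to match x; the paper's construction is richer than this).
    aux = F.interpolate(feature_source, size=x.shape[-2:],
                        mode="bilinear", align_corners=False)

    # Blend the original input with the auxiliary features, clipped to the eps-ball.
    delta = torch.clamp(blend * (aux - x), -eps, eps)
    x_adv = torch.clamp(x + delta, 0, 1).detach()

    # Gradient-based refinement (PGD-style inner loop) on the blended input.
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)              # (N, C, H, W) segmentation logits
        loss = F.cross_entropy(logits, y)  # maximize per-pixel loss
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around the original input.
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv
```

Under these assumptions, `model` is any segmentation network returning per-pixel logits, `y` holds integer class labels of shape (N, H, W), and `feature_source` is an auxiliary image batch supplying the extra information that the original input alone does not contain.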
