Abstract

Deep neural networks have been found to be vulnerable to adversarial attacks, raising concerns in security-sensitive contexts. To address this problem, recent research has investigated the adversarial robustness of deep neural networks from the architectural point of view. However, searching for architectures of deep neural networks is computationally expensive, particularly when coupled with an adversarial training process. To meet this challenge, this paper proposes a bi-fidelity multiobjective neural architecture search approach. First, we formulate the neural architecture search (NAS) problem for enhancing the adversarial robustness of deep neural networks as a multiobjective optimization problem. Specifically, in addition to using low-fidelity estimates as the primary objectives, we leverage the output of a surrogate model trained with high-fidelity evaluations as an auxiliary objective. Second, we reduce the computational cost by combining three performance estimation methods: parameter sharing, low-fidelity evaluation, and a surrogate-based predictor. The effectiveness of the proposed approach is confirmed by extensive experiments on the CIFAR-10, CIFAR-100, and SVHN datasets.
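To make the bi-fidelity objective formulation concrete, the following is a minimal sketch of how a candidate architecture could be assigned an objective vector that combines low-fidelity estimates with a surrogate-predicted auxiliary objective, and how a non-dominated front could then be selected. This is not the authors' implementation: all names (`evaluate_low_fidelity`, `SurrogatePredictor`, `dominates`), the choice of clean/robust error as the low-fidelity objectives, and the placeholder scoring functions are illustrative assumptions.

```python
# Hedged sketch: bi-fidelity multiobjective evaluation of architecture candidates.
# Objectives and helper names are assumptions for illustration, not the paper's API.

from dataclasses import dataclass, field
from typing import List
import random


@dataclass
class Candidate:
    encoding: List[int]                                  # discrete architecture encoding (assumed)
    objectives: List[float] = field(default_factory=list)


def evaluate_low_fidelity(encoding: List[int]) -> List[float]:
    """Cheap (low-fidelity) estimate, e.g. short adversarial training with shared
    supernet weights. Returns [clean_error, robust_error] (assumed objectives);
    here replaced by deterministic pseudo-random placeholders."""
    rng = random.Random(hash(tuple(encoding)) % (2 ** 32))
    clean_error = rng.uniform(0.05, 0.40)
    robust_error = clean_error + rng.uniform(0.10, 0.30)
    return [clean_error, robust_error]


class SurrogatePredictor:
    """Stand-in for a regressor trained on a small set of architectures that
    received full (high-fidelity) adversarial training."""

    def predict(self, encoding: List[int]) -> float:
        # In practice this would be a learned model; here a placeholder score.
        return sum(encoding) / (10.0 * len(encoding))


def dominates(a: List[float], b: List[float]) -> bool:
    """Pareto dominance for minimization: a is no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


def evaluate(cand: Candidate, surrogate: SurrogatePredictor) -> Candidate:
    # Primary objectives from the low-fidelity estimate, plus the surrogate
    # output as an auxiliary objective.
    cand.objectives = evaluate_low_fidelity(cand.encoding) + [surrogate.predict(cand.encoding)]
    return cand


if __name__ == "__main__":
    surrogate = SurrogatePredictor()
    population = [
        evaluate(Candidate([random.randint(0, 9) for _ in range(8)]), surrogate)
        for _ in range(20)
    ]
    # Keep the non-dominated front across all three objectives.
    front = [
        c for c in population
        if not any(dominates(o.objectives, c.objectives) for o in population if o is not c)
    ]
    print(f"non-dominated candidates: {len(front)} of {len(population)}")
```

In a full search loop, such an objective vector would feed a multiobjective evolutionary optimizer, with the surrogate periodically refit on new high-fidelity evaluations; those details are beyond what the abstract specifies.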
