Abstract

Learning features that are robust to adversarial attacks is a challenging task that requires highly complex models, especially on aerial images, which are subject to both environmental and adversarial changes. Embedding hypersphere normalization in an adversarial setting degrades performance and causes features to overlap. To address this, in this article we propose a dynamic hypersphere embedding scale (DHS) method that remaps the normalized features to a relative scale in order to learn robust features. The proposed method combines the benefits of hypersphere embedding without sacrificing the advantages of softmax. The DHS aggregates the normalized and non-normalized features: it uses a hypersphere embedding to enforce a maximum margin on features with small magnitudes, and a dynamic scale to avoid feature overlap under adversarial attacks. We validate the effectiveness of the DHS by embedding adversarial attacks such as Projected Gradient Descent (PGD), CW, and DeepFool into adversarial training. Empirical experiments revealed that the DHS improves model performance by 12% under the PGD attack, with less computation than legacy hypersphere models. Another set of experiments showed that the DHS does not obfuscate the gradient.
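To make the idea concrete, the sketch below shows one plausible reading of a hypersphere embedding with a dynamic, batch-relative scale: features are L2-normalized onto the unit hypersphere and then rescaled by a scale computed from the current batch's feature magnitudes rather than a fixed constant. The layer name, the choice of the mean feature norm as the dynamic scale, and all other details are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class DynamicHypersphereScale(nn.Module):
    """Illustrative sketch of a DHS-style layer (assumed, not the authors' code):
    project features onto the unit hypersphere, then remap them with a scale
    derived from the batch's feature norms instead of a fixed constant."""

    def __init__(self, eps: float = 1e-8):
        super().__init__()
        self.eps = eps

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Per-sample L2 magnitudes of the raw (non-normalized) features.
        norms = features.norm(p=2, dim=1, keepdim=True)
        # Hypersphere embedding: unit-norm features.
        normalized = features / (norms + self.eps)
        # Hypothetical dynamic scale: mean feature magnitude of the batch,
        # recomputed every forward pass rather than a fixed scale s.
        dynamic_scale = norms.mean().detach()
        # Remap the normalized features to the batch-relative scale.
        return dynamic_scale * normalized
```

Feeding the rescaled output into a standard softmax classifier would preserve angular (hypersphere) information while keeping the logits on a magnitude comparable to the non-normalized features, which is one way to read the "aggregation" described in the abstract.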
