Abstract

Recent advances in deep networks have achieved strong performance on object recognition tasks, owing to their robust feature-learning abilities. Beyond the learned deep features, other network characteristics, e.g., the inter-layer weight matrices and their back-propagated derivatives, may play complementary roles in feature learning with respect to generalization and robustness. However, the adaptivity of these characteristics to different databases has not been well studied. Meanwhile, current algorithms tend to exploit only the most salient features for better generalization, while hierarchically salient features that may benefit network robustness are left under-explored. We therefore propose an attention module that adapts network characteristics to different training tasks; it can further be combined with a dynamic dropout algorithm that suppresses salient neurons so as to explore more second-most-salient (SndMS) features for robust recognition. The proposed algorithm has two main merits. First, the complementarity of network characteristics is taken into account when training on different databases. Second, by exploring more SndMS neurons for hierarchically salient feature representation learning, the network's robustness against adversarial perturbations or fine-grained differences is enhanced. Extensive experiments on seven public databases show that the proposed attention-based dropout substantially improves network robustness without compromising generalization performance, compared with related variants and state-of-the-art (SOTA) algorithms. Code is available at https://github.com/lingjivoo/ACAD.
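
The abstract does not specify how salient neurons are suppressed; the authors' actual method is in the repository linked above. As a rough illustration only, the following PyTorch sketch shows one way a dropout layer could zero out the most strongly activated units per sample so that second-most-salient features must carry the representation. The class name, the magnitude-based saliency proxy, and the drop_ratio parameter are all assumptions for illustration, not the paper's algorithm.

import torch
import torch.nn as nn

class SalientSuppressDropout(nn.Module):
    """Illustrative sketch: drop the top-k most salient activations per sample."""

    def __init__(self, drop_ratio: float = 0.1):
        super().__init__()
        self.drop_ratio = drop_ratio  # fraction of most salient units to suppress

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training or self.drop_ratio <= 0:
            return x
        # Saliency proxy: activation magnitude, computed per sample over flattened features.
        flat = x.flatten(1)
        k = max(1, int(self.drop_ratio * flat.size(1)))
        topk_idx = flat.abs().topk(k, dim=1).indices
        # Build a mask that zeroes the most salient units.
        mask = torch.ones_like(flat)
        mask.scatter_(1, topk_idx, 0.0)
        # Rescale the surviving units to preserve the expected activation level.
        keep = 1.0 - k / flat.size(1)
        return (flat * mask / keep).view_as(x)

# Example usage: place it after a nonlinearity, like ordinary dropout.
layer = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), SalientSuppressDropout(0.1))

In this sketch the suppression is static (purely magnitude-driven); the paper's dynamic, attention-guided variant would instead modulate which neurons are suppressed according to the learned attention over network characteristics.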
