Abstract

Zero-shot Neural Architecture Search (NAS) has garnered attention due to its training-free nature and rapid search speed. However, existing zero-shot estimators commonly suffer from low consistency, which hampers their practicality. In this work, we theoretically show that network generalization and convergence are highly correlated with the Sweet Gradient of Parameter, i.e., the number of parameters whose gradient absolute values fall within a certain interval. Empirical results indicate that the Sweet Gradient of Parameter yields higher consistency than the overall number of parameters. Additionally, we demonstrate a positive correlation between network depth and the proportion of parameters with sweet gradients in each layer. Based on this analysis, we propose a training-free method to find the Sweet Gradient interval and obtain an estimator, named Sweetimator. Furthermore, Sweet Gradient serves as an effective and general approach to improving the consistency of zero-shot estimators. Experiments show that Sweetimator and Sweet-enhanced estimators achieve significant consistency improvements across multiple benchmarks. Our method achieves state-of-the-art performance with a 256x speedup on NAS-Bench-201 and remains highly competitive in the DARTS, MobileNet, and Transformer search spaces. The source code is available at https://github.com/xingxing-123/SweetGradient.
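
To make the core quantity concrete, the following is a minimal sketch of counting parameters whose gradient magnitudes fall inside a given interval, assuming a PyTorch model and hypothetical interval bounds `a` and `b`. It does not reproduce the paper's training-free interval-search procedure or the Sweetimator score; it only illustrates the counting step described in the abstract.

```python
import torch
import torch.nn as nn


def sweet_gradient_count(model: nn.Module, a: float, b: float) -> int:
    """Count parameters whose gradient absolute values lie in [a, b].

    Assumes gradients have already been populated (e.g., by one backward
    pass on a mini-batch). The interval [a, b] is a placeholder here; the
    paper finds it with a training-free search.
    """
    count = 0
    for p in model.parameters():
        if p.grad is None:
            continue
        g = p.grad.detach().abs()
        count += ((g >= a) & (g <= b)).sum().item()
    return count


# Usage: one forward/backward pass to populate gradients, then count.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
x, y = torch.randn(4, 8), torch.randn(4, 1)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
print(sweet_gradient_count(model, a=1e-3, b=1e-1))
```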
