Abstract

Pruning-based neural architecture search (NAS) methods are effective approaches for finding network architectures that achieve high performance with low complexity. However, current methods yield only a single final architecture rather than an approximation of the Pareto set, which is typically the desirable outcome when solving multi-objective problems. Furthermore, performance evaluation in NAS involves the computationally expensive process of network training, and the search cost thus increases considerably because numerous architectures are evaluated during an NAS run. Using computational resources efficiently is therefore an essential concern. Recent studies have attempted to address this resource issue by replacing the network accuracy metric in NAS optimization objectives with so-called training-free performance metrics, which can be computed without any training epochs. In this paper, we propose a training-free multi-objective pruning-based neural architecture search (TF-MOPNAS) framework that produces competitive trade-off fronts for multi-objective NAS at a trivial cost by using the Synaptic Flow metric. We test our proposed method on multi-objective NAS problems created from a wide range of well-known NAS benchmarks, i.e., NAS-Bench-101, NAS-Bench-1shot1, and NAS-Bench-201. Experimental results indicate that our method can obtain trade-off fronts whose quality is equivalent to those found by state-of-the-art NAS methods, but with far less computational cost. The code is available at: https://github.com/ELO-Lab/TF-MOPNAS.

Keywords: Pruning-based neural architecture search, AutoML, Multi-objective optimization, Training-free indicators
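For context, the Synaptic Flow (SynFlow) score mentioned in the abstract is a data-agnostic saliency measure: parameters are replaced by their absolute values, an all-ones input is propagated through the network, and the score is the sum of |theta * dR/dtheta| over all parameters, where R is the scalar network response. Below is a minimal, hedged PyTorch sketch of this computation; the function name `synflow_score` and the default input shape are illustrative assumptions and are not taken from the paper's released code.

```python
import torch
import torch.nn as nn


@torch.no_grad()
def _linearize(model: nn.Module):
    """Replace every parameter with its absolute value; return the original signs."""
    signs = {}
    for name, param in model.named_parameters():
        signs[name] = torch.sign(param)
        param.abs_()
    return signs


@torch.no_grad()
def _restore(model: nn.Module, signs):
    """Undo the linearization by multiplying the stored signs back in."""
    for name, param in model.named_parameters():
        param.mul_(signs[name])


def synflow_score(model: nn.Module, input_shape=(1, 3, 32, 32)) -> float:
    """Sum of |theta * dR/dtheta| over all parameters, where R is the scalar
    output of a forward pass on an all-ones input with linearized weights."""
    model.eval()                      # avoid updating batch-norm running statistics
    signs = _linearize(model)

    model.zero_grad()
    ones = torch.ones(input_shape)    # data-agnostic all-ones input
    output = model(ones)
    torch.sum(output).backward()      # R = sum of outputs

    score = 0.0
    for param in model.parameters():
        if param.grad is not None:
            score += (param.grad * param).abs().sum().item()

    _restore(model, signs)
    model.zero_grad()
    return score


# Example usage (illustrative only): score a small untrained CNN candidate.
if __name__ == "__main__":
    net = nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
    )
    print(synflow_score(net))
```

A higher score is taken as an indicator of a more promising architecture; in the framework described above, such a training-free score stands in for trained accuracy when comparing candidates, though the exact way TF-MOPNAS integrates it is detailed in the full paper rather than here.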
