Abstract

Neural architecture search (NAS) techniques can discover outstanding network architectures while sparing human experts tremendous labor. Recent advancements further reduce the computational overhead to an affordable level. However, deploying NAS in real-world applications remains cumbersome due to its multi-stage procedures and its reliance on the supervised learning paradigm. In this work, we propose self-supervised and weight-preserving neural architecture search (SSWP-NAS) as an extension of the current NAS framework that allows self-supervision and retains the concomitant weights discovered during the search stage. In this way, we merge architecture search and weight pre-training, simplifying the NAS workflow to a one-stage, proxy-free procedure. The searched architectures achieve state-of-the-art accuracy on the CIFAR-10, CIFAR-100, and ImageNet datasets without using manual labels. Moreover, experiments demonstrate that using the concomitant weights as initialization consistently outperforms both random initialization and a separate weight pre-training process by a clear margin under semi-supervised learning scenarios. Code is available at https://github.com/LzVv123456/SSWP-NAS.

Keywords: Self-supervised learning, Neural architecture search
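
To make the one-stage idea above concrete, here is a minimal sketch (not the authors' implementation) of a weight-sharing supernet optimized with a self-supervised pretext objective, where the weights learned during the search are retained and saved for later fine-tuning instead of being discarded. All names below (TinySuperNet, rotation_pretext_loss) are hypothetical, and the joint single-loop update is a simplification of typical bi-level NAS training.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySuperNet(nn.Module):
    """Toy weight-sharing supernet: two candidate ops mixed by architecture params."""
    def __init__(self, channels=16):
        super().__init__()
        self.op_a = nn.Conv2d(3, channels, 3, padding=1)
        self.op_b = nn.Conv2d(3, channels, 5, padding=2)
        self.alpha = nn.Parameter(torch.zeros(2))   # architecture parameters
        self.head = nn.Linear(channels, 4)          # 4 rotation classes (pretext task)

    def forward(self, x):
        w = torch.softmax(self.alpha, dim=0)
        feat = w[0] * self.op_a(x) + w[1] * self.op_b(x)  # continuous relaxation
        feat = F.adaptive_avg_pool2d(F.relu(feat), 1).flatten(1)
        return self.head(feat)

def rotation_pretext_loss(model, images):
    """Self-supervised objective: predict which of 4 rotations was applied (no labels)."""
    k = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                           for img, r in zip(images, k)])
    return F.cross_entropy(model(rotated), k)

model = TinySuperNet()
w_opt = torch.optim.SGD([p for n, p in model.named_parameters() if n != "alpha"], lr=0.01)
a_opt = torch.optim.Adam([model.alpha], lr=3e-3)

for step in range(10):                    # one-stage search driven only by the pretext loss
    images = torch.rand(8, 3, 32, 32)     # stand-in for unlabeled training images
    loss = rotation_pretext_loss(model, images)
    w_opt.zero_grad(); a_opt.zero_grad()
    loss.backward()
    w_opt.step(); a_opt.step()

# The concomitant weights are kept and would serve as the initialization for
# downstream (e.g., semi-supervised) fine-tuning, rather than re-initializing
# the derived architecture at random.
torch.save(model.state_dict(), "concomitant_weights.pt")
```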
