Abstract

Differentiable architecture search (DARTS) provides a fast solution to Neural Architecture Search (NAS) by using gradient descent over a continuous search space. Despite its high search speed, DARTS suffers from several problems, such as ambiguous topology selection, operation co-adaptation, an incomplete NAS pipeline, and large memory consumption. To address these problems, we first introduce topology parameters into the search space to explicitly model the network topology, which ensures the searched network architecture is well-defined. Next, we propose two sampling strategies to sample independent child networks for training and evaluation, which solve the co-adaptation problem while making the NAS pipeline complete. Finally, we use hard pruning to avoid invalid computations, which greatly reduces memory consumption. The proposed Automatic Topology Learning for Differentiable Architecture Search (ATL-DAS) algorithm performs favorably against state-of-the-art approaches on CIFAR10 and CIFAR100, with error rates of 2.49% and 16.8%, respectively. Moreover, the searched architectures perform well across various visual tasks, including ImageNet classification, COCO object detection, and Composition-1K image matting, highlighting the potential of ATL-DAS for real-world applications.
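To make the three mechanisms named above concrete, the following is a minimal PyTorch sketch of a DARTS-style edge and node in which operation parameters (alpha) are complemented by explicit topology parameters (beta), and a single operation and a subset of incoming edges are sampled per forward pass rather than summing all candidates (a hard-pruning-style memory saving). The class names, the straight-through scaling trick, and the `keep=2` rule are illustrative assumptions for exposition, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MixedEdge(nn.Module):
    """One edge of a DARTS-style cell: candidate ops weighted by operation parameters (alpha)."""

    def __init__(self, candidate_ops):
        super().__init__()
        self.ops = nn.ModuleList(candidate_ops)
        # Operation parameters: one logit per candidate op on this edge.
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(candidate_ops)))

    def forward(self, x):
        # Sample a single op instead of computing the full weighted sum,
        # so only one candidate runs per forward pass (reduces memory).
        probs = F.softmax(self.alpha, dim=-1)
        idx = torch.multinomial(probs, 1).item()
        # Straight-through-style scaling keeps a gradient path to alpha.
        return self.ops[idx](x) * probs[idx] / probs[idx].detach()


class Node(nn.Module):
    """One intermediate node: incoming edges gated by explicit topology parameters (beta)."""

    def __init__(self, num_inputs, make_ops):
        super().__init__()
        self.edges = nn.ModuleList(MixedEdge(make_ops()) for _ in range(num_inputs))
        # Topology parameters: one logit per incoming edge, so which edges
        # survive is learned explicitly rather than decided by a post-hoc rule.
        self.beta = nn.Parameter(1e-3 * torch.randn(num_inputs))

    def forward(self, inputs, keep=2):
        probs = F.softmax(self.beta, dim=-1)
        # Sample a child topology: keep only `keep` distinct incoming edges.
        kept = torch.multinomial(probs, keep).tolist()
        out = 0
        for i in kept:
            out = out + self.edges[i](inputs[i]) * probs[i] / probs[i].detach()
        return out


if __name__ == "__main__":
    # Hypothetical usage: 3 candidate ops per edge, a node with 2 inputs.
    make_ops = lambda: [
        nn.Identity(),
        nn.Conv2d(16, 16, 3, padding=1),
        nn.AvgPool2d(3, stride=1, padding=1),
    ]
    node = Node(num_inputs=2, make_ops=make_ops)
    x0, x1 = torch.randn(1, 16, 8, 8), torch.randn(1, 16, 8, 8)
    print(node([x0, x1]).shape)  # torch.Size([1, 16, 8, 8])
```

Because each forward pass instantiates one sampled child network, training and evaluation can operate on independent children, which is the spirit of the sampling strategies the abstract describes; the exact samplers used by ATL-DAS are detailed in the paper itself.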
