Abstract
Neural Architecture Search (NAS) has emerged as a promising AutoML technique for designing more accurate and efficient architectures. Most NAS methods reduce the search cost through weight sharing, in which all candidate architectures drawn from the search space share the weights of a single supernet. However, this technique has a significant drawback: negative interference can arise when candidate architectures share the same weights, and the issue becomes even more severe in multi-task search, where one supernet is shared across tasks. To address this problem, we propose a task-aware nested search for multiple tasks that generates task-specific search spaces and architectures through a search-in-search approach consisting of a space-search phase and an architecture-search phase. In the space-search phase, we discover an optimal subspace in a task-aware manner using the proposed search space generator built on the global search space. On top of each subspace, we then search for a promising architecture in the architecture-search phase. This design mitigates search interference by restricting the supernet's weight sharing to each generated subspace. Experimental results on various vision benchmarks (CityScapes, NYUv2, and Tiny-Taskonomy) show that the proposed method outperforms existing methods in task accuracy, model parameters, and latency.
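To make the two-phase, search-in-search structure concrete, below is a minimal Python sketch of the nested loop. Everything here is a stand-in: the global space, the `generate_subspace` generator, the random architecture search, and the `evaluate` proxy are all hypothetical simplifications (the paper's actual generator and supernet-based evaluation are not reproduced), but the sketch shows how an outer task-aware space-search wraps an inner architecture-search per task.

```python
import random

# Hypothetical global search space: candidate operations per layer.
GLOBAL_SPACE = {
    f"layer{i}": ["conv3x3", "conv5x5", "skip", "mbconv3", "mbconv5"]
    for i in range(4)
}

def generate_subspace(task, width=2):
    """Space-search phase (stand-in): pick a task-aware subset of
    operations per layer. The paper's generator is learned; here we
    seed by task name only so the sketch is reproducible."""
    rng = random.Random(task)
    return {layer: rng.sample(ops, width) for layer, ops in GLOBAL_SPACE.items()}

def evaluate(arch, task):
    """Placeholder proxy score. A real system would evaluate the
    architecture with supernet weights shared only within the subspace,
    which is what limits cross-architecture interference."""
    rng = random.Random(repr((task, sorted(arch.items()))))
    return rng.random()

def search_architecture(subspace, task, trials=20):
    """Architecture-search phase: simple random search inside the
    task-specific subspace (a stand-in for the paper's search)."""
    best_arch, best_score = None, float("-inf")
    for _ in range(trials):
        arch = {layer: random.choice(ops) for layer, ops in subspace.items()}
        score = evaluate(arch, task)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

# Nested search: outer space-search per task, inner architecture-search.
for task in ["segmentation", "depth", "normals"]:
    subspace = generate_subspace(task)
    arch, score = search_architecture(subspace, task)
    print(task, arch, round(score, 3))
```

Under these assumptions, weight sharing (modeled by `evaluate`) is confined to each task's subspace rather than the full supernet, which is the mechanism the abstract credits for reducing multi-task search interference.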