Abstract

Automatically designing neural architectures, i.e., neural architecture search (NAS), is a promising direction in machine learning. The main challenge for NAS algorithms, however, is the considerable time spent evaluating each proposed network. A recent strategy that has attracted much attention is the use of surrogate predictive models, which attempt to forecast the performance of a neural model before training, exploiting only its architectural features. Preparing the training data for such predictive models is laborious and resource-demanding, so improving their sample efficiency is of high value. For the best performance, the predictive model should be given a representative encoding of the network architecture. Yet the potential of a proper architecture encoding for pruning and filtering out unwanted architectures is often overlooked in previous studies. Here, we discuss how to build a representation of a network architecture that preserves the explicit and implicit information inside the architecture. Experiments are performed on two standard NAS benchmarks, NAS-Bench-101 and NAS-Bench-201, and extensive results on these search spaces demonstrate the effectiveness of the proposed method compared with state-of-the-art predictors.
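To make the surrogate-predictor setup described above concrete, the sketch below encodes a NAS-Bench-101-style cell as a flattened adjacency matrix plus one-hot operation labels and fits an off-the-shelf regressor on a few (architecture, accuracy) pairs. This is an illustrative minimal sketch, not the paper's encoding or predictor: the operation vocabulary, the 5-node cells, and the accuracy targets are all synthetic assumptions.

```python
# Minimal sketch of a surrogate performance predictor, assuming a
# NAS-Bench-101-style cell: a DAG given by an upper-triangular adjacency
# matrix plus a per-node operation list. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

OPS = ["input", "conv1x1", "conv3x3", "maxpool3x3", "output"]  # assumed op vocabulary

def encode(adjacency: np.ndarray, ops: list) -> np.ndarray:
    """Flatten the adjacency matrix and one-hot the node operations
    into a single fixed-length feature vector for the predictor."""
    one_hot = np.zeros((len(ops), len(OPS)))
    for i, op in enumerate(ops):
        one_hot[i, OPS.index(op)] = 1.0
    return np.concatenate([adjacency.flatten(), one_hot.flatten()])

# Toy training set: random 5-node cells with fabricated accuracies,
# standing in for architectures that were evaluated by full training.
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(200):
    adj = np.triu(rng.integers(0, 2, size=(5, 5)), k=1)           # random DAG
    ops = ["input"] + list(rng.choice(OPS[1:-1], size=3)) + ["output"]
    X.append(encode(adj, ops))
    y.append(rng.uniform(0.85, 0.95))                              # placeholder accuracy
X, y = np.asarray(X), np.asarray(y)

surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Rank an unseen candidate by predicted accuracy instead of training it.
candidate_adj = np.triu(rng.integers(0, 2, size=(5, 5)), k=1)
candidate_ops = ["input", "conv3x3", "conv1x1", "maxpool3x3", "output"]
print(surrogate.predict([encode(candidate_adj, candidate_ops)]))
```

In this toy setup, better encodings (e.g., ones that also capture graph isomorphism or path information) would let the regressor rank candidates more reliably from the same number of trained samples, which is the sample-efficiency concern the abstract raises.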
