Abstract
Deep Learning has enabled remarkable progress in recent years on a variety of tasks, such as image recognition, speech recognition, and machine translation. One crucial aspect of this progress is the design of novel neural architectures. Currently employed architectures have mostly been developed manually by human experts, a time-consuming and error-prone process. Because of this, there is growing interest in automated neural architecture search methods. We provide an overview of existing work in this field of research and categorize it according to three dimensions: search space, search strategy, and performance estimation strategy.
Highlights
The success of deep learning in perceptual tasks is largely due to its automation of the feature engineering process: hierarchical feature extractors are learned in an end-to-end fashion from data rather than manually designed
We categorize methods for Neural Architecture Search (NAS) according to three dimensions: search space, search strategy, and performance estimation strategy:
The search space defines which neural architectures a NAS approach might discover in principle.
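The three dimensions can be illustrated with a toy NAS loop. The sketch below uses random search as the search strategy and a stand-in scoring heuristic as the performance estimation strategy; all names, the operation set, and the scoring function are illustrative assumptions, not taken from the survey. In practice, performance estimation would involve training and validating each candidate architecture (or a cheaper proxy such as low-fidelity training).

```python
import random

# Illustrative search space: an architecture is a sequence of layer
# operations of variable depth (hypothetical options, for exposition only).
OPS = ["conv3x3", "conv5x5", "maxpool"]

def sample_architecture(rng):
    """Search strategy (here: random search) draws a candidate from the space."""
    depth = rng.randint(2, 6)
    return [rng.choice(OPS) for _ in range(depth)]

def estimate_performance(arch):
    """Performance estimation strategy: a toy proxy score standing in for
    actual training and validation of the candidate architecture."""
    conv_fraction = sum(op.startswith("conv") for op in arch) / len(arch)
    return conv_fraction + 0.1 * len(arch)

def nas_random_search(n_trials=20, seed=0):
    """Run the NAS loop: sample candidates, estimate them, keep the best."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = sample_architecture(rng)       # search space + search strategy
        score = estimate_performance(arch)    # performance estimation strategy
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

best, score = nas_random_search()
print(best, round(score, 3))
```

Swapping in an evolutionary algorithm, reinforcement learning, or Bayesian optimization would change only `sample_architecture`'s selection logic, while the search space and estimation strategy could stay fixed, which is why the survey treats the three dimensions as independent axes.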
Summary
The success of deep learning in perceptual tasks is largely due to its automation of the feature engineering process: hierarchical feature extractors are learned in an end-to-end fashion from data rather than being manually designed.
T. Elsken and J. H. Metzen, Bosch Center for Artificial Intelligence, Robert Bosch GmbH, Renningen, Baden-Württemberg, Germany