Abstract

Deep learning has achieved broad breakthroughs in many real-world applications. In particular, the task of training network parameters has been handled masterfully by back-propagation learning. However, the pursuit of optimal network structures remains largely an art of trial and error. This lends urgency to the exploration of an architecture engineering process, collectively known as Neural Architecture Search (NAS). In general, NAS denotes a software system for automating the search for effective neural architectures. This article proposes an X-learning NAS (XNAS) to automatically train a network's structure and parameters. Our theoretical footing is built upon subspace and correlation analyses between the input layer, hidden layers, and output layer. The design strategy hinges upon the underlying principle that the network should be coerced to learn how to structurally improve the input/output correlation successively (i.e., layer by layer). It embraces both Progressive NAS (PNAS) and Regressive NAS (RNAS). For unsupervised RNAS, Principal Component Analysis (PCA) is a classic tool for subspace analyses. By further incorporating a teacher's guidance, PCA can be extended to Regression Component Analysis (RCA) to facilitate supervised NAS design. This allows the machine to extract the components most critical to the targeted learning objective. We further extend the subspace analysis from multi-layer perceptrons to convolutional neural networks via the introduction of Convolutional-PCA (CPCA) or, more simply, Deep-PCA (DPCA). The supervised variant of DPCA is named Deep-RCA (DRCA). The subspace analyses allow us to compute optimal eigenvectors (respectively, eigen-filters) and principal components (respectively, eigen-channels) for the optimal NAS design of multi-layer perceptrons (respectively, convolutional neural networks). Based on this theoretical analysis, an X-learning paradigm is developed to jointly learn the structure and parameters of learning models. The objective is to reduce network complexity while retaining (and sometimes improving) performance. With carefully pre-selected baseline models, X-learning has shown great success in numerous classification-type and regression-type applications. We have applied X-learning to the ImageNet dataset for classification and to DIV2K for image enhancement. By applying X-learning to two types of baseline models, MobileNet and ResNet, both low-power and high-performance application categories can be supported. Our simulations confirm that X-learning is, by and large, highly competitive with state-of-the-art approaches.
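To make the abstract's central idea concrete, the sketch below illustrates the kind of unsupervised subspace analysis it describes for RNAS: applying PCA to a layer's activations and keeping only the principal components that carry most of the variance, which in turn suggests how many neurons that layer needs. This is a minimal illustration under our own assumptions, not the paper's actual XNAS algorithm; the function and parameter names (`select_rank`, `energy_threshold`) are hypothetical. The supervised RCA variant mentioned in the abstract would additionally weight components by their correlation with the teacher's targets, which this sketch does not do.

```python
# Illustrative sketch (not the paper's exact XNAS procedure): PCA-based
# rank selection on hidden-layer activations as a proxy for layer width.
import numpy as np

def select_rank(activations: np.ndarray, energy_threshold: float = 0.99) -> int:
    """Return the number of principal components needed to retain
    `energy_threshold` of the activation variance."""
    centered = activations - activations.mean(axis=0, keepdims=True)
    # Singular values of the centered data give the PCA spectrum.
    singular_values = np.linalg.svd(centered, compute_uv=False)
    energy = np.cumsum(singular_values**2) / np.sum(singular_values**2)
    return int(np.searchsorted(energy, energy_threshold) + 1)

# Example: 1000 samples of a 256-unit hidden layer whose signal actually
# lives in a ~20-dimensional subspace; PCA recovers a rank close to 20,
# suggesting the layer could be shrunk accordingly.
rng = np.random.default_rng(0)
latent = rng.normal(size=(1000, 20))
mixing = rng.normal(size=(20, 256))
hidden = latent @ mixing + 0.01 * rng.normal(size=(1000, 256))
print(select_rank(hidden))  # typically ~20
```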
