Abstract

Architecture search is the process of automatically learning the neural model or cell structure that best suits the given task. Recently, this approach has shown promising performance improvements (on language modeling and image classification) with reasonable training speed, using a weight sharing strategy called Efficient Neural Architecture Search (ENAS). In our work, we first introduce a novel continual architecture search (CAS) approach, so as to continually evolve the model parameters during the sequential training of several tasks, without losing performance on previously learned tasks (via block-sparsity and orthogonality constraints), thus enabling life-long learning. Next, we explore a multi-task architecture search (MAS) approach over ENAS for finding a unified, single cell structure that performs well across multiple tasks (via joint controller rewards), and hence allows more generalizable transfer of the cell structure knowledge to an unseen new task. We empirically show the effectiveness of our sequential continual learning and parallel multi-task learning based architecture search approaches on diverse sentence-pair classification tasks (GLUE) and multimodal-generation based video captioning tasks. Further, we present several ablations and analyses on the learned cell structures.

Highlights

  • Architecture search enables automatic ways of finding the best model architecture and cell structures for the given task or dataset, as opposed to the traditional approach of manually choosing or tuning among different architecture choices, which introduces human inductive bias and does not scale

  • For multi-task cell learning, we show that the cell structure learned by jointly training on the Question-Answering Natural Language Inference (QNLI) and Winograd Natural Language Inference (WNLI) tasks performs significantly better on the Recognizing Textual Entailment (RTE) dataset than the individually learned cell structures

  • Efficient Neural Architecture Search (ENAS) Models: Table 1 shows that our ENAS models perform as well as or better than the non-architecture-search based models


Summary

Introduction

Architecture search enables automatic ways of finding the best model architecture and cell structures for the given task or dataset, as opposed to the traditional approach of manually choosing or tuning among different architecture choices, which introduces human inductive bias and does not scale. We introduce a novel ‘continual architecture search’ (CAS) approach, where the model parameters evolve and adapt when trained sequentially on a new task while maintaining performance on the previously learned tasks. To enable such continual learning, we formulate a two-step graph-initialization approach with conditions based on block sparsity and orthogonality. Another scenario of transfer learning or generalization that we explore is one in which we are given multiple tasks in parallel and have to learn a single cell that is good at all of these tasks, and allows more generalizable transfer of the cell structure knowledge to a new unseen task. We achieve this by giving a joint reward from multiple tasks as feedback to the controller.
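To make the two mechanisms above concrete, the following is a minimal, hypothetical PyTorch-style sketch of (a) block-sparsity and orthogonality penalties that could constrain how parameters change when moving from a previously learned task to a new one, and (b) a joint controller reward averaged over multiple tasks. The function names, block size, and coefficients are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import torch

def block_sparsity_penalty(delta_w, block_size=32):
    """Group-lasso-style penalty on the weight update (W_new - W_prev):
    encourages whole blocks of the update to be exactly zero, so most of
    the previously learned parameters are reused unchanged.
    Assumes delta_w.numel() is divisible by block_size (illustrative)."""
    blocks = delta_w.reshape(-1, block_size)
    return blocks.norm(dim=1).sum()

def orthogonality_penalty(w_prev, w_new):
    """Penalizes overlap between the old and new weight matrices, pushing
    the new task's updates into directions not used by the old task."""
    return (w_prev @ w_new.t()).pow(2).sum()

def cas_regularizer(w_prev, w_new, lam_sparse=1e-3, lam_ortho=1e-3):
    """Combined regularizer added to the new task's training loss
    (coefficients are placeholder values)."""
    delta = w_new - w_prev
    return (lam_sparse * block_sparsity_penalty(delta)
            + lam_ortho * orthogonality_penalty(w_prev, w_new))

def joint_controller_reward(task_rewards):
    """Multi-task architecture search: a single scalar reward (here a
    simple mean of per-task validation rewards) fed back to the ENAS
    controller so each sampled cell is scored on all tasks jointly."""
    return sum(task_rewards) / len(task_rewards)

# Toy usage with random weights and dummy per-task rewards.
w_prev = torch.randn(64, 64)
w_new = w_prev + 0.01 * torch.randn(64, 64)
print(cas_regularizer(w_prev, w_new))
print(joint_controller_reward([0.71, 0.65]))
```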

