Building learning systems that adapt flexibly to different tasks is critical and challenging. In this article, we propose a novel and general meta-learning framework, called meta-modulation (MeMo), to improve the adaptation capability of a base learner across tasks in which only a few training examples are available per task. For each task, MeMo operates like a "feedback regulation system", adaptively modulating the so-called definitive embeddings of query data to maximize the corresponding task objective. Specifically, we devise an efficient form of feedback information, definitive embedding feedback (DEF), to quantify the mismatch between the few training examples and the base learner, together with a promising adjustment direction for reducing this mismatch. A modulation encoder encodes the DEFs into high-level representations that are temporarily stored as task-specific modulator templates. For incoming query data, an attention mechanism acts on these modulator templates and combines task-level and data-level modulation to generate the final data-specific meta-modulator. This meta-modulator is then used to modulate the query's embedding for correct decision-making. Our framework scales to various base learner models, such as the multi-layer perceptron (MLP), long short-term memory (LSTM) network, convolutional neural network (CNN), and transformer, and applies to different learning problems such as language modeling and image recognition. Experimental results on a 2-D point synthetic dataset and various benchmarks in the language and vision domains demonstrate the effectiveness and competitiveness of our framework.
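To make the attention step concrete, below is a minimal sketch of how a data-specific meta-modulator could attend over task-specific modulator templates and modulate a query embedding. All names (MetaModulator, query_proj, combine), the mean-pooled task-level signal, and the scale-and-shift modulation form are illustrative assumptions rather than the authors' exact formulation.

```python
import torch
import torch.nn as nn


class MetaModulator(nn.Module):
    """Sketch: attend over modulator templates, fuse task/data-level signals,
    and modulate the query's definitive embedding (assumed formulation)."""

    def __init__(self, embed_dim: int):
        super().__init__()
        self.query_proj = nn.Linear(embed_dim, embed_dim)       # attention query from query embedding
        self.key_proj = nn.Linear(embed_dim, embed_dim)         # attention keys from templates
        self.combine = nn.Linear(2 * embed_dim, 2 * embed_dim)  # fuse task- and data-level modulation

    def forward(self, query_embed: torch.Tensor, templates: torch.Tensor) -> torch.Tensor:
        # query_embed: (B, D) definitive embeddings of query data
        # templates:   (T, D) task-specific modulator templates built from DEFs
        q = self.query_proj(query_embed)                                  # (B, D)
        k = self.key_proj(templates)                                      # (T, D)
        attn = torch.softmax(q @ k.t() / k.size(-1) ** 0.5, dim=-1)       # (B, T) attention over templates
        data_level = attn @ templates                                     # (B, D) data-specific modulation
        task_level = templates.mean(dim=0, keepdim=True).expand_as(data_level)  # (B, D) task-level modulation
        gamma, beta = self.combine(torch.cat([task_level, data_level], dim=-1)).chunk(2, dim=-1)
        return query_embed * (1 + gamma) + beta                           # modulated query embedding
```

Here the task-level signal is simply a mean over templates and the modulation is a feature-wise scale and shift; both are assumed design choices used only to illustrate how the combined meta-modulator could act on query embeddings.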