Abstract

There has been recent interest in meta-learning systems, i.e., networks that are trained to learn across multiple tasks. This paper focuses on the optimization and generalization of a meta-learning system based on recurrent networks. The optimization study investigates the influence of different structures and parameters on performance. We demonstrate the generalization (robustness) of our meta-learning system by learning across multiple tasks, including tasks unseen during the meta-training phase. We introduce a meta-cost function, the Mean Squared Fair Error, that enhances the performance of the system by not penalizing it during transitions to learning a new task. Evaluation results are presented for Boolean and quadratic function datasets. The best performance is obtained using a Long Short-Term Memory (LSTM) topology without a forget gate and with a clipped memory cell. The results demonstrate i) the impact of different LSTM architectures, parameters, and error functions on the meta-learning process; ii) that the Mean Squared Fair Error function improves learning performance; and iii) the robustness of our meta-learning framework, which generalizes well when tested on tasks unseen during meta-training. A comparison between the No-Forget-Gate LSTM and the Gated Recurrent Unit also suggests that the absence of a memory cell tends to degrade performance.
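The following is a minimal sketch, not the authors' code, of the two ideas highlighted above under assumed definitions: a no-forget-gate LSTM cell whose memory is clipped, and a "fair" mean squared error that skips time steps immediately after a task switch so the learner is not penalized while adapting. The function names, weight layout, and the `grace`-step masking rule are illustrative assumptions; the exact Mean Squared Fair Error used in the paper may differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nfg_lstm_step(x, h_prev, c_prev, W, clip=1.0):
    """One step of a no-forget-gate LSTM: the cell state accumulates without
    decay and is clipped to a fixed range.

    W is a dict of weights/biases for the input gate (i), candidate (g), and
    output gate (o), each shaped (hidden, input + hidden) / (hidden,).
    """
    z = np.concatenate([x, h_prev])
    i = sigmoid(W["Wi"] @ z + W["bi"])          # input gate
    g = np.tanh(W["Wg"] @ z + W["bg"])          # candidate memory
    c = np.clip(c_prev + i * g, -clip, clip)    # no forget gate; clip the cell
    o = sigmoid(W["Wo"] @ z + W["bo"])          # output gate
    h = o * np.tanh(c)
    return h, c

def mean_squared_fair_error(y_pred, y_true, task_switch, grace=1):
    """Assumed form of the meta-cost: ordinary MSE, but the `grace` steps
    following each task switch are masked out so transitions are not penalized."""
    T = len(y_true)
    mask = np.ones(T, dtype=bool)
    for t in np.flatnonzero(task_switch):       # task_switch: boolean array, True at switches
        mask[t:t + grace] = False
    return np.mean((y_pred[mask] - y_true[mask]) ** 2)
```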
