Abstract

Code summarization aims to automatically generate natural-language summaries of source code, and has attracted considerable research interest. Recent approaches commonly adopt neural machine translation techniques, training a Seq2Seq model on a large corpus under the assumption that it will generalize to a wide range of new code snippets. In practice, however, code varies widely across domains, businesses, and programming styles, so it is challenging to capture such a variety of patterns in a single model. In this paper, we propose a novel framework for code summarization based on meta-learning and code retrieval, named MLCS, to tackle this issue. In this framework, summarizing each target code snippet is formalized as a few-shot learning task whose training data are examples similar to the target and whose test example is the target itself. We retrieve examples similar to the target code in a rank-and-filter manner. Given a neural code summarizer, we optimize it into a meta-learner via Model-Agnostic Meta-Learning (MAML). During inference, the meta-learner first adapts to the retrieved examples, yielding a model exclusive to the target code, and then generates its summary. Extensive experiments on real-world datasets show that: (1) with MLCS, a standard Seq2Seq model outperforms previous state-of-the-art approaches, including both neural models and retrieval-based neural models; (2) MLCS can be applied to existing neural code summarizers without modifying their architectures, and significantly improves their performance, with relative gains of up to 112.7% on BLEU-4, 23.2% on ROUGE-L, and 31.5% on METEOR; and (3) compared with existing retrieval-based neural approaches, MLCS better leverages multiple similar examples and generalizes better across different retrievers, unseen retrieval corpora, and low-frequency words.
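The retrieve-adapt-predict procedure described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names (`retrieve`, `adapt_and_predict`) are hypothetical, token-overlap (Jaccard) similarity stands in for whatever retriever MLCS actually uses, and a tiny linear least-squares model stands in for the neural summarizer so that the MAML-style inner-loop adaptation is easy to see.

```python
import numpy as np

def retrieve(target_tokens, corpus, top_k=2, min_sim=0.1):
    """Rank-and-filter retrieval (illustrative): rank corpus snippets by
    token-overlap (Jaccard) similarity to the target, then keep the top-k
    snippets that pass a similarity threshold."""
    t = set(target_tokens)
    scored = [(len(t & set(c)) / len(t | set(c)), c) for c in corpus]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [c for sim, c in scored[:top_k] if sim >= min_sim]

def adapt_and_predict(theta, support_x, support_y, query_x,
                      inner_lr=0.1, inner_steps=50):
    """MAML-style inference: adapt meta-learned parameters theta on the
    retrieved similar examples (the support set) with a few gradient
    steps, then predict for the target code (the query). A linear model
    y = x @ w with mean-squared-error loss stands in for the summarizer."""
    w = theta.copy()
    for _ in range(inner_steps):
        # gradient of the mean squared error of the linear model
        grad = 2.0 * support_x.T @ (support_x @ w - support_y) / len(support_y)
        w = w - inner_lr * grad  # inner-loop adaptation step
    return query_x @ w           # output of the code-specific adapted model
```

The key point this sketch captures is that the meta-learned initialization `theta` is shared across all targets, while the inner loop produces a throwaway model specialized to one target's retrieved examples.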
