Abstract

Adapting deep learning (DL) techniques to automate nontrivial coding activities, such as code documentation and defect detection, has been studied intensively in recent years. Learning to predict code changes is one of the most popular and essential of these investigations. Prior studies have shown that DL techniques, such as neural machine translation (NMT), can be used to learn meaningful code changes, including bug fixes and code refactorings. However, NMT models may encounter a bottleneck when modeling long sequences and are therefore limited in how accurately they can predict code changes. In this article, we design a Transformer-based approach, given that the Transformer has proven effective at capturing long-term dependencies. Specifically, we propose a novel model named DTrans. To better incorporate the local structure of code, i.e., statement-level information in this article, DTrans is designed with dynamically relative position encoding in the multihead attention of the Transformer. Experiments on benchmark datasets demonstrate that DTrans generates patches more accurately than state-of-the-art methods, increasing performance by at least 5.45–46.57% in terms of the exact match metric on different datasets. Moreover, DTrans locates the lines to change with 1.75–24.21% higher accuracy than existing methods.
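The abstract does not give implementation details of the dynamically relative position encoding. For orientation, the following is a minimal sketch of a learned relative position bias inside self-attention, in the style of Shaw et al. (2018), which this family of encodings builds on. The class name, single-head simplification, and the `max_rel_dist` parameter are illustrative assumptions, not DTrans's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelativeSelfAttention(nn.Module):
    """Single-head self-attention with learned relative position
    embeddings (Shaw et al., 2018 style). A generic illustration of
    relative position encoding, not the exact DTrans mechanism."""

    def __init__(self, d_model: int, max_rel_dist: int = 16):
        super().__init__()
        self.d_model = d_model
        self.max_rel_dist = max_rel_dist
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # One learned embedding per clipped relative distance in
        # [-max_rel_dist, +max_rel_dist].
        self.rel_emb = nn.Embedding(2 * max_rel_dist + 1, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        batch, seq_len, _ = x.shape
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)

        # Relative distance between every pair of positions,
        # clipped to the supported range.
        pos = torch.arange(seq_len, device=x.device)
        rel = pos[None, :] - pos[:, None]                     # (seq, seq)
        rel = rel.clamp(-self.max_rel_dist, self.max_rel_dist)
        rel_k = self.rel_emb(rel + self.max_rel_dist)         # (seq, seq, d)

        # Content-based scores plus relative-position scores.
        scores = q @ k.transpose(-2, -1)                      # (b, seq, seq)
        scores = scores + torch.einsum("bid,ijd->bij", q, rel_k)
        scores = scores / self.d_model ** 0.5

        attn = F.softmax(scores, dim=-1)
        return attn @ v

if __name__ == "__main__":
    x = torch.randn(2, 10, 64)        # (batch, seq_len, d_model)
    layer = RelativeSelfAttention(d_model=64)
    print(layer(x).shape)             # torch.Size([2, 10, 64])
```

The key design point is that attention scores depend on the distance between tokens rather than only their absolute positions; DTrans's statement-level variant presumably conditions this bias on code structure, per the abstract.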
