ABSTRACT

Patches can fix security vulnerabilities and optimize software performance, thereby enhancing the quality and security of software. Unfortunately, patches generated by automated program repair (APR) tools are not always correct: they may introduce new bugs or fail to fully fix the original issue. Various methods for evaluating patch correctness have been proposed, but most struggle to capture long-distance dependencies, which degrades their predictive performance. To address this challenge, this paper presents Qamhaen, a method for evaluating the correctness of APR-generated patches. Specifically, the text embedding component addresses long-distance dependencies across functions by taking bug reports and patch descriptions as inputs instead of code snippets; BERT is employed for pretraining to capture these dependencies, followed by an additional multihead self-attention mechanism for further feature extraction. The similarity evaluator component then computes a similarity score that assesses how well a patch description resolves the issue outlined in the bug report. Comprehensive experiments on a dataset of 9135 patches, evaluated with standard patch correctness metrics, demonstrate that Qamhaen outperforms the baseline methods in overall performance across AUC, F1, +Recall, -Recall, and Precision. For example, Qamhaen achieves an F1 of 0.691, an improvement of 24.2%, 22.1%, and 6.3% over the three baseline methods, respectively.
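As a rough illustration of the pipeline the abstract describes, the sketch below pairs a pretrained BERT encoder with one additional multihead self-attention layer and scores a (bug report, patch description) pair. The mean pooling, layer sizes, cosine-similarity scoring, and all names here are illustrative assumptions, since the abstract does not specify these details; this is a minimal sketch, not the authors' implementation.

import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class PatchCorrectnessScorer(nn.Module):
    # Hypothetical model name following the abstract's description:
    # BERT embeddings + an extra multihead self-attention layer,
    # scored by similarity between bug-report and patch-description vectors.
    def __init__(self, model_name="bert-base-uncased", num_heads=8):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)  # pretrained text encoder
        hidden = self.bert.config.hidden_size
        # Additional multihead self-attention for further feature extraction.
        self.attn = nn.MultiheadAttention(hidden, num_heads, batch_first=True)

    def encode(self, enc):
        # Token-level BERT features refined by one more self-attention pass.
        h = self.bert(**enc).last_hidden_state               # (B, T, H)
        h, _ = self.attn(h, h, h,
                         key_padding_mask=enc["attention_mask"] == 0)
        mask = enc["attention_mask"].unsqueeze(-1).float()   # mean-pool real tokens
        return (h * mask).sum(1) / mask.sum(1)

    def forward(self, bug_enc, patch_enc):
        # Similarity between the bug-report and patch-description embeddings;
        # cosine similarity is an assumed choice of similarity function.
        return torch.cosine_similarity(self.encode(bug_enc),
                                       self.encode(patch_enc))

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = PatchCorrectnessScorer()
bug = tokenizer(["NullPointerException when the config file is missing"],
                return_tensors="pt", padding=True, truncation=True)
patch = tokenizer(["Add a null check before reading the config file"],
                  return_tensors="pt", padding=True, truncation=True)
score = model(bug, patch)  # higher score -> patch more likely resolves the report

In this reading, the model never sees code snippets at all: both inputs are natural-language texts, which is how the described approach avoids long-distance dependencies across functions.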