Abstract

In the rapidly evolving landscape of software development, the detection of vulnerabilities in source code has become of paramount importance. Our study introduces a novel knowledge distillation (KD) technique aimed at enhancing vulnerability detection in software codebases. Using benchmark datasets such as SARD, SeVC, Devign, and D2A, we assess the effectiveness of the KD method when applied to different classifiers, specifically GPT-2, CodeBERT, and LSTM. The empirical results reveal a marked improvement in the performance of these classifiers upon the implementation of the KD technique, with the GPT-2 model demonstrating the most promising outcomes. This work underscores the potential of integrating transformer-based learning models, like GPT-2, with knowledge distillation for more efficient and accurate vulnerability detection.
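To make the knowledge distillation idea concrete, the sketch below shows a standard KD training objective for a binary vulnerable/non-vulnerable classifier: the student matches the teacher's temperature-softened output distribution (KL divergence) while also fitting the ground-truth labels (cross-entropy). This is a minimal illustration of generic KD, not the paper's actual implementation; the function name, `temperature`, and `alpha` weighting are assumptions for the example.

```python
# Minimal knowledge-distillation loss sketch (illustrative only; names and
# hyperparameters are assumptions, not the paper's reported setup).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend soft-target KL divergence with hard-label cross-entropy."""
    # Soften both distributions with the temperature and match them via KL divergence.
    soft_teacher = F.log_softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd_term = F.kl_div(soft_student, soft_teacher, log_target=True,
                       reduction="batchmean") * (temperature ** 2)
    # Standard supervised loss on the vulnerable (1) / benign (0) labels.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1 - alpha) * ce_term

# Toy usage: a batch of 4 code snippets with binary labels.
student_logits = torch.randn(4, 2, requires_grad=True)
teacher_logits = torch.randn(4, 2)
labels = torch.tensor([0, 1, 1, 0])
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```

In practice the teacher logits would come from a larger frozen model and the student would be one of the evaluated classifiers (e.g., GPT-2, CodeBERT, or an LSTM), with the temperature and mixing weight tuned on a validation split.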
