Abstract
Recent research has found that fine-tuning pre-trained models is superior to training models from scratch for just-in-time (JIT) defect prediction. However, existing approaches that use pre-trained models have limitations. First, the input length is constrained by the pre-trained models. Second, the inputs are change-agnostic. To address these limitations, we propose JIT-Block, a JIT defect prediction method that combines multiple input semantics using the changed block as the fundamental unit. We restructured the JIT-Defects4J dataset used in previous research and then conducted a comprehensive comparison against six state-of-the-art baseline models using eleven performance metrics, covering both effort-aware and effort-agnostic measures. The results demonstrate that on the JIT defect prediction task, our approach outperforms the baseline models on all six metrics, with improvements ranging from 1.5% to 800% on effort-agnostic metrics and 0.3% to 57% on effort-aware metrics. On the JIT defect code line localization task, our approach outperforms the baseline models on three out of five metrics, with improvements of 11% to 140%.
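The abstract does not define how changed blocks are extracted from a commit. As a purely illustrative sketch (not the authors' implementation), one plausible reading is to treat each contiguous run of added or removed lines in a unified diff as a changed block; the function name below is hypothetical:

```python
# Minimal sketch, assuming a "changed block" is a contiguous run of
# added/removed lines within a unified diff. Hypothetical illustration,
# not the JIT-Block paper's actual extraction procedure.
from typing import List

def extract_changed_blocks(diff_lines: List[str]) -> List[List[str]]:
    """Group consecutive +/- lines of a unified diff into changed blocks."""
    blocks: List[List[str]] = []
    current: List[str] = []
    for line in diff_lines:
        # Skip the file-header lines ("+++ b/...", "--- a/...").
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---")):
            current.append(line)      # still inside a changed block
        elif current:
            blocks.append(current)    # a context line closes the block
            current = []
    if current:
        blocks.append(current)
    return blocks

if __name__ == "__main__":
    diff = [
        "--- a/Foo.java",
        "+++ b/Foo.java",
        "@@ -1,4 +1,4 @@",
        " public int div(int a, int b) {",
        "-    return a / b;",
        "+    return b == 0 ? 0 : a / b;",
        " }",
    ]
    for block in extract_changed_blocks(diff):
        print(block)
```

Under this reading, each block (rather than the whole, possibly truncated change) would be scored for defect-proneness, which is consistent with the abstract's claim of addressing the input-length constraint.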