Abstract

This paper examines the transformation of software development from monolithic frameworks to microservices-based architectures, focusing on the challenge of building a unified defect prediction model that spans multiple programming languages in settings where code modifications are continuously integrated into a single codebase. It proposes a hybrid machine learning approach that improves defect prediction accuracy by integrating different data sources and algorithms, with the goal of a language- and project-independent model. The hybrid model combines Bidirectional LSTM (BiLSTM) networks with Attention mechanisms, static code metrics, and BERT-based language models: the BiLSTM-Attention component captures sequential dependencies within Abstract Syntax Trees (ASTs), static code metrics provide insights into software complexity, and BERT interprets textual context, together yielding a holistic understanding of code snippets. The research methodology is quantitative, beginning with a literature review to establish the theoretical foundation, followed by an empirical study encompassing data gathering, feature engineering and pre-processing, model building, training and evaluation, and validation and analysis of the results. The research's insights aim to improve defect prediction techniques, contributing to software engineering's pursuit of better quality and reliability.
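The fusion described above can be sketched in miniature: attention-pooled BiLSTM states over an AST token sequence are concatenated with static metrics and a BERT-style embedding, then fed to a classifier. This is an illustrative sketch only, not the paper's implementation; all shapes, random features, and weights are placeholder assumptions (random vectors stand in for trained BiLSTM outputs and BERT embeddings).

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder inputs (assumed shapes, not from the paper):
T, h = 12, 16
bilstm_states = rng.normal(size=(T, 2 * h))  # stand-in for BiLSTM hidden states over AST tokens
static_metrics = rng.normal(size=8)          # stand-in for complexity/size metrics
bert_embedding = rng.normal(size=32)         # stand-in for a BERT [CLS]-style vector

# Additive attention pooling over the BiLSTM states
w = rng.normal(size=2 * h)                   # untrained attention weight vector (placeholder)
scores = bilstm_states @ w                   # one score per time step, shape (T,)
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                         # softmax attention weights, sum to 1
ast_vector = alpha @ bilstm_states           # weighted summary of the sequence, shape (2h,)

# Late fusion: concatenate the three views, then a logistic output
features = np.concatenate([ast_vector, static_metrics, bert_embedding])
theta = rng.normal(size=features.shape[0])   # untrained classifier weights (placeholder)
p_defect = 1.0 / (1.0 + np.exp(-(features @ theta)))
print(f"predicted defect probability: {p_defect:.3f}")
```

In a trained system the attention and classifier weights would be learned jointly; the sketch only shows how the three feature sources combine into a single prediction.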
