Abstract

This work consolidates and extends previous investigations into recognizing defects in infrastructure-as-code (IaC) scripts using general software development quality metrics, with a focus on defect severity. It adds an exploratory look at dataset creation that may boost the predictive power of the resulting models, a notion we call a fluid dataset. More specifically, we experiment with 50 different metrics and a multiple-dataset creation process, whereby different versions of the same datasets are equipped with auto-training facilities for model retraining and redeployment in a DataOps fashion. Focusing on the Ansible infrastructure code language, a de facto standard for industrial-strength infrastructure code, we build defect prediction models and improve on the state of the art, achieving an F1 score of 0.52 and a recall of 0.57 with a Naive Bayes classifier. On the one hand, by improving state-of-the-art defect prediction models using metrics that generalize across different IaC languages, we provide promising leads for the future of infrastructure as code. On the other hand, we have only scratched the surface of the novel approach of fluid-dataset creation and automated retraining of Machine Learning (ML) defect prediction models, warranting further research in this direction.

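To make the modeling step concrete, the sketch below shows how a defect prediction model of this kind might be trained and evaluated on a table of per-script quality metrics. It is a minimal illustration under stated assumptions, not the paper's pipeline: the input file name, the column names, and the choice of scikit-learn's GaussianNB are all assumptions introduced here for illustration.

```python
# Minimal sketch: train and evaluate a Naive Bayes defect predictor
# on per-script IaC quality metrics. The file name and column names
# are hypothetical; the paper's actual features and pipeline may differ.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import f1_score, recall_score

# Each row describes one Ansible script: quality metrics plus a defect label.
data = pd.read_csv("ansible_metrics.csv")      # assumed input file
X = data.drop(columns=["defective"])           # metric features
y = data["defective"]                          # 1 = defect-prone, 0 = clean

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = GaussianNB()
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print("F1:    ", round(f1_score(y_test, predictions), 2))
print("Recall:", round(recall_score(y_test, predictions), 2))
```

In a fluid-dataset setting, a script like this would be re-run automatically whenever a new version of the dataset is produced, with the retrained model redeployed in a DataOps fashion.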