Abstract

Security remains under-addressed in many organisations, as illustrated by the number of large-scale software security breaches. Preventing breaches can begin during software development if attention is paid to security during the software's design and implementation. One approach to security assurance during software development is to examine communications between developers as a means of studying the security concerns of the project. Prior research has investigated models for classifying project communication messages (e.g., issues or commits) as security-related or not. A known problem is that these models are project-specific, limiting their use by other projects or organisations. We investigate whether we can build a generic classification model that generalises across projects. We define a set of security keywords by extracting them from relevant security sources and dividing them into four categories: asset, attack/threat, control/mitigation, and implicit. Using different combinations of these categories and including them in the training dataset, we built a classification model and evaluated it on industrial, open-source, and research-based datasets containing over 45 different products. Our model, which uses the harvested security keywords as its feature set, achieves an average recall of 55 to 86%, a minimum recall of 43 to 71%, a maximum recall of 60 to 100%, an average f-score between 3.4 and 88%, an average g-measure of at least 66% across all datasets, and an average AUC of ROC from 69 to 89%. In addition, models that use externally sourced features outperform models that use project-specific features on average by a margin of 26–44% in recall, 22–50% in g-measure, 0.4–28% in f-score, and 15–19% in AUC of ROC. Further, our results outperform a state-of-the-art prediction model for security bug reports in all cases. Using sound statistical and effect-size tests, we find that (1) using harvested security keywords as features to train a text classification model significantly improves classification models and their generalisation to other projects; (2) including these features in the training dataset before model construction significantly improves classification models; and (3) different security categories are predictors for different projects. Finally, we introduce new and promising approaches for constructing models that generalise across different independent projects.
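To make the described approach concrete, here is a minimal, hypothetical sketch (not the authors' implementation): security keywords harvested from external sources are grouped into the four categories and used as features to train a classifier on one project's messages, which is then evaluated on another project's messages (transfer project prediction). The keyword lists, the logistic-regression classifier, and the g-measure definition used below (harmonic mean of recall and true negative rate) are all illustrative assumptions.

```python
# Sketch only: externally harvested security keywords, grouped by category,
# as features for classifying project messages as security-related or not.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score, f1_score, roc_auc_score
import numpy as np

# Hypothetical keyword categories harvested from external security sources.
KEYWORD_CATEGORIES = {
    "asset":              {"password", "credential", "token", "database"},
    "attack_threat":      {"exploit", "injection", "xss", "overflow"},
    "control_mitigation": {"encrypt", "sanitize", "authenticate", "validate"},
    "implicit":           {"leak", "expose", "bypass", "crash"},
}

def keyword_features(message: str) -> list:
    """Count keyword hits per category in a message (simple bag-of-keywords)."""
    tokens = message.lower().split()
    return [sum(tok in keywords for tok in tokens)
            for keywords in KEYWORD_CATEGORIES.values()]

def train_and_evaluate(train_msgs, train_labels, test_msgs, test_labels):
    """Train on one project's messages, evaluate on another project's messages."""
    X_train = np.array([keyword_features(m) for m in train_msgs])
    X_test = np.array([keyword_features(m) for m in test_msgs])

    clf = LogisticRegression(class_weight="balanced").fit(X_train, train_labels)
    pred = clf.predict(X_test)
    prob = clf.predict_proba(X_test)[:, 1]

    recall = recall_score(test_labels, pred)
    tnr = recall_score(test_labels, pred, pos_label=0)  # true negative rate
    return {
        "recall": recall,
        "f_score": f1_score(test_labels, pred),
        # one common g-measure definition: harmonic mean of recall and TNR
        "g_measure": 2 * recall * tnr / (recall + tnr) if (recall + tnr) else 0.0,
        "auc_roc": roc_auc_score(test_labels, prob),
    }
```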

Highlights

  • Security breaches have become regular occurrences, with devastating consequences and costs to organisations and society (Ponemon Institute & IBM Security, 2017)

  • We have reported the best results and the summary statistics for transfer project prediction (TPP)

  • We have investigated an approach for training a text classification model to identify security messages in heterogeneous software project repositories such as issue tracking and version control systems


Summary

Introduction

Security breaches have become regular occurrences, with devastating consequences and costs to organisations and society (Ponemon Institute & IBM Security, 2017). Researchers (Cois & Kazman, 2015; Cleland-Huang et al., 2006; Hindle et al., 2013; Ray et al., 2016) have investigated security concerns in software management repositories (e.g., issue trackers and version control systems), seeking relevant quantitative measures that could be derived from security analysis of those repositories. Such measures could assist project managers and development teams in making informed decisions regarding the security posture of a project by providing answers to questions such as: How many security-related changes have been made in the system? How many security-related bugs are left unresolved? What is the average window-of-exposure (in days) for security-related issues in a project? However, these studies are project-specific, and we do not know how their results generalise beyond the environments studied.
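As an illustration of the kind of quantitative measures referred to above, the following hypothetical sketch computes the number of security-related issues, the number left unresolved, and the average window-of-exposure in days from a simplified issue record. The field names and data model are assumptions for illustration, not those of any particular issue tracker or of the study itself.

```python
# Sketch only: deriving simple security measures from classified issue records.
from dataclasses import dataclass
from datetime import datetime
from statistics import mean
from typing import Optional

@dataclass
class Issue:
    title: str
    security_related: bool        # e.g., output of a classification model
    opened: datetime
    closed: Optional[datetime]    # None if still unresolved

def security_measures(issues: list) -> dict:
    """Summarise security-related issues and their window-of-exposure."""
    sec = [i for i in issues if i.security_related]
    resolved = [i for i in sec if i.closed is not None]
    return {
        "security_issues": len(sec),
        "unresolved_security_issues": len(sec) - len(resolved),
        # average window-of-exposure in days for resolved security issues
        "avg_window_of_exposure_days": mean(
            (i.closed - i.opened).days for i in resolved
        ) if resolved else None,
    }
```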

