Abstract

The increasingly intensive use of software systems in diverse sectors, especially in business, government, healthcare, and critical infrastructures, makes it essential to deliver secure code. In this work, we present two sets of experiments aimed at helping developers improve software security from the early development stages. The first experiment focuses on using software metrics to build prediction models that distinguish vulnerable from non-vulnerable code. The second experiment studies the hypothesis of building a consensus-based decision-making approach on top of several machine learning prediction models, trained on software metrics data, to categorize code units with respect to their security. These categories suggest a priority ranking of software code units based on the potential existence of security vulnerabilities. Results show that software metrics do not constitute sufficient evidence of security issues and cannot effectively be used to build a prediction model that distinguishes vulnerable from non-vulnerable code. However, a consensus-based decision-making approach makes it possible to classify code units from a security perspective, allowing developers to decide, considering the criticality of the system under development and the available resources, which parts of the software should be the focal point for the detection and removal of security vulnerabilities.
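To make the consensus idea concrete, the following is a minimal sketch of consensus-based ranking over several classifiers. It assumes software-metric features (e.g., cyclomatic complexity, lines of code, fan-out) and binary vulnerability labels; the metric names, the synthetic data, and the particular classifiers are illustrative assumptions, not the paper's exact experimental setup.

```python
# Minimal sketch: consensus of several metric-based classifiers used to
# rank code units for security review. All data here is synthetic and the
# classifier choices are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic software-metric data: one row per code unit, columns standing
# in for hypothetical metrics such as [complexity, LOC, fan-out], scaled.
X_train = rng.random((200, 3))
y_train = (X_train.sum(axis=1) > 1.6).astype(int)  # placeholder labels
X_units = rng.random((10, 3))  # code units to categorize

# Several independently trained prediction models.
models = [
    DecisionTreeClassifier(random_state=0).fit(X_train, y_train),
    RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train),
    LogisticRegression().fit(X_train, y_train),
]

# Consensus: count how many models flag each unit as potentially vulnerable.
votes = np.sum([m.predict(X_units) for m in models], axis=0)

# Rank units by agreement; stronger consensus suggests higher review priority.
for rank, idx in enumerate(np.argsort(-votes), start=1):
    print(f"rank {rank}: unit {idx} flagged by {votes[idx]}/{len(models)} models")
```

Under this reading, the consensus count serves as the ranking signal: developers could start vulnerability detection and removal with the units most models agree on, working down the list as resources permit.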
