Abstract

Bug prioritization determines the order in which bugs are fixed. Incorrect prioritization delays the resolution of important bugs, which in turn delays the release of the software. Predicting bug priority requires historical data on which classifiers can be trained; however, such data is not always available in practice. In this situation, building classifiers on data from other projects (cross-project prediction) is a viable solution. In the available literature, we found very few papers on bug priority prediction, and none of them addressed the cross-project context. In this paper, we evaluate the performance of several machine learning techniques, namely Support Vector Machine (SVM), Naive Bayes (NB), K-Nearest Neighbors (KNN), and Neural Network (NN), in predicting the priority of newly arriving bug reports in both intra-project and cross-project contexts. To evaluate these techniques, we consider three scenarios: (i) 10-fold cross-validation within a single project (intra-project), (ii) cross-project validation, where a classifier trained on one project is tested on another, and (iii) cross-project validation with different combinations of projects as training data. We performed experiments for each scenario on five datasets. The results show that the accuracy of all machine learning techniques except NB is above 70 %, 72 %, and 73 % in the respective scenarios. The experimental results also show that combining datasets from different projects for training does not significantly improve the performance measures.
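The three evaluation scenarios can be illustrated with a minimal sketch, assuming a scikit-learn setup in which each bug report is represented by its summary text and its recorded priority label; the column names "summary" and "priority" and the TF-IDF bag-of-words features are illustrative assumptions, not the paper's actual preprocessing.

    # Minimal sketch of the intra-project (10-fold CV), cross-project,
    # and combined-training evaluation setups described above.
    # Assumed data layout: a pandas DataFrame per project with
    # "summary" (report text) and "priority" (class label) columns.
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.neural_network import MLPClassifier

    def make_classifiers():
        # One pipeline per technique compared in the paper (SVM, NB, KNN, NN).
        return {
            "SVM": make_pipeline(TfidfVectorizer(), LinearSVC()),
            "NB":  make_pipeline(TfidfVectorizer(), MultinomialNB()),
            "KNN": make_pipeline(TfidfVectorizer(), KNeighborsClassifier()),
            "NN":  make_pipeline(TfidfVectorizer(), MLPClassifier(max_iter=300)),
        }

    def intra_project(df):
        # Scenario (i): 10-fold cross-validation within one project.
        for name, clf in make_classifiers().items():
            scores = cross_val_score(clf, df["summary"], df["priority"], cv=10)
            print(f"{name}: mean accuracy = {scores.mean():.3f}")

    def cross_project(train_df, test_df):
        # Scenario (ii): train on one project, test on another.
        for name, clf in make_classifiers().items():
            clf.fit(train_df["summary"], train_df["priority"])
            acc = clf.score(test_df["summary"], test_df["priority"])
            print(f"{name}: cross-project accuracy = {acc:.3f}")

    def combined_cross_project(train_dfs, test_df):
        # Scenario (iii): pool several projects' reports as training data.
        pooled = pd.concat(train_dfs, ignore_index=True)
        cross_project(pooled, test_df)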
