Abstract

Explainable Artificial Intelligence (XAI) aims to improve users' trust in black-box models by explaining their predictions. However, XAI techniques have produced unreasonable explanations for software defect prediction because the expected outputs (e.g., the causes of bugs) were not captured by the features used to build the models. To set aside the limitations of feature engineering and evaluate whether XAI can adapt to developers, we apply XAI to code smell prioritization (i.e., predicting the criticality of sub-optimal coding practices and design choices), whose features can capture developers' major expectations. We assess the gap between XAI explanations and developers' expectations in terms of (1) the accuracy of predictions, (2) the coverage of explanations over expectations, and (3) the complexity of explanations. We also narrow the gap by preserving, during feature selection, as many features related to developers' expectations as possible. We find that XAI can explain smells with simpler causes using the top 3 to 5 features. Complex smells can be explained with around 10 features, which require more expertise to interpret. Selecting features that adapt to developers' expectations improves coverage by 5% to 29%, with almost no negative impact on accuracy or complexity. The results also highlight the need to divide coarse-grained prediction targets and to develop fine-grained feature engineering.
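
The abstract does not give the exact definition of the coverage measure; a minimal sketch of one plausible formulation, coverage@k (the fraction of developer-expectation features that appear among the top-k features of an XAI explanation, e.g., a LIME/SHAP ranking), is shown below. The metric names and expectation sets are hypothetical illustrations, not values from the paper.

```python
def coverage_at_k(ranked_features, expected_features, k):
    """Fraction of developer-expectation features appearing in the top-k
    features of an XAI explanation ranking (assumed definition)."""
    top_k = set(ranked_features[:k])
    expected = set(expected_features)
    if not expected:
        return 0.0
    return len(top_k & expected) / len(expected)


# Hypothetical explanation ranking for one smelly class, ordered by importance
# (e.g., as produced by LIME or SHAP over code metrics).
explanation = ["WMC", "LCOM5", "NOM", "LOC", "CBO", "DIT", "NOA"]

# Features developers reportedly consider when judging this smell's criticality (assumed).
expectations = ["WMC", "LCOM5", "CBO"]

for k in (3, 5, 10):
    print(f"coverage@{k} = {coverage_at_k(explanation, expectations, k):.2f}")
```

Under this assumed definition, an explanation that surfaces all expectation-related metrics within its top 3 to 5 features would reach full coverage, matching the kind of comparison across explanation sizes described in the abstract.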
