Abstract

Improving the predictive performance of machine learning models is a central goal in many tasks and domains. The predictive performance of a learning algorithm is directly affected by the input features it receives. Feature augmentation aims to enhance model quality by adding informative features to the original data. Explainable AI methods are typically used to explain the results of machine learning models; recently, they have also been used to improve models' predictive performance. In this study, we examine the benefit of incorporating the explanations produced by an explainable AI method as augmented features. In particular, we propose SFA (Shapley-based Feature Augmentation), a two-stage ensemble learning method that uses out-of-fold predictions and their corresponding Shapley values as augmented features for each instance. Shapley values, which are obtained without domain expertise, reflect the importance of the original features to each prediction and account for their interactions with all other features. Experimental results demonstrate the superiority of our proposed method, SFA, over several feature augmentation methods on multiple public datasets with various characteristics.
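The two-stage idea described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes scikit-learn for the base model and out-of-fold predictions, and uses a simple Monte-Carlo Shapley estimator with a mean-feature baseline in place of a dedicated library such as shap. All function and variable names are hypothetical.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=100, n_features=5, random_state=0)

# Stage 1: out-of-fold probability of the positive class for each instance,
# so the augmented feature is not contaminated by in-fold fitting.
base = LogisticRegression(max_iter=1000)
oof = cross_val_predict(base, X, y, cv=5, method="predict_proba")[:, 1]

base.fit(X, y)
f = lambda M: base.predict_proba(M)[:, 1]
background = X.mean(axis=0)  # baseline values used to "remove" features

def shapley_values(x, f, background, n_perm=20, rng=rng):
    """Monte-Carlo Shapley estimate for one instance: average the marginal
    contribution of each feature over random feature orderings."""
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_perm):
        order = rng.permutation(d)
        z = background.copy()
        prev = f(z[None, :])[0]
        for j in order:
            z[j] = x[j]            # add feature j to the coalition
            cur = f(z[None, :])[0]
            phi[j] += cur - prev   # marginal contribution of feature j
            prev = cur
    return phi / n_perm

S = np.array([shapley_values(x, f, background) for x in X])

# Stage 2: train on original features + OOF prediction + per-instance
# Shapley values of that prediction.
X_aug = np.hstack([X, oof[:, None], S])
stage2 = LogisticRegression(max_iter=1000).fit(X_aug, y)
```

By the telescoping construction, each instance's Shapley values sum exactly to the difference between the model's prediction for that instance and its prediction for the baseline, which is the efficiency property that makes them a coherent per-instance decomposition.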
