Abstract

Recent research shows that automated decision-making systems based on Artificial Intelligence (AI) may lead to perceived unfairness and bias, especially in sensitive areas. There is no universal debiasing solution for AI applications and systems. This paper proposes a bias-reducing framework based on contextual knowledge graphs that helps decision-making systems analyse and detect potential bias factors during operation in near real time. In particular, the contextual knowledge graph is designed to learn the relations between current tasks and their corresponding features and to explore the correlations among data, context and tasks. Three bias assessment metrics (label bias, sampling bias and timeliness bias) are proposed to measure, quantify and qualitatively characterise the bias level before and after modelling. A model trained on the debiased datasets combines contextual knowledge to support fairer decision-making. Experimental results show that the proposed method supports fairer decision-making more effectively than existing methods.
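The abstract does not give the formal definitions of the three metrics, so the following is only a minimal illustrative sketch of what distribution-level bias measures of this kind can look like: `sampling_bias` compares group proportions in a sample against a reference population, and `label_bias` compares positive-label rates across groups. The function names and formulations are assumptions for illustration, not the paper's actual definitions.

```python
from collections import Counter

def sampling_bias(samples, population):
    """Max absolute gap between a group's share in the sample
    and its share in the reference population (0 = no bias)."""
    n, m = len(samples), len(population)
    s, p = Counter(samples), Counter(population)
    groups = set(s) | set(p)
    return max(abs(s[g] / n - p[g] / m) for g in groups)

def label_bias(labels_by_group):
    """Max gap in positive-label rate across groups
    (a demographic-parity-style disparity)."""
    rates = [sum(ls) / len(ls) for ls in labels_by_group.values()]
    return max(rates) - min(rates)

# Skewed sample: group "a" is over-represented relative to the population.
print(sampling_bias(["a", "a", "a", "b"], ["a", "b", "a", "b"]))  # 0.25
# Group "x" gets positive labels 100% of the time, group "y" only 50%.
print(label_bias({"x": [1, 1], "y": [1, 0]}))  # 0.5
```

A timeliness-bias metric would analogously penalise stale records, e.g. by weighting each sample by its age, but its exact form depends on the paper's definitions.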

