Abstract

Machine learning models are built from training data, which is collected from human experience and is therefore prone to bias. Humans demonstrate cognitive biases in their thinking and behavior, and these biases are ultimately reflected in the collected data. From Amazon’s hiring system, which was built on ten years of human hiring decisions, to judicial systems trained on human judging practices, such systems all carry some element of bias. The best machine learning models are said to mimic humans’ cognitive ability, and thus such models are also inclined toward bias. Detecting and evaluating bias is therefore an important step toward more explainable models. In this work, we aim to explain bias in learning models in relation to humans’ cognitive bias and propose a wrapper technique to detect and evaluate bias in machine learning models using an openly accessible dataset from the UCI Machine Learning Repository. In the deployed dataset, the potentially biased attributes (PBAs) are gender and race. This study introduces the concept of alternation functions to swap the values of PBAs and evaluates the impact on prediction using KL divergence. Results show females and Asians to be associated with low predicted wages, raising open research questions for the research community to ponder.
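The core of the technique is the alternation function: swap the values of a PBA while holding every other feature fixed, then re-score the model. The following is a minimal sketch in Python, assuming a pandas DataFrame with UCI Adult–style columns; the helper name `alternate` and the toy data are illustrative assumptions, not the authors’ implementation.

```python
import pandas as pd

def alternate(df: pd.DataFrame, pba: str, mapping: dict) -> pd.DataFrame:
    """Return a copy of df with the PBA column's values swapped via mapping;
    values absent from the mapping are left unchanged."""
    swapped = df.copy()
    swapped[pba] = swapped[pba].map(mapping).fillna(swapped[pba])
    return swapped

# Tiny synthetic stand-in for the UCI Adult data (illustrative only).
df = pd.DataFrame({
    "age":  [39, 50, 28],
    "sex":  ["Male", "Female", "Male"],
    "race": ["White", "Asian-Pac-Islander", "Black"],
})

df_alt = alternate(df, "sex", {"Male": "Female", "Female": "Male"})
print(df_alt["sex"].tolist())  # ['Female', 'Male', 'Female']
```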

Highlights

  • Machine learning models are built using training data, which is collected from human experience and is prone to bias

  • The problem is: how can we be sure about the presence of bias until we detect and quantify it [35]? We propose a technique to determine whether an attribute is a potentially biased attribute (PBA) toward the classes or not

  • We aim to find the impact of a PBA on the model’s prediction using the alternation function (see the sketch after this list)
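One way to quantify that impact is to compare the model’s predicted class distributions before and after alternation using KL divergence. Below is a hedged sketch assuming a scikit-learn-style classifier exposing `predict_proba`; the function name `prediction_kl` and the aggregation over the mean class distribution are assumptions for illustration, not the paper’s exact procedure.

```python
import numpy as np
from scipy.special import rel_entr  # elementwise p * log(p / q)

def prediction_kl(model, X, X_alt, eps=1e-12):
    """KL(P || Q) between the mean predicted class distribution on the
    original data (P) and on the data with PBA values swapped (Q)."""
    p = model.predict_proba(X).mean(axis=0)
    q = model.predict_proba(X_alt).mean(axis=0)
    # Clip to avoid log(0) when a class probability collapses to zero.
    p, q = np.clip(p, eps, None), np.clip(q, eps, None)
    return float(rel_entr(p, q).sum())
```

A divergence close to zero suggests the prediction is insensitive to the swapped attribute; a noticeably larger value flags it as a candidate PBA.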



Introduction

Machine learning models are built from training data, which is collected from human experience and is prone to bias. Humans demonstrate cognitive biases in their thinking and behavior, which are reflected in the collected data. The best machine learning models are said to mimic humans’ cognitive ability, and such models are therefore inclined toward bias. We aim to explain bias in learning models in relation to humans’ cognitive bias and propose a wrapper technique to detect and evaluate bias in machine learning models using an openly accessible dataset from the UCI Machine Learning Repository. Intelligent hiring systems, for example, learn their behavior from the hiring practices embedded in the training data fed to them. To make a decision, a human filters the enormous amount of information residing in the brain and uses only the relevant portion for decision-making.
