Abstract

"Big Data" and data-mined inferences are affecting more and more of our lives, and concerns about their possible discriminatory effects are growing. Methods for discrimination-aware data mining and fairness-aware data mining aim at keeping decision processes supported by information technology free from unjust grounds. However, these formal approaches alone are not sufficient to solve the problem. In the present article, we describe reasons why discrimination with data can and typically does arise through the combined effects of human and machine-based reasoning, and argue that this requires a deeper understanding of the human side of decision-making with data mining. We describe results from a large-scale human-subjects experiment that investigated such decision-making, analyzing the reasoning that participants reported during their task to assess whether a loan request should or would be granted. We derive data protection by design strategies for making decision-making discrimination-aware in an accountable way, grounding these requirements in the accountability principle of the European Union General Data Protection Regulation, and outline how their implementations can integrate algorithmic, behavioral, and user interface factors.
