Abstract

The demand for transparency and fairness in AI-based decision-making systems is constantly growing. Organisations need assurance that applications based on these technologies behave fairly and do not introduce negative social implications with respect to sensitive attributes such as gender or race. Since the notion of fairness is context-dependent and not uniquely defined, studies in the literature have proposed various formalisations. In this work, we propose a novel, flexible, discrimination-aware decision tree that allows the user to employ different fairness criteria depending on the application domain. Our approach enhances decision-tree classifiers to provide transparent and fair rules to end users.
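To make the idea of a decision tree parameterised by a pluggable fairness criterion concrete, the following is a minimal illustrative sketch, not the authors' implementation: it assumes a split score of the form gain(split) − λ·unfairness(split), where the unfairness term is a user-supplied callable (here, demographic parity difference). All names (`split_score`, `fairness_penalty`, `demographic_parity_diff`) are hypothetical.

```python
import numpy as np

def demographic_parity_diff(y_pred, sensitive):
    """|P(y_pred=1 | s=0) - P(y_pred=1 | s=1)| for a binary sensitive attribute."""
    rates = [y_pred[sensitive == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])

def gini(y):
    """Gini impurity of a binary label vector."""
    p = y.mean()
    return 2 * p * (1 - p)

def split_score(y, sensitive, mask, fairness_penalty, lam=1.0):
    """Impurity reduction of a candidate split minus a weighted fairness penalty.

    `mask` selects the left child; each child predicts its majority class,
    and the fairness criterion is evaluated on those induced predictions.
    """
    left, right = y[mask], y[~mask]
    if len(left) == 0 or len(right) == 0:
        return -np.inf
    # Standard impurity-based gain of the split.
    w_left = len(left) / len(y)
    gain = gini(y) - (w_left * gini(left) + (1 - w_left) * gini(right))
    # Predictions induced by the split: each child predicts its majority class.
    y_pred = np.where(mask, left.mean() >= 0.5, right.mean() >= 0.5).astype(int)
    return gain - lam * fairness_penalty(y_pred, sensitive)

# Example: score two candidate splits under the demographic-parity criterion.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)          # binary label
s = rng.integers(0, 2, 200)          # sensitive attribute (e.g., gender)
x = rng.normal(size=200) + y         # a feature correlated with the label
for threshold in (-0.5, 0.5):
    score = split_score(y, s, x <= threshold, demographic_parity_diff, lam=0.5)
    print(f"threshold {threshold:+.1f}: score = {score:.4f}")
```

Swapping `demographic_parity_diff` for another criterion (e.g., equalised odds) changes which splits the tree prefers without altering the tree-growing logic, which is the kind of flexibility the abstract describes.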
