Abstract

The incorporation of machine learning algorithms into high-risk decision-making tasks has raised concerns in the scientific community. Research shows that machine learning-based technologies can contain biases that lead to unfair decisions for certain population groups. The fundamental danger of ignoring this problem is that machine learning methods can not only reflect the biases present in our society but also amplify them. This article presents the design and validation of a technology to assist the fair automation of classification problems. In essence, the proposal leverages the intermediate solutions generated while solving classification problems with Auto-ML tools, in particular AutoGOAL, to create unbiased/fair classifiers. The technology employs a multi-objective optimization search to find the collection of models with the best trade-offs between performance and fairness. To solve the optimization problem, we introduce a combination of Probabilistic Grammatical Evolution Search and NSGA-II. The technology was evaluated using the Adult dataset from the UCI repository, a common benchmark in related research. Results were compared with other published results in scenarios with single and multiple fairness definitions. Our experiments demonstrate the technology's ability to automate classification tasks while incorporating fairness constraints, and our method achieves competitive results against other bias mitigation techniques. A notable advantage of our approach is its minimal requirement for machine learning expertise, thanks to its Auto-ML foundation. This makes the technology accessible and valuable for advancing fairness in machine learning applications. The source code is available online for the research community.