The widespread deployment of machine learning in safety-critical domains has raised ethical concerns about algorithmic discrimination. In such settings, integrating fairness-aware algorithms with uncertainty quantification tools enables reliable and safe decision-making. In this paper, we introduce a novel methodology that combines conformal prediction, which offers rigorous prediction sets, with multi-objective optimization via evolutionary learning. The proposed meta-algorithm optimizes the hyperparameter configuration of classifiers to produce confidence predictors that balance efficiency against equalized-coverage guarantees, addressing fairness concerns tied to sensitive attributes. We empirically evaluate the methodology on four real-world problems and demonstrate its efficacy in exploring this trade-off and producing a repertoire of Pareto-optimal conformal predictors. Our contribution thus offers a range of modeling alternatives from which stakeholders can choose according to the policy they adopt, illustrating its capability to enhance equitable decision-making.
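To make the two competing objectives concrete, the sketch below is a minimal illustration (not the paper's implementation): it applies split conformal prediction to synthetic classifier scores and measures efficiency as the average prediction-set size and equalized coverage as the gap in empirical coverage between two sensitive-attribute groups. The synthetic data, the `split_conformal` helper, and the miscoverage level `alpha=0.1` are all assumptions introduced here for illustration.

```python
import numpy as np

def split_conformal(scores_cal, alpha):
    """Conformal threshold: the ceil((n+1)(1-alpha))/n empirical quantile
    of the calibration nonconformity scores."""
    n = len(scores_cal)
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores_cal, min(q_level, 1.0), method="higher")

rng = np.random.default_rng(0)
n_cal, n_test, n_classes = 500, 500, 4

# Hypothetical calibration data: nonconformity score = 1 - p(true class)
probs_cal = rng.dirichlet(np.ones(n_classes), n_cal)
y_cal = rng.integers(0, n_classes, n_cal)
scores_cal = 1.0 - probs_cal[np.arange(n_cal), y_cal]
qhat = split_conformal(scores_cal, alpha=0.1)

# Hypothetical test data with a binary sensitive attribute
probs_test = rng.dirichlet(np.ones(n_classes), n_test)
y_test = rng.integers(0, n_classes, n_test)
group = rng.integers(0, 2, n_test)

# Prediction sets: keep every class whose nonconformity score <= qhat
pred_sets = (1.0 - probs_test) <= qhat

# Objective 1 (efficiency): mean prediction-set size, lower is better
efficiency = pred_sets.sum(axis=1).mean()

# Objective 2 (equalized coverage): per-group coverage gap, lower is better
covered = pred_sets[np.arange(n_test), y_test]
gap = abs(covered[group == 0].mean() - covered[group == 1].mean())

print(f"mean set size = {efficiency:.2f}, coverage gap = {gap:.3f}")
```

An evolutionary multi-objective optimizer could then treat the pair (mean set size, coverage gap) as the fitness of a single hyperparameter configuration, with the non-dominated configurations forming the Pareto front of conformal predictors described above.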