Influence maximization (IM) aims to find a group of influential nodes as initial spreaders so as to maximize the influence spread over a network. However, traditional IM algorithms are not designed with fairness in mind and can discriminate against certain groups, such as LGBTQ communities and racial minorities. This issue has spurred research on Fair Influence Maximization (FIM). Existing FIM studies, however, have several drawbacks. First, most proposed fairness notions for FIM cannot adjust the trade-off between the level of fairness and the influence spread. Second, although a few notions allow such balancing, they are restricted to a small set of concave functions, which may not suit many real-world scenarios; moreover, none of them examine how the properties of a concave function relate to the resulting level of fairness. Third, each existing fairness metric is tied to its own fairness notion, which makes it difficult to compare the level of fairness across algorithms built on different notions. To address these problems, this paper first proposes a novel fairness notion, Poverty Reward (PR), which achieves fairness by rewarding the enrichment of groups with low utility. Building on PR, we propose an algorithmic framework, the Concave Fairness Framework (CFF), which admits any concave function satisfying certain requirements. We also systematically clarify how applying concave functions improves fairness and provide an in-depth quantitative analysis of how to select appropriate concave functions for different utility distributions. Moreover, we propose the Reward of Fairness (RoF) metric, which evaluates the disparity between groups; based on RoF, we build an evaluation system that uniformly compares FIM algorithms developed under different fairness notions. Experiments on real-world datasets demonstrate the validity of CFF as well as the proposed fairness notion.
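To make the role of the concave function concrete, a minimal welfare-style sketch of the kind of objective a concave FIM framework typically optimizes (our illustrative reading, not a formula quoted from the abstract; the symbols S, k, u_c, and g are our own notation) is:

\[
\max_{S:\,|S| \le k} \;\; \sum_{c \in \mathcal{C}} g\big(u_c(S)\big),
\]

where S is the seed set of at most k nodes, u_c(S) is the expected fraction of community c reached by the spread from S, and g is a concave function. Because g has diminishing returns, raising the utility of a low-utility group yields a larger gain in the objective than raising an already well-served group, which matches the intuition of rewarding the enrichment of groups with low utility; how sharply concave g is then governs the trade-off between fairness and total influence spread.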