Social networks, and especially the communities within them, facilitate rapid and rich social activities for individuals worldwide. However, advanced community detection brings serious privacy risks, e.g., revealing whether two important and sensitive individuals belong to the same community. For instance, online plainclothes police officers can be regarded as marginal community users: they often start in the same community and need to penetrate as many different communities as possible to collect evidence against network criminals, but adversarial community inference, which can maliciously disclose sensitive user relationships within a target community, exposes their identities and causes their tasks to fail. Privacy protection for marginal community users is therefore an urgent and still open problem. In this work, we study community privacy protection for target marginal individuals of a community against multiple adversarial community detection (ACD) attacks. First, we define the marginal community user hiding problem and propose a marginal user pair selection strategy. Second, to improve on the privacy effectiveness of conventional methods, we propose a deep graph learning approach to find the minimum link perturbation cost. Finally, we conduct various community detection attacks on many real social graphs, and the experimental results show that our method hides marginal-sensitive user pairs more effectively than the baselines.
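The attack/defense setup described above can be illustrated with a minimal toy sketch. This is our own assumption, not the paper's algorithm: it stands in deterministic label propagation for the adversary's community inference, and a naive greedy edge removal (cutting the pair's direct link and links to common neighbors) for the link-perturbation defense on a sensitive "marginal" user pair.

```python
# Illustrative sketch only (stdlib; NOT the paper's method): community
# inference via deterministic synchronous label propagation, plus a greedy
# link-perturbation defense that splits a sensitive user pair.
from collections import Counter

def communities(adj, rounds=10):
    """Label propagation; ties broken by the smallest label for determinism."""
    label = {v: v for v in adj}
    for _ in range(rounds):
        new = {}
        for v in adj:
            counts = Counter(label[w] for w in adj[v])
            if not counts:                      # isolated node keeps its label
                new[v] = label[v]
                continue
            best = max(counts.values())
            new[v] = min(l for l, c in counts.items() if c == best)
        if new == label:                        # converged
            break
        label = new
    return label

def same_community(adj, u, v):
    """Adversarial community inference: do u and v land in one community?"""
    lab = communities(adj)
    return lab[u] == lab[v]

def hide_pair(adj, u, v, budget=3):
    """Greedily cut u's link to v, then to their common neighbors."""
    adj = {k: set(s) for k, s in adj.items()}
    cost = 0
    while cost < budget and same_community(adj, u, v):
        cands = ([(u, v)] if v in adj[u] else
                 [(u, w) for w in adj[u] if w in adj[v]])
        if not cands:
            break
        a, b = cands[0]
        adj[a].discard(b)
        adj[b].discard(a)
        cost += 1
    return adj, cost

# Toy graph: two triangles bridged by the edge 2-3; pair (0, 1) is sensitive.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
before = same_community(adj, 0, 1)
pert, cost = hide_pair(adj, 0, 1)
after = same_community(pert, 0, 1)
print(before, after, cost)  # → True False 2
```

The greedy heuristic here is deliberately simple; the point is only to show the interaction between the inference attack and a perturbation defense, with the perturbation cost counted in removed links.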