Geographically aggregated data are often considered safe because information is published as population counts by group rather than by individual. Identifiable information about individuals can still be disclosed from such data, however. Conventional methods for protecting privacy, such as data swapping, often lack transparency because they do not quantify the reduction in disclosure risk. Recent methods, such as those based on differential privacy, can significantly compromise data utility by introducing excessive error. We develop a methodological framework that protects privacy in geographically aggregated data while preserving data utility. In this framework, individuals at high risk of disclosure are moved to other locations to protect their privacy. Two spatial optimization models are developed to optimize these moves by maximizing privacy protection while maintaining data utility. The first model relocates all at-risk individuals while minimizing the error introduced (hence maximizing utility). The second model assumes a budget specifying the maximum allowable error and maximizes the number of at-risk individuals relocated within that budget. Computational experiments on a synthetic population data set covering two counties in Ohio indicate that the proposed models are effective and efficient in balancing data utility and privacy protection for real-world applications.
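The second model described above is knapsack-like in structure: maximize the number of at-risk individuals relocated subject to a cap on the total error introduced. The sketch below is a toy illustration of that idea, not the paper's actual formulation; the per-move error costs, the additivity assumption, and all names are hypothetical. When every move counts equally and costs are additive, relocating the cheapest moves first is optimal.

```python
def max_relocations(move_costs, budget):
    """Toy budgeted-relocation sketch (hypothetical, not the paper's model).

    move_costs : list of (individual_id, cost) pairs, where cost is the
        utility error assumed to be introduced by moving that individual
        (assumed additive across moves).
    budget : maximum total error allowed.

    Returns the ids chosen for relocation and the total error used.
    Greedy-by-cheapest-cost is optimal here because every relocation
    contributes equally to the objective (a count).
    """
    chosen, total = [], 0.0
    for ind, cost in sorted(move_costs, key=lambda pair: pair[1]):
        if total + cost <= budget:
            chosen.append(ind)
            total += cost
    return chosen, total

# Example with five at-risk individuals and hypothetical per-move costs.
costs = [("a", 3.0), ("b", 1.0), ("c", 4.0), ("d", 1.5), ("e", 2.0)]
sel, err = max_relocations(costs, budget=5.0)
# relocates b, d, and e for a total error of 4.5
```

The first model in the abstract would instead fix the set of individuals to move (all at-risk ones) and minimize total error over the choice of destinations, which is a different assignment-style optimization.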