Abstract
The application of machine learning techniques to large and distributed data archives might result in the disclosure of sensitive information about the data subjects. Data often contain sensitive identifiable information, and even when such information is protected, the powerful processing capabilities of current machine learning techniques might facilitate the identification of individuals, raising privacy concerns. To this end, we propose a decision-support framework for data anonymization, which relies on a novel approach that exploits data correlations, expressed in terms of relaxed functional dependencies (RFDs), to identify data anonymization strategies providing suitable trade-offs between privacy and data utility. Moreover, we investigate how to generate anonymization strategies that leverage multiple data correlations simultaneously, so as to increase the utility of anonymized datasets. In addition, our framework supports the selection of the anonymization strategy to apply by enabling an understanding of the privacy/utility trade-offs offered by the obtained strategies. Experiments on real-life datasets show that our approach achieves promising results in terms of data utility while guaranteeing the desired privacy level, and that it allows data owners to select anonymization strategies balancing their privacy and data utility requirements.
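To make the notion of a relaxed functional dependency concrete, the following is a minimal sketch (not the paper's implementation) of checking whether an RFD X → Y holds on a toy dataset. The dependency is "relaxed" in two common ways: values on X and Y are compared under a similarity threshold rather than strict equality, and the dependency is only required to hold on a minimum fraction of agreeing tuple pairs. The names `holds_rfd`, `eps_x`, `eps_y`, and `coverage`, as well as the thresholds themselves, are illustrative assumptions.

```python
from itertools import combinations

def similar(a: float, b: float, eps: float) -> bool:
    """Values match under the relaxed comparison if they differ by at most eps."""
    return abs(a - b) <= eps

def holds_rfd(rows, x, y, eps_x, eps_y, coverage=1.0):
    """Check whether the RFD x -> y holds on `rows` with the given thresholds.

    For every pair of tuples that agree on attribute `x` (up to eps_x),
    the pair should also agree on `y` (up to eps_y); the RFD is accepted
    if at least `coverage` of the agreeing pairs satisfy this condition.
    """
    agreeing, satisfying = 0, 0
    for r1, r2 in combinations(rows, 2):
        if similar(r1[x], r2[x], eps_x):
            agreeing += 1
            if similar(r1[y], r2[y], eps_y):
                satisfying += 1
    return agreeing == 0 or satisfying / agreeing >= coverage

# Toy records: tuples with similar ages have similar incomes (in k$),
# so the RFD age -> income plausibly holds on this sample.
data = [
    {"age": 34, "income": 52},
    {"age": 35, "income": 54},
    {"age": 36, "income": 53},
    {"age": 60, "income": 90},
    {"age": 61, "income": 88},
]

print(holds_rfd(data, "age", "income", eps_x=2, eps_y=5, coverage=0.9))  # True
```

In the framework described above, such correlations would guide which attributes to generalize or suppress, since an anonymization step that preserves a strong RFD tends to retain more analytical utility in the released data.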