Abstract
Privacy risk assessment plays a fundamental role in privacy preservation, as it determines the extent to which subsequent processing (such as generalization and obfuscation) should be applied to sensitive data. However, most existing work on privacy risk assessment has focused on structured data, while unstructured text data remain relatively underexplored due to the complexity of natural language. In this article, we propose a novel method, PriTxt, for evaluating the privacy risk associated with text data by exploiting semantic correlations. Using definitions derived from the General Data Protection Regulation (GDPR), a de facto standard of privacy preservation in practice, PriTxt first defines the private features that relate to individual privacy in order to locate the sensitive words. Using the word2vec algorithm, a word-embedding model is then constructed to identify the quasi-sensitive words that are semantically correlated with the private features. The privacy risk of a given text is finally evaluated by aggregating the weighted risks of the sensitive and quasi-sensitive words in the text. Experiments on real-world datasets demonstrate that the proposed PriTxt is effective for conducting risk assessment on text data and outperforms traditional methods.
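The full method is given in the body of the paper; the sketch below is only an illustration of the pipeline the abstract outlines, assuming gensim's Word2Vec implementation (version 4.x). The private-feature list, risk weights, and similarity threshold used here are hypothetical placeholders, not the paper's values.

```python
# Illustrative sketch (not the authors' code): score a text's privacy risk by
# combining sensitive words matched against GDPR-derived private features with
# quasi-sensitive words found via word2vec semantic similarity.
from gensim.models import Word2Vec

# Hypothetical GDPR-derived private features with example risk weights.
PRIVATE_FEATURES = {"name": 1.0, "address": 0.9, "phone": 0.8, "email": 0.7}
SIM_THRESHOLD = 0.6  # hypothetical cut-off for quasi-sensitive words


def train_embedding(tokenized_corpus):
    """Train a word2vec model on a tokenized corpus (list of token lists)."""
    return Word2Vec(sentences=tokenized_corpus, vector_size=100,
                    window=5, min_count=1, workers=2)


def text_risk(tokens, model):
    """Aggregate weighted risks of sensitive and quasi-sensitive words."""
    risk = 0.0
    for tok in tokens:
        if tok in PRIVATE_FEATURES:        # directly sensitive word
            risk += PRIVATE_FEATURES[tok]
        elif tok in model.wv:              # candidate quasi-sensitive word
            # Contribution scaled by similarity to the closest private feature.
            sims = [(model.wv.similarity(tok, f), w)
                    for f, w in PRIVATE_FEATURES.items() if f in model.wv]
            if sims:
                best_sim, weight = max(sims)
                if best_sim >= SIM_THRESHOLD:
                    risk += best_sim * weight
    return risk


# Example usage with a toy corpus.
corpus = [["alice", "lives", "at", "address", "42"],
          ["contact", "phone", "or", "email", "for", "name"]]
model = train_embedding(corpus)
print(text_risk(["alice", "phone", "contact"], model))
```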