Abstract

Textual data, such as clinical notes, product or movie reviews in online stores, transcripts, chat records, and business documents, are widely collected nowadays and can be used to support a large spectrum of Big Data applications. At the same time, textual data collected about or from individuals can be susceptible to inference attacks that may leak private and/or sensitive information about those individuals. Increasing concerns about privacy risks in textual data preclude sharing or exchanging textual data across different parties or organizations for applications such as record linkage, similar entity matching, natural language processing (NLP), or machine learning on large collections of text. This has led to the development of privacy-preserving techniques for applying matching, machine learning, or NLP techniques to textual data that contain personal and sensitive information about individuals. While cryptographic techniques are highly secure and accurate, they incur a significant computational cost for encoding and matching data, especially textual data, due to the complex nature of text. In this paper, we propose an efficient textual data encoding and matching algorithm using probabilistic techniques based on counting Bloom filters combined with differential privacy. We apply our algorithm to a popular use-case scenario that involves privacy-preserving topic modeling, a widely used NLP technique, to identify common or collective topics in texts across multiple parties without learning the individual topics of each party, and we show its effectiveness in supporting this application. Finally, through extensive experimental evaluation on three large text datasets against a state-of-the-art probabilistic encoding algorithm for privacy-preserving LDA topic modeling, we show that our method provides a better privacy-utility trade-off at the cost of higher computational complexity and memory usage, while still being computationally efficient for Big Data (log-linear complexity in the size of the documents) compared to cryptographic techniques, which have quadratic complexity.
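
To make the encoding idea concrete, the following is a minimal, hypothetical sketch of the general technique named in the abstract: tokens of a document are hashed into a counting Bloom filter, the counters are perturbed with Laplace noise for a differential-privacy-style guarantee, and two noisy encodings are compared with a Dice-style similarity. All parameter names and values (NUM_COUNTERS, NUM_HASHES, EPSILON) and the choice of salted SHA-256 hashing are illustrative assumptions, not the authors' exact algorithm.

```python
# Illustrative sketch only (assumed parameters, not the paper's exact method):
# encode document tokens into a counting Bloom filter and add Laplace noise.
import hashlib
import numpy as np

NUM_COUNTERS = 1024   # length of the counting Bloom filter (assumed)
NUM_HASHES = 3        # hash functions per token (assumed)
EPSILON = 1.0         # privacy budget (assumed)

def positions(token, num_hashes=NUM_HASHES, m=NUM_COUNTERS):
    """Derive hash positions for a token via salted SHA-256 (one salt per hash)."""
    return [int(hashlib.sha256(f"{i}:{token}".encode()).hexdigest(), 16) % m
            for i in range(num_hashes)]

def encode(tokens):
    """Build a counting Bloom filter: each token increments its hash positions."""
    cbf = np.zeros(NUM_COUNTERS, dtype=float)
    for tok in tokens:
        for pos in positions(tok):
            cbf[pos] += 1
    return cbf

def perturb(cbf, epsilon=EPSILON, sensitivity=NUM_HASHES):
    """Add Laplace noise; one token changes at most NUM_HASHES counters by 1."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon, size=cbf.shape)
    return cbf + noise

def dice_similarity(a, b):
    """Dice-style similarity between two (noisy) counting Bloom filters."""
    a_pos, b_pos = np.clip(a, 0, None), np.clip(b, 0, None)
    denom = a_pos.sum() + b_pos.sum()
    return 2 * np.minimum(a_pos, b_pos).sum() / denom if denom > 0 else 0.0

if __name__ == "__main__":
    doc_a = "privacy preserving topic modeling on clinical notes".split()
    doc_b = "topic modeling of clinical notes with privacy".split()
    enc_a, enc_b = perturb(encode(doc_a)), perturb(encode(doc_b))
    print(f"approximate similarity: {dice_similarity(enc_a, enc_b):.3f}")
```

Because both encoding and comparison touch each token only a constant number of times, such a scheme stays close to linear in document size, which is consistent with the log-linear complexity claim in the abstract; the exact noise mechanism and similarity function used by the authors may differ.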
