A great variety of mechanisms have been proposed to protect structured databases with numerical and categorical attributes; however, little attention has been devoted to unstructured textual data. Textual data protection requires first detecting sensitive pieces of text and then masking those pieces via suppression or generalization. Current solutions rely on classifiers that recognize a fixed set of (allegedly sensitive) named entities. Yet such approaches fall short of providing adequate protection because, in reality, references to sensitive information are not limited to a predefined set of entity types, and not every appearance of a given entity type results in disclosure. In this work we propose a more general and flexible approach based on the notion of word embeddings. By means of word embeddings, we build vectors that numerically capture the semantic relationships among textual terms. We then evaluate the disclosure that each term causes on the entity to be protected according to the similarity between their vector representations. Our method also preserves the semantics (and, therefore, the utility) of the document by replacing risky terms with privacy-preserving generalizations. Empirical results show that our approach offers much more robust protection and greater utility preservation than methods based on named entity recognition.
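The core idea of the abstract (scoring a term's disclosure risk by the embedding similarity between the term and the protected entity, then suppressing or generalizing risky terms) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the vectors, the `0.9` threshold, and the `protect` helper are all invented for the example; a real system would use pre-trained embeddings such as word2vec or GloVe.

```python
import numpy as np

# Toy 3-dimensional "embeddings"; purely illustrative vectors,
# not taken from any real embedding model.
embeddings = {
    "AIDS":    np.array([0.90, 0.10, 0.00]),  # entity to protect
    "HIV":     np.array([0.85, 0.15, 0.05]),  # strongly related term
    "illness": np.array([0.60, 0.40, 0.20]),  # loosely related term
    "weather": np.array([0.00, 0.20, 0.90]),  # unrelated term
}

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def protect(entity, terms, generalizations, threshold=0.9):
    """Replace terms too similar to the protected entity with a
    generalization if one is available, otherwise suppress them."""
    e = embeddings[entity]
    out = []
    for t in terms:
        if cosine(embeddings[t], e) >= threshold:
            out.append(generalizations.get(t, "***"))  # generalize or suppress
        else:
            out.append(t)  # term deemed safe, kept verbatim
    return out

doc = ["HIV", "weather", "illness"]
print(protect("AIDS", doc, {"HIV": "disease"}))
# → ['disease', 'weather', 'illness']
```

With the toy vectors above, "HIV" is nearly parallel to "AIDS" and gets generalized to "disease", while "illness" and "weather" fall below the threshold and are kept, which is the utility-preserving behavior the abstract describes.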