Abstract
The publication of patient datasets is essential for various medical investigations and decision-making. Consequently, significant attention has been devoted to protecting privacy during data publishing. Existing privacy models for multiple sensitive attributes do not consider the correlation among the attributes, which in turn leads to substantial utility loss. An efficient model, Heap Bucketization-Anonymity (HBA), has been proposed to balance privacy and utility for datasets with multiple sensitive attributes. The HBA model uses anatomization to vertically partition the dataset into (1) a quasi-identifier table and (2) a sensitive-attribute table. The quasi-identifier table is anonymized by applying k-anonymity and slicing, and the sensitive attributes are anonymized by applying slicing and heap bucketization. The metrics Normalized Certainty Penalty and KL-divergence have been used to compute the utility loss in the patient dataset. The experimental results show that HBA achieves significantly higher privacy with less utility loss than other existing models. The HBA model not only balances utility and privacy but also thwarts the (i) background knowledge attack, (ii) quasi-identifier attack, (iii) membership attack, (iv) non-membership attack, and (v) fingerprint correlation attack.
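The anatomization step described above can be illustrated with a minimal sketch. This is not the paper's implementation: the function name `anatomize`, the sample records, and the plain k-sized grouping are illustrative assumptions (the actual HBA model applies slicing and heap bucketization on top of this partitioning).

```python
import random

def anatomize(records, qi_attrs, sa_attrs, k=3):
    """Vertically partition records into a quasi-identifier table (QIT)
    and a sensitive-attribute table (SAT), linked only by a group id,
    so exact sensitive values cannot be joined back to QI values.
    Simplified sketch: groups of k records, no slicing/bucketization."""
    random.shuffle(records)
    qit, sat = [], []
    usable = len(records) - len(records) % k  # drop a partial last group
    for gid, start in enumerate(range(0, usable, k)):
        for rec in records[start:start + k]:
            qit.append({**{a: rec[a] for a in qi_attrs}, "gid": gid})
            sat.append({**{a: rec[a] for a in sa_attrs}, "gid": gid})
    random.shuffle(sat)  # break the row-order link between the two tables
    return qit, sat

# Hypothetical patient records for illustration only.
random.seed(0)
patients = [
    {"age": 34, "zip": "60001", "disease": "flu"},
    {"age": 36, "zip": "60002", "disease": "cancer"},
    {"age": 35, "zip": "60003", "disease": "hiv"},
]
qit, sat = anatomize(patients, ["age", "zip"], ["disease"], k=3)
```

After partitioning, each published table carries only the group id in common, so an adversary who links a quasi-identifier group to the sensitive table still faces at least k candidate sensitive values.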
Highlights
Identifying the pattern of a particular individual by possessing background knowledge can lead to a fingerprint correlation attack
The paper has presented various related works on privacy-preserving data publishing with multiple sensitive attributes (MSA)
An efficient model, Heap Bucketization-Anonymity, has been proposed to address the challenge of balancing utility loss and privacy
Summary
Information is significant to various innovations; to discover it, data are retrieved and analyzed by the research community [1]. The privacy of the individual is protected from five breaches: (1) background knowledge attack (BKA); (2) quasi-identifier attack (QIA); (3) membership disclosure attack (MDA); (4) non-membership disclosure attack (N-MDA); and (5) fingerprint correlation attack (FCA). Identifying the pattern of a particular individual by possessing background knowledge can lead to a fingerprint correlation attack. The fingerprint correlation attack is a strong privacy breach, as it can expose all of an individual's information in the dataset. The Heap Bucketization-Anonymity model, proposed by Jayapradha J. and Prakash M., has been compared with two existing approaches: (p,k)-angelization and (c,k)-anonymization.
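The comparison against existing models rests on the utility-loss metrics named in the abstract. As one concrete example, KL-divergence between the sensitive-value distribution before and after anonymization can be computed as below; this is a generic textbook sketch, not the paper's code, and the example values are hypothetical.

```python
import math
from collections import Counter

def kl_divergence(original, anonymized):
    """KL-divergence D(P || Q) between the sensitive-value distribution
    P of the original table and Q of the published table.
    Lower values mean the published data preserves the original
    distribution better, i.e. less utility loss."""
    p, q = Counter(original), Counter(anonymized)
    n_p, n_q = len(original), len(anonymized)
    total = 0.0
    for value, count in p.items():
        p_v = count / n_p
        q_v = q.get(value, 0) / n_q
        if q_v == 0:
            return float("inf")  # undefined when a value vanishes from Q
        total += p_v * math.log(p_v / q_v)
    return total

# Identical value distributions give zero divergence (no loss by this metric).
loss = kl_divergence(["flu", "flu", "hiv"], ["flu", "hiv", "flu"])  # 0.0
```

A model that distorts the sensitive-value distribution less (as HBA is reported to do) yields a smaller divergence under this measure.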