Abstract
In online social networks, users often wish to share some information to obtain benefits (e.g., personalized services) while keeping other information hidden for privacy. Unfortunately, with the rapid advances in machine learning, the hidden information is increasingly likely to be predicted by powerful inference attacks. What, then, is the risk that a user's private information will be disclosed? And what countermeasures can the user take against such privacy violations? To address these questions, this article proposes a general Framework for Private Attribute Disclosure estimation (F-PAD) comprising three steps: 1) private attribute prediction; 2) disclosure model training; and 3) disclosure risk estimation. Unlike most prior risk estimation studies, which focus on one specific attack model and one private attribute, F-PAD estimates the disclosure risk of individual users, in terms of disclosure probability and risk level with high confidence, given a basket of potential inference attack models; furthermore, F-PAD adapts to various attributes (e.g., gender, age) and offers countermeasures that help users lower the risk. Extensive experiments on two real social network datasets, Facebook and Book-Crossing, verify the effectiveness of F-PAD in estimating the disclosure risk of 'current city', 'gender', and 'age'.
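To make the three-step pipeline concrete, the following is a minimal sketch of how such a framework could be wired together. It is not the paper's implementation: the models, synthetic features, and risk-level thresholds are illustrative assumptions only.

```python
# Hypothetical sketch of the three F-PAD steps (attribute prediction,
# disclosure-model training, risk estimation). Model choices, feature
# construction, and thresholds are assumptions, not taken from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for users' public profile features and one private
# binary attribute (e.g., gender) for 1,000 users.
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: a "basket" of candidate inference-attack models predicts the
# private attribute from publicly visible data.
attacks = [LogisticRegression(max_iter=1000), RandomForestClassifier(random_state=0)]
for clf in attacks:
    clf.fit(X_train, y_train)
attack_scores = np.column_stack([clf.predict_proba(X_test)[:, 1] for clf in attacks])

# Step 2: train a disclosure model mapping the attack outputs to the
# probability that the true attribute value is actually revealed
# (here, simply whether the averaged attack prediction is correct).
disclosed = (np.round(attack_scores.mean(axis=1)) == y_test).astype(int)
disclosure_model = LogisticRegression().fit(attack_scores, disclosed)

# Step 3: estimate each user's disclosure probability and map it to a
# coarse risk level (the 0.33/0.66 cut points are assumed for illustration).
risk_prob = disclosure_model.predict_proba(attack_scores)[:, 1]
risk_level = np.digitize(risk_prob, [0.33, 0.66])  # 0 = low, 1 = medium, 2 = high
print(risk_prob[:5], risk_level[:5])
```

In a realistic setting, the disclosure model would be fitted on held-out users rather than the same set it scores, and the countermeasure step would suggest which public features to hide so that the estimated risk drops below a chosen level.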