A Structure-Aware Fair Recommendation Approach Based on Counterfactual Dynamic Hypergraphs
Unfair recommendations stem from users' sensitive attributes and biases in information transmission. Graph-structured data can provide more balanced information for fair recommendation by capturing multidimensional user-item interactions. However, graph-based fair recommendation still faces several challenges: traditional graphs rely on static edge-connected topology and struggle to dynamically update many-to-many relationships, which impairs long-term fairness modeling; most existing graph mining algorithms overlook individual differences arising from filtered sensitive information, thereby exacerbating the fairness-accuracy trade-off; and hypergraph neural networks' propagation relies on structural density, so sparse connections lead to inaccurate representations in sparse regions and uneven diffusion. To address these issues, we propose a structure-aware fair recommendation approach based on counterfactual dynamic hypergraphs (FairCH). First, we propose a multidimensional user fairness model that captures many-to-many higher-order user-item relationships and their preference-fairness co-evolution via dynamic hypergraphs. Second, sensitive information is filtered through adversarial learning, and counterfactual hyperedges are reconstructed by counterfactual reasoning to compensate for the resulting information loss. Finally, a cross-hierarchy structure-aware model is proposed that extracts counterfactual fairness layers, global preference layers, and shared evolution layers from the hypergraphs and integrates them via an inter-layer interactive attention mechanism to enhance information propagation and mitigate structural biases. Experimental results demonstrate that FairCH outperforms the baselines in recommendation performance.
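As a rough illustration of the hyperedge-mediated message passing that dynamic hypergraph models build on, the following minimal NumPy sketch implements standard HGNN-style hypergraph convolution over a node-hyperedge incidence matrix. The toy incidence matrix `H` and identity weights `Theta` are illustrative assumptions, not FairCH's actual architecture.

```python
import numpy as np

# Minimal HGNN-style hypergraph convolution sketch.
# H is the node-hyperedge incidence matrix: H[i, e] = 1 if node i is in hyperedge e.
def hypergraph_conv(H, X, Theta):
    Dv = H.sum(axis=1)                      # node degrees
    De = H.sum(axis=0)                      # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(Dv))
    De_inv = np.diag(1.0 / De)
    # Symmetric-normalized propagation: nodes -> hyperedges -> nodes
    A = Dv_inv_sqrt @ H @ De_inv @ H.T @ Dv_inv_sqrt
    return A @ X @ Theta

# Toy example: 4 nodes, 2 hyperedges (each a many-to-many group)
H = np.array([[1, 0],
              [1, 1],
              [0, 1],
              [1, 0]], dtype=float)
X = np.eye(4)       # one-hot node features, for illustration
Theta = np.eye(4)   # identity weight matrix, for illustration
out = hypergraph_conv(H, X, Theta)
```

With identity features and weights, the output is just the normalized propagation matrix itself, which makes the two-stage node-hyperedge-node diffusion easy to inspect.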
- Research Article
- 10.1587/transinf.e95.d.143
- Jan 1, 2012
- IEICE Transactions on Information and Systems
For personalized search, a user must provide personal information. However, this sometimes includes sensitive information such as health conditions and private lifestyle. It is not sufficient merely to protect the communication channel between user and service provider: the collected personal data can potentially be misused for the service provider's commercial advantage (e.g., advertising that targets potential consumers). Our aim here is to protect user privacy by filtering out, at the system level, the sensitive information exposed by a user's query input. We propose a framework built around the concept of a query generalizer: middleware that takes a query for personalized search, modifies the query to hide the user's sensitive personal information adaptively according to the user's privacy policy, and then forwards the modified query to the service provider. Our experimental results show that the best-performing query generalization method achieves a low traffic overhead within a reasonable range of user privacy: the increased traffic varied from 1.0 to 3.3 times that of the original query.
- Book Chapter
- 10.1007/978-3-540-73549-6_113
- Jan 1, 2007
Today, many services on the Internet require a user's sensitive information, such as name, address, and credit card number. At the same time, privacy problems such as information leakage have become a serious social concern. We therefore propose a framework to protect a user's sensitive information. It allows a user to specify how his or her sensitive information may be used and restricts the recipient's use of that information accordingly. The main concept of the framework is that an information recipient can use sensitive information only in a manner the information owner considers safe. This is realized by a trusted program that implements the usage policy trusted by the information owner: the user offers this trusted program to an information recipient and requires the recipient to access the user's sensitive information only through it. In this paper, we propose an approach for generating such trusted programs.
- Conference Article
- 10.1109/iucc-cit-dsci-smartcns57392.2022.00029
- Dec 1, 2022
Social network users often have little awareness of privacy protection and disclose private information in the content they post. To raise awareness of personal privacy protection and help users understand its importance, we propose a multi-dimensional sensitive-information portrait model for social network users. We use a TF-IDF algorithm based on the bag-of-words model to calculate the sensitivity of sensitive information, classify it into high, medium, and low sensitivity levels according to its importance to users, and build a multi-dimensional sensitive-information portrait of group users. By constructing two sensitive-information dictionaries and combining an improved FlashText algorithm with a regular-expression string matching algorithm and the sure inverse order circular view matching algorithm, we extract user sensitive information from users' basic profile information and their historical posts, and build a multi-dimensional sensitive-information portrait from the extracted information and its sensitivity; users can then replace sensitive information as needed to protect their privacy. In experimental evaluation, our scheme achieves an accuracy of 93.63% for sensitive-information extraction.
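The TF-IDF sensitivity scoring described above can be illustrated with a minimal, self-contained sketch; the token lists, candidate terms, and high/medium/low thresholds below are assumptions for illustration, not the paper's actual dictionaries or cutoffs.

```python
import math

# Toy TF-IDF scoring of candidate sensitive terms across a user's posts,
# then bucketing into high / medium / low sensitivity levels.
def tfidf_scores(posts, terms):
    n = len(posts)
    total_tokens = sum(len(post) for post in posts)
    scores = {}
    for term in terms:
        df = sum(term in post for post in posts)          # document frequency
        idf = math.log((1 + n) / (1 + df)) + 1            # smoothed IDF
        tf = sum(post.count(term) for post in posts) / max(total_tokens, 1)
        scores[term] = tf * idf
    return scores

def sensitivity_level(score, hi=0.05, lo=0.01):
    # thresholds are illustrative assumptions, not the paper's values
    return "high" if score >= hi else "medium" if score >= lo else "low"

posts = [["live", "in", "berlin"],
         ["my", "number", "is", "12345"],
         ["berlin", "weather", "today"]]
scores = tfidf_scores(posts, ["berlin", "12345", "weather"])
```

A real pipeline would feed dictionary-matched terms (e.g. via FlashText) into the scorer rather than hand-picked tokens.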
- Book Chapter
- 10.1007/978-3-642-40099-5_34
- Jan 1, 2013
Social networking services (SNSs) support communication among people via the Internet. However, sensitive information about a user can be disclosed by the user's SNS friends. This makes it unsafe for a user to share information with friends in different groups. Moreover, a friend who has disclosed a user's information is difficult to identify. One approach to overcoming this problem is to anonymize the sensitive information in text to be posted by generalization, but most methods proposed for this approach target information in a database. Another approach is to create different fingerprints for certain sensitive information by using various synonyms, but the methods proposed for doing this do not anonymize the information. We have developed an algorithm for automatically creating enough anonymous fingerprints to cover most cases of SNS posts containing sensitive phrases. The fingerprints are created using both generalization and synonymization: a different fingerprinted version of sensitive information is created for each friend that will receive the posted text. The fingerprints not only anonymize a user's sensitive information but can also be used to identify a person who has disclosed sensitive information about the user. Fingerprints are quantified using a modified discernibility metric, and the use of synonyms ensures that an appropriate level of privacy is applied for each group receiving the posted text. Moreover, a fingerprint cannot be converted by an attacker into one that causes the algorithm to incorrectly identify a person who has revealed sensitive information. The algorithm was demonstrated by using it in an application for controlling the disclosure of information on Facebook.
- Research Article
- 10.1080/09720502.2018.1495399
- Jul 4, 2018
- Journal of Interdisciplinary Mathematics
At present, encryption based on a simple one-dimensional chaotic system has limitations that lead to poor security for sensitive information in network databases. To solve this problem, this paper proposes a secondary encryption algorithm for users' information resources based on NCA mapping and spatio-temporal chaos. Exploiting the initial-value sensitivity, scrambling, and randomness of a dual chaotic system combining Logistic and Henon mappings, the algorithm applies the system to key generation for double encryption of the plaintext. The dual chaotic system is then improved, and a new spatio-temporal chaotic system based on NCA mapping is constructed: NCA mapping controls the scrambling of sensitive information, while the spatio-temporal chaotic sequence drives its diffusion, completing the encryption of users' sensitive information resources in the network database. The experimental results show that the proposed algorithm can effectively encrypt sensitive information and improve the security of network database operation.
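The initial-value sensitivity the scheme relies on can be illustrated with a single Logistic map driving an XOR keystream. The paper's actual design couples Logistic and Henon maps under NCA control, so this is only a simplified sketch with illustrative parameters.

```python
# Toy sketch of chaos-based stream encryption: a Logistic map generates a
# keystream whose bytes XOR the plaintext. A tiny change in the initial
# value x0 produces a diverging keystream, so decryption fails.
def logistic_keystream(x0, r, n):
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)               # Logistic map iteration
        out.append(int(x * 256) % 256)    # quantize state to a byte
    return out

def xor_crypt(data: bytes, x0: float, r: float = 3.99) -> bytes:
    ks = logistic_keystream(x0, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

msg = b"user: alice, ssn: 000-00-0000"
ct = xor_crypt(msg, x0=0.3141592653)
assert xor_crypt(ct, x0=0.3141592653) == msg   # same key decrypts
```

Because XOR with the same keystream is an involution, the same `x0` both encrypts and decrypts; a perturbed `x0` diverges after a few iterations and yields garbage.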
- Conference Article
- 10.1109/icdew.2013.6547438
- Apr 1, 2013
In recent years, there has been rapid growth in mobile devices such as smartphones, and many applications are developed specifically for the smartphone market. In particular, many applications are "free" to the user but depend on advertisement services for their revenue. Such applications include an advertisement module, a library provided by the advertisement service, that can collect a user's sensitive information and transmit it across the network. This information is used for targeted advertisements and user behavior statistics. Users accept this business model, but in most cases the applications do not require the user's acknowledgment before transmitting sensitive information, so their behavior becomes an invasion of privacy. In our analysis of 1,188 Android applications' network traffic and permissions, 93% of the applications connected to multiple destinations when using the network, and 61% required a permission combination that included both access to sensitive information and use of networking services. These applications have the potential to leak the user's sensitive information. Of the 107,859 HTTP packets from these applications, 23,309 (22%) contained sensitive information such as device identification number and carrier name. To enable users to control the transmission of their private information, we propose a system which, using a novel clustering method based on HTTP packet destination and content distances, generates signatures from the clustering result and uses them to detect sensitive-information leakage from Android applications. Our system requires neither an Android framework modification nor any special privileges, so users can easily introduce it to their devices and manage suspicious applications' network behavior in a fine-grained manner. Our system accurately detected 94% of the sensitive-information leakage from the evaluated applications, with only 5% false negatives and less than 3% false positives.
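The clustering idea above, grouping HTTP requests by a combined destination-and-content distance so that requests leaking the same fields fall together, can be sketched as follows. The distance weights, threshold, and packet fields are illustrative assumptions, not the paper's exact method.

```python
# Toy sketch: cluster HTTP requests by a weighted combination of
# destination mismatch and Jaccard distance over their parameter names.
def jaccard_dist(a: set, b: set) -> float:
    return 1.0 - len(a & b) / len(a | b) if (a | b) else 0.0

def packet_dist(p, q, w_dest=0.5):
    dest = 0.0 if p["host"] == q["host"] else 1.0
    content = jaccard_dist(set(p["params"]), set(q["params"]))
    return w_dest * dest + (1 - w_dest) * content

def cluster(packets, threshold=0.4):
    # greedy single-pass clustering against each cluster's first member
    clusters = []
    for p in packets:
        for c in clusters:
            if packet_dist(p, c[0]) <= threshold:
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

packets = [
    {"host": "ads.example.com", "params": ["imei", "carrier"]},
    {"host": "ads.example.com", "params": ["imei", "carrier", "model"]},
    {"host": "api.other.com", "params": ["q"]},
]
clusters = cluster(packets)
```

The two ad-network requests share a host and most parameter names, so they land in one cluster; a signature mined from that cluster could then flag future requests carrying the same identifying fields.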
- Conference Article
- 10.1109/mue.2007.64
- Apr 1, 2007
The evolution of mobile technologies will enable the ubiquitous computing environment, in which a user's mobile terminal manages his or her sensitive information and assists in daily activities. At the same time, information leakage will become an even more serious social problem. In this paper, we propose a framework that protects a user's sensitive information in a manner the user considers safe. In the framework, a user offers a program that implements this manner to an information recipient, and the recipient then accesses the user's sensitive information only through the program. In this way, the user can protect his or her sensitive information. The framework, however, has a weakness: the information recipient may analyze the program and obtain some sensitive information from it. In this paper, we introduce a tamper-proof device and trust relationships as a solution to this problem.
- Conference Article
- 10.1109/trustcom50675.2020.00238
- Dec 1, 2020
Nowadays, an increasing number of Android applications attempt to obtain large amounts of sensitive user information, such as contacts, SMS, call logs, IMEI, and IMSI, without rational necessity, seriously threatening user privacy. Existing Android mechanisms cannot effectively prevent these risks. To solve this problem, this paper proposes UIDroid, a novel user-driven sensitive-information management model. UIDroid redefines the subject, object, security levels, legitimacy of operations, and system security status. With UIDroid, users can authorize the sub-functions of an application to access sensitive information at security levels matched to the application's essential requirements for data accuracy. A prototype of UIDroid was developed to verify its feasibility and its compatibility with existing applications. Extensive experiments show that UIDroid can effectively prevent malicious applications from obtaining unnecessary sensitive user information at unnecessary accuracy, while the overall performance overhead introduced by UIDroid is less than 4.8%.
- Book Chapter
- 10.1007/978-3-319-02726-5_3
- Jan 1, 2013
Today, web browsers have become the de facto platform for Internet users, which makes them the target of many attacks. Designed with security in mind from the beginning, Chrome offers strong protection against exploits via benign-but-buggy extensions. However, more and more attacks are launched via malicious extensions, and there is no effective solution to defeat them. As users' sensitive information is often the target of such attacks, in this paper we aim to proactively defeat information leakage with our iObfus framework. With iObfus, sensitive information is automatically classified and labeled, then obfuscated before any I/O operation is conducted. In this way, users' sensitive information remains protected even if leakage occurs, and the obfuscated information is properly restored for legitimate browser transactions. A prototype has been implemented, and iObfus works seamlessly with Chromium 25. Evaluation against malicious extensions shows the effectiveness of iObfus, while it introduces only trivial overhead to benign extensions.
- Conference Article
- 10.1109/cisis.2015.11
- Jul 1, 2015
Social networks are used by millions of users every day and have become part of our lives. However, privacy and security issues have been raised, since users' sensitive information is exposed in several ways, and this disclosure problem is not yet solved by current social networks. Several studies have therefore been conducted to understand its implications and users' awareness, applying different approaches and techniques and producing a heterogeneous set of findings. To better understand the current state of the literature on user privacy disclosure and security, we conducted a systematic mapping review of papers published on IEEE, ACM, and Science Direct. We evaluated 143 papers and, under our defined criteria, classified 35 that include experiments on user privacy and security disclosure. The results indicate an academic preference for questionnaire-based approaches aiming to explore users' awareness. We also note that most studies are limited to small groups and that the research community is largely concentrated in the USA, Europe, and Asia.
- Research Article
- 10.1016/j.knosys.2021.108058
- Jan 3, 2022
- Knowledge-Based Systems
Dual constraints and adversarial learning for fair recommenders
- Conference Article
- 10.1109/iske.2017.8258834
- Nov 1, 2017
Sensitive information about an Online Social Network (OSN) user can be discovered through sophisticated data mining, even if the user does not directly reveal it. Malicious data miners can build a decision tree or forest from a data set describing a huge number of OSN users, learn general patterns, and then use those patterns to discover the sensitive information (such as political views) of a target user who has not revealed it directly. An existing technique called 3LP suggests that users suppress some information (such as hometown) and add and/or delete some friendship links to protect their sensitive information. In a previous study, 3LP was applied to a training data set to discover the general pattern and then applied to a testing data set to protect the sensitive information of the users in it. Once the testing data set was modified following 3LP's suggestions, the previous study cross-checked the users' privacy level using the same general pattern previously discovered from the training data set. In this paper, however, we argue that the general pattern of the training data set will change due to the modifications made in the testing data set, and hence the new general pattern should be used to test the privacy level of the users in the testing data set. We therefore use a different attack model, in which the training data set differs after the initial use of 3LP and the attacker can use any classifier in addition to decision forests. We also argue that data utility should be measured alongside the privacy level to evaluate the effectiveness of a privacy technique, and we experimentally compare 3LP with another existing method.
- Research Article
- 10.32604/cmc.2022.029002
- Jan 1, 2022
- Computers, Materials & Continua
A game measurement model that considers the attacker's knowledge background is proposed based on Bayesian game theory, aiming to strike a balance between the protection of sensitive information and the quality of service. We quantified the sensitivity level of information according to users' personalized protection needs. Based on the probability distribution of sensitivity levels and the attacker's knowledge-background type, the strategy combinations of service provider and attacker were analyzed, and a game-based sensitive-information protection model was constructed. Under the strategy combinations at Bayesian equilibrium, information entropy was used to measure the leakage of sensitive information. The paper further considers comprehensively how the sensitivity level of information and the attacker's knowledge background influence the strategies of both sides of the game, and measures the leakage of the user's sensitive information accordingly. Finally, the feasibility of the model was demonstrated by experiments.
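The entropy-based leakage measurement can be illustrated in a few lines: leakage is taken here as the drop in Shannon entropy of the attacker's belief over the sensitive value before vs. after observation. The prior and posterior distributions are illustrative assumptions, not values from the paper.

```python
import math

# Shannon entropy (in bits) of a discrete probability distribution.
def entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

prior = [0.25, 0.25, 0.25, 0.25]        # attacker's belief before observation
posterior = [0.70, 0.10, 0.10, 0.10]    # belief after observing the service
leakage = entropy(prior) - entropy(posterior)   # bits of information leaked
```

A uniform prior over four values carries 2 bits of uncertainty; the sharper posterior carries less, and the difference quantifies how much the observation revealed.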
- Conference Article
- 10.1109/icdm.2019.00197
- Nov 1, 2019
Sharing ubiquitous mobile sensor data, especially physiological data, raises potential risks of leaking physical and demographic information that can be inferred from the time-series sensor data. Existing sensitive-information protection mechanisms that depend on data transformation are effective only for a particular sensitive attribute and usually require labels of the sensitive information for training. To close this gap, we propose a novel user sensitive-information protection framework that neither uses a sensitive training dataset nor is validated on protecting only one specific kind of sensitive information. The presented approach transforms raw sensor data into a new format whose "style" (sensitive information) is random noise and whose "content" (desired information) comes from the raw sensor data; it is thus free of user sensitive information during training and able to protect all sensitive information collectively at once. Our implementation and experiments on two real-world multi-sensor human activity datasets demonstrate that the proposed data transformation can protect all sensitive information at once without requiring knowledge of users' personal attributes for training, while preserving the usability of the transformed data for inferring human activities with insignificant performance loss.
- Research Article
- 10.1145/3648683
- Jun 19, 2024
- ACM Transactions on Knowledge Discovery from Data
With the development of recommendation algorithms, researchers are paying increasing attention to fairness issues such as user discrimination in recommendations. To address these issues, existing works often filter users’ sensitive information that may cause discrimination during the process of learning user representations. However, these approaches overlook the latent relationship between items’ content attributes and users’ sensitive information. In this article, we propose DALFRec, a fairness-aware recommendation algorithm based on user-side and item-side adversarial learning to mitigate the effects of sensitive information on both sides of the recommendation process. First, we conduct a statistical analysis to demonstrate the latent relationship between items’ information and users’ sensitive attributes. Then, we design a dual-side adversarial learning network that simultaneously filters out users’ sensitive information on the user and item side. Additionally, we propose a new evaluation strategy that leverages the latent relationship between items’ content attributes and users’ sensitive attributes to better assess the algorithm’s ability to reduce discrimination. Our experiments on three real datasets demonstrate the superiority of our proposed algorithm over state-of-the-art methods.