Sheep's clothing, wolfish intent: Automated detection and evaluation of problematic 'allowed' advertisements

Abstract

The digital advertising ecosystem sustains the free web and drives global innovation, but often at the cost of user privacy through intrusive tracking and non-compliant ads, which are especially harmful to under-age users. This has led to widespread adoption of privacy tools like adblockers and anti-trackers, which, while disrupting ad revenues, expose users to alternate forms of tracking and fingerprinting. To address this, many adblockers now allow 'non-intrusive' ads by default. In this study, we evaluate Adblock Plus's Acceptable Ads feature and find a 13.6% increase in problematic ads compared with using no adblocker at all, challenging claims of improved user experience. We also find that ad exchanges on allowlists are more likely to serve problematic content, underscoring the hidden cost privacy-aware users pay when relying on such technologies. While prior work in the domain has been limited by its practical viability, we further propose a methodology to automate the detection of problematic ads using LLMs with zero-shot prompting, achieving substantial agreement with human annotators (IAA score: 0.79). This establishes the efficacy of LLMs in problematic content detection under well-defined environments. As in-browser LLMs emerge, adversaries may exploit problematic ad content to fingerprint privacy-conscious ABP users. At the same time, these advances present new opportunities for adblockers to develop robust defenses, detect malicious exchanges, and uphold both user privacy and the sustainability of the ad-supported web.
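
As a rough illustration of the zero-shot LLM pipeline the abstract describes, the sketch below classifies an ad into a problematic-content label with a single prompt and no in-context examples. The label taxonomy, prompt wording, and model name are illustrative assumptions, not the paper's actual setup.

```python
# A minimal sketch of zero-shot problematic-ad classification; the labels,
# prompt, and model name below are illustrative assumptions, not the
# paper's actual taxonomy or prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

LABELS = ["deceptive", "adult", "clickbait", "inappropriate_for_minors", "benign"]

def classify_ad(ad_text: str, landing_url: str) -> str:
    """Ask the model for a single label, zero-shot (no in-context examples)."""
    prompt = (
        "You are an ad-quality reviewer. Classify the advertisement below "
        f"into exactly one of: {', '.join(LABELS)}.\n\n"
        f"Ad text: {ad_text}\nLanding page: {landing_url}\n\n"
        "Answer with the label only."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any instruction-tuned chat model works
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output aids agreement with annotators
    )
    return resp.choices[0].message.content.strip()
```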

Similar Papers
  • Conference Article
  • Cited by 1
  • 10.1109/icitcs.2016.7740332
FARIS: Fast and Memory-Efficient URL Filter by Domain Specific Machine
  • Sep 1, 2016
  • Yuuki Takano + 1 more

Uniform resource locator (URL) filtering is a fundamental technology for intrusion detection, HTTP proxies, content distribution networks, content-centric networks, and many other application areas. Some applications adopt URL filtering to protect user privacy from malicious or insecure websites. AdBlock Plus is an example of a URL-filtering application, which filters sites that intend to steal sensitive information. Unfortunately, AdBlock Plus is implemented inefficiently, resulting in a slow application that consumes much memory. Although it provides a domain-specific language (DSL) to represent URLs, it internally uses regular expressions and does not take advantage of the benefits of the DSL. In addition, the number of filter rules becomes large, which makes matters worse. In this paper, we propose the fast uniform resource identifier-specific filter, a domain-specific pseudo-machine for the DSL, to improve the performance of AdBlock Plus. Compared with a conventional implementation that internally adopts regular expressions, our proof-of-concept implementation is fast and has a small memory footprint.
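
To make the DSL-versus-regex point concrete, here is a minimal sketch contrasting a regex translation of an Adblock-Plus-style rule such as `||ads.example.com^` with direct domain matching of the kind a domain-specific machine can exploit; the rule and URLs are illustrative, not from the paper.

```python
# A minimal sketch contrasting two ways to match the Adblock-Plus-style
# rule "||ads.example.com^" (illustrative, not from the paper): a regex
# translation versus a direct domain check, which a DSL-aware machine
# like FARIS can exploit.
import re
from urllib.parse import urlparse

RULE_DOMAIN = "ads.example.com"  # from the rule "||ads.example.com^"

# Rough regex translation of the rule, as a conventional engine might compile it.
rule_re = re.compile(r"^[a-z]+://([^/]*\.)?" + re.escape(RULE_DOMAIN) + r"([/:]|$)")

def match_regex(url: str) -> bool:
    return rule_re.match(url) is not None

def match_domain(url: str) -> bool:
    # Domain-specific matching: compare the parsed host directly, with no
    # regex compilation or backtracking per rule.
    host = urlparse(url).hostname or ""
    return host == RULE_DOMAIN or host.endswith("." + RULE_DOMAIN)

assert match_regex("http://ads.example.com/banner.js")
assert match_domain("https://sub.ads.example.com/banner.js")
assert not match_domain("https://ads.example.com.evil.com/x.js")
```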

  • Conference Article
  • Cited by 1
  • 10.1063/1.5039088
Big data privacy protection model based on multi-level trusted system
  • Jan 1, 2018
  • Nan Zhang + 2 more

This paper builds on the multi-level trusted system model, which defends against Trojan attacks by encrypting users' private data, and enforces the principle "do not read the higher-priority level; do not write to the lower-priority level," ensuring that a leak of low-priority data privacy does not lead to the disclosure of high-priority data privacy. The model divides data into seven risk levels, with priorities 1 to 7 representing increasing value of user data privacy, and realizes seven encryption algorithms of differing execution efficiency: the higher the priority, the greater the value of the user's data privacy, and the stronger (and slower) the encryption algorithm chosen to secure it. For enterprises, pricing is determined by per-device usage time; algorithms for higher-risk subgroups take longer to encrypt, and the model assumes users prefer lower-priority algorithms when efficiency matters. The paper proposes a privacy cost model for each of the seven risk subgroups, in which a higher privacy cost corresponds to a higher-priority subgroup and a higher price the user must pay to protect the data. Furthermore, by combining an existing economic pricing model with a proposed human-traffic model that fluctuates with market demand, unit prices are raised when demand is low, while when demand rises, unit prices are reduced under government guidance so that enterprise profit is still guaranteed. The model is further optimized with dynamic factors such as consumer mood and age, and seven algorithms are selected from symmetric and asymmetric encryption schemes to define enterprise costs at each level. The proposed model thus handles the cascading effects of privacy events and ensures that the disclosure of users' low-level data privacy does not affect their high-level data privacy, greatly improving the safety of users' private information.
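
A minimal sketch of the level-to-cost trade-off described above: higher-risk levels buy costlier protection. PBKDF2 iteration counts stand in for the paper's seven encryption algorithms, which the abstract does not name, so the mapping below is purely illustrative.

```python
# A minimal sketch of "higher risk level -> costlier protection". PBKDF2
# iteration counts model the execution-cost gradient of the paper's seven
# (unnamed) encryption algorithms; the mapping is purely illustrative.
import hashlib, os, time

# Level 1 (low-value data) is cheapest; level 7 (high-value) is costliest.
ITERATIONS = {level: 10_000 * 2 ** (level - 1) for level in range(1, 8)}

def protect(data: bytes, level: int) -> bytes:
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", data, salt, ITERATIONS[level])
    return salt + key  # the derived key stands in for a ciphertext here

for level in (1, 4, 7):
    t0 = time.perf_counter()
    protect(b"user-record", level)
    print(f"level {level}: {time.perf_counter() - t0:.3f}s")  # cost grows with level
```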

  • Research Article
  • 10.30837/rt.2024.4.219.03
Ensuring user anonymity in online surveys
  • May 29, 2025
  • Radiotekhnika
  • I.V Oleshko + 1 more

The paper addresses the pressing issue of ensuring user anonymity during online surveys. It analyzes the privacy risks associated with JavaScript, which, on the one hand, provides interactivity and convenience in surveys and, on the other, creates a threat of data leakage through trackers and third-party scripts. Many survey platforms integrate third-party scripts that can collect users' personal information without their consent. The paper provides an overview of existing privacy protection tools and finds that the most popular among users is the Adblock Plus browser extension. The most popular online survey platforms (Google Forms, SurveyMonkey, Typeform, Xoyondo, and others) are analyzed in terms of their ability to ensure user privacy; the level of protection varies from completely blocking scripts to their almost unlimited use. An experimental study was conducted on Windows 11 Pro x64 using the PyCharm 2024.3.1.1 (Community Edition) IDE to write Python code. The Ghostery tool was found capable of blocking up to 87% of third-party scripts that could potentially affect user anonymity, and the Google Forms platform provides the best level of user anonymity among the considered applications. Further research on user anonymity applying machine learning methods is relevant and necessary, as it will enable more effective identification of obfuscated code.
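
A minimal sketch of the kind of measurement behind figures like the 87% blocking rate above: counting third-party script origins on a survey page. The paper's experiment drove a real browser with blocking tools enabled; this simplified version only parses static HTML, and the example URL is hypothetical.

```python
# A simplified, static version of the measurement: count third-party
# script origins on a page. The real study drove a browser with blocking
# tools enabled; the URL below is hypothetical.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse

def third_party_scripts(page_url: str) -> list[str]:
    first_party = urlparse(page_url).hostname
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    srcs = [tag["src"] for tag in soup.find_all("script", src=True)]
    # Keep only scripts served from a host other than the survey page's own.
    return [s for s in srcs
            if urlparse(s).hostname and urlparse(s).hostname != first_party]

# Example: third_party_scripts("https://example.com/survey")
```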

  • Book Chapter
  • 10.1093/oso/9780198891420.003.0007
Privacy Fixing and Other Forms of Anticompetitive Cooperation on Privacy
  • Jul 9, 2024
  • Samson Y Esayas

This chapter probes new ground, critically examining the existence and potential of collaborative efforts that could restrict competition on data privacy, in violation of Article 101 of the Treaty on the Functioning of the European Union. While legal precedents and enforcement actions under Article 101 typically focus on price fixing, collaborative efforts can be equally harmful when applied to non-price parameters such as quality, choice, or innovation. As data privacy gains increasing recognition as a non-price competition parameter, it is attracting scrutiny as a potential arena for collusive practices. In an antitrust lawsuit led by Texas, Google stands accused of 'privacy fixing', or coordinating with rivals to harm user privacy in violation of competition laws. Similar concerns have been raised by some EU member states regarding an agreement between Google and Eyeo, the owner of the anti-tracking and ad-blocking software Adblock Plus. Using these examples, the chapter seeks to shed light on the emerging concept of privacy fixing and its place in antitrust, including why digital companies may find it worthwhile to fix the level of privacy, how this may harm consumers, and the extent to which competition law can tackle such harms. The chapter further investigates some of the ways in which standard setting on Internet communications and privacy conditions can negatively impact competition on data privacy. Specifically, it explores how such standards might suppress information competition and impose a ceiling on the level of data privacy protection offered.

  • Conference Article
  • Cited by 67
  • 10.1145/1297231.1297233
Private distributed collaborative filtering using estimated concordance measures
  • Oct 19, 2007
  • Neal Lathia + 2 more

Collaborative filtering has become an established method to measure users' similarity and to make predictions about their interests. However, prediction accuracy comes at the cost of users' privacy: in order to derive accurate similarity measures, users are required to share their rating history with each other. In this work we propose a new measure of similarity, which achieves prediction accuracy comparable to the Pearson correlation coefficient and can be estimated without breaking users' privacy. This novel method works by estimating the number of concordant, discordant, and tied pairs of ratings between two users with respect to a shared random set of ratings. In doing so, neither the items rated nor the ratings themselves are disclosed, thus achieving strictly private collaborative filtering. The technique has been evaluated using the recently released Netflix Prize dataset.
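
A minimal sketch of the concordance idea described above: each user compares her ratings against a shared random rating set locally and discloses only relation signs, never the ratings or items. The similarity ratio below is the standard Kendall-style form; the paper's exact estimator may differ.

```python
# Each user relates her own ratings to a shared random rating set locally,
# then only the relation signs are used, never the ratings or items. The
# similarity ratio is the standard Kendall-style form; the paper's
# estimator may differ in detail.
import random

random.seed(42)
SHARED_RANDOM = [random.uniform(1, 5) for _ in range(100)]  # public baseline

def local_relations(own_ratings):
    """Sign of each rating relative to the shared random rating (computed locally)."""
    return [0 if own == rnd else (1 if own > rnd else -1)
            for own, rnd in zip(own_ratings, SHARED_RANDOM)]

def concordance_similarity(rel_a, rel_b):
    concordant = sum(1 for a, b in zip(rel_a, rel_b) if a == b and a != 0)
    discordant = sum(1 for a, b in zip(rel_a, rel_b) if a == -b and a != 0)
    return (concordant - discordant) / len(rel_a)  # in [-1, 1], like Kendall's tau
```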

  • Conference Article
  • 10.1109/icdcs54860.2022.00023
Amanuensis: provenance, privacy, and permission in TEE-enabled blockchain data systems
  • Jul 1, 2022
  • Taylor Hardin + 1 more

Blockchain technology is heralded for its ability to provide transparent and immutable audit trails for data shared among semi-trusted parties. With the addition of smart contracts, blockchains can track and verify arbitrary computations, which enables blockchain users to verify the provenance of information derived from data through the blockchain. This provenance comes at the cost of data confidentiality and user privacy, however, which is unacceptable for many sensitive applications. The need for verifiable yet confidential data sharing and computation has led some to add trusted execution environment (TEE) hardware to blockchain platforms. By moving sensitive operations (e.g., data decryption and analysis) off of the blockchain and into a TEE, they get both the confidentiality of TEEs and the transparency of blockchains without the need to completely trust any one party in the data-sharing ecosystem. In this paper, we build on our TEE-enabled blockchain data-sharing system, Amanuensis, to ensure the freshness of access-control lists shared between the blockchain and TEE, and to improve the privacy of users interacting within the system. We also detail how TEE-based remote attestation helps us achieve information provenance, specifically in the context of the Intel SGX trusted execution environment. Finally, we present an evaluation of our system, in which we test several real-world machine-learning applications (logistic regression, kNN, SVM) to determine the run-time overhead of information confidentiality and provenance. Each machine-learning program exhibited a slowdown between 1.1x and 2.8x when run inside our confidential environment, and took an average of 59 milliseconds to verify the provenance of an input data set.

  • Research Article
  • Cited by 11
  • 10.2139/ssrn.2265026
The Cost of Lost Privacy: Search, Antitrust and the Economics of the Control of User Data
  • Jan 1, 2013
  • SSRN Electronic Journal
  • Nathan Newman


  • Dissertation
  • 10.17760/d20321280
On the privacy implications of real time bidding
  • May 10, 2021
  • Muhammad Ahmad Bashir

The massive growth of online advertising has created a need for commensurate amounts of user tracking. Advertising companies track online users extensively to serve targeted advertisements. On the surface, this seems like a simple process: a tracker places a unique cookie in the user's browser, repeatedly observes the same cookie as the user surfs the web, and finally uses the accrued data to select targeted ads. However, the reality is much more complex. The rise of Real Time Bidding (RTB) has forced the Advertising and Analytics (A&A) companies to collaborate more closely with one another, to exchange data about users to facilitate bidding in RTB auctions. The amount of information-sharing is further exacerbated by how real-time auctions are implemented. During an auction, several A&A companies observe user impressions as they receive bid requests, even though only one of them eventually wins the auction and serves the advertisement. This significantly increases the privacy digital footprint of the user. Because of RTB, tracking data is not just observed by trackers embedded directly into web pages, but rather it is funneled through the advertising ecosystem through complex networks of exchanges and auctions. Numerous surveys have shown that web users are not completely aware of the amount of data sharing that occurs between A&A companies, and thus underestimate the privacy risks associated with online tracking. To accurately quantify users' privacy digital footprint, we need to take into account the information-sharing that happens either to facilitate RTB auctions or as a consequence of them. However, measuring these flows of tracking information is challenging. Although there is prior work on detecting information-sharing (cookie matching) between A&A companies, these studies are based on brittle heuristics that cannot detect all forms of information-sharing (e.g., server-side matching), especially under adversarial conditions (e.g., obfuscation). This limits our view of the privacy landscape and hinders the development of effective privacy tools. The overall goal of my thesis is to understand the privacy implications of Real Time Bidding, to bridge the divide between the actual privacy landscape and our understanding of it. To that end, I propose methods and tools to accurately map information-sharing among A&A domains in the modern ad ecosystem under RTB. First, I propose a content-agnostic methodology that can detect client- and server-side information flows between arbitrary A&A domains using retargeted ads. Intuitively, this methodology works because it relies on the semantics of how exchanges serve ads, rather than focusing on specific cookie matching mechanisms. Using crawled data on 35,448 ad impressions, I show that this methodology can successfully categorize four different kinds of information-sharing behaviors between A&A domains, including cases where existing heuristic methods fail. Next, in order to capture the effects of ad exchanges during RTB auctions accurately, I isolate a list of A&A domains that act as ad exchanges during the bidding process. Identifying such A&A domains is crucial, since they can disperse user impressions to multiple other A&A domains to solicit bids. I achieve this by conducting a longitudinal analysis of a transparency standard called ads.txt, which was introduced to combat ad fraud by helping ad buyers verify authorized digital ad sellers. 
In particular, I conduct a 15-month longitudinal study of the standard to gather a list of A&A domains that are labeled as ad exchanges (authorized sellers) by publishers in their ads.txt files. Through my analysis of the Alexa Top-100K, I observed that over 60% of the publishers who run RTB ads have adopted the ads.txt standard. This widespread adoption allowed me to explicitly identify over 1,000 A&A domains belonging to ad exchanges. Finally, I use the list of ad exchanges from ads.txt along with the information flows between A&A companies collected using my generic methodology to build an accurate model of the privacy digital footprint of web users. In particular, I use these data sources to model the advertising ecosystem as a graph called an Inclusion graph. Through simulations on the Inclusion graph, I provide upper and lower estimates on the tracking information observed by A&A companies. I show that the top 10% of A&A domains observe at least 91% of an average user's browsing history under reasonable assumptions about information-sharing within RTB auctions. I also evaluate the effectiveness of blocking strategies (e.g., AdBlock Plus) and find that major A&A domains still observe 40-90% of user impressions, depending on the blocking strategy. Overall, in this dissertation, I propose new methodologies to understand the privacy implications of Real Time Bidding. The proposed methods can shed light on the opaque ecosystem of programmatic advertising and enable users to gain a more accurate view of their digital footprint. Furthermore, the results of this thesis can be used to build new, or enhance existing, privacy-preserving tools.
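
A minimal sketch of the Inclusion-graph simulation described above, using a tiny fabricated graph: every A&A domain reachable from the publisher observes the impression, even though only one bidder wins the auction and serves the ad.

```python
# Tiny fabricated Inclusion graph: edges are resource inclusions and RTB
# information-sharing. Every domain reachable from the publisher observes
# the impression, even though only one bidder wins and serves the ad.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("publisher.com", "exchange-a.com"),  # publisher includes the exchange's tag
    ("exchange-a.com", "dsp-1.com"),      # exchange solicits bids (RTB sharing)
    ("exchange-a.com", "dsp-2.com"),
    ("dsp-1.com", "tracker-x.com"),       # winning DSP syncs with a tracker
])

observers = nx.descendants(g, "publisher.com")
print(sorted(observers))  # all four A&A domains observe the impression
```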

  • Research Article
  • 10.54254/2753-7064/2024.17862
Advertising Breaks the Privacy of WeChat Circle of Friends Analysis
  • Dec 9, 2024
  • Communications in Humanities Research
  • Tingxian Huang

With the rapid development of Internet technology and the popularity of smartphones, social media has become an indispensable part of people's daily lives. WeChat, one of the largest social media platforms in China, has gained popularity for its Moments function due to its high degree of privacy and user stickiness. However, with the rise of the social media advertising market, the frequent appearance of advertising messages in WeChat Moments not only disrupts users' social experience but may also raise concerns about privacy leakage. This study analyzes how advertisement delivery breaks the privacy of the WeChat circle of friends and explores how to optimize delivery strategy, including delivery volume, delivery characteristics, and target audience positioning. The goal of such optimization is to reduce intrusion on user privacy, improve user experience, and achieve a win-win between business interests and user privacy protection. Using a questionnaire survey and hot-list comparison, the study analyzed the impact of advertising on users' privacy perception, attitudes, and behavior. The results show that ad delivery breaks the privacy of WeChat Moments to some extent, and users are put off by frequent, non-personalized ads. The study also found that gender, age, occupation, and other factors affect the perception of privacy in the WeChat circle of friends. Based on these results, marketing strategies and content suggestions for different user groups are proposed to better safeguard users' privacy rights and social experience while ensuring advertising effectiveness and promoting the sustainable development of social media platforms and the advertising industry.

  • Conference Article
  • Cited by 3
  • 10.1109/allerton.2019.8919800
Data Collection from Privacy-Aware Users in the Presence of Social Learning
  • Sep 1, 2019
  • Abdullah Basar Akbay + 2 more

We study a model where a data collector obtains data from users through a payment mechanism to learn the underlying state from the elicited data. The private signal of each user represents her individual knowledge about the state. Through social interactions, each user can also learn noisy versions of her friends' signals, which are called group signals. Based on both her private signal and group signals, each user makes strategic decisions to report a privacy-preserved version of her data to the data collector. We develop a Bayesian game theoretic framework to study the impact of social learning on users' data reporting strategies and devise the payment mechanism for the data collector accordingly. Our findings reveal that the Bayesian-Nash equilibrium can take the form of either a symmetric randomized response (SR) strategy or an informative non-disclosive (ND) strategy. A generalized majority voting rule is applied by each user to her noisy group signals to determine which strategy to follow. When a user plays the ND strategy, she reports privacy-preserving data based entirely on her group signals, independent of her private signal, which indicates that her privacy cost is zero. Both the data collector and the users can benefit from social learning, which drives down privacy costs and helps improve state estimation at a given payment budget. We derive bounds on the minimum total payment required to achieve a given level of state estimation accuracy.
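
A minimal sketch of the symmetric randomized response (SR) strategy named above: report the true binary signal with probability p, flip it otherwise. Here p is a free parameter; in the paper it emerges from the Bayesian-Nash equilibrium.

```python
# Symmetric randomized response: report the true binary signal with
# probability p, flip it otherwise; p is a free parameter here.
import random

def symmetric_randomized_response(signal: int, p: float) -> int:
    """signal in {0, 1}; report it truthfully with probability p."""
    return signal if random.random() < p else 1 - signal

def debias(mean_of_reports: float, p: float) -> float:
    """Unbiased estimate of the true mean: E[report] = (2p-1)*mu + (1-p)."""
    return (mean_of_reports - (1 - p)) / (2 * p - 1)
```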

  • Research Article
  • Cited by 4
  • 10.3390/app13053191
Effective Techniques for Protecting the Privacy of Web Users
  • Mar 2, 2023
  • Applied Sciences
  • Maryam Bubukayr + 1 more

With the rapid growth of web networks, the security and privacy of online users are becoming increasingly compromised, especially through third-party services that track users' activities to improve website performance. Using personal information to create unique profiles inevitably risks violating individuals' privacy. Several tools have been developed, such as anonymity networks, anti-trackers, and browser plugins, to protect users from third-party tracking by blocking JavaScript programs and other website components. However, the current state lacks an efficient approach that provides a comprehensive solution. In this paper, we conducted a systematic analysis of the most common privacy protection tools, evaluating their accuracy in correctly classifying tracking and functional JavaScript programs and the time the browser takes to render pages with each tool. To achieve this, we automatically browsed the top 50 websites of 2022, categorized by field, and collected the in-page JavaScript (in HTML script tags) and all external JavaScript programs, assembling a dataset of 1578 JavaScript elements across six Firefox profiles, one per enabled tool. The results show that Ghostery allowed the highest share of functional scripts with the lowest average error rate (AER), while NoScript blocked the highest share of tracking scripts, being the most aggressive blocker of third-party services. Examining browser speed, we found that Ghostery improved load time by 36.2% over the baseline, while Privacy Badger reduced load time by only 7.1%. We believe our findings can help users choose a privacy tool that meets their needs, and researchers and developers can use them to design more effective privacy protection techniques.
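
A minimal sketch of the scoring logic implied above, assuming AER counts both allowed tracking scripts and blocked functional scripts as errors (the paper's exact definition may differ); the tallies are hypothetical.

```python
# AER here counts an allowed tracking script or a blocked functional
# script as an error; the tallies below are hypothetical.
def average_error_rate(decisions):
    """decisions: list of (is_tracking, was_blocked) pairs, one per script."""
    errors = sum(1 for is_tracking, blocked in decisions if is_tracking != blocked)
    return errors / len(decisions)

# Hypothetical outcome for one tool over 1578 scripts:
sample = ([(True, True)] * 900 + [(True, False)] * 78 +     # trackers
          [(False, False)] * 550 + [(False, True)] * 50)    # functional scripts
print(f"AER = {average_error_rate(sample):.3f}")  # -> 0.081
```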

  • Research Article
  • Cited by 27
  • 10.1109/jiot.2021.3058209
A Blockchain-Based Approach for Saving and Tracking Differential-Privacy Cost
  • Feb 2, 2021
  • IEEE Internet of Things Journal
  • Yang Zhao + 6 more

An increasing amount of users' sensitive information is now being collected for analytics purposes. Differential privacy has been widely studied in the literature to protect the privacy of users' information. The privacy parameter bounds the information about the data set leaked by the noisy output. Oftentimes, a data set needs to be used for answering multiple queries, so the level of privacy protection may degrade as more queries are answered. Thus, it is crucial to keep track of privacy budget spending, which should not exceed the given limit of privacy budget. Moreover, if a query has been answered before and is asked again on the same data set, we may reuse the previous noisy response for the current query to save the privacy cost. In view of the above, we design an algorithm to reuse previous noisy responses if the same query is asked repeatedly. In particular, considering that different requests of the same query may have different privacy requirements, our algorithm can set the optimal reuse fraction of the old noisy response and add new noise to minimize the accumulated privacy cost. Furthermore, we design and implement a blockchain-based system for tracking and saving differential-privacy cost. As a result, the owner of the data set will have full knowledge about how the data set has been used and be confident that no new privacy cost will be incurred for answering queries once the specified privacy budget is exhausted.
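
A minimal sketch of budget tracking with response reuse as described above: Laplace noise for each fresh query, a ledger of spend, and a cached answer returned at no new cost when the same query repeats with an equal or weaker privacy requirement. The paper's algorithm additionally blends old and new noise optimally, and records the ledger on a blockchain; both steps are omitted here.

```python
# Laplace noise per fresh query, a spend ledger, and reuse of a cached
# answer when the repeated query's requirement is no stricter than what
# was already paid for. (The paper also blends old and new noise
# optimally; that step is omitted here.)
import numpy as np

class PrivacyLedger:
    def __init__(self, budget: float):
        self.remaining = budget
        self.cache = {}  # query -> (epsilon_spent, noisy_answer)

    def answer(self, query: str, true_value: float,
               sensitivity: float, epsilon: float) -> float:
        cached = self.cache.get(query)
        if cached and cached[0] >= epsilon:
            return cached[1]  # already paid for at least this accuracy: free reuse
        if epsilon > self.remaining:
            raise RuntimeError("privacy budget exhausted")
        noisy = true_value + np.random.laplace(0.0, sensitivity / epsilon)
        self.remaining -= epsilon
        self.cache[query] = (epsilon, noisy)
        return noisy

ledger = PrivacyLedger(budget=1.0)
a1 = ledger.answer("avg_age", 34.2, sensitivity=1.0, epsilon=0.5)
a2 = ledger.answer("avg_age", 34.2, sensitivity=1.0, epsilon=0.5)  # cached
assert a1 == a2 and ledger.remaining == 0.5
```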

  • Conference Article
  • Cited by 4
  • 10.1109/infocom48880.2022.9796665
Otus: A Gaze Model-based Privacy Control Framework for Eye Tracking Applications
  • May 2, 2022
  • Miao Hu + 4 more

Eye tracking techniques have been widely adopted by a wide range of devices (e.g., AR/VR headsets, smartphones) to enhance user experiences. However, eye gaze data is private in nature and can reveal users' psychological and physiological features. Privacy protection techniques can be incorporated to preserve the privacy of eye tracking information. Yet, most existing solutions based on Differential Privacy (DP) mechanisms cannot adequately protect individual users' privacy without sacrificing user experience. In this paper, we are among the first to propose a novel gaze model-based privacy control framework called Otus for eye tracking applications, which incorporates local DP (LDP) mechanisms to preserve user privacy while improving user experience. First, we conduct a measurement study on real traces to illustrate that direct noise injection on raw gaze trajectories can significantly lower the utility of gaze data. To preserve utility and privacy simultaneously, Otus injects noise in two steps: (1) extracting model features from raw data to depict individual users' gaze trajectories; (2) adding LDP noise to the model features to protect privacy. On one hand, the established models can be used to recover user gaze data and thereby improve the service quality of eye tracking applications. On the other hand, we only need to add LDP noise to a small number of model parameters rather than every point on a trajectory, which has less impact on the utility of gaze data given the same privacy budget. By applying the tile view graph model in step (1), we illustrate the entire workflow of Otus and prove its privacy protection level. For evaluation, we conduct extensive experiments using real gaze traces, and the results show that Otus can effectively protect privacy for individual users without significantly compromising gaze data utility.
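
A minimal sketch contrasting the two injection points discussed above: perturbing every raw gaze point versus perturbing only a fitted model's few parameters under the same total budget. A per-axis linear fit stands in for Otus's tile view graph model, so the numbers are purely illustrative.

```python
# Same total budget, two injection points: every raw gaze point versus the
# few parameters of a fitted per-axis linear model (an illustrative
# stand-in for Otus's tile view graph model).
import numpy as np

rng = np.random.default_rng(0)
trajectory = rng.random((500, 2))  # 500 raw (x, y) gaze points
epsilon = 1.0

# (1) Naive: the budget is split across every coordinate of every point,
# so each value gets a tiny epsilon and correspondingly huge noise.
per_value_eps = epsilon / trajectory.size
noisy_raw = trajectory + rng.laplace(0, 1.0 / per_value_eps, trajectory.shape)

# (2) Model-based: fit 2 parameters per axis, add noise only to those 4 values.
t = np.arange(len(trajectory))
coeffs = np.stack([np.polyfit(t, trajectory[:, d], deg=1) for d in (0, 1)])
per_param_eps = epsilon / coeffs.size
noisy_coeffs = coeffs + rng.laplace(0, 1.0 / per_param_eps, coeffs.shape)
recovered = np.stack([np.polyval(noisy_coeffs[d], t) for d in (0, 1)], axis=1)
```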

  • Research Article
  • 10.58982/jdlp.v3i2.533
Analysis Of The 2nd Principle Of Pancasila On The Instagram Privacy Disclaimer
  • Jan 31, 2024
  • Journal of Digital Law and Policy
  • Berlian Dwipasari + 2 more

The article discusses the importance of user privacy on Instagram, a popular social media platform. In the digital era, millions of users share aspects of their personal lives, creating risks of data misuse and security threats. The right to privacy is recognized as a human right, including in Indonesian privacy law. Privacy controls: Instagram gives users full control, allowing personal account settings. Data policy: Instagram has a policy that explains the collection, use, and sharing of data, reflecting civilized principles. Data security: Instagram claims to implement security measures and collaborate with third parties. Privacy tools and features: Instagram is constantly developing new privacy tools and features. In the context of Instagram user privacy, the 2nd Principle of Pancasila, "Just and Civilized Humanity," reflects fair treatment and respect for human rights. The rights to privacy, freedom of expression, data security, and control over personal information are emphasized in implementing the 2nd Principle. The article assesses Instagram's privacy policy against the 2nd Principle and encourages users to be judicious in sharing information. A mixed-methods approach combining descriptive qualitative methods and literature study is used to understand the impact on Instagram user privacy. Privacy is a human right, and Instagram is expected to implement the 2nd Principle through policies that support control, security, and fair treatment of users' data.

More from: Proceedings on Privacy Enhancing Technologies
  • Research Article
  • 10.56553/popets-2025-0120
"Erasing the Echo": The Usability of Data Deletion in Smart Personal Assistants
  • Oct 1, 2025
  • Proceedings on Privacy Enhancing Technologies
  • Cheng Cheng + 1 more

  • Research Article
  • 10.56553/popets-2025-0119
TEEMS: A Trusted Execution Environment based Metadata-protected Messaging System
  • Oct 1, 2025
  • Proceedings on Privacy Enhancing Technologies
  • Sajin Sasy + 2 more

  • Research Article
  • 10.56553/popets-2025-0147
Okay Google, Where’s My Tracker? Security, Privacy, and Performance Evaluation of Google's Find My Device Network
  • Oct 1, 2025
  • Proceedings on Privacy Enhancing Technologies
  • Leon Böttger + 3 more

  • Research Article
  • 10.56553/popets-2025-0157
Making Web Applications GDPR Compliant: A Comparative Evaluation of GDPR-Enforcement Frameworks
  • Oct 1, 2025
  • Proceedings on Privacy Enhancing Technologies
  • Felix Kalinowski + 3 more

  • Research Article
  • 10.56553/popets-2025-0161
TeleSparse: Practical Privacy-Preserving Verification of Deep Neural Networks
  • Oct 1, 2025
  • Proceedings on Privacy Enhancing Technologies
  • Mohammad M Maheri + 2 more

  • Research Article
  • 10.56553/popets-2025-0146
HyDia: FHE-based Facial Matching with Hybrid Approximations and Diagonalization
  • Oct 1, 2025
  • Proceedings on Privacy Enhancing Technologies
  • Sam Martin + 5 more

  • Research Article
  • 10.56553/popets-2025-0126
Robust and Efficient Watermarking of Large Language Models Using Error Correction Codes
  • Oct 1, 2025
  • Proceedings on Privacy Enhancing Technologies
  • Xiaokun Luan + 3 more

  • Research Article
  • 10.56553/popets-2025-0166
MultiCent: Secure and Scalable Computation of Centrality Measures on Multilayer Graphs
  • Oct 1, 2025
  • Proceedings on Privacy Enhancing Technologies
  • Andreas Brüggemann + 3 more

  • Research Article
  • 10.56553/popets-2025-0149
Sybil-Resistant Parallel Mixing
  • Oct 1, 2025
  • Proceedings on Privacy Enhancing Technologies
  • Maya Kleinstein + 2 more

  • Research Article
  • 10.56553/popets-2025-0131
Aimless Onions: Mixing without Topology Information
  • Oct 1, 2025
  • Proceedings on Privacy Enhancing Technologies
  • Daniel Schadt + 2 more
