
Related Topics

  • Differential Privacy Mechanism
  • Local Differential Privacy

Articles published on Differential privacy

4334 search results, sorted by recency
  • Research Article
  • Cited by 1
  • 10.1016/j.neunet.2025.108345
Adaptive differential privacy mechanism for enhanced deep learning model utility and privacy.
  • Apr 1, 2026
  • Neural networks : the official journal of the International Neural Network Society
  • Zhang Xiangfei + 1 more

  • Research Article
  • 10.1016/j.ins.2025.122981
DP-FedPUAC: Federated learning with differential privacy via adaptive gradient clipping and local iteration optimization
  • Apr 1, 2026
  • Information Sciences
  • Jiangyong Yuan + 5 more

  • Research Article
  • 10.1016/j.knosys.2026.115478
A semantics-maintained differential privacy protection for high-utility text
  • Apr 1, 2026
  • Knowledge-Based Systems
  • Zhouting Wu + 2 more

  • Research Article
  • Cited by 1
  • 10.1016/j.neunet.2025.108380
FedPCL-CDR: A federated prototype-based contrastive learning framework for privacy-preserving cross-domain recommendation.
  • Apr 1, 2026
  • Neural networks : the official journal of the International Neural Network Society
  • Li Wang + 2 more

  • Research Article
  • 10.1016/j.ins.2025.122979
CA-LDP: Community-aware local differential privacy for dynamic social networks
  • Apr 1, 2026
  • Information Sciences
  • Yuanjing Hao + 4 more

  • Research Article
  • 10.1016/j.neunet.2025.108448
Towards unified frameworks for fair and privacy-preserving graph neural networks.
  • Apr 1, 2026
  • Neural networks : the official journal of the International Neural Network Society
  • Xuemin Wang + 6 more

  • Research Article
  • 10.1016/j.eswa.2025.130977
FedCC: Federated cluster-aware contrastive learning with adaptive differential privacy under non-IID settings
  • Apr 1, 2026
  • Expert Systems with Applications
  • Ruilong Yuan + 4 more

  • Research Article
  • Cited by 1
  • 10.26599/tst.2024.9010179
Enhancing Medical Assistance Through Secure Federated Edge Data Augmentation with Local Differential Privacy in Non-IID Scenarios
  • Apr 1, 2026
  • Tsinghua Science and Technology
  • Shuai Li + 4 more

We introduce Federated Medical Data Augmentation with Differential Privacy for Medical Assistance (FMDADP-MA), addressing the challenge of limited medical data sharing due to privacy regulations and data isolation. Unlike traditional generative adversarial networks, which assume Independent and Identically Distributed (IID) data, FMDADP-MA facilitates data augmentation in non-IID environments using federated learning. This framework enables medical institutions in different locations to collaborate on enriching datasets without centralizing data, overcoming collection and computational constraints. By organizing edge nodes and selecting groups for global training, we minimize data transmission to a central server. Each local model uses two convolutional neural networks to generate and label data, incorporating local differential privacy to safeguard against gradient-based privacy breaches. Our experiments show that increasing the number of participating institutions enhances the global model’s accuracy, boosts local model performance, and diversifies data generation, tackling real-world issues of medical data privacy, imbalance, and under-labeling.

  • Research Article
  • 10.22266/ijies2026.0331.60
Frequency-aware Differential Privacy: A Wavelet-driven Gradient Perturbation Framework
  • Mar 31, 2026
  • International Journal of Intelligent Engineering and Systems

  • Research Article
  • 10.1007/s10207-026-01230-4
Protection of the CGAN against membership inference attack using Differential Privacy
  • Mar 11, 2026
  • International Journal of Information Security
  • Ala Ekramifard + 2 more

  • Research Article
  • 10.1145/3800684
Systematic Literature Review on Differential Privacy in Machine Learning
  • Mar 9, 2026
  • ACM Computing Surveys
  • Samsad Jahan + 6 more

With the rapid advancement of Machine Learning (ML) and its widespread applications in various domains, concerns over data privacy and security have become increasingly critical. Differential Privacy (DP) has emerged as a rigorous mathematical framework for privacy-preserving data analysis in ML systems, offering formal guarantees for protecting individual privacy while enabling meaningful learning. Previous surveys have lacked extensive coverage of DP and ML, failing to address the trade-offs between privacy and accuracy. Consequently, achieving a comprehensive understanding of the design, implementation, and efficiency of the DP algorithms within the ML domain is imperative. This survey provides a systematic review of DP methods across ML approaches, including traditional ML, federated learning, and deep learning. Through a thematic analysis of 106 studies, we identify key DP implementation strategies, examine their impact on model performance, and highlight the advantages and limitations of existing approaches. Our findings offer practical insights to assist researchers and practitioners in selecting appropriate DP mechanisms based on specific requirements. Finally, we discuss open challenges and future research directions to advance DP techniques for improved privacy-utility trade-offs in ML applications.

  • Research Article
  • 10.3390/s26051710
Efficient Data Aggregation in Smart Grids: A Personalized Local Differential Privacy Scheme.
  • Mar 8, 2026
  • Sensors (Basel, Switzerland)
  • Haina Song + 5 more

The rapid advancement of smart grids, while enhancing the efficiency of power systems, has also raised serious concerns regarding the privacy and security of end-users' electricity consumption data. Traditional privacy protection methods struggle to meet users' individualized privacy requirements and often lead to a significant decline in data aggregation accuracy. To address the core contradiction between personalized privacy protection and high-precision grid analytics, this paper proposes an efficient data aggregation scheme based on personalized local differential privacy (EDAS-PLDP) tailored for smart grids. The proposed scheme enables smart terminal users to autonomously select their privacy protection levels based on individual needs, thereby breaking the limitations of the traditional "one-size-fits-all" approach. To mitigate the accuracy loss caused by personalized perturbations, a mean square error-based weighted aggregation strategy is introduced at the gateway side. This strategy evaluates the data quality from groups with different privacy preferences and adjusts aggregation weights to optimize the estimation accuracy of the global mean electricity consumption. Extensive experimental results demonstrate that, compared to existing mainstream schemes, EDAS-PLDP achieves higher estimation accuracy under various distributions of privacy preferences, user scales, and data granularities, while exhibiting lower time consumption, making it suitable for resource-constrained smart grid environments. Furthermore, the scheme shows excellent robustness against false data injection attacks. In summary, EDAS-PLDP provides a balanced and efficient solution for reconciling personalized privacy protection with high-precision data utility in smart grids.
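The MSE-based weighted aggregation idea behind EDAS-PLDP can be sketched in a few lines. This is an illustrative reconstruction under standard assumptions (Laplace perturbation with sensitivity 1, inverse-variance weights), not the paper's exact estimator, and the function names are hypothetical:

```python
import random

def laplace_noise(scale: float) -> float:
    """Laplace(0, scale), sampled as the difference of two Exp(1) variates."""
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def perturb(value: float, epsilon: float, sensitivity: float = 1.0) -> float:
    """Local-DP report: the user releases value + Laplace(sensitivity/epsilon)."""
    return value + laplace_noise(sensitivity / epsilon)

def weighted_mean(groups):
    """groups: list of (reports, epsilon), one pair per privacy-preference group.
    Each group's sample mean is weighted by the inverse of its noise variance,
    2 * (sensitivity/epsilon)^2 / n (sensitivity = 1 here), so noisier groups
    contribute less to the global estimate."""
    num = den = 0.0
    for reports, eps in groups:
        n = len(reports)
        var = 2.0 * (1.0 / eps) ** 2 / n  # variance of the group's mean
        w = 1.0 / var
        num += w * (sum(reports) / n)
        den += w
    return num / den
```

Groups that choose a stricter (smaller) ε report noisier values and are down-weighted at the gateway, which is the intuition behind the accuracy gains claimed above.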

  • Research Article
  • 10.3390/electronics15051113
DP-JL: Differentially Private Steering via Johnson–Lindenstrauss Projection for Large Language Models
  • Mar 7, 2026
  • Electronics
  • Ziniu Liu + 4 more

Steering large language models (LLMs) toward desired behaviors while preserving privacy is a critical challenge in AI alignment. Existing differentially private (DP) steering methods, such as PSA, add high-dimensional noise that can severely degrade steering accuracy. We propose DP-JL, a novel approach that combines Johnson–Lindenstrauss (JL) random projection with differential privacy to reduce noise while maintaining formal privacy guarantees. DP-JL projects steering vectors into a lower-dimensional space (dimension k) before adding DP noise, reducing total noise magnitude from O(d) to O(k) where k≪d, while the privacy budget ε remains unchanged. We evaluate DP-JL on seven behavioral datasets with LLaMA-2-7B, Mistral-7B, Qwen2.5-7B, and Gemma-2-9B, alongside general capability benchmarks (MMLU, TruthfulQA). All accuracy values are measured on held-out test sets. Results show that DP-JL achieves: (1) up to 22.76 percentage points higher steering accuracy than PSA on the myopic-reward dataset (at fixed privacy budget ε ≈ 0.22, δ = 10⁻⁵); (2) 91.7% win rate on sycophancy with an average accuracy improvement of 3.01 percentage points; (3) systematic advantages in high-privacy regimes (ε < 0.2); and (4) superior capability preservation on related tasks (TruthfulQA), achieving 6.6 percentage points better accuracy than PSA. Furthermore, visualizations and layer-sensitivity analyses reveal that DP-JL faithfully preserves the geometric structure of activation spaces, explaining its robustness. Our findings demonstrate that DP-JL offers superior privacy–utility trade-offs while better preserving model capabilities.
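The core DP-JL recipe (clip, project to k dimensions, add Gaussian noise there, map back) can be sketched as follows. The JL scaling and the classical Gaussian-mechanism calibration used here are generic assumptions for illustration, not the paper's exact analysis:

```python
import numpy as np

def dp_jl_release(v, k, epsilon, delta, clip_norm=1.0, rng=None):
    """Sketch of a JL-projected Gaussian-mechanism release of a vector v.

    Hypothetical parameterization: clip v to bound L2 sensitivity, project
    with a k x d Gaussian JL matrix (scaled so norms are preserved in
    expectation), add Gaussian noise in the k-dim space, and lift back."""
    rng = np.random.default_rng(rng)
    d = v.shape[0]
    # Clip to bound L2 sensitivity, as in standard DP vector releases.
    v = v * min(1.0, clip_norm / (np.linalg.norm(v) + 1e-12))
    # JL projection matrix: entries N(0, 1/k) so ||P v|| ~ ||v||.
    P = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))
    z = P @ v
    # Classical Gaussian-mechanism calibration for (epsilon, delta)-DP.
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * clip_norm / epsilon
    z_noisy = z + rng.normal(0.0, sigma, size=k)
    # Map back to d dimensions for downstream steering use.
    return P.T @ z_noisy
```

The point of the construction is that noise of magnitude σ is added to only k coordinates instead of d, which is where the O(d) → O(k) reduction in total noise comes from.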

  • Research Article
  • 10.3390/ijgi15030106
Trajectory Data Publishing Scheme Based on Transformer Decoder and Differential Privacy
  • Mar 3, 2026
  • ISPRS International Journal of Geo-Information
  • Haiyong Wang + 1 more

The proliferation of Location-Based Services (LBSs) has generated vast trajectory datasets that offer immense analytical value but pose critical privacy risks. Achieving an optimal balance between data utility and privacy preservation remains a challenge, a difficulty compounded by the limitations of existing methods in modeling complex, long-term spatiotemporal dependencies. To address this, this paper proposes a trajectory data publishing scheme combining a Transformer decoder with differential privacy. Unlike traditional single-layer approaches, the proposed method establishes a systematic generation–generalization framework. First, a Transformer decoder is integrated into a Generative Adversarial Network (GAN). This architecture mitigates the gradient vanishing issues common in RNN-based models, generating high-fidelity synthetic trajectories that capture long-range correlations while decoupling them from sensitive source data. Second, to provide rigorous privacy guarantees, a clustering-based generalization strategy is implemented, utilizing Exponential and Laplace mechanisms to ensure ϵ-differential privacy. Experiments on the Geolife and Foursquare NYC datasets demonstrate that the scheme significantly outperforms leading baselines, achieving a superior trade-off between privacy protection and data utility.
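The Exponential mechanism that the scheme's clustering-based generalization relies on has a simple general form; the candidate set and utility function below are placeholders, not the paper's trajectory-specific choices:

```python
import math
import random

def exponential_mechanism(candidates, utility, epsilon, sensitivity=1.0):
    """epsilon-DP selection: sample candidate c with probability proportional
    to exp(epsilon * utility(c) / (2 * sensitivity)), so higher-utility
    candidates are exponentially more likely without being deterministic."""
    weights = [math.exp(epsilon * utility(c) / (2.0 * sensitivity))
               for c in candidates]
    r = random.random() * sum(weights)
    acc = 0.0
    for c, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return c
    return candidates[-1]  # guard against floating-point rounding
```

In a trajectory-generalization setting, `candidates` might be cluster centroids and `utility` a (bounded) closeness score; numeric outputs such as counts would instead use the Laplace mechanism mentioned alongside it.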

  • Research Article
  • 10.3390/s26051592
FedSMOTE-DP: Privacy-Aware Federated Ensemble Learning for Intrusion Detection in IoMT Networks.
  • Mar 3, 2026
  • Sensors (Basel, Switzerland)
  • Theyab Alsolami + 1 more

The Internet of Medical Things (IoMT) transforms healthcare through interconnected medical devices but faces significant cybersecurity threats, particularly intrusion and exfiltration attacks. Centralized intrusion detection systems (IDSs) require data aggregation, presenting privacy and scalability risks. This paper proposes FedEnsemble-DP, a privacy-aware Federated Learning (FL) framework for decentralized intrusion detection in IoMT networks. The framework integrates three data balancing scenarios (Raw Imbalanced, Local SMOTE, Centralized SMOTE) with Differential Privacy (DP) and Secure Aggregation mechanisms. Extensive experiments on WUSTL-EHMS-2020 and CIC-IoMT-2024 datasets under non-IID settings (Dirichlet α = 0.3) demonstrate that models with strong privacy guarantees (ε = 3.0) frequently match or exceed non-private baselines. Key findings show Local SMOTE with ε = 3.0 achieved 94.60% accuracy and 0.9598 AUC, while Raw Imbalanced with ε = 3.0 attained 94.50% accuracy and 0.9494 AUC. Even with strict privacy (ε = 3.0), these results surpassed the non-private baseline (93.20% accuracy) in the raw scenario. Centralized SMOTE showed effectiveness but introduced training instability. These results indicate that local data balancing combined with calibrated DP noise can yield high detection performance while preserving privacy, effectively bridging security-performance and data confidentiality requirements in distributed healthcare networks.

  • Research Article
  • 10.3390/data11030049
Adaptive Neural Network Method for Detecting Crimes in the Digital Environment to Ensure Human Rights and Support Forensic Investigations
  • Mar 2, 2026
  • Data
  • Serhii Vladov + 8 more

This article presents an adaptive neural network method for the automated detection, reconstruction, and prioritisation of multi-stage criminal operations in the digital environment, aiming to protect human rights and ensure the legal security of digital evidence. The developed method combines multimodal temporal encoders, a graph module based on GNN for entity correlation, and a correlation head with a link-prediction mechanism and differentiable path recovery. Sliding time windows, logarithmic transformation of volumetric features, and pseudonymization of identifiers with the ability to utilise privacy-preserving procedures (federated learning, differential privacy) are used for data aggregation and normalisation. Unique features of the developed method include an integrated risk function combining an anomaly component and graph significance, a module for automated forensic packet generation with chain of custody recording, and a mechanism for incremental model updates. Experimental results demonstrate high diagnostic metric values (AUC ≈ 0.97, F1 ≈ 0.99 on the test dataset after balancing), robust recovery of priority paths (“path_probability” > 0.7 for top operations), and pipeline performance in PII leak prioritisation and human trafficking reconstruction scenarios. The study’s contribution lies in a practice-oriented neural network method that integrates detection, correlation, and the collection of legally applicable evidence.

  • Research Article
  • 10.1063/5.0315435
Hybrid quantum convolutional neural network with differential privacy for image classification
  • Mar 1, 2026
  • AIP Advances
  • Bai Liu + 2 more

Quantum convolutional neural networks hold potential advantages for image recognition by exploiting unique quantum properties. However, their training processes remain susceptible to privacy leakage. To address this issue, we propose a hybrid quantum convolutional neural network (HQCNN) model with differential privacy. This architecture utilizes quantum superposition and entanglement to efficiently extract image features. To ensure differential privacy, Gaussian noise is added to the parameter gradients. Crucially, the superior feature extraction and learning capabilities of the HQCNN are utilized to mitigate the performance degradation typically induced by noise injection. Experiments on the MNIST and Fashion-MNIST datasets demonstrate that the proposed model achieves a test accuracy exceeding 95% under a strict privacy budget of ɛ < 1.04. Furthermore, evaluations on the CIFAR-10 dataset confirm the feasibility of the model in the differentially private scenario. Comparative analyses further validate that the proposed model preserves privacy while maintaining superior performance in differentially private training.
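Adding Gaussian noise to parameter gradients, as described above, follows the standard clip-then-noise recipe from DP-SGD. A generic sketch of that step (not the paper's HQCNN training loop; parameter names are illustrative):

```python
import numpy as np

def private_gradient(per_example_grads, clip_norm, noise_multiplier, rng=None):
    """DP-SGD-style gradient step: clip each example's gradient to clip_norm
    (bounding per-example sensitivity), sum, add Gaussian noise with
    sigma = noise_multiplier * clip_norm, then average over the batch."""
    rng = np.random.default_rng(rng)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

The paper's claim is that the HQCNN's stronger feature extraction absorbs the accuracy cost that this injected noise normally imposes.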

  • Research Article
  • 10.1016/j.xops.2025.101030
Federated Learning for Multi-Disease Ophthalmic Diagnostics Using OCT Angiography.
  • Mar 1, 2026
  • Ophthalmology science
  • Ahammed Sakir Nabil + 4 more

  • Research Article
  • 10.1016/j.knosys.2026.115346
PCNA-IDS: An integrated lightweight intrusion detection system in internet of vehicles with federated contrastive learning and differential privacy
  • Mar 1, 2026
  • Knowledge-Based Systems
  • Zhiguo Qu + 3 more

  • Research Article
  • 10.3390/math14050836
ADAT: Adaptive Dynamic Anonymity and Traceability via Privacy-Aware Random Forest and Truncated Local Differential Privacy in a Trusted Execution Environment (TEE)
  • Mar 1, 2026
  • Mathematics
  • Yun He + 2 more

In current mobile networks, users’ identity privacy is threatened by long-term observation attacks. To resist such attacks, identity-anonymity technology has been proposed. However, existing anonymity schemes cannot adapt to diverse, dynamic business scenarios because of their rigid anonymity strategies. This leads to wasted computing and communication resources in low-risk scenarios or privacy leaks in high-risk scenarios. To address this problem, we propose an Adaptive Dynamic Anonymity and Traceability scheme based on privacy-aware random forest and local differential privacy in a Trusted Execution Environment. We first construct a convex optimization model to seek the optimal balance between privacy risk and performance cost. Subsequently, we train a privacy-aware random forest model to intelligently predict the optimal Time-To-Live of the anonymous identifier based on the real-time context. Lastly, to resist long-term observation attacks, our scheme uses a lightweight symmetric encryption algorithm to generate pseudo-random, anonymous identifiers and applies truncated local differential privacy to ensure the indistinguishability of the timing patterns of anonymous identifier updates. We formally prove that our scheme can resist long-term observation attacks. Experimental results show that, compared with fixed Time-To-Live schemes, our scheme significantly reduces the comprehensive cost while maintaining the same level of security. Furthermore, compared with traditional public-key schemes, it greatly improves the generation speed of anonymous identifiers and reduces communication costs.
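The truncated local-DP step for identifier-update timing can be illustrated with a resampling-based truncation. This is one possible truncation rule under assumed parameters; ADAT's actual construction and its privacy accounting may differ:

```python
import random

def truncated_laplace_ttl(ttl, epsilon, lo, hi, sensitivity=1.0):
    """Perturb a Time-To-Live with Laplace noise, resampling until the
    result falls inside [lo, hi], so that released update timings stay in
    a valid operational range while individual TTLs remain indistinguishable."""
    scale = sensitivity / epsilon
    while True:
        # Laplace(0, scale) as the difference of two Exp(1) variates.
        noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
        out = ttl + noise
        if lo <= out <= hi:
            return out
```

Truncating keeps the perturbed TTL usable by the network while still blurring the timing pattern an observer could otherwise exploit for long-term linkage.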

