
Related Topics

  • Differential Privacy Mechanism
  • Local Differential Privacy

Articles published on Differential Privacy

4260 Search results
Sorted by recency
  • New
  • Research Article
  • 10.1007/s10207-026-01230-4
Protection of the CGAN against membership inference attack using Differential Privacy
  • Mar 11, 2026
  • International Journal of Information Security
  • Ala Ekramifard + 2 more

  • New
  • Research Article
  • 10.1145/3800684
Systematic Literature Review on Differential Privacy in Machine Learning
  • Mar 9, 2026
  • ACM Computing Surveys
  • Samsad Jahan + 6 more

With the rapid advancement of Machine Learning (ML) and its widespread applications in various domains, concerns over data privacy and security have become increasingly critical. Differential Privacy (DP) has emerged as a rigorous mathematical framework for privacy-preserving data analysis in ML systems, offering formal guarantees for protecting individual privacy while enabling meaningful learning. Previous surveys have lacked extensive coverage of DP and ML, failing to address the trade-offs between privacy and accuracy. Consequently, achieving a comprehensive understanding of the design, implementation, and efficiency of the DP algorithms within the ML domain is imperative. This survey provides a systematic review of DP methods across ML approaches, including traditional ML, federated learning, and deep learning. Through a thematic analysis of 106 studies, we identify key DP implementation strategies, examine their impact on model performance, and highlight the advantages and limitations of existing approaches. Our findings offer practical insights to assist researchers and practitioners in selecting appropriate DP mechanisms based on specific requirements. Finally, we discuss open challenges and future research directions to advance DP techniques for improved privacy-utility trade-offs in ML applications.

  • New
  • Research Article
  • 10.3390/s26051710
Efficient Data Aggregation in Smart Grids: A Personalized Local Differential Privacy Scheme.
  • Mar 8, 2026
  • Sensors (Basel, Switzerland)
  • Haina Song + 5 more

The rapid advancement of smart grids, while enhancing the efficiency of power systems, has also raised serious concerns regarding the privacy and security of end-users' electricity consumption data. Traditional privacy protection methods struggle to meet users' individualized privacy requirements and often lead to a significant decline in data aggregation accuracy. To address the core contradiction between personalized privacy protection and high-precision grid analytics, this paper proposes an efficient data aggregation scheme based on personalized local differential privacy (EDAS-PLDP) tailored for smart grids. The proposed scheme enables smart terminal users to autonomously select their privacy protection levels based on individual needs, thereby breaking the limitations of the traditional "one-size-fits-all" approach. To mitigate the accuracy loss caused by personalized perturbations, a mean square error-based weighted aggregation strategy is introduced at the gateway side. This strategy evaluates the data quality from groups with different privacy preferences and adjusts aggregation weights to optimize the estimation accuracy of the global mean electricity consumption. Extensive experimental results demonstrate that, compared to existing mainstream schemes, EDAS-PLDP achieves higher estimation accuracy under various distributions of privacy preferences, user scales, and data granularities, while exhibiting lower time consumption, making it suitable for resource-constrained smart grid environments. Furthermore, the scheme shows excellent robustness against false data injection attacks. In summary, EDAS-PLDP provides a balanced and efficient solution for reconciling personalized privacy protection with high-precision data utility in smart grids.
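The MSE-based weighted aggregation step described above can be illustrated with a small sketch. This is a simplified reading of the idea, not the authors' EDAS-PLDP code: groups that chose stricter privacy (smaller ε) report noisier means, so the gateway down-weights them in inverse proportion to their estimated Laplace-noise variance. All function and parameter names here are illustrative.

```python
import numpy as np

def weighted_mean_estimate(group_means, group_sizes, epsilons, value_range):
    """Weight each privacy-preference group's mean inversely to its
    estimated noise variance (hedged sketch, not the paper's scheme)."""
    # Per-user Laplace variance for sensitivity `value_range` is
    # 2 * (value_range / eps)^2; averaging over n users divides it by n.
    variances = [2.0 * (value_range / eps) ** 2 / n
                 for eps, n in zip(epsilons, group_sizes)]
    weights = np.array([1.0 / v for v in variances])
    weights /= weights.sum()
    return float(np.dot(weights, group_means))

# A strict-privacy group (eps=0.5) is down-weighted relative to a lax one (eps=2.0).
est = weighted_mean_estimate([10.0, 12.0], [100, 100], [0.5, 2.0], value_range=1.0)
```

With equal group sizes, the eps=2.0 group carries 16x the weight of the eps=0.5 group, pulling the estimate toward its mean.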

  • New
  • Research Article
  • 10.3390/electronics15051113
DP-JL: Differentially Private Steering via Johnson–Lindenstrauss Projection for Large Language Models
  • Mar 7, 2026
  • Electronics
  • Ziniu Liu + 4 more

Steering large language models (LLMs) toward desired behaviors while preserving privacy is a critical challenge in AI alignment. Existing differentially private (DP) steering methods, such as PSA, add high-dimensional noise that can severely degrade steering accuracy. We propose DP-JL, a novel approach that combines Johnson–Lindenstrauss (JL) random projection with differential privacy to reduce noise while maintaining formal privacy guarantees. DP-JL projects steering vectors into a lower-dimensional space (dimension k) before adding DP noise, reducing total noise magnitude from O(d) to O(k) where k≪d, while the privacy budget ε remains unchanged. We evaluate DP-JL on seven behavioral datasets with LLaMA-2-7B, Mistral-7B, Qwen2.5-7B, and Gemma-2-9B, alongside general capability benchmarks (MMLU, TruthfulQA). All accuracy values are measured on held-out test sets. Results show that DP-JL achieves: (1) up to 22.76 percentage points higher steering accuracy than PSA on the myopic-reward dataset (at fixed privacy budget ε≈0.22, δ=10^-5); (2) a 91.7% win rate on sycophancy with an average accuracy improvement of 3.01 percentage points; (3) systematic advantages in high-privacy regimes (ε<0.2); and (4) superior capability preservation on related tasks (TruthfulQA), achieving 6.6 percentage points better accuracy than PSA. Furthermore, visualizations and layer-sensitivity analyses reveal that DP-JL faithfully preserves the geometric structure of activation spaces, explaining its robustness. Our findings demonstrate that DP-JL offers superior privacy–utility trade-offs while better preserving model capabilities.
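The core idea of the abstract above, adding noise in a k-dimensional projection rather than in the full d-dimensional space, can be sketched in a few lines. This is a minimal illustration under an assumed Gaussian noise mechanism; the projection construction, the scale sigma, and all names are placeholders rather than the paper's specification.

```python
import numpy as np

def dp_jl_release(v, k, sigma, rng):
    """Project a steering vector to k dims with a JL random matrix,
    then add Gaussian noise there; total noise energy scales with k, not d."""
    d = v.shape[0]
    # Entries ~ N(0, 1/k), so ||P @ v|| approximately preserves ||v||.
    P = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))
    return P @ v + rng.normal(0.0, sigma, size=k)

rng = np.random.default_rng(0)
v = rng.normal(size=4096)                      # a d=4096 steering vector
z = dp_jl_release(v, k=64, sigma=0.1, rng=rng)  # noisy 64-dim release
```

The released vector lives in the 64-dimensional space, so the expected squared noise added is 64·sigma² instead of 4096·sigma².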

  • New
  • Research Article
  • 10.3390/ijgi15030106
Trajectory Data Publishing Scheme Based on Transformer Decoder and Differential Privacy
  • Mar 3, 2026
  • ISPRS International Journal of Geo-Information
  • Haiyong Wang + 1 more

The proliferation of Location-Based Services (LBSs) has generated vast trajectory datasets that offer immense analytical value but pose critical privacy risks. Achieving an optimal balance between data utility and privacy preservation remains a challenge, a difficulty compounded by the limitations of existing methods in modeling complex, long-term spatiotemporal dependencies. To address this, this paper proposes a trajectory data publishing scheme combining a Transformer decoder with differential privacy. Unlike traditional single-layer approaches, the proposed method establishes a systematic generation–generalization framework. First, a Transformer decoder is integrated into a Generative Adversarial Network (GAN). This architecture mitigates the gradient vanishing issues common in RNN-based models, generating high-fidelity synthetic trajectories that capture long-range correlations while decoupling them from sensitive source data. Second, to provide rigorous privacy guarantees, a clustering-based generalization strategy is implemented, utilizing Exponential and Laplace mechanisms to ensure ϵ-differential privacy. Experiments on the Geolife and Foursquare NYC datasets demonstrate that the scheme significantly outperforms leading baselines, achieving a superior trade-off between privacy protection and data utility.
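The Exponential mechanism that the generalization strategy above relies on is a standard DP primitive. For context, here is a generic sketch of it, not the authors' clustering code; the candidate set, score function, and sensitivity are placeholders.

```python
import numpy as np

def exponential_mechanism(candidates, score, eps, sensitivity, rng):
    """Sample a candidate with probability proportional to
    exp(eps * score / (2 * sensitivity)) -- the classic DP primitive."""
    scores = np.array([score(c) for c in candidates], dtype=float)
    logits = eps * scores / (2.0 * sensitivity)
    logits -= logits.max()          # shift for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return candidates[rng.choice(len(candidates), p=probs)]

# With a large eps the mechanism concentrates on the highest-scoring candidate.
pick = exponential_mechanism([0, 1, 2], lambda c: c, eps=1000.0,
                             sensitivity=1.0, rng=np.random.default_rng(1))
```

At small eps the distribution flattens, trading selection quality for privacy, which is exactly the utility/privacy dial the scheme tunes.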

  • New
  • Research Article
  • 10.3390/s26051592
FedSMOTE-DP: Privacy-Aware Federated Ensemble Learning for Intrusion Detection in IoMT Networks.
  • Mar 3, 2026
  • Sensors (Basel, Switzerland)
  • Theyab Alsolami + 1 more

The Internet of Medical Things (IoMT) transforms healthcare through interconnected medical devices but faces significant cybersecurity threats, particularly intrusion and exfiltration attacks. Centralized intrusion detection systems (IDSs) require data aggregation, presenting privacy and scalability risks. This paper proposes FedEnsemble-DP, a privacy-aware Federated Learning (FL) framework for decentralized intrusion detection in IoMT networks. The framework integrates three data balancing scenarios (Raw Imbalanced, Local SMOTE, Centralized SMOTE) with Differential Privacy (DP) and Secure Aggregation mechanisms. Extensive experiments on WUSTL-EHMS-2020 and CIC-IoMT-2024 datasets under non-IID settings (Dirichlet α = 0.3) demonstrate that models with strong privacy guarantees (ε = 3.0) frequently match or exceed non-private baselines. Key findings show Local SMOTE with ε = 3.0 achieved 94.60% accuracy and 0.9598 AUC, while Raw Imbalanced with ε = 3.0 attained 94.50% accuracy and 0.9494 AUC. Even with strict privacy (ε = 3.0), these results surpassed the non-private baseline (93.20% accuracy) in the raw scenario. Centralized SMOTE showed effectiveness but introduced training instability. These results indicate that local data balancing combined with calibrated DP noise can yield high detection performance while preserving privacy, effectively bridging security-performance and data confidentiality requirements in distributed healthcare networks.

  • New
  • Research Article
  • 10.1016/j.knosys.2026.115346
PCNA-IDS: An integrated lightweight intrusion detection system in internet of vehicles with federated contrastive learning and differential privacy
  • Mar 1, 2026
  • Knowledge-Based Systems
  • Zhiguo Qu + 3 more

  • New
  • Research Article
  • 10.1109/tkde.2026.3652139
Fine-Grained Manipulation Attacks to Local Differential Privacy Protocols for Data Streams
  • Mar 1, 2026
  • IEEE Transactions on Knowledge and Data Engineering
  • Xinyu Li + 5 more

  • New
  • Research Article
  • 10.1016/j.eswa.2025.129934
Relevance-based adaptive differential private spiking neural networks
  • Mar 1, 2026
  • Expert Systems with Applications
  • Junxiu Liu + 5 more

  • New
  • Research Article
  • 10.38044/2686-9136-2025-6-12
Using personal data in AI model training under EU law
  • Feb 28, 2026
  • Digital Law Journal
  • A A Olifirenko

The adoption of the EU Artificial Intelligence Act (AI Act) established mandatory life-cycle regulation of AI systems in the European Union while preserving the validity of the General Data Protection Regulation (GDPR). The training stage of AI models has consequently become a point of intersection between two regulatory regimes: while the AI Act emphasizes data quality and representativeness along with risk management and documentation of training processes, the GDPR sets out the applicable principles of lawfulness, data minimization, purpose, and storage limitation, as well as providing data subjects with a set of safeguards and remedies. In practical terms, this interaction creates a risk of legally defective model training due to the pursuit of representativeness through excessive data collection and repeated re-use of personal data. This article examines the permissibility and organization of AI model training under the joint application of the AI Act and the GDPR. The research sets out to substantiate a legal model that enables proportionate technical and organizational safeguards while preserving training quality and ensuring the lawfulness of personal data processing that respects the fundamental rights of data subjects. As well as combining doctrinal legal analysis of the AI Act requirements on risk management and data governance with a comparative assessment of the GDPR principles and procedural tools for ensuring lawful processing, the methodology involves a systematization of typical governance artefacts used in the development and deployment of high-risk AI systems. The results are presented as an integrated compliance-by-design model for actors involved in the training stage. 
A practical distinction between an “AI system” and an “AI model” is substantiated: whereas an AI system is qualified as an organizational and technical envelope comprising the model, infrastructure, input and output interfaces, monitoring, and human interaction, an AI model is treated as the algorithmic core trained on data and used to infer outputs. This distinction can be applied to allocate obligations between the provider and entities deploying or operating the system. The proposed mechanism for reconciling dataset representativeness and accuracy with the GDPR data minimization principle through a documented feature inventory is based on a necessity rationale for each class of data and the exclusion of irrelevant attributes alongside an assessment of indirect discrimination risks. The matching of safeguards (pseudonymization, anonymization, aggregation, synthetic generation, and differential privacy) to data sensitivity, use context, and the level of risk to fundamental rights is carried out on the basis of a proportionality model. This model is supported by the outcomes of a data protection impact assessment and a fundamental rights impact assessment. Finally, a practical legal governance loop for the training life cycle is formulated to cover the determination of the purpose and legal basis, limits on dataset re-use, access control and logging, as well as retention and deletion rules, along with procedures for revisiting training parameters and monitoring after deployment. The proposed model increases legal certainty and provides a reproducible framework for aligning the AI Act and GDPR during the training stage.

  • New
  • Research Article
  • 10.54097/d52m6j10
Federated Learning Approaches for Privacy-Preserving Big Data Analytics
  • Feb 28, 2026
  • Journal of Computing and Electronic Information Management
  • Yanzhi Kou

The rapid growth of big data analytics across industries has transformed decision-making while heightening the privacy risks of centralized machine learning, in which the aggregation of sensitive raw data exposes information to breaches and inference attacks. Federated Learning (FL) provides a decentralized framework that allows collaborative model training while keeping data on client devices or institutional servers, thus meeting stringent regulatory standards such as GDPR and HIPAA. This review synthesizes 127 recent papers (2023-2025) that assess five main privacy-preserving FL methods: Standard Federated Averaging (FedAvg), Differential Privacy-enhanced FL (DP-FL), Secure Aggregation, Homomorphic Encryption-based FL (HE-FL), and hybrid FL frameworks. Among these, DP-FL is the most widely used (about 40% of deployments) and offers a good privacy-utility trade-off, with typical accuracy degradations of 1-5%. Hybrid designs, particularly those combining differential privacy and secure aggregation, provide defense-in-depth protection, little performance loss (1-4%), and the fastest-growing deployment rates (34%/year), especially in regulated markets. FL has substantial practical impact in key areas: healthcare (35% of applications, e.g., multi-institutional medical imaging and disease prediction), finance (28%, e.g., fraud detection and risk assessment), IoT/smart cities (20%, e.g., traffic optimization and predictive maintenance), and mobile/enterprise systems. Longstanding issues, such as non-IID data heterogeneity, communication overhead, security threats (poisoning and inference attacks), system heterogeneity, and scalability, are addressed with innovations such as adaptive aggregation, gradient compression, hierarchical architectures, and Byzantine-robust mechanisms. 
Looking toward 2026, development is shifting to personalized FL, greater adversarial robustness, and integration with large language models, making hybrid and personalized FL the most promising approach to secure, scalable, privacy-preserving analytics in an increasingly decentralized big-data world.

  • New
  • Research Article
  • 10.1080/00295639.2025.2594881
Regression Analysis with the Directed Infusion of Data
  • Feb 27, 2026
  • Nuclear Science and Engineering
  • Tyler Lewis + 3 more

Integrating artificial intelligence and machine learning tools into industry necessitates large-scale collaborative efforts that ensure the robust and accurate execution of downstream analytics such as time series prediction, uncertainty quantification, grid optimization, and condition monitoring. However, concerns related to data privacy pervade the nuclear industry due to the proprietary nature of its data and the possibility of data leakage. Legacy techniques such as encryption often require the explicit transmission of data to trustworthy parties, thereby inviting data leakage concerns. The ideal collaboration scenario avoids the explicit dissemination of data/code while maintaining experimental fidelity, which is currently accomplished using various techniques such as trusted execution environments, homomorphic encryption, differential privacy, and multi-matrix masking. These techniques, however, often necessitate a trade-off between trust, efficiency, and utility. This article extends a previously proposed technique called the directed infusion of data (DIOD) that ensures data privacy, allows for scalable obfuscation, and combats the risk of data leakage without compromising utility. The experiments discussed in this article examine a regression-type scenario using DIOD with the goal of preserving the inferential link between two variables. Using the point-kinetics equations, regression experiments compare the performance of a model trained using the original data to that of a model trained using the obfuscated data, which produced identical results. Our claim is further strengthened by an information-theoretic proof and experiment, which showed that the inferential content between variables remains the same after obfuscation, thereby avoiding the required communication of the proprietary data.

  • New
  • Research Article
  • 10.52710/cfs.949
Understanding Memory-related Threats and Vulnerabilities in Large Language Models
  • Feb 27, 2026
  • Computer Fraud and Security
  • Krishna Chaitanya Venigalla

Memory capabilities in large language models (LLMs) represent a transformative advance, enabling contextual continuity, personalization, and adaptive learning across interactions. However, these capabilities introduce novel security vulnerabilities that extend beyond traditional concerns. This article examines the security implications of memory-enabled LLMs, categorizing architectural approaches and identifying distinct vulnerability classes, including temporal prompt injection, information persistence, and memory poisoning. Through documented case studies and empirical evidence, the article illustrates how these vulnerabilities manifest in production environments, leading to data leakage, system manipulation, and knowledge corruption. The article proposes comprehensive security frameworks incorporating memory segregation, temporal constraints, bidirectional filtering, differential privacy, and advanced auditing mechanisms. As LLMs evolve from stateless tools into persistent assistants, safety paradigms must expand beyond traditional boundaries to address the entire memory lifecycle and ensure that these systems remain both functional and safe in sensitive operating contexts.

  • New
  • Research Article
  • 10.1038/s41598-026-36432-2
Privacy-aware deep vein thrombosis segmentation using a multi-model federated learning framework with the federated averaging algorithm.
  • Feb 27, 2026
  • Scientific reports
  • Pavihaa Lakshmi B + 1 more

Deep Vein Thrombosis (DVT) is the formation of blood clots in the deep veins of the calf, requiring precise Computed Tomography (CT) scan segmentation for accurate diagnosis and treatment. We propose an efficient Federated Learning (FedL) architecture using the Federated Averaging (FedAvg) algorithm. Seven distinct local models were designed and trained on non-independent and identically distributed (Non-IID) CT images to maintain data privacy and security, enhancing DVT segmentation efficiency and accuracy. The global model was progressively improved by aggregating the local models' weights with the FedAvg algorithm. The approach was evaluated in three phases using datasets of 1000, 2000, and 3000 samples to assess the global model's performance. Phase 1 involved three clients, each with a unique local model (Convolutional Neural Network (CNN), Sequential, and Semantic); Phase 2 expanded to five clients, incorporating additional models (U-Net and VGG Net-19); and Phase 3 scaled to seven clients with advanced models (Modified U-Net and Modified-Net). Empirical results across Phases 1-3 showed significant gains with increasing dataset size, attaining higher Accuracy ([Formula: see text]) and F1-score ([Formula: see text]), while Tversky Loss decreased to ([Formula: see text]). Notably, the framework showed consistent improvement across all phases, achieving a reduction in validation loss from 0.910 to 0.061 and a communication cost increase from 14 MB to 3279 MB with increasing model scales. The average training time rose proportionally (7.67 s → 18,702 s) while maintaining robust differential privacy preservation (ε [Formula: see text]) and improved client heterogeneity ([Formula: see text]), demonstrating the framework's scalability and stability across heterogeneous environments.
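The FedAvg aggregation used above is simple to state: the global weights are the sample-size-weighted average of each client's weights. A minimal sketch (illustrative only, not the authors' implementation):

```python
import numpy as np

def fedavg(local_weights, sample_counts):
    """Sample-size-weighted average of per-client weight lists
    (one np.ndarray per layer). Hedged sketch of the FedAvg rule."""
    total = float(sum(sample_counts))
    agg = [np.zeros_like(w) for w in local_weights[0]]
    for client_layers, n in zip(local_weights, sample_counts):
        for i, layer in enumerate(client_layers):
            agg[i] += (n / total) * layer
    return agg

# Two toy clients with one layer each; client 2 holds 3x the data.
agg = fedavg([[np.array([0.0, 2.0])], [np.array([2.0, 0.0])]], [1, 3])
```

Clients with more samples pull the global model further toward their local optimum, which is why Non-IID client data (as in the study) makes the aggregation dynamics interesting.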

  • New
  • Research Article
  • 10.1142/s2301385027500737
Bipartite Consensus-Based Heterogeneous Vehicle Platoon Control Considering Differentially Private with Aperiodic Sampled-data Interactions
  • Feb 25, 2026
  • Unmanned Systems
  • Lingyu Wang + 3 more

This paper investigates the problem of achieving differential privacy protection while implementing aperiodic sampled-data averaging output bipartite consensus control in continuous-time heterogeneous connected vehicle platoons. The platoon incorporates both collaboration and competition among vehicles. First, a feedback linearization tool is applied to transform the nonlinear vehicle dynamics into a linear heterogeneous state-space model. Then, a two-tier distributed control algorithm is proposed to design the hybrid distributed bipartite consensus controller, in which vehicles traveling in the same or opposite directions interact at discrete time instants. To ensure differential privacy, Laplace noise with time-varying variance is introduced to protect the sensitive information of each vehicle. Next, the time-varying step size and noise parameters are determined such that the platoon reaches bipartite consensus on an infinite time horizon while satisfying the desired convergence accuracy and a predefined upper bound on privacy loss. Finally, the controller is solved using the Riccati equation, enabling the preservation of individual vehicle data privacy while maintaining platoon bipartite consensus. Two simulation examples demonstrate the validity of the theoretical results.
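The time-varying-variance Laplace noise idea can be sketched generically: each vehicle perturbs its shared state with Laplace noise whose scale decays over time, so early exchanges are strongly masked while later ones allow consensus to converge. The decay parameters c0 and q below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def private_state_share(x, t, c0=1.0, q=0.9, rng=None):
    """Add Laplace noise with geometrically decaying scale c0 * q**t
    before sharing state x at step t (hedged, generic sketch)."""
    rng = rng or np.random.default_rng()
    scale = c0 * q ** t
    return x + rng.laplace(0.0, scale, size=np.shape(x))

noisy = private_state_share(np.zeros(3), t=5, rng=np.random.default_rng(0))
```

Choosing the decay rate q jointly with the consensus step size is precisely the trade-off the paper formalizes: decay too fast and the privacy bound is violated, too slowly and consensus accuracy suffers.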

  • New
  • Research Article
  • 10.70917/ijcisim-2026-0365
Privacy Protection Mechanism and Residential Security Enhancement Countermeasures for Smart Homes Combined with Internet of Things Technology
  • Feb 21, 2026
  • International Journal of Computer Information Systems and Industrial Management Applications
  • He Jiang + 1 more

As an important application scenario of Internet of Things (IoT) technology, smart homes realize the interconnection and intelligent management of home devices through sensors, RFID chips, and other hardware. However, smart home devices face serious privacy leakage risks during data collection, transmission, and processing, and sensitive data such as user behavioral data and home environment information are easily acquired and exploited by malicious attackers. This study proposes a smart home privacy protection mechanism based on federated learning and differential privacy for IoT devices in smart homes. The method adopts an adaptive hierarchical differential privacy noise-injection algorithm that quantifies each layer's contribution by calculating the percentage of non-zero activation values and the amount of gradient change per layer, realizing dynamic privacy budget allocation. Meanwhile, a wireless federated learning system model is established that characterizes the channel properties between the base station and user equipment using a block fading model, combining the Gaussian and Laplace mechanisms to provide differential privacy protection. Experimental results show that on the MNIST dataset, the proposed algorithm reaches 95.16% accuracy, 3.56% higher than the competitive algorithm AUTO-S, when the privacy budget is 10. In the smart home device recognition task, the method achieves an average adversarial rate of 98.625% in the white-box scenario and 89.39% in the black-box scenario. These results indicate that the privacy-preserving mechanism can effectively protect user privacy while ensuring model usability, providing reliable security for smart home systems with good practicality and potential for adoption.

  • New
  • Research Article
  • 10.1080/10589759.2026.2627627
Federated learning framework for privacy-preserving defect recognition across distributed additive manufacturing networks
  • Feb 18, 2026
  • Nondestructive Testing and Evaluation
  • Gerard Deepak + 5 more

This paper presents a privacy-preserving federated learning framework for distributed additive manufacturing (AM) defect recognition, enabling joint training across multiple facilities without exchanging raw production data. The framework combines personalized federated learning with differential privacy to maintain detection performance comparable to centralized training. A U-Net architecture with global encoders and site-dependent local decoders is used for semantic segmentation, adapting to varying data distributions across sites. To address class imbalance in defect detection, a multi-loss optimization approach is proposed, combining focal loss, dice loss, and weighted cross-entropy. Differential privacy is applied through gradient perturbation with adaptive noise calibration, ensuring privacy guarantees with ε = 1.0. Experimental results from eight manufacturing sites, with 45,680 layer images, show an accuracy of 91.8% and an F1-score of 0.889, only a 0.6% drop compared to centralized training. Sites with limited local data see a 21–24 percentage-point improvement over isolated training and a 98.7% reduction in communication overhead. Practical implementation yields 89.2% defect recall and 86.1% accuracy, demonstrating the effectiveness of the framework for industrial quality control.
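Gradient perturbation of the kind mentioned above is commonly realized DP-SGD style: clip each gradient to a norm bound, then add Gaussian noise scaled to that bound. The sketch below is a generic illustration with assumed parameter names (clip_norm, noise_mult), not the paper's adaptive calibration.

```python
import numpy as np

def dp_gradient(grad, clip_norm=1.0, noise_mult=1.1, rng=None):
    """Clip the gradient to clip_norm, then add Gaussian noise with
    standard deviation noise_mult * clip_norm (hedged DP-SGD-style sketch)."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    # Scale down only when the norm exceeds the clipping bound.
    g = grad * min(1.0, clip_norm / max(norm, 1e-12))
    return g + rng.normal(0.0, noise_mult * clip_norm, size=grad.shape)
```

Clipping bounds each example's influence (the sensitivity), which is what lets the added noise translate into a formal (ε, δ) guarantee via composition accounting.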

  • Research Article
  • 10.3390/s26041275
Federated Learning in Edge Computing: Vulnerabilities, Attacks, and Defenses-A Survey.
  • Feb 15, 2026
  • Sensors (Basel, Switzerland)
  • Sahar Alhawas + 1 more

Federated Learning (FL), a distributed machine learning framework, enables collaborative model training across multiple devices without sharing raw data, thereby preserving privacy and reducing communication costs. When combined with Edge Computing (EC), FL brings computations closer to data sources, enabling low-latency, real-time decision-making in resource-constrained environments. However, this decentralization introduces several vulnerabilities, including data poisoning, backdoor attacks, inference leaks, and Byzantine behaviors, which are worsened by the heterogeneity of edge devices and their intermittent connectivity. This survey presents a comprehensive review of the intersection of FL and EC, focusing on vulnerabilities, attack vectors, and defense mechanisms. We analyze existing methods for robust aggregation, anomaly detection, differential privacy, and secure aggregation, with a focus on their feasibility within edge environments. Additionally, we identify open research challenges, such as scalability, resilience to heterogeneity, and energy-efficient defenses, and provide insights into the evolving landscape of FL in edge computing. This review aims to inform future research on enhancing the security, privacy, and efficiency of FL systems deployed in real-world edge environments.

  • Research Article
  • 10.1038/s41598-026-39837-1
Federated microservices architecture with blockchain for privacy-preserving and scalable healthcare analytics.
  • Feb 14, 2026
  • Scientific reports
  • Murikipudi Harshith + 6 more

The digitalisation of healthcare has generated enormous volumes of heterogeneous data from EHRs, IoMT devices, and telemedicine platforms, requiring secure and scalable analytical frameworks. Existing monolithic systems face issues of scalability, interoperability, and compliance while also putting patient privacy at risk. This study describes a new federated microservices architecture that integrates Kubernetes-orchestrated microservices, TensorFlow Federated learning, and Hyperledger Fabric blockchain to enable privacy-preserving, scalable, and auditable healthcare analytics. In contrast to prior works focusing on isolated solutions, the framework presents an end-to-end deployable system with modular scalability, differential privacy, and immutable auditability. Evaluated on 100,000 synthetic Synthea records and a real-world dataset of 20,000 diabetes patients, the framework achieved 95.2% predictive accuracy, 42% lower latency, and 10x faster recovery than monolithic baselines while ensuring zero breach success in adversarial simulations. These results demonstrate that the proposed architecture not only improves clinical decision support accuracy but also provides operational resilience, regulatory compliance, and cost efficiency. This work lays the foundation for next-generation intelligent healthcare systems, with future extensions toward multimodal data and explainable AI to enhance trust and adoption in clinical practice.

  • Research Article
  • 10.4018/ijisp.401370
Personalized Local Differential Privacy Frequency Estimation Mechanisms Based on Partitioning the Domain of Real Attribute Values
  • Feb 13, 2026
  • International Journal of Information Security and Privacy
  • Yunfei Li + 5 more

Existing multi-domain personalized local differential privacy (MDPLDP) mechanisms, which extend attribute domains by introducing fake values, often fail to provide adequate personalized privacy protection and limit utility in frequency estimation. To address these limitations, the authors propose two novel MDPLDP mechanisms that construct multiple domains by partitioning real attribute values, support cross-domain aggregation, and flexibly accommodate diverse privacy requirements and budgets. The methods further extend to multi-dimensional frequency estimation, catering to complex user privacy preferences. Theoretical analysis and experimental results demonstrate that the proposed mechanisms achieve substantially lower estimation error and communication overhead, while delivering over 20% average utility improvement compared to state-of-the-art methods in both single- and multi-dimensional settings.
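The paper's partition-based mechanisms are not reproduced here, but the primitive they build on, locally differentially private frequency estimation, can be illustrated with generalized randomized response over a single domain. This is a standard LDP baseline, not the authors' method; epsilon and the domain are illustrative.

```python
import math
import random

def grr_perturb(value, domain, epsilon, rng):
    """Generalized randomized response: report the true value with
    probability p = e^eps / (e^eps + k - 1), else a uniform other value."""
    k = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if rng.random() < p:
        return value
    return rng.choice([v for v in domain if v != value])

def estimate_frequencies(reports, domain, epsilon):
    """Debias the observed report counts into unbiased frequency estimates."""
    k = len(domain)
    n = len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = 1.0 / (math.exp(epsilon) + k - 1)
    counts = {v: 0 for v in domain}
    for r in reports:
        counts[r] += 1
    # Invert the perturbation: observed rate = p*f + q*(1-f) per value.
    return {v: (counts[v] / n - q) / (p - q) for v in domain}
```

Estimation error grows with the domain size k, which is one motivation for partitioning large real-valued domains as the proposed mechanisms do.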
