Articles published on Data Security
29995 Search results
- New
- Research Article
- 10.1186/s12910-026-01404-8
- Feb 7, 2026
- BMC medical ethics
- Betelhem Zewdu Wubineh + 2 more
Artificial intelligence (AI) is profoundly transforming the healthcare landscape, presenting unprecedented opportunities to enhance patient care and clinical outcomes. However, the rapid integration of AI technologies has raised significant ethical concerns, requiring rigorous scrutiny to ensure their responsible and equitable use. This study aimed to explore the ethical considerations and strategies related to the implementation of AI in healthcare through a systematic review. A systematic search initially identified 243 publications published between 2019 and 2025. After applying inclusion and exclusion criteria, 22 papers were selected for final synthesis to assess ethical concerns and strategies related to AI in healthcare. The analysis identified key ethical concerns, categorizing them into six distinct groups: (1) Transparency and Trust, (2) Bias and Fairness, (3) Privacy and Data Security, (4) Accountability and Responsibility, (5) Ethical and Moral, (6) Regulatory and Legal. Additionally, several ethical strategies were identified in the implementation of AI systems, including adherence to ethical principles, standards, and frameworks; transparency and bias mitigation; monitoring and auditing of AI systems; and stakeholder involvement and governance in decision-making processes. This review emphasizes the importance of addressing these ethical concerns to ensure the successful implementation of AI technologies in healthcare. The findings provide valuable insights and recommendations for stakeholders, including developers, healthcare professionals, and policymakers, to guide the ethical deployment of AI decision support systems in healthcare.
- New
- Research Article
- 10.1080/14796694.2026.2621128
- Feb 7, 2026
- Future oncology (London, England)
- Zoe Fehlberg + 9 more
Familial cancer test referral rates for rare tumors are suboptimal and follow a social gradient, while cancer registries are legally mandated to collect comprehensive clinicopathological data that could be used to inform clinical practice. We aimed to investigate consumer acceptability of and preferred approach for a cancer registry-driven familial cancer testing notification pathway. A qualitative study using semi-structured interviews informed by the Theoretical Framework of Acceptability was conducted. Nineteen individuals, recently disclosed to the Victorian Cancer Registry with a cancer diagnosis meeting local familial cancer testing criteria, were interviewed. Participants supported being notified directly by the cancer registry to inform them about familial cancer testing, as they welcomed using existing health data in new ways to optimize health care. Key considerations included the timing, tone, language, and information provided in the registry communication, and minimizing the onus on the patient. Assuring data security and verifying the legitimacy of the registry were also raised. Individuals diagnosed with cancer found the service model acceptable. Participants preferred either to action the findings independently, with supporting resources, or to permit the cancer registry to directly inform treating clinicians. Ongoing and consumer-informed work is required to develop processes and resources, including digital options.
- New
- Research Article
- 10.3390/su18031699
- Feb 6, 2026
- Sustainability
- Dominika Kansy + 1 more
This article scientifically addresses the challenges related to data security and stakeholder privacy faced by companies operating in the European Union. These challenges stem largely from the global digital transformation, within which the European Union imposes regulations governing data protection and stakeholder privacy. The digital transformation in the European Union focuses on the integration of people and technology, sustainable development, and the resilience of management systems, which are the pillars of Industry 5.0. From a practical perspective, the paper examines the current level of awareness among employees of enterprises in Poland regarding data and privacy risk management in today’s economic environment. The paper presents both a theoretical review and, in the empirical section, the results of primary research. The study was conducted in Poland on a sample of 556 enterprises from various economic sectors. The paper begins with the Introduction. The Background section presents a literature review on the conditions for enterprise functioning in the evolving paradigm of Industry 5.0, as well as the fundamental legal requirements regarding data security and stakeholder privacy across business activities. The Materials and Methods section presents the research methods employed to assess how respondents perceive threats to data security and stakeholder privacy. The Results section summarizes the research findings. The Discussion addresses practical business implications and highlights the role of technology and organizational procedures in responsible data and privacy management; furthermore, the importance of creating ethical cyber–physical environments as an element of sustainable enterprise transformation is emphasized. Finally, the Conclusions section presents the key findings regarding the level of awareness among employees of Polish enterprises about data security and stakeholder privacy in the context of digital transformation.
- New
- Research Article
- 10.36950/2026.2ciss058
- Feb 6, 2026
- Current Issues in Sport Science (CISS)
- Michelle C Haas + 4 more
Introduction & Purpose: Open research data offer large potential for reuse, but also present complex challenges such as ownership, confidentiality, and data misuse, which are especially important in human movement analysis. Although numerous datasets exist (Olugbade et al., 2023), insufficient metadata and poor compliance with the FAIR principles (findable, accessible, interoperable, reusable) (Wilkinson et al., 2016) hinder effective data reuse. Hence, we aimed to develop guidelines for publishing data from human movement laboratories that can be adapted to similar contexts to promote FAIR data sharing. Methods: The guidelines were developed and refined in an iterative approach involving numerous movement laboratories. Initially, a survey was conducted among Swiss movement laboratories to assess current practices in open data sharing. A workshop was then held to refine key elements of the guidelines. Based on these inputs, a draft of the guidelines was developed, refined multiple times, validated, and published (Haas et al., 2024). Results: Choosing an appropriate license and repository is an essential step in data sharing, as licenses are irrevocable and determine data use. Researchers should consider requirements from funders, institutions, and repositories. The FAIR principles and practical considerations such as file size limits, license terms, and data security can guide decisions. To enable other researchers to comprehend and make use of the deposited dataset, providing metadata is key. Metadata includes general information (e.g. data format, license) and specific information about the dataset, variables, analysis procedures, and the hardware and software used. Discussion: Data itself should at least include basic statistical measures (minimum, maximum, mean, standard deviation) of all reported outcomes. Ideally, individual participant data should be made publicly available.
However, this is only possible for anonymized data and encoded data when informed consent for reuse is obtained. In general, anonymizing data is recommended to support data sharing while ensuring compliance with legal and ethical standards. When reusing data, researchers must verify its quality through thorough checks. Conclusion: Data sharing should be considered early during project planning to ensure all necessary data is collected and ethical and legal requirements for publishing data are met. To remain effective and compliant, these guidelines need to be regularly updated and reviewed by the community.
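The basic statistical measures recommended above (minimum, maximum, mean, standard deviation) are straightforward to compute with Python's standard library; a minimal sketch, where the gait-speed values and variable names are purely illustrative, not from the study:

```python
import statistics

def summarize(values):
    """Basic descriptive statistics recommended for shared datasets."""
    return {
        "min": min(values),
        "max": max(values),
        "mean": statistics.mean(values),
        "sd": statistics.stdev(values),  # sample standard deviation
    }

# Illustrative gait-speed outcome (m/s) for five participants
gait_speed = [1.21, 1.35, 1.18, 1.42, 1.30]
summary = summarize(gait_speed)
```

Including such a summary dictionary alongside the dataset's metadata lets reusers sanity-check a deposited dataset before downloading it in full.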
- New
- Research Article
- 10.3390/app16031650
- Feb 6, 2026
- Applied Sciences
- Agustina Buccella + 3 more
The process of building data analytics systems, including big data systems, is currently being investigated from various perspectives that generally focus on specific aspects, such as data security or privacy, to the detriment of an engineering perspective on systems development. To address this limitation, our proposal focuses on developing analytics systems through a reuse-based approach, including stages ranging from problem definition to results analysis by identifying variations and building reusable, context-based assets. This study presents the reuse process by constructing two case studies that address the water table level prediction problem in two different contexts: the irrigated period and the non-irrigated period in the same study area. The objective of this study is to demonstrate the influence of context on the performance of widely used predictive models for this problem, including long short-term memory (LSTM), artificial neural networks (ANNs), and support vector machines (SVMs), as well as the potential for reusing the developed analytics system. Additionally, we applied the permutation feature importance (PFI) to determine the contribution of individual variables to the prediction. The results confirm that the same problem hypotheses yield different performance in each case in terms of coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), and mean square error (MSE). They also show that the best-performing predictive models differ for some of the hypotheses (ANN in one case and LSTM in another), supporting the assumption that context can influence model selection and performance. Reusing assets allows for more efficient evaluation of these alternatives during development time, resulting in analytics systems that are more closely aligned with reality, while also offering the advantages of software system composition.
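The abstract above compares predictive models by R2, RMSE, MAE, and MSE; these four metrics can be computed from the same error vector, as in this stdlib-only sketch (the water-table values are illustrative, not taken from the study):

```python
import math

def regression_metrics(y_true, y_pred):
    """R2, RMSE, MAE, and MSE from paired observations and predictions."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errors) / n
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(mse)
    mean_true = sum(y_true) / n
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)
    r2 = 1 - (mse * n) / ss_tot  # 1 - SS_res / SS_tot
    return {"R2": r2, "RMSE": rmse, "MAE": mae, "MSE": mse}

# Illustrative water-table levels (m) vs. model predictions
observed = [2.0, 2.5, 3.0, 3.5]
predicted = [2.1, 2.4, 3.2, 3.4]
m = regression_metrics(observed, predicted)
```

Because RMSE is the square root of MSE, the two always rank models identically; R2 additionally normalizes against the variance of the observations, which is what lets performance be compared across the irrigated and non-irrigated contexts.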
- New
- Research Article
- 10.3390/e28020185
- Feb 6, 2026
- Entropy
- Yicheng Yu + 3 more
Wireless sensor networks (WSNs) are extensively used in IoT applications, where secure access control and data protection are essential. However, the open nature of the wireless environment and the limited resources of sensor devices render WSNs susceptible to a variety of security attacks, complicating the design of efficient authentication and key agreement (AKA) protocols. This study proposes a physically unclonable function (PUF)-based lightweight and secure AKA protocol for WSNs based on elliptic curve cryptography (ECC). A secure password update scheme is also offered, allowing legitimate users to reset forgotten passwords without re-registration. Formal security analysis using BAN logic and ProVerif shows that the proposed protocol is secure against common attacks. Moreover, from an entropy perspective, the use of dynamic pseudonyms and fresh session randomness increases an adversary’s uncertainty about user identities, thereby limiting identity-related information leakage. Performance evaluation shows that the proposed protocol achieves lower computational and communication overhead than existing ones, making it suitable for resource-constrained WSNs.
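The ingredients named in the abstract, a PUF challenge-response step, fresh session nonces, and dynamic pseudonyms, can be illustrated in miniature. This is not the paper's protocol (the ECC operations are omitted): the PUF is modeled as a device-secret HMAC and all names are hypothetical.

```python
import hmac, hashlib, secrets

DEVICE_SECRET = secrets.token_bytes(32)  # stands in for the physical PUF

def puf_response(challenge: bytes) -> bytes:
    # A real PUF derives this from physical manufacturing variation;
    # an HMAC keyed with a device secret is only a software stand-in.
    return hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()

def run_aka(stored_response: bytes, challenge: bytes):
    """One simplified authentication round with fresh session randomness."""
    gw_nonce = secrets.token_bytes(16)   # gateway freshness
    dev_nonce = secrets.token_bytes(16)  # device freshness
    resp = puf_response(challenge)
    if not hmac.compare_digest(resp, stored_response):
        return None  # device failed the challenge
    # Both sides can derive the session key from the response and nonces
    session_key = hashlib.sha256(resp + gw_nonce + dev_nonce).digest()
    # A dynamic pseudonym for the next round limits identity leakage
    next_pseudonym = hashlib.sha256(resp + b"pseudonym" + gw_nonce).hexdigest()[:16]
    return session_key, next_pseudonym

challenge = secrets.token_bytes(16)
enrolled = puf_response(challenge)  # stored at the gateway at enrollment
result = run_aka(enrolled, challenge)
```

Because the nonces are fresh per session, an eavesdropper who replays old messages derives a different (useless) key, and the rotating pseudonym prevents linking sessions to a fixed identity.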
- New
- Research Article
- 10.3390/electronics15030708
- Feb 6, 2026
- Electronics
- Yu-Heng Hsieh + 3 more
Modern passport systems face significant challenges in secure data sharing, real-time verification, and user-controlled authorization, particularly in cross-border scenarios. Existing digital passport solutions, often built on permissioned blockchains, suffer from limited transparency, scalability, and high operational costs. This paper proposes a decentralized passport management system based on an Ethereum Layer 2 architecture that combines global governance with high-throughput and cost-efficient passport operations. The system adopts a hybrid design in which a Global Passport Registry smart contract is deployed on the Ethereum mainnet for cross-country coordination, while passport issuance, access control, and identity management are handled on Layer 2 networks through country-operated Passport Managers and user-specific Personal Passport smart contracts. Extensive performance evaluations show that Ethereum Layer 1 throughput saturates at approximately 40–50 transactions per second (TPS), whereas the proposed Layer 2 deployment consistently exceeds 150 TPS and reaches up to 300 TPS under higher-performance environments, significantly surpassing the estimated system requirement of 70 TPS. These improvements result in faster response times, reduced congestion, and substantially lower transaction costs, demonstrating that public Ethereum Layer 2 infrastructures can effectively support a scalable, self-sovereign, privacy-preserving, and globally verifiable digital passport system suitable for real-world deployment.
- New
- Research Article
- 10.1007/s11042-026-21163-3
- Feb 6, 2026
- Multimedia Tools and Applications
- Meenu Suresh + 1 more
A comprehensive review based on video steganography for secure data transmission
- New
- Research Article
- 10.2196/76862
- Feb 5, 2026
- JMIR mHealth and uHealth
- Qimeng Zhao + 4 more
The use of mobile health (mHealth) apps can assist with the management of gestational diabetes (GDM). Although a number of studies have demonstrated their efficacy in improving maternal-fetal outcomes, opinions differ regarding their usability and overall quality. Poorly designed apps, with ill-conceived features or inappropriate content, may pose a threat to patient safety. Nevertheless, very few studies provide in-depth evaluations of app design quality, and the diversity of features and techniques used remains insufficiently explored. We aimed to evaluate the quality and multifunctionality of commercially available mHealth apps for GDM. This is a systematic app review guided by the TECH (target user, evaluation focus, connectedness, and health domain) framework and the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) 2020 checklist. Searches were conducted on the Apple App Store and Google Play. Apps were screened by name, description, and full navigation to identify inclusions. The quality of the apps was evaluated using the Mobile App Rating Scale and IMS Institute for Healthcare Informatics Functionality Score. Multifunctionality of the apps was evaluated using the GDM-adapted features and techniques list developed from the App Behavior Change Scale, NICE (National Institute for Health and Care Excellence) 2015 guidelines, and previous studies. The general features list, which contains instruction, data security, customization, and technical issues, was derived from previous studies. The search (June 2024) identified 23 commercially available apps from UK app stores. The overall app quality was evaluated to be satisfactory (Mobile App Rating Scale: mean 4.0, SD 0.36; IMS Institute for Healthcare Informatics Functionality Score: mean 5.83, SD 3.03). The multifunctionality evaluation found that the apps had a mean of 17.95 and SD of 7.31 across all 45 items. 
Overall, our findings suggested that mHealth apps for GDM achieved a certain level of multifunctionality. However, their feature types and supporting digital techniques are relatively basic. The apps focused on education and managing blood glucose control rather than integrating other self-monitoring data and pregnancy-relevant management into their design. The digital techniques used to achieve these features included text and manual operation, rather than other automated features. This is the first app review to consider the relationship between app features and usability for women with GDM. Future app development should integrate a wide range of pregnancy-relevant information and more automated features and use advanced digital techniques to enable a holistic digital solution for women with GDM.
- New
- Research Article
- 10.15849/ijasca.v18i1.52
- Feb 5, 2026
- International Journal of Advances in Soft Computing and its Applications
- Sami Morsi + 5 more
Next-generation genomic data analysis faces ongoing challenges in collaboration, privacy, and scalability. Centralized systems lack sufficient protection for sensitive genomic data, offering weak data protection and deficient access control. This study proposes a first-of-its-kind privacy-preserving computational framework for genomic analysis that integrates machine learning with homomorphic encryption. Unlike existing frameworks, it uses smart contracts for access control, tokenized encrypted data exchange, and federated model training that can be suspended across research nodes. In simulations on genomic datasets, no major issues were identified in file synchronization performance or in data protection and security. Compared with traditional, federated, and HE-based frameworks, the proposed model performed significantly better, achieving 94% precision, 92% recall, an F1 score of 0.93, and an area under the curve (AUC) of 0.96. In a pilot simulation with 5K genomic datasets, collaboration improved by 25% with no breaches of the datasets. This design shows promise for ethically and securely supporting privacy-preserving genomic data analysis and the subsequent use of artificial intelligence in biomedical systems.
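The homomorphic-encryption component mentioned above rests on a simple algebraic property: certain ciphertexts can be combined so that the result decrypts to a function of the plaintexts. A toy Paillier cryptosystem with deliberately tiny, insecure parameters (real deployments use ~2048-bit primes) shows the additive case, e.g. summing two encrypted counts without ever decrypting them:

```python
import math, secrets

# Toy Paillier cryptosystem; parameters are illustrative and insecure.
p, q = 17, 19
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse for decryption

def encrypt(m):
    r = secrets.randbelow(n - 2) + 2
    while math.gcd(r, n) != 1:  # r must be invertible mod n
        r = secrets.randbelow(n - 2) + 2
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
c1, c2 = encrypt(12), encrypt(30)
total = decrypt((c1 * c2) % n2)  # 42, computed without decrypting c1 or c2
```

A coordinating node can therefore aggregate encrypted contributions from research sites and only the key holder learns the sum, which is the basic pattern behind privacy-preserving federated aggregation.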
- New
- Research Article
- 10.1080/23742917.2026.2625461
- Feb 5, 2026
- Journal of Cyber Security Technology
- Mohammad Adel El Sehayl + 3 more
ABSTRACT The Interplanetary File System (IPFS) is a decentralized peer-to-peer (P2P) protocol for distributed file storage and sharing. It is one of the main pillars towards reaching the Web3 technology, which depends heavily on decentralization. IPFS ensures more control over stored data even across untrusted nodes. However, IPFS lacks various security measures, such as encryption, to ensure the confidentiality of the stored data. This paper suggests a parallel encryption engine incorporated within IPFS to enhance data security while maintaining high performance and speed. The paper specifically proposes a novel parallelized symmetric encryption framework that encrypts data chunks before distributing them across the IPFS network. Also, the engine uses hardware-accelerated instructions to ensure speedup and robustness. Various factors were considered to evaluate the research contributions, such as encryption speed, storage overhead, and retrieval efficiency. The obtained results signify the importance of incorporating encryption into IPFS to ensure data privacy without compromising performance. Furthermore, unauthorized access and data leakage can be prevented through encryption, enabling IPFS to become more suitable for sensitive data storage in decentralized environments. In general, the contributions of this research support the advancement towards Web3 by protecting users’ data without aggravating the IPFS system efficiency and performance.
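The chunk-then-encrypt-in-parallel idea described above can be sketched with the standard library. This is not the paper's engine: it uses a hash-derived XOR keystream instead of hardware-accelerated AES, and the key, chunk size, and payload are all illustrative.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

KEY = b"demo-key"   # illustrative; a real engine would use AES with AES-NI
CHUNK = 1024        # IPFS-style fixed-size chunking (size illustrative)

def keystream(key: bytes, index: int, length: int) -> bytes:
    """Derive a per-chunk keystream by hashing key || chunk index || counter."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + index.to_bytes(8, "big")
                              + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_chunk(args):
    index, chunk = args
    ks = keystream(KEY, index, len(chunk))
    return bytes(a ^ b for a, b in zip(chunk, ks))

def encrypt_parallel(data: bytes):
    """Split data into chunks and encrypt them concurrently."""
    chunks = [(i, data[i * CHUNK:(i + 1) * CHUNK])
              for i in range((len(data) + CHUNK - 1) // CHUNK)]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(xor_chunk, chunks))

plain = b"genomic record " * 300  # 4500 bytes -> 5 chunks
cipher_chunks = encrypt_parallel(plain)
# XOR is symmetric, so applying the same operation again decrypts
recovered = b"".join(encrypt_parallel(b"".join(cipher_chunks)))
```

Because each chunk's keystream depends only on the key and the chunk index, chunks can be encrypted, stored, and later retrieved independently, which matches how IPFS distributes content-addressed blocks across nodes.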
- New
- Research Article
- 10.1007/s00414-025-03640-w
- Feb 4, 2026
- International journal of legal medicine
- Mariana Cura + 4 more
Since 2018, the General Data Protection Regulation (GDPR) has regulated personal data within the scope of the European Union. With the exponential technological advancements in mobile photography, it is crucial to expose forensic professionals to this body of law to maintain good practices for fieldwork and scientific research in this field. As applied to forensic photography, the GDPR can be broken down into four pillars: informed consent of the subject, acceptable image capture practices (data), data storage and security at rest, and data transfers and security in transit. All these pillars have different approaches currently in use by forensic professionals; however, only some of them are permitted under the law. We present the appropriate ways to proceed with smartphone photography while remaining in compliance and maintaining the ability to share data critical to fieldwork and scientific research. In addition, some of the common pitfalls are described. An algorithm is proposed to facilitate compliance with European regulations relating to personal data, as applied to mobile forensic photography. The same flow chart can be used in other countries with different regulations concerning health data, privacy, and security issues.
- New
- Research Article
- 10.1002/itl2.70232
- Feb 4, 2026
- Internet Technology Letters
- Jiaojiao Qin + 1 more
ABSTRACT Integrating blockchain technology with Internet of Things (IoT) networks presents opportunities and challenges for sustainable computing. While blockchain ensures secure and transparent data management, its energy‐intensive nature poses significant environmental concerns, particularly in resource‐constrained IoT environments. This paper proposes SERO‐DRL, a novel deep reinforcement learning approach for energy‐efficient resource optimization in blockchain‐enabled sustainable IoT networks. We develop a comprehensive framework that jointly optimizes computational offloading and resource allocation while considering renewable energy availability and environmental impact. The framework includes an innovative reward mechanism that incentivizes energy‐efficient behavior while ensuring fair resource allocation among IoT devices. Experimental results demonstrate SERO‐DRL's superior performance, achieving an 18.5% reduction in total system costs and a 40% decrease in environmental impact compared to baseline approaches.
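The abstract mentions a reward mechanism that incentivizes energy-efficient behavior while ensuring fair resource allocation; the paper's exact formulation is not given here, but such a reward is commonly a weighted combination of an energy penalty, a renewable-usage bonus, and a fairness term such as Jain's index. A hypothetical sketch (all weights and values illustrative):

```python
def jain_fairness(allocations):
    """Jain's fairness index: 1.0 when all devices get equal resources."""
    s = sum(allocations)
    return s * s / (len(allocations) * sum(a * a for a in allocations))

def reward(energy_used, renewable_used, allocations,
           w_energy=1.0, w_green=0.5, w_fair=0.5):
    # Penalize total energy; reward renewable share and fair allocation.
    green_share = renewable_used / energy_used if energy_used else 0.0
    return (-w_energy * energy_used
            + w_green * green_share
            + w_fair * jain_fairness(allocations))

# Same energy budget, but one allocation starves three of four devices
equal = reward(10.0, 4.0, [1.0, 1.0, 1.0, 1.0])
skewed = reward(10.0, 4.0, [3.7, 0.1, 0.1, 0.1])
```

A DRL agent maximizing this signal is pushed toward lower-energy, greener, and more even allocations, since the fairness term alone separates the two cases above.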
- New
- Research Article
- 10.55606/eksekusi.v4i1.2290
- Feb 3, 2026
- Eksekusi : Jurnal Ilmu Hukum dan Administrasi Negara
- Moch Gufron Fajar Rezki + 2 more
Online arbitration has emerged as a significant innovation in dispute resolution systems in the digital era, as information technology has become the primary foundation for various legal activities. This mechanism offers a new way to resolve disputes through the use of digital platforms that enable parties to interact without geographical boundaries. This study aims to analyze the relevance, challenges, and changes in legal processes brought about by online arbitration in the modern context. Using a juridical-normative method, the study examines the applicable legal framework, doctrine, and academic literature to understand how digitalization affects arbitration procedures. The analysis shows that online arbitration has strategic value because it can provide efficiency, flexibility, and accessibility not always found in conventional arbitration. However, its implementation still faces various issues, such as data security, technological capability gaps, the integrity of electronic evidence, and the lack of comprehensive legal standards. On the other hand, digitalization has also driven significant changes in the structure of procedural law, including the simplification of procedures and the expansion of the recognition of electronic evidence. This study confirms that the success of online arbitration requires regulatory harmonization, increased technical capacity of the parties, and strengthening of digital infrastructure so that it can function as an effective, fair, and adaptive dispute resolution mechanism to technological developments.
- New
- Research Article
- 10.1080/09613218.2026.2621314
- Feb 3, 2026
- Building Research & Information
- Faizan Hamayat + 6 more
ABSTRACT Energy efficiency is vital yet underutilized in buildings. Reducing energy consumption while maintaining human-level comfort within certain boundaries requires accurate indoor air temperature (IAT) modelling. IAT prediction models support HVAC optimization, setting operational limits, and detecting discrepancies between predicted and actual conditions for predictive model control. However, accurately predicting IAT in large-scale smart buildings is challenging due to numerous complex factors. To address this issue, this paper presents two data-driven hybrid models for accurate IAT prediction. The first model, STNet, integrates a CNN with a Bi-LSTM, while the second model, STProphet, combines a CNN with Transformers to capture spatial–temporal dependencies. Both models are deployed on an edge device to enhance data security and privacy. Experimental evaluation shows significant improvements over a baseline method. STNet reduces MAE, RMSE, and MAPE by 75.74%, 68.58%, and 76.92%, respectively. STProphet achieves reductions of 72.44%, 66.58%, and 73.76% for the same metrics. Inference efficiency also improves substantially: STNet reduces latency by 53.64% (to 51 ms) and STProphet by 68.18% (to 35 ms), compared with the baseline’s 110 ms. The results confirm the effectiveness of the proposed models for real-time IAT prediction, supporting more reliable energy modelling and optimization in large-scale smart buildings.
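The latency figures quoted above are internally consistent: the percentage reductions follow directly from the absolute milliseconds reported. A quick check:

```python
def pct_reduction(baseline, new):
    """Percentage reduction from baseline to new, rounded to 2 decimals."""
    return round((baseline - new) / baseline * 100, 2)

baseline_ms = 110
stnet_ms, stprophet_ms = 51, 35

stnet_cut = pct_reduction(baseline_ms, stnet_ms)          # reported as 53.64%
stprophet_cut = pct_reduction(baseline_ms, stprophet_ms)  # reported as 68.18%
```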
- New
- Research Article
- 10.24191/gading.v29i1.685
- Feb 2, 2026
- Gading Journal for the Social Sciences (e-ISSN 2600-7568)
- Aidrina Mohamed Sofiadin
With the rapid expansion of the Metaverse, e-commerce has undergone transformative changes, offering immersive shopping experiences. This technological evolution, however, brings forth a range of ethical concerns. This study examines the perspectives of students on these ethical issues within metaverse e-commerce, utilising focus group discussions as the primary research method. Thirty-eight participants from the e-commerce course engaged in discussions that revealed several key concerns. Among these are issues related to privacy and data security, with students expressing fears about personal information misuse and inadequately protected digital identities. Additionally, the focus group identified the need for greater transparency and accountability from e-commerce platforms operating in the Metaverse. Participants suggested developing ethical guidelines and regulatory measures to protect consumers from ethical infringements, emphasising the role of education in equipping users with the skills to navigate these virtual spaces responsibly. This study contributes to the growing body of literature on the ethical dimensions of emerging digital environments. Highlighting the students’ concerns and recommendations underscores the need for guidelines to ensure that metaverse e-commerce develops in an ethically responsible manner.
- New
- Research Article
- 10.70382/mejnsar.v11i9.076
- Feb 2, 2026
- International Journal of Nature and Science Advance Research
- Busayo Temitope Achori + 2 more
This study presents a secure smart token life cycle framework for decentralized Electronic Medical Records (EMRs), aimed at enhancing data security, privacy, access control, and interoperability within healthcare information systems. The scope of the research focuses on leveraging decentralized technologies and smart tokens to address persistent challenges associated with centralized EMR systems, including data breaches, unauthorized access, and limited patient control over medical data. A design-based research methodology was adopted, involving system modeling, framework design, and comparative analysis of existing EMR architectures. Data were collected through an extensive review of related literature, analysis of current decentralized health record systems, and evaluation of security requirements in healthcare environments. The findings reveal that integrating smart tokens with a decentralized architecture significantly improves data integrity, traceability, secure access management, and patient-centric control of medical records. The proposed framework ensures a secure token life cycle encompassing token generation, authorization, usage, revocation, and auditability. In conclusion, the study demonstrates that a secure smart token life cycle framework can effectively strengthen the security and reliability of decentralized EMRs. It is recommended that future implementations incorporate real-world pilot testing, scalability assessments, and compliance with healthcare regulatory standards to further validate and enhance the framework’s practical applicability.
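The token life cycle described above (generation, authorization, usage, revocation, auditability) has the shape of a small state machine; a hypothetical sketch of that structure, with state names and transition rules of my own choosing rather than the framework's:

```python
from enum import Enum, auto

class TokenState(Enum):
    GENERATED = auto()
    AUTHORIZED = auto()
    IN_USE = auto()
    REVOKED = auto()

# Allowed life-cycle transitions; revocation is reachable from any live state
TRANSITIONS = {
    TokenState.GENERATED: {TokenState.AUTHORIZED, TokenState.REVOKED},
    TokenState.AUTHORIZED: {TokenState.IN_USE, TokenState.REVOKED},
    TokenState.IN_USE: {TokenState.REVOKED},
    TokenState.REVOKED: set(),  # terminal state
}

class SmartToken:
    def __init__(self, token_id):
        self.token_id = token_id
        self.state = TokenState.GENERATED
        self.audit_log = [("created", self.state.name)]  # auditability trail

    def transition(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.audit_log.append(("transition", new_state.name))

token = SmartToken("emr-token-001")
token.transition(TokenState.AUTHORIZED)
token.transition(TokenState.IN_USE)
token.transition(TokenState.REVOKED)
```

Making illegal transitions raise errors, and appending every change to an audit log, captures the two properties the framework emphasizes: tokens cannot be resurrected after revocation, and every access decision is traceable.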
- New
- Research Article
- 10.1186/s40621-026-00660-x
- Feb 1, 2026
- Injury epidemiology
- Rohit P Shenoi + 5 more
Drowning is the leading cause of death in US children 1-4 years old. The epidemiology of drowning at a regional level is understudied because no single data source provides complete information on persons who drown. Probabilistic data linkage is a novel way of studying the epidemiology of drowning. This study aimed to document the lessons learned during the linkage process. This was a cross-sectional study of persons of all ages who died from unintentional drowning in metropolitan Houston from 2016 to 2022. We describe the lessons learned during the project planning and execution phases which pertained to data curation, the regulatory aspects involved with obtaining data, data security, spatial identification, and the strengths and limitations of the different datasets. Twelve datasets were reviewed; eight were successfully linked. During the planning phase, the key issues identified pertained to data ownership and governance and robustness of data which impacted the availability and quality of data, variation in the description of drowning location, and risk and protective factors which helped identify subpopulations at-risk for drowning. In the execution phase, the major issues included data security, data sharing, and dissemination of results. There are a plethora of data sources for fatal drowning. The process of obtaining and analyzing data to describe the epidemiology of fatal drowning using probabilistic data linkage is complex, lengthy, and cumbersome. Documenting the process and lessons learned can support drowning research and inform regional drowning prevention strategies.
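Probabilistic data linkage, as used above, typically scores candidate record pairs with Fellegi–Sunter-style match weights: each compared field contributes a log likelihood ratio depending on whether it agrees. A minimal sketch, where the field names and m/u probabilities are illustrative, not from the study:

```python
import math

# Illustrative m-probabilities (field agrees given a true match) and
# u-probabilities (field agrees given a non-match) for three fields.
FIELDS = {
    "surname":       {"m": 0.95, "u": 0.01},
    "date_of_birth": {"m": 0.98, "u": 0.003},
    "zip_code":      {"m": 0.90, "u": 0.05},
}

def match_weight(agreements):
    """Sum of log2 likelihood ratios across compared fields."""
    w = 0.0
    for field, agree in agreements.items():
        m, u = FIELDS[field]["m"], FIELDS[field]["u"]
        w += math.log2(m / u) if agree else math.log2((1 - m) / (1 - u))
    return w

full_agree = match_weight(
    {"surname": True, "date_of_birth": True, "zip_code": True})
partial = match_weight(
    {"surname": True, "date_of_birth": False, "zip_code": True})
# Pairs above a chosen threshold are classified as links; an intermediate
# band is usually sent for clerical review.
```

The thresholds, and the clerical-review band between them, are where much of the "complex, lengthy, and cumbersome" effort the abstract describes tends to concentrate, since they must be tuned per dataset pair.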
- New
- Research Article
- 10.14207/ejsd.2026.v15n1p331
- Feb 1, 2026
- European Journal of Sustainable Development
- Іrіna Lomachinska + 7 more
This article examines the contemporary opportunities and limitations of using large language models (LLMs), including ChatGPT, in higher education and scientific research. It outlines the technological foundations of LLMs, highlighting their capabilities for context-aware dialogue, language synthesis, automated assessment, and personalization of learning pathways, including applications in language learning and intercultural communication that can support learner autonomy and communicative competence. The study emphasizes the potential of AI to enhance the quality of education, support pedagogical decision-making, and improve the management of educational processes. At the same time, key risks are identified, including informational biases, reliance on training data, the potential generation of inaccurate content, threats to privacy, and challenges to academic integrity. Ethical considerations are discussed, focusing on algorithmic transparency, data security, researcher accountability, and the prevention of discriminatory effects. The article also presents key strategies for addressing these challenges, including the development of information and ethical literacy, the establishment of transparent university policies, clarification of scientific publication requirements, and implementation of guidelines for responsible LLM use. The study concludes that effective integration of LLMs into academic environments requires a balanced combination of innovative potential and ethical safeguards to ensure the integrity of education and scientific research. Keywords: large language models (LLMs), higher education, personalized learning, academic integrity, artificial intelligence, digital literacy, sustainable development, ethical AI, communication
- New
- Research Article
- 10.32479/irmm.22265
- Feb 1, 2026
- International Review of Management and Marketing
- Mansoor Ahmad Qazi + 1 more
This research explored the factors essential to data privacy and security in takaful institutions and proposed digital platforms (DPs) to enhance them, mediated by E-knowledge sharing. Moreover, the moderating role of AI adoption on the connection between E-knowledge sharing and data privacy and fraud detection has also been tested. The current study was conducted across various takaful (Islamic insurance) institutions in the United Arab Emirates (UAE) using a survey. Online questionnaire forms were used to collect data, which were subsequently analysed using statistical techniques, including correlation, partial least squares structural equation modelling, and bootstrapping, yielding several interesting results for the formulated hypotheses. The findings confirm the prediction that digital platforms can enhance data privacy and fraud detection. Moreover, the findings confirmed the mediating role of E-sharing in the direct effects of digital platforms, data privacy, and fraud detection. Finally, the findings revealed that AI adoption strengthens the connection between E-knowledge sharing and data privacy, as well as fraud detection.