Articles published on Life Cycle Data
- Research Article
- 10.32996/jcsts.2025.7.5.61
- Jun 3, 2025
- Journal of Computer Science and Technology Studies
- Pushap Goyal
This article explores the dual nature of databases in modern cybersecurity ecosystems, examining how they function both as critical assets requiring protection and as powerful defensive tools against sophisticated threats. As organizations undergo digital transformation, the exponential growth in data volume has created unprecedented challenges for security teams. The article discusses how database technologies have evolved from basic log management systems to advanced distributed platforms, addressing increasingly complex security requirements. Particular attention is given to Google's Spanner Graph technology, which combines relationship-focused structures with globally distributed architecture to transform threat detection capabilities. The article details how the evolution of database administrators' roles reflects growing security concerns, with DBAs now spending a majority of their time on security-related activities. Database security in interconnected digital ecosystems is examined, highlighting varying maturity levels across ecosystem components and throughout the data lifecycle. The article identifies key challenges in security data management, including scale and performance issues, data heterogeneity, risk management complexities, and regulatory compliance burdens. Through detailed assessment of database technology generations, from first-generation log management systems to fifth-generation distributed ledger platforms, the article demonstrates how each advancement has addressed previous limitations. The transformative capabilities of Spanner Graph are extensively analyzed, focusing on its global consistency through TrueTime, relationship-based threat detection, real-time anomaly detection at scale, unified visibility across security domains, temporal analysis features, and adaptive security posture through graph analytics.
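A minimal sketch of the relationship-based detection pattern described above, using NetworkX over a hypothetical set of authentication events; this illustrates a graph traversal for lateral-movement analysis and is not Spanner Graph's API or query language.

```python
# Illustrative sketch only: a generic relationship-based threat query in NetworkX.
# This is NOT Spanner Graph's API; the events and the flagged account are hypothetical.
import networkx as nx

# Hypothetical authentication events: (source account, target host)
events = [
    ("alice", "web-01"), ("alice", "db-01"),
    ("svc-backup", "db-01"), ("svc-backup", "vault-01"),
    ("bob", "web-02"),
]

G = nx.DiGraph()
for account, host in events:
    G.add_edge(account, host, kind="login")
    # Model the host as a pivot point back to any accounts seen on it.
    G.add_edge(host, account, kind="session")

# Relationship-based question: what can be reached from a flagged account
# within 3 hops? (a lateral-movement style traversal)
reachable = nx.single_source_shortest_path_length(G, "svc-backup", cutoff=3)
print({node: hops for node, hops in reachable.items() if hops > 0})
```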
- Research Article
- 10.1200/jco.2025.43.16_suppl.e23213
- Jun 1, 2025
- Journal of Clinical Oncology
- Phylicia Gawu + 12 more
e23213 Background: Climate change is a pressing issue on the global stage. Recently, a comprehensive lifecycle assessment (LCA) of external beam radiotherapy (EBRT) for cancer delineated the environmental and secondary health impacts of radiotherapy in the United States (US) (PMID 38821084). Africa is warming faster than any other continent, leaving it to face the most disproportionate burden worldwide arising from climate change. Thus far, an LCA of EBRT has yet to be performed in Africa. We report initial pilot data on breast cancer patients treated in Africa with EBRT as a first step towards LCA analysis. As breast cancer is the most common indication for EBRT (and the most common cancer worldwide in women), it represents an optimal disease site to initiate LCA analysis. These findings represent the first assessment of the complete components of an LCA in Africa, using our experience from a Ghanaian hospital. Methods: Data collection was performed using the ISO 14040 and 14044 standards as a guide, in accordance with PMID 38821084. The scope of the study was defined as one round of curative-intent EBRT from initial consultation through delivery of the last fraction. LCA data components were compiled from breast cancer patients receiving adjuvant EBRT at Korle-Bu Teaching Hospital (KBTH), Ghana, from 2021-2024. Data for a complete life cycle of adjuvant EBRT for breast cancer consisted of medical supplies, equipment, patient and staff travel, and building energy usage. Results: Ten breast cancer patients were assessed for data collection, of which six received 50 Gray (Gy) in 25 fractions; the remaining four received 40.05 Gy in 15 fractions. Patients received EBRT via a cobalt machine (n = 9) or linear accelerator (n = 1). Medical supplies were grouped into reusable and single-use items. For initial consultation, patients traveled a median of 13.9 km (8.6 mi), and the median distance traveled by staff was 10.5 km (6.5 mi). CT simulation was used for planning; peer review and weekly on-treatment visits were performed by a radiation oncologist, while pre-treatment quality assurance was completed by a medical physicist. During treatment, patients traveled a median of 15.3 km (9.5 mi), and radiation therapists traveled a median of 11.8 km (7.3 mi). Most personnel associated with the radiation delivery process used public transit for travel. Clinic energy usage was in accordance with previously reported data (PMID: 37552912). Conclusions: Given the importance of radiotherapy in the treatment of cancer (it is involved in half of all cancers treated), an accurate LCA of EBRT is essential for combating climate change worldwide. This study represents the first comprehensive accumulation of LCA data for the continent of Africa. Further analysis will involve assessment of these parameters to create an LCA that will have far-reaching impact, not only in breast cancer but in other disease sites, both in Africa and worldwide.
- Research Article
- 10.1177/20539517251352815
- Jun 1, 2025
- Big Data & Society
- Asher Brandon Caplan + 1 more
Data interoperability poses unique ethical challenges across a range of academic, industrial, and governmental implementations of data systems. Central to data interoperability is the design of systems and protocols for exchanging or integrating data from different initial source domains. Data interoperability is often regarded as necessary for carrying out tasks between different organizations and suborganizations as well as for ensuring secondary use of data for research purposes. However, interoperability poses a number of ethical problems whose contours can prove especially challenging in comparison to how ethical harms take hold at other moments of the data life cycle (such as algorithmic processing or results dissemination). Taking biomedical data interoperability as a focal domain, this article provides an overview of data interoperability, maps the central ethical harms that may challenge interoperability projects, and proposes a response to these problems through an approach rooted in philosophical pragmatism. Pragmatist responses to both individual and structural harms of interoperability are presented through three companion strategies: shared standards, manual data curation, and meticulous data documentation.
- Research Article
- 10.1016/j.prevetmed.2025.106512
- Jun 1, 2025
- Preventive veterinary medicine
- Xiao Zhou + 12 more
The use of data for health and welfare management of farmed salmon in Norway, Scotland, and Ireland.
- Research Article
- 10.22214/ijraset.2025.71583
- May 31, 2025
- International Journal for Research in Applied Science and Engineering Technology
- Vandana Malik
The exponential increase in data generation, propelled by the Internet of Things (IoT), social media, mobile technology, and cloud computing, has ushered in the era of big data. While big data enables unprecedented insights and innovations, it also exposes organizations to sophisticated security threats. This paper provides an in-depth analysis of big data security threats, examining the sources, vectors, and impacts of these threats. Furthermore, the paper reviews current mitigation strategies, highlights gaps in existing approaches, and suggests future research directions. Key topics include data lifecycle security, cloud vulnerabilities, distributed architecture risks, and the importance of governance and compliance frameworks.
- Research Article
- 10.5334/dsj-2025-020
- May 30, 2025
- Data Science Journal
- Rosini + 3 more
This research analyses and reveals the scientific trends in research data management (RDM) related to environmental studies through a scoping review and bibliometric analysis. The investigation was conducted on five databases and one journal: Scopus, EBSCO, Science Direct, Sage Journals, Emerald, and Nature. The search for RDM topics in environmental studies discovered 248 papers that met the requirements. The scoping review framework and bibliometric analysis were used as the methodology, with VOSviewer and Bibliometrix as analytical tools. The results showed that publications on RDM in environmental studies first appeared in 1985 but increased significantly from 2012, with peaks in 2020 and 2021. The most frequently co-occurring keywords are RDM, data management, information management, research data, and metadata. The themes most studied in environmental studies on RDM are FAIR principles, open data, integration and infrastructure, data management tools and infrastructure, and technology and innovation. Themes for further research in RDM in environmental studies are the data life cycle, research data, data sharing and collaboration, data curation, research data management, and data management. In conclusion, this study provides an overview of RDM in environmental studies, highlighting its patterns, advances, gaps, and research recommendations.
- Research Article
- 10.30574/wjarr.2025.26.2.1942
- May 30, 2025
- World Journal of Advanced Research and Reviews
- Adarsha Kuthuru
This article introduces a novel governance framework addressing the unique challenges of managing generative AI data within database systems. While extensive literature examines responsible AI principles in theory, a significant gap exists in translating these ethical frameworks into practical implementation at the database layer. The article presents a comprehensive approach that bridges this divide through a layered architecture incorporating fine-grained access controls, comprehensive lineage tracking, and automated policy enforcement mechanisms specifically designed for generative AI workloads. It addresses distinctive challenges, including complex data transformations, synthetic content generation, purpose limitation in repurposed data, and evolving consent requirements that traditional governance models fail to adequately manage, and it demonstrates substantial improvements in governance effectiveness compared to conventional approaches. The article provides database administrators and AI practitioners with concrete strategies for maintaining ethical boundaries throughout the data lifecycle while enabling responsible innovation. The framework establishes a foundation for operationalizing AI ethics at the infrastructure level, ensuring that governance considerations become integral to system design rather than retrospective additions.
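As one illustration of the kind of automated policy enforcement such a framework targets, the following hedged sketch shows a purpose-limitation check at the data-access layer; the consent model, names, and enforcement point are hypothetical and are not taken from the paper.

```python
# Hypothetical illustration of a purpose-limitation check before data is released
# to a workload; the paper's actual enforcement mechanism is not specified here.
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    subject_id: str
    allowed_purposes: set = field(default_factory=set)

def enforce_purpose(record: ConsentRecord, requested_purpose: str) -> bool:
    """Allow access only if the requested purpose is covered by recorded consent."""
    return requested_purpose in record.allowed_purposes

consents = {
    "user-42": ConsentRecord("user-42", {"support", "analytics"}),
}

# A generative-AI training job repurposing the data is blocked unless consent covers it.
print(enforce_purpose(consents["user-42"], "model_training"))  # False
print(enforce_purpose(consents["user-42"], "analytics"))       # True
```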
- Research Article
- 10.1515/jib-2025-0012
- May 30, 2025
- Journal of Integrative Bioinformatics
- Danuta Schüler + 14 more
The Leibniz Institute of Plant Genetics and Crop Plant Research (IPK) Gatersleben is a leading international plant science institute specializing in biodiversity and crop plant performance research. Over the last decade, all phases of the research data lifecycle were implemented as a continuous process in conjunction with information technology, standardization, and sustainable research data management (RDM) processes. Under the leadership of a team of data stewards, a research data infrastructure, process landscape, capacity building, and governance structures were successfully established. As a result, a generic research data infrastructure was created to serve the principles of good scientific practice, archiving research data in an accessible and sustainable manner, even before the FAIR criteria were formulated. In this paper, we discuss success stories as well as pitfalls and summarize the experiences from 15 years of operating a central RDM infrastructure. We present measures for agile requirements engineering, technical and organizational implementation, governance, training, and roll-out. We show the benefits of a participatory approach across all departments, personnel roles, and researcher profiles through pilot working groups and data management champions. As a result, an ambidextrous approach to data management was implemented, referring to the ability to efficiently meet operational needs and support daily tasks in compliance with the FAIR criteria while remaining open to adopting technical innovations in an agile manner.
- Research Article
- 10.30574/wjaets.2025.15.2.0747
- May 30, 2025
- World Journal of Advanced Engineering Technology and Sciences
- Jyotirmay Jena
As data becomes the most critical digital asset in the modern enterprise, traditional security approaches struggle to keep pace with the evolving threat landscape and fragmented data environments. Data Security Posture Management (DSPM) emerges as a transformative strategy that enables organizations to gain continuous visibility, assess risks, and enforce security policies across structured and unstructured data, whether on-premises, in the cloud, or in hybrid ecosystems. This article presents a unified and adaptive DSPM framework that integrates discovery, classification, access governance, risk prioritization, and automated remediation. By aligning with zero-trust principles and leveraging AI-driven analytics, the proposed approach enhances data security resilience, ensures regulatory compliance, and reduces the attack surface across the entire data lifecycle. Through real-world use cases and implementation insights, the article demonstrates how DSPM empowers security teams to proactively safeguard sensitive information in today's dynamic, data-centric landscape.
- Research Article
- 10.1108/ajim-12-2024-0959
- May 29, 2025
- Aslib Journal of Information Management
- Anna Sendra + 2 more
Purpose: The aim of this study is to examine research data management practices among scholars in the social sciences and humanities who engage in data-intensive research. Additionally, the study extends an existing data lifecycle model tailored to these disciplines by incorporating scholars' perceived needs for research data support services. Design/methodology/approach: Semi-structured interviews (n = 21) were conducted with scholars of various levels of experience in data-intensive social sciences and humanities research. A qualitative content analysis focused on research data management practices was applied to the material. Findings: Unmet needs in terms of existing infrastructure (e.g. repositories) and services are affecting the research data management practices in data-intensive social sciences and humanities research, where less common tasks include data sharing and reuse. Based on these perceived requirements, an improved version of the Data Documentation Initiative Lifecycle that includes the support needs required for effectively managing data throughout the research process is developed. Originality/value: The study contributes to improving the development of research data services aimed at data-intensive social sciences and humanities research by presenting a research activity model that better represents, from the perspective of scholars, the evolving research data management practices in these disciplines. The study also provides a deeper understanding of the support needs derived from the increasing digitalization of social sciences and humanities research.
- Research Article
- 10.36680/j.itcon.2025.034
- May 27, 2025
- Journal of Information Technology in Construction
- David F Bucher + 5 more
The management of lifecycle data poses significant challenges for the built environment, hindering effective transformation toward important concepts such as a circular economy. Many recent scholars propose blockchain technologies as a solution; however, there is almost no investigation into decentralized data networks, which also offer significant potential for lifecycle data management. This might be due to a lack of clarity in understanding the fundamental characteristics and potential use cases of decentralized data networks. Therefore, this paper combines a comprehensive review with inductive reasoning to classify three functional typologies of decentralized data networks: immutable, comprehensive, and privacy-centric. Through testing with material passport data, we evaluate the practical implications of these typologies for lifecycle data management in the built environment. The findings highlight that decentralized data networks can improve data sovereignty and interoperability, but their effectiveness depends on use-case-specific trade-offs, such as mutability, access control, and storage location control. To navigate these trade-offs, the paper derives a decision framework that guides practitioners and researchers in selecting the most suitable decentralized data network. These insights contribute to a better understanding of decentralized technologies beyond blockchain and provide actionable recommendations for the future of data management in the built environment.
- Research Article
- 10.3390/sym17060820
- May 24, 2025
- Symmetry
- Jinhui Liu + 5 more
With the development of the Internet of Things and artificial intelligence, large amounts of data are generated in our daily life. In view of the limitations of current data security risk assessment research, this paper puts forward an intelligent data security risk assessment method based on an attention mechanism that spans the entire data lifecycle. The initial step involves formulating a security-risk evaluation index that spans all phases of the data lifecycle. By constructing a symmetric mapping of subjective and objective weights using the Analytic Hierarchy Process (AHP) and the Entropy Weight Method (EWM), both expert judgment and objective data are comprehensively considered to scientifically determine the weights of the various risk indicators, thereby enhancing the rationality and objectivity of the assessment framework. Next, the fuzzy comprehensive evaluation method is used to label the risk level of the data, providing an essential basis for subsequent model training. Finally, leveraging the structurally symmetric attention mechanism, we design and train a neural network model for data security risk assessment, enabling automatic capture of complex features and nonlinear correlations within the data for more precise and accurate risk evaluations. The proposed approach embodies symmetry in both the determination of indicator weights and the design of the neural network architecture. Experimental results indicate that the proposed method achieves high assessment accuracy and stability, effectively adapts to data security risk environments, and offers a feasible intelligent decision aid for data security management.
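The weight-combination step can be sketched as follows; the abstract does not give the exact form of the symmetric AHP/EWM mapping, so the equal-weight convex combination and the sample data below are assumptions for illustration only.

```python
# A minimal sketch of combining AHP (subjective) and entropy (objective) weights.
# The paper's exact "symmetric mapping" is not specified; the convex combination
# with alpha = 0.5 and the sample matrices below are assumptions for illustration.
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """Principal-eigenvector weights from a reciprocal pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

def entropy_weights(X: np.ndarray) -> np.ndarray:
    """Entropy Weight Method over a (samples x indicators) decision matrix."""
    P = X / X.sum(axis=0)
    m = X.shape[0]
    e = -(P * np.log(P, where=P > 0, out=np.zeros_like(P))).sum(axis=0) / np.log(m)
    d = 1.0 - e
    return d / d.sum()

# Hypothetical data: 3 lifecycle-stage risk indicators scored over 5 assets.
pairwise = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]], dtype=float)
scores = np.random.default_rng(0).uniform(1, 10, size=(5, 3))

w = 0.5 * ahp_weights(pairwise) + 0.5 * entropy_weights(scores)
print(w, w.sum())  # combined indicator weights, summing to 1
```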
- Research Article
- 10.59490/dgo.2025.1008
- May 22, 2025
- Conference on Digital Government Research
- Dimitrios Symeonidis + 1 more
Digital transformation is increasingly reshaping the public and private sectors by enhancing the efficiency and quality of services. With the integration of emerging technologies such as Artificial Intelligence (AI), blockchain, and the Internet of Things (IoT), this transformation is becoming a key driver in achieving the United Nations Sustainable Development Goals (SDGs). This shift has given rise to the concept of the "triple transition", which emphasizes the interconnection of the social, green, and digital transitions as part of a systemic approach to achieving the SDGs. However, for these technologies to generate meaningful public value, they must rely on high-quality, accessible, and interoperable data. Public Data Ecosystems (PDEs), as networks of stakeholders engaging in data exchange across the data lifecycle, provide a foundation for transparency and accountability as elements of public value, but their capacity to create broader societal and economic value remains limited without the synergy of advanced digital technologies. To this end, this study proposes the concept of Triple Transition Ecosystems (TTEs): networks of actors leveraging both PDEs and the four-intelligence (4I) paradigm (Data, Artificial, Collective, and Embodied Intelligence) to generate multidimensional public value aligned with the SDGs. Using a systematic literature review that includes thematic analysis informed by public value frameworks, we examine the potential of TTEs across various policy domains. Our findings indicate that TTEs have the potential to generate public value in terms of better service quality and governance, but also higher societal value. By conceptualizing TTEs, this study offers a novel framework for understanding digital transformation as a systemic enabler of sustainable development and provides actionable insights for researchers and policymakers seeking to design triple-transition-oriented policies.
- Research Article
- 10.52783/jisem.v10i48s.10189
- May 19, 2025
- Journal of Information Systems Engineering and Management
- Trupti Lotlikar
In the digital age, businesses gather and keep enormous volumes of user data, frequently requiring the explicit consent of the user for processing and storage. However, it is still exceedingly difficult to guarantee total data erasure upon consent revocation, especially in systems that have disaster recovery databases and synced data centers. The Consent-Driven Data Erasure System presented in the paper is intended to solve this problem by enabling the automated deletion of sensitive and personal data upon user revocation of consent. MS SQL Server is used to create the suggested system, where sensitive information, including payment details, is kept in a separate Consented Data Table and user registration details are kept in a Login Table. Personal information is stored in the consented table automatically when a user registers and accepts the terms and conditions. The solution guarantees total and irreversible data erasure by deleting all associated data from both the primary data center and the disaster recovery database when users withdraw their consent. In order to accomplish this, we implement stored procedures and database triggers that control ongoing synchronization and deletion operations. In order to address concerns about unlawful data retention, the system makes sure that privacy laws like the GDPR and the Digital Personal Data Protection (DPDP) Act are followed. Our findings show that this strategy minimizes privacy threats, improves user control over personal data, and creates a strong foundation for consent-based data lifecycle management in digital platforms.
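A minimal sketch of the trigger-driven erasure idea, using SQLite in place of the paper's MS SQL Server deployment; the tables follow the abstract (a login table and a consented-data table), but the schema, trigger, and replication to a disaster-recovery copy are simplified assumptions.

```python
# Illustrative sketch only: sqlite3 stands in for the paper's MS SQL Server setup;
# table and column names are hypothetical, and DR-site replication is not modeled.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE login (user_id INTEGER PRIMARY KEY, email TEXT, consent_given INTEGER);
CREATE TABLE consented_data (user_id INTEGER, payment_details TEXT,
                             FOREIGN KEY(user_id) REFERENCES login(user_id));

-- When consent is withdrawn, erase the user's sensitive rows immediately.
CREATE TRIGGER erase_on_revocation
AFTER UPDATE OF consent_given ON login
WHEN NEW.consent_given = 0
BEGIN
    DELETE FROM consented_data WHERE user_id = NEW.user_id;
END;
""")

con.execute("INSERT INTO login VALUES (1, 'a@example.com', 1)")
con.execute("INSERT INTO consented_data VALUES (1, 'card-xxxx-1234')")

con.execute("UPDATE login SET consent_given = 0 WHERE user_id = 1")  # revoke consent
print(con.execute("SELECT COUNT(*) FROM consented_data").fetchone())  # (0,)
```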
- Research Article
- 10.32996/jcsts.2025.7.4.81
- May 19, 2025
- Journal of Computer Science and Technology Studies
- Narendra Reddy Sanikommu
This article explores the challenges of managing cardinality in observability data within modern distributed systems. Cardinality (the number of unique values in fields such as metric labels, log attributes, and trace identifiers) presents a significant operational concern for organizations maintaining large-scale systems. When left unmanaged, high cardinality can lead to substantial performance degradation and cost escalation. The article examines the nature of cardinality explosion, where unique value combinations grow uncontrollably, and its impact on query performance, storage costs, processing efficiency, and alert management. It then presents comprehensive strategies for effective cardinality management, including strategic label design, aggregation techniques, sampling methods, data lifecycle policies, cardinality-aware tooling, and data partitioning approaches. Through case studies and research findings, the article demonstrates how organizations have successfully implemented these strategies to maintain essential visibility while dramatically improving system performance and reducing infrastructure costs. The work concludes with guidance on monitoring cardinality itself as a critical operational metric to ensure sustainable observability practices.
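As a toy illustration of the aggregation and label-design strategies listed above, the following sketch (with hypothetical metric labels) shows how dropping one unbounded label collapses series cardinality:

```python
# Toy illustration of cardinality reduction via label design; the metric and
# label names are hypothetical, not taken from the article.
from itertools import product

# Hypothetical label sets for an HTTP request counter.
endpoints = ["/login", "/search", "/checkout"]
status = ["2xx", "4xx", "5xx"]
user_ids = [f"user-{i}" for i in range(10_000)]   # unbounded, per-user label

raw_series = set(product(endpoints, status, user_ids))
aggregated = set(product(endpoints, status))       # user_id dropped at ingest

print(len(raw_series))   # 90000 unique time series
print(len(aggregated))   # 9 unique time series
```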
- Research Article
- 10.1016/j.dib.2025.111666
- May 14, 2025
- Data in Brief
- Elena Rozzi + 2 more
Life cycle inventory dataset for energy production and storage technologies: Standardized metrics for environmental modeling
- Research Article
- 10.1371/journal.pone.0322202
- May 8, 2025
- PloS one
- Huijing Zhai + 1 more
To address the shortcomings of power engineering cost management in precision and dynamic control in big data environments, this paper proposes a dynamic graph convolutional neural network (DynGCN) based on building information modelling (BIM) and spatiotemporal modelling. The study uses the characteristics of BIM technology to carry out cost management across the whole life cycle of power engineering and realizes dynamic control of the cost. In addition, the DynGCN method is used to predict the cost of each engineering link, so as to optimize the construction scheme of the whole project. The results show that whole-life-cycle data management supported by BIM technology improves the real-time monitoring and adjustment of cost, and that the DynGCN method greatly improves the accuracy of cost prediction, achieving a prediction accuracy of 96%, closest to the real cost values.
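For intuition, a single static graph-convolution step of the kind DynGCN builds on can be sketched as below; the temporal dynamics of the authors' DynGCN are not reproduced, and the adjacency matrix and cost features are invented for illustration.

```python
# A generic (static) graph-convolution step for intuition only; this is not the
# authors' DynGCN, and the adjacency and feature values are hypothetical.
import numpy as np

def gcn_layer(A: np.ndarray, H: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One GCN propagation: relu(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)

# 4 project activities linked in a small dependency graph, 3 cost features each.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 1], [0, 1, 0, 1], [0, 1, 1, 0]], float)
rng = np.random.default_rng(1)
H = rng.normal(size=(4, 3))      # per-activity cost features
W = rng.normal(size=(3, 2))      # layer weights (random here, learned in practice)
print(gcn_layer(A, H, W).shape)  # (4, 2) updated activity embeddings
```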
- Research Article
- 10.3390/su17094205
- May 7, 2025
- Sustainability
- Nishan Adhikari + 2 more
The Paris Agreement’s pressing global mandate to limit global warming to 1.5 degrees Celsius above pre-industrial levels by 2030 has placed immense pressure on energy-consuming industries and businesses to deploy robust, advanced, and accurate monitoring and tracking of carbon footprints. This critical issue is examined through a systematic review of English-language studies (2015–2024) retrieved from three leading databases: Scopus (n = 1528), Web of Science (n = 1152), and GreenFILE (n = 271). The selected literature collectively highlights key carbon footprint tracking methods. The resulting dataset is subjected to bibliometric and scientometric analysis after refinement through deduplication and screening, based on the PRISMA framework. Methodologically, the analysis integrated the following: (1) evaluating long-term trends via the Mann–Kendall and Hurst exponent tests; (2) exploring keywords and country-based contributions using VOSviewer (v1.6.20); (3) applying Bradford’s law of scattering and Leimkuhler’s model; and (4) investigating authorship patterns and networks through Biblioshiny (v4.3.0). Further, based on eligibility criteria, 35 papers were comprehensively reviewed to investigate the emerging carbon footprint tracking technologies such as life cycle assessment (LCA), machine learning (ML), artificial intelligence (AI), blockchain, and data analytics. This study identified three main challenges: (a) lack of industry-wide standards and approaches; (b) real-time tracking of dynamic emissions using LCA; and (c) need for robust frameworks for interoperability of these technologies. Overall, our systematic review identifies the current state and trends of technologies and tools used in carbon emissions tracking in cross-sectors such as industries, buildings, construction, and transportation and provides valuable insights for industry practitioners, researchers, and policymakers to develop uniform, integrated, scalable, and compliant carbon tracking systems and support the global shift to a low-carbon and sustainable economy.
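For readers unfamiliar with the trend tests mentioned above, a simplified Mann-Kendall check is sketched below; it omits tie and autocorrelation corrections, and the yearly publication counts are invented for illustration.

```python
# Simplified Mann-Kendall trend check of the kind applied to publication counts;
# this toy version omits tie/variance corrections, and the data are invented.
import math

def mann_kendall(series):
    n = len(series)
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1) for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18           # no tie correction
    z = 0.0 if s == 0 else (s - math.copysign(1, s)) / math.sqrt(var_s)
    return s, z                                       # |z| > 1.96: trend at the 5% level

annual_publications = [3, 4, 4, 6, 9, 11, 15, 22, 30, 41]
print(mann_kendall(annual_publications))
```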
- Research Article
- 10.1108/lm-06-2024-0070
- May 2, 2025
- Library Management
- Tinyiko Vivian Dube
Purpose: This study aims to conceptualize the application and management of research data in academic libraries through institutional repositories. The objectives of the study are to determine the role of academic libraries in managing research data, to explore the ethical issues related to research data management (RDM) services and to determine stakeholders involved in the success of RDM. Design/methodology/approach: The study employs a qualitative research design within the interpretive paradigm, using content analysis to explore RDM in academic libraries and institutional repositories. The research aims to determine the role of academic libraries in managing research data, explore ethical issues related to RDM services and identify key stakeholders. Literature was sourced from databases like Emerald Insight, Scopus and Google Scholar, focusing on publications from 2020 to 2024. Case studies from institutions such as the University of Pretoria and Stellenbosch University illustrated practical RDM implementations. Ethical considerations were strictly adhered to, ensuring proper citation and adherence to RDM guidelines. Findings: The reviewed literature established the significance of managing research data through institutional repositories while highlighting the research data lifecycle, stakeholders involved in the success of RDM and ethical issues related to RDM services. RDM involves stakeholders such as institutional researchers, government and funding agencies, university leadership and research support units. Research limitations/implications: This study demonstrated the importance of effective RDM practices in enhancing transparency, reproducibility and efficiency in academic research. Institutional repositories play a crucial role in preserving and making research data accessible, thereby promoting interdisciplinary collaboration and increasing citation rates. Practical implications: The study provided actionable recommendations for academic libraries to support researchers in complying with RDM policies through training, clear guidelines and user-friendly repository interfaces. These strategies enhance the effectiveness of RDM practices and ensure regulatory compliance. Social implications: The study underscores the need for regulatory frameworks that promote open science and data sharing while ensuring ethical guidelines for data privacy and informed consent. It also highlights the economic and commercial benefits of well-managed research data, such as facilitating industry-academia collaboration. Originality/value: This study is significant as it contributes to the body of knowledge and theoretically motivates how institutional repositories can be of value in preserving research data by highlighting the benefits and significance of sharing research data. Proper RDM increases the opportunities for funders, institutions, publishers and libraries to redesign policies that govern research data sharing.
- Research Article
- 10.30574/wjarr.2025.26.1.0538
- Apr 30, 2025
- World Journal of Advanced Research and Reviews
- Ifeyinwa Nkemdilim Obiokafor + 2 more
Globally, an increase in cyber-attacks and data breaches in the coming years has been predicted by reputable sources. The latest statistics from Cybersecurity Ventures project that successful cyber-attacks could cost businesses over $10.5 trillion annually by 2025. In this context, information systems and software solutions have to move away from traditional practices that incorporate security controls only at later stages of development. 'Privacy by Design' (PbD) is attracting considerable resources and focus, encouraging data protection as a best practice applicable across the data lifecycle. However, the implementation of the PbD principle remains a challenge. Numerous developers cannot strike a balance between functionality and privacy due to insufficient guidelines and resources. Many organizations with appropriate leadership have achieved IT practices that effectively integrate PbD, while others are constantly trying to catch up. This paper aims to fill these gaps by bringing together research outcomes, statistics, and best practices for incorporating privacy into information systems design practice. This contribution will assist IT practitioners in mitigating data breaches and adhering to changing privacy laws, which in the long run improves user confidence and data security in the systems being used.