Articles published on Life Cycle Data
- Research Article
- 10.7191/jeslib.937
- Jul 9, 2025
- Journal of eScience Librarianship
- Kristin Briney
Objective: There are limited opportunities and resources for data information literacy at small universities, requiring instructors to make the most of the time they have in the classroom. This article describes the creation of a collection of data management exercises, collectively called The Research Data Management Workbook, which supplement one-shot instruction and help students implement specific data management tasks. Methods: Exercises were developed using backward design and authentic assessment, with the goal of scaffolding data management implementation while allowing for customization to research workflows. Exercises cover activities across the data lifecycle and take the form of worksheets, checklists, and procedures. The exercises were collectively formatted as a book using the tool bookdown. Results: For a one-hour library session, students can work through one or two exercises during class, and the instructor can refer to specific exercises for follow-up on various data management topics. The exercises have also proved useful for consultations, as a researcher can develop an understanding of how to address their data problem ahead of a more in-depth consultation. Conclusions: The workbook has been a useful supplement to limited data management instruction time at a small university. Further work needs to be done to quantify the efficacy of this form of data information literacy.
- Research Article
- 10.1057/s41599-025-05437-z
- Jul 9, 2025
- Humanities and Social Sciences Communications
- Jianbo Zhao + 9 more
Social media platforms, as the primary carriers of online rumor dissemination, enable users to gain profits from the platform through activities such as content creation, browsing, and sharing. However, the complexity of data rights and the attribution of responsibility hinder the comprehensive tracing of rumor propagation paths and the precise identification of data infringement subjects. By reusing 92 circulation processes from 13 data lifecycle models, this paper abstracts the circulation process of online rumor data elements, standardizes the “five rights separation” framework for data rights confirmation among ternary data subjects, and defines a Rights-and-Interests-Attributed Data Element. Through integration with the PROV-O and ProVOC models, this paper constructs PROV-OCC—an ontological model for data with rights and interests provenance in rumor circulation—comprising 3 parent classes and 32 object properties. It implements a seven-element semantic representation based on W7 provenance technology and validates the model through ontological reasoning over knowledge-graph representations of typical rumor cases, verifying its effectiveness in tracing data rights changes, infringement subjects, and propagation paths. The data provenance model supports the recovery and compensation of infringement-related profits, enabling the timely restoration of compromised trust and order for governments and platforms.
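The PROV-OCC ontology described above extends the standard W3C PROV-O vocabulary. As a rough illustration of how provenance triples for a rumor-circulation event can be expressed in that vocabulary, here is a minimal sketch using Python's rdflib; the rumor-specific resources (the `ex:` namespace, `post_001`, `share_event_001`, `user_42`) are hypothetical placeholders and are not the classes or properties defined in the paper.

```python
# Minimal sketch: expressing provenance of a shared rumor post with rdflib and PROV-O.
# The ex: resources are illustrative placeholders, not the PROV-OCC terms from the paper.
from rdflib import Graph, Namespace, Literal, RDF
from rdflib.namespace import PROV, XSD

EX = Namespace("http://example.org/rumor#")

g = Graph()
g.bind("prov", PROV)
g.bind("ex", EX)

post = EX["post_001"]          # the rumor content (an entity)
share = EX["share_event_001"]  # the sharing action (an activity)
user = EX["user_42"]           # the sharing account (an agent)

g.add((post, RDF.type, PROV.Entity))
g.add((share, RDF.type, PROV.Activity))
g.add((user, RDF.type, PROV.Agent))

# Core PROV-O relations: the share event used the post and was carried out by the user.
g.add((share, PROV.used, post))
g.add((share, PROV.wasAssociatedWith, user))
g.add((share, PROV.startedAtTime, Literal("2025-01-01T12:00:00", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```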
- Research Article
- 10.37547/tajet/volume07issue07-05
- Jul 7, 2025
- The American Journal of Engineering and Technology
- Vinod Kumar Enugala
This study explores the application of blockchain technology to enhance the integrity and reliability of concrete test logs in civil engineering projects. Traditional methods of recording and managing concrete test data are susceptible to tampering, errors, and loss, which can compromise structural safety and project outcomes. The proposed solution leverages cryptographic hashing and immutable distributed ledgers to securely timestamp each test entry, ensuring tamper-proof records with verifiable audit trails. The system integrates seamlessly with existing concrete testing workflows by capturing test data directly from devices, encrypting it, and submitting hashes to a blockchain network. Smart contracts automate verification processes, improving transparency and accountability. The study further evaluates the solution’s security performance, transaction efficiency, and usability through simulation and prototype testing. Results indicate significant improvements in data immutability, regulatory compliance, and long-term storage capabilities compared to traditional systems. However, challenges such as transaction latency, scalability, industry resistance, and data privacy require careful mitigation through hybrid blockchain models, targeted training, and regulatory engagement. Future directions include integration with Internet of Things (IoT) sensors for real-time monitoring, AI-driven predictive analytics, and interoperability with Building Information Modeling (BIM) systems. This blockchain-enabled approach promises to transform construction quality assurance by embedding security and transparency throughout the data lifecycle, fostering safer, more accountable, and digitally advanced civil engineering practices.
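The core mechanism this abstract describes, hashing each test entry and anchoring the hash in an immutable, ordered record, can be illustrated with a short hash-chaining sketch. This is not the authors' system: the field names, the in-memory chain, and the verification loop are illustrative stand-ins for whatever schema and blockchain network a real deployment would use.

```python
# Minimal sketch of tamper-evident concrete test logging via hash chaining.
# Field names and the in-memory "chain" are illustrative; a real deployment would
# submit each entry hash to a blockchain network instead of a local list.
import hashlib
import json
import time

def entry_hash(entry: dict, prev_hash: str) -> str:
    """Hash the test entry together with the previous hash so any edit breaks the chain."""
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

chain = []          # list of (entry, hash) pairs; the last hash anchors the next entry
prev = "0" * 64     # genesis value

for test in [
    {"specimen_id": "C-001", "age_days": 7, "strength_mpa": 24.3},
    {"specimen_id": "C-001", "age_days": 28, "strength_mpa": 38.1},
]:
    test["timestamp"] = time.time()
    h = entry_hash(test, prev)
    chain.append((test, h))
    prev = h

# Verification: recompute hashes in order; a single altered value changes every later hash.
prev = "0" * 64
for test, recorded in chain:
    assert entry_hash(test, prev) == recorded, "tampering detected"
    prev = recorded
print("log verified:", len(chain), "entries")
```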
- Research Article
- 10.38124/ijisrt/24aug1034
- Jul 7, 2025
- International Journal of Innovative Science and Research Technology
- L Chingwaru + 2 more
In an era of rapidly growing data, effective data lifecycle management has become crucial for organizations. This paper addresses the challenge of identifying and classifying data columns as either demographic or transactional across various systems, where column names may differ significantly (e.g., "Sex" in one system and "Gender" in another). The purpose of this research is to develop a model that can accurately classify these data columns, enabling automated data retention and destruction processes. The proposed model leverages intelligent process automation and process mining to identify and categorize data, allowing transactional data to be archived automatically after a specified timeframe. By implementing this model, organizations can improve their data management efficiency, ensuring compliance with data retention policies while optimizing storage use.
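The classification problem described here, recognizing that differently named columns such as "Sex" and "Gender" refer to the same demographic attribute, can be approximated with a synonym vocabulary plus fuzzy string matching. A minimal sketch under those assumptions follows; the vocabularies and the 0.8 threshold are illustrative and are not the authors' model.

```python
# Minimal sketch: classify column names as demographic or transactional using a
# synonym vocabulary plus fuzzy matching. Vocabularies and the 0.8 cutoff are
# illustrative assumptions, not the model described in the paper.
from difflib import SequenceMatcher

DEMOGRAPHIC = {"sex", "gender", "age", "date_of_birth", "nationality", "marital_status"}
TRANSACTIONAL = {"amount", "transaction_date", "order_id", "invoice_number", "payment_method"}

def best_match(name: str, vocabulary: set[str]) -> float:
    """Return the highest fuzzy similarity between a column name and a vocabulary."""
    name = name.lower().replace(" ", "_")
    return max(SequenceMatcher(None, name, term).ratio() for term in vocabulary)

def classify_column(name: str, threshold: float = 0.8) -> str:
    demo_score = best_match(name, DEMOGRAPHIC)
    trans_score = best_match(name, TRANSACTIONAL)
    if max(demo_score, trans_score) < threshold:
        return "unknown"
    return "demographic" if demo_score >= trans_score else "transactional"

for col in ["Sex", "Gender", "Order ID", "InvoiceNumber", "favourite_colour"]:
    print(col, "->", classify_column(col))
```

Columns classified as transactional could then be routed to the automated archiving and destruction workflow the paper describes.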
- Research Article
- 10.21827/ejlw.14.42513
- Jul 4, 2025
- European Journal of Life Writing
- Emiliano Degl'Innocenti + 2 more
This contribution explores how the work carried out by the DARIAH-IT research team at the Italian Consiglio Nazionale delle Ricerche, Istituto Opera del Vocabolario Italiano (OVI-CNR), supports biographical research on historical figures by developing an interoperable digital ecosystem for humanities and cultural heritage research. In this context, the RESTORE project was created with the aim of supporting biographical as well as philological and linguistic research on Francesco di Marco Datini, a merchant from Prato (Italy), who established a successful commercial network across Europe during the fourteenth century. RESTORE has gathered several datasets from multiple sources, including letters and other auto/biographical documents (both images and transcriptions), archival and accounting records, catalogs, and digital representations of artworks commissioned or owned by Datini and his family. Subsequently, a platform has been set up that brings together digital resources from several cultural institutions, allowing researchers to track relationships and connections among the data gathered from these materials in a single integrated semantic knowledge base aggregating Linked Open Data. This enables researchers to take a broad prosopographic approach, while also facilitating an in-depth exploration of specific aspects, such as welfare and religion. The platform also enables users to study the everyday life of historical figures by analyzing, for instance, the correspondence between Francesco Datini and his wife, Margherita, and other members of the family or collaborators. The project provides access to several resource types, including images and transcriptions, which offer different levels of detail of information. Additionally, the RESTORE research team has addressed the challenges faced by cultural institutions in managing the data lifecycle of digital resources, aiming to build a FAIR (Findable, Accessible, Interoperable, Reusable) knowledge base enriched with scholarly information. This approach enhances research opportunities and enables integrated storytelling across diverse data collections. The models and solutions developed during this process are designed to be replicable by other institutions, ensuring the project’s long-term sustainability.
- Research Article
- 10.20396/rdbci.v23i00.8676788
- Jul 3, 2025
- RDBCI: Revista Digital de Biblioteconomia e Ciência da Informação
- Gustavo Camossi + 2 more
Introduction: In the digital era, Search Engine Optimization (SEO) techniques are essential to ensure the visibility and relevance of online content. With the exponential increase in data shared on the internet, managing the Data Life Cycle has become crucial. This study explores how each stage of the Data Life Cycle – from collection, through storage and retrieval, to data disposal – can impact the practices and effectiveness of SEO techniques. Objective: The objective is to investigate the relationship between the Data Life Cycle and SEO techniques and practices, aiming to understand how each stage of this cycle, along with the associated cross-cutting factors, allows for the analysis of the effectiveness and efficiency of search engine optimization strategies. Methodology: The study adopts a descriptive methodological approach focused on SEO, discussing data access by optimization experts within the context of search engines. Results: The results present the connections between the phases of collection, storage, retrieval, and disposal through the cross-cutting factors present in all phases of the Data Life Cycle with SEO techniques. Conclusion: Finally, the strategic integration of the Data Life Cycle and SEO techniques is essential for successfully navigating the digital environment. By considering each phase of the Data Life Cycle through the lens of Search Engine Optimization, organizations can not only improve their online presence but also ensure that their content is valuable, accessible, and compliant with ethical and legal expectations.
- Research Article
- 10.2118/0725-0007-jpt
- Jul 1, 2025
- Journal of Petroleum Technology
- Annorah Lewis + 3 more
Saltwater disposal (SWD) well operations are energy-intensive and typically lack continuous performance visibility. This article presents a results-driven case study from an ongoing collaboration between a midstream oil and gas company and Neuralix Inc., in which artificial intelligence (AI) and first principles-based time series analytics were used to deliver significant operational cost reductions. Neuralix's AI system delivered up to a 40% electricity savings across a subset of injection sites in Phase 1, with projected annualized improvements exceeding 40% once scaled. This work demonstrates how interpretability, domain specificity, and first principles thinking can unlock actionable value from complex supervisory control and data acquisition (SCADA) environments.
Introduction
SWD is a cornerstone of produced-water management in oil and gas operations. However, it comes with significant power costs due to high-pressure injection pumping. As a midstream operator managing several sites across Oklahoma and Texas, the client operator aimed to reduce electricity usage while improving operational oversight. Neuralix achieved this through the deployment of an AI-powered key performance indicator (KPI)-monitoring and optimization system. The collaboration sought to
- Reduce kWh/bbl and cost/bbl.
- Identify underperforming pump configurations.
- Deliver transparent, actionable insights to engineers and field teams.
First Principles Approach
Neuralix’s approach to solving complex operational challenges is deeply rooted in first principles thinking. Instead of relying on opaque, "black box" AI models, Neuralix breaks each challenge down into its fundamental components—physics, chemistry, and operational constraints—and builds solutions from the ground up. In the case study project, this meant
- Deconstructing pump energy inefficiencies to their core thermodynamic and hydraulic causes.
- Designing interpretable analytics around kWh/bbl, $/bbl, and flow rate as governing KPIs.
This method allowed the solution developer to pinpoint why certain pumps consume more power per bbl and what operational conditions (e.g., abrupt changes, poor filter quality, suboptimal frequencies) are driving that behavior. It is particularly critical in SWD operations, where SCADA data is noisy, multivariate, and often lacks labels.
Technical Implementation, Data Ingestion, and Structuring
Neuralix’s proprietary Data Lifecycle Templatization (DLT) system standardized ingestion of time series data from diverse SCADA systems. Core parameters included:
- Motor frequency (Hz)
- Flow rate (B/D)
- Voltage and amperage
- Pressure readings
- kWh pricing integration
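The governing KPIs named in the excerpt, kWh/bbl and $/bbl, are straightforward to compute from the listed SCADA parameters. A minimal sketch follows, assuming regularly sampled readings and a flat electricity tariff; the column names, sampling interval, and tariff are illustrative and are not Neuralix's DLT pipeline.

```python
# Minimal sketch: computing kWh/bbl and $/bbl KPIs from SCADA-style time series.
# Column names, the 1-minute sampling interval, and the flat tariff are assumptions,
# not details of the system described in the article.
import pandas as pd

SAMPLE_MINUTES = 1          # sampling interval of the SCADA historian (assumed)
PRICE_PER_KWH = 0.08        # flat electricity tariff in $/kWh (assumed)

readings = pd.DataFrame({
    "power_kw": [110.0, 112.5, 108.0, 115.0],     # instantaneous pump power draw
    "flow_bpd": [9600, 9650, 9400, 9700],          # injection rate in barrels per day
})

# Convert each sample to energy (kWh) and volume (bbl) over its interval.
readings["energy_kwh"] = readings["power_kw"] * (SAMPLE_MINUTES / 60)
readings["volume_bbl"] = readings["flow_bpd"] * (SAMPLE_MINUTES / (24 * 60))

kwh_per_bbl = readings["energy_kwh"].sum() / readings["volume_bbl"].sum()
cost_per_bbl = kwh_per_bbl * PRICE_PER_KWH

print(f"kWh/bbl = {kwh_per_bbl:.3f}, $/bbl = {cost_per_bbl:.4f}")
```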
- Research Article
- 10.1109/tnnls.2024.3462723
- Jul 1, 2025
- IEEE transactions on neural networks and learning systems
- Jianghong Zhou + 1 more
Rotating machinery is continuously monitored in practical applications. However, historical life-cycle data cannot always be preserved due to limited storage resources; meanwhile, the on-site computing platform cannot process a large number of monitoring samples. This poses a great challenge for remaining useful life (RUL) prediction. Thus, continuous learning (CL) is introduced into the RUL prediction model to achieve knowledge accumulation and dynamic updating. To improve the performance of continuous RUL prediction, this article presents a new RUL prediction methodology with a multistage attention convolutional neural network (MSACNN) and a knowledge weight constraint (KWC). First, an improved multihead full-channel sight self-attention (MFCSSA) mechanism is proposed to capture the global degradation information across all channels. MSACNN is then constructed by embedding MFCSSA, the squeeze-and-excitation (SE) mechanism, and the convolutional block attention module (CBAM) into different stages of feature extraction, which enables it to capture global degradation information and progressively refine the feature representations. The KWC mechanism, based on the importance of weight parameters and gradient information, is proposed and integrated into MSACNN to achieve continuous RUL prediction. The proposed KWC can effectively alleviate catastrophic forgetting in CL. Finally, experimental results on life-cycle bearing and gear datasets demonstrate that MSACNN has higher accuracy than existing prediction methods. Moreover, the KWC mechanism performs better than typical CL methods in retaining previously learned knowledge while acquiring new task knowledge. Therefore, the proposed methodology is better suited to continuous RUL prediction tasks than advanced methods of the same kind.
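Among the attention components the method embeds, the squeeze-and-excitation (SE) mechanism is the simplest to illustrate. Below is a minimal PyTorch sketch of a 1-D SE block of the kind commonly applied to multichannel condition-monitoring signals; this is the generic SE module from the literature, not the paper's MFCSSA attention or KWC constraint.

```python
# Minimal sketch of a 1-D squeeze-and-excitation (SE) block, one of the attention
# mechanisms the paper embeds. Generic module from the literature, not the paper's
# MFCSSA attention or KWC mechanism.
import torch
import torch.nn as nn

class SEBlock1d(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool1d(1)           # squeeze: global average over time
        self.fc = nn.Sequential(                      # excitation: per-channel gating weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        b, c, _ = x.shape
        w = self.pool(x).view(b, c)                   # (batch, channels)
        w = self.fc(w).view(b, c, 1)                  # channel attention weights in [0, 1]
        return x * w                                  # recalibrate channel responses

se = SEBlock1d(channels=8)
features = torch.randn(2, 8, 128)                     # e.g., 8 feature channels of a bearing signal
print(se(features).shape)                             # torch.Size([2, 8, 128])
```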
- Research Article
- 10.22214/ijraset.2025.71988
- Jun 30, 2025
- International Journal for Research in Applied Science and Engineering Technology
- Janakiraman S
Junk files, including outdated backups, redundant document versions, and orphaned objects, accumulate in cloud storage, leading to inefficiencies in data retrieval, increased latency, and higher storage costs. As cloud applications grow in scale, managing and optimizing storage resources becomes crucial for maintaining performance and reducing operational overhead. The problem of unnecessary files taking up valuable space is especially critical in cloud environments where efficient resource management is essential for smooth operations. This project proposes a solution to optimize cloud data management by integrating automated cleanup, structured data lifecycle management, and advanced deduplication techniques. Regex algorithms will drive the cleanup process, identifying and eliminating obsolete files regularly to ensure that only relevant data is stored. Additionally, the Data Life Cycle Guard Scheme provides a framework for managing data according to predefined compliance rules, improving overall data governance and integrity. These measures aim to streamline data processes and maintain the efficiency of cloud applications. Fuzzy Matching techniques will further enhance the deduplication process, improving accuracy in identifying and removing duplicate files, thus optimizing storage space. By automating the identification of unnecessary files and improving data lifecycle management, this system helps reduce storage costs, minimize latency, and ensure that cloud applications run more efficiently. The solution is designed to set new standards in cloud data management, optimizing resource utilization and ensuring long-term sustainability for cloud-based environments.
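The two mechanisms this abstract names, regex-driven detection of obsolete files and fuzzy matching for near-duplicate detection, can be sketched in a few lines. The patterns, file names, and the 0.9 similarity threshold below are illustrative assumptions, not the project's actual rules.

```python
# Minimal sketch of the two mechanisms described: regex patterns to flag obsolete
# files and fuzzy matching to flag near-duplicate names. Patterns and the 0.9
# threshold are illustrative assumptions, not the project's actual rules.
import re
from difflib import SequenceMatcher
from itertools import combinations

OBSOLETE_PATTERNS = [
    re.compile(r".*\.bak$", re.IGNORECASE),              # old backup files
    re.compile(r".*_old(_\d+)?\.\w+$", re.IGNORECASE),   # "_old" / "_old_2" versions
    re.compile(r"^~\$.*"),                               # editor temp/lock files
]

def is_obsolete(name: str) -> bool:
    return any(p.match(name) for p in OBSOLETE_PATTERNS)

def near_duplicates(names: list[str], threshold: float = 0.9) -> list[tuple[str, str]]:
    """Return pairs of file names whose similarity exceeds the threshold."""
    return [
        (a, b) for a, b in combinations(names, 2)
        if SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold
    ]

files = ["report_final.docx", "report_final_v2.docx", "data_old.csv", "model.bak", "notes.txt"]
print("obsolete:", [f for f in files if is_obsolete(f)])
print("near-duplicates:", near_duplicates(files))
```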
- Research Article
- 10.1371/journal.pone.0317070
- Jun 27, 2025
- PloS one
- Bradley Wade Bishop + 4 more
Physical collections provide the tangible objects that when analyzed become data informing all sciences. Physical collection managers aim to make physical objects discoverable, accessible, and reusable. The volume and variety of physical collections acquired, described, and stored across decades, and in some cases centuries, results from large public and private investments. The purpose of this study is to understand the curation perceptions and behaviors of physical collection managers across domains to inform cross-disciplinary research data management. Ten focus groups were conducted with thirty-two participants across several physical collection communities. Participants responded to open-ended questions about the data lifecycle of their physical objects. Results indicated that physical collections attempt to use universal metadata and data storage standards to increase discoverability, but interdisciplinary physical collections and derived data reuse require more investments to increase reusability of these invaluable items. This study concludes with a domain-agnostic discussion of the results to inform investment in cyberinfrastructure tools and services.
- Research Article
- 10.1088/1361-6501/ade552
- Jun 26, 2025
- Measurement Science and Technology
- Wei Zhang + 3 more
Collaborative model training with multiple clients is becoming an effective solution for prognostic problems, due to the scarcity of machine run-to-failure data in real industries. However, direct data sharing and centralized learning are usually not feasible in practice, since private local data generally cannot be exposed to other commercial clients. Furthermore, the machines at different clients mostly have different degradation patterns and failure modes, resulting in different data distributions. This poses great challenges for data-driven knowledge transfer across clients under data privacy constraints. To address these issues, this paper proposes a federated transfer learning method for remaining useful life prediction. The proposed prior alignment and feature adaptation schemes can achieve extraction of shared features across domains without simultaneous processing of the source and target data. The availability of target-domain data over the whole life cycle is not required by the proposed method, which enhances its applicability. Experiments on prognostic datasets are carried out for validation, and the results suggest the proposed method is promising for federated transfer learning problems in real industries.
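The federated setting described here, local training without exchanging raw run-to-failure data, is often built on a weighted parameter-averaging step. The sketch below shows that generic FedAvg-style aggregation only; it is not the paper's prior-alignment or feature-adaptation scheme, and the parameter values and client sizes are invented for illustration.

```python
# Minimal sketch of federated weight averaging (FedAvg-style), illustrating how
# clients contribute to a shared model without sharing raw data. Generic aggregation
# step only; not the paper's prior-alignment or feature-adaptation schemes.
import numpy as np

def federated_average(client_weights: list[dict], client_sizes: list[int]) -> dict:
    """Weight each client's parameters by its local sample count and average."""
    total = sum(client_sizes)
    keys = client_weights[0].keys()
    return {
        k: sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in keys
    }

# Two clients with hypothetical model parameters and dataset sizes.
client_a = {"layer1": np.array([0.2, 0.4]), "layer2": np.array([1.0])}
client_b = {"layer1": np.array([0.6, 0.0]), "layer2": np.array([3.0])}

global_model = federated_average([client_a, client_b], client_sizes=[800, 200])
print(global_model)   # layer1 -> [0.28, 0.32], layer2 -> [1.4]
```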
- Research Article
- 10.3390/su17135818
- Jun 24, 2025
- Sustainability
- Dominika Siwiec + 1 more
Sustainable development requires manufacturers to deliver products that are not only of good quality but also environmentally friendly. The materials used play a crucial role in product manufacturing. They not only directly determine quality but also influence the environment throughout the product’s life cycle. Therefore, the aim was to develop an innovative approach based on modelling the relationship between materials and product quality within the framework of the sustainable design of alternative product solutions. The model framework includes selected elements of Quality Function Deployment (QFD) and the first stage of life cycle assessment (LCA), namely, material acquisition and extraction. Its novelty lies in supporting the modelling process with life cycle data on materials characterised by their environmental burden. This modelling is based on their potential negative environmental impact. Using this foundation, it becomes possible to consider alternative design solutions in terms of both quality (i.e., fulfilling customer satisfaction during use) and environmental performance (i.e., reducing the negative impact throughout the life cycle). The proposed modelling process was also tested, demonstrating its effectiveness in the material analysis of products. The solution can be applied to any material and, with minor modifications, adapted to various product types.
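The underlying idea, weighing candidate materials simultaneously against customer-quality criteria and their life-cycle environmental burden, can be illustrated with a simple weighted score. All criteria, weights, and burden figures below are purely illustrative and are not the authors' model or data.

```python
# Minimal sketch of scoring alternative materials on QFD-style quality weights with
# an environmental-burden penalty from first-stage LCA data. All numbers are
# illustrative assumptions, not the paper's model or results.
materials = {
    # quality scores (1-5 per criterion) and a life-cycle burden index (lower is better)
    "aluminium": {"strength": 4, "durability": 4, "burden": 8.2},
    "steel":     {"strength": 5, "durability": 5, "burden": 2.0},
    "plastic":   {"strength": 2, "durability": 3, "burden": 3.5},
}
quality_weights = {"strength": 0.6, "durability": 0.4}   # derived from customer requirements
burden_weight = 0.5                                       # trade-off factor for environmental impact

def score(props: dict) -> float:
    quality = sum(quality_weights[c] * props[c] for c in quality_weights)
    return quality - burden_weight * props["burden"]

ranked = sorted(materials, key=lambda m: score(materials[m]), reverse=True)
for m in ranked:
    print(f"{m}: score = {score(materials[m]):.2f}")
```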
- Research Article
- 10.55640/ijns-05-01-11
- Jun 19, 2025
- International journal of networks and security
- Shriprakashan L Parapalli
In pharmaceutical manufacturing, compliance with 21 CFR Part 11 is critical for ensuring the integrity of electronic records and signatures within Manufacturing Execution Systems (MES). This paper proposes a comprehensive framework for implementing electronic signatures and data integrity controls in MES, aligning with 21 CFR Parts 11, 210, 211, ICH Q7, and EudraLex Volume 4 Annex 11. The methodology includes system design, user access controls, audit trails, and data lifecycle management, validated through risk-based assessments. Key findings demonstrate that tailored electronic signature configurations (none, single, or double) based on process criticality reduce compliance risks while enhancing operational efficiency. Automated data capture and true-copy transmission further ensure data integrity. Challenges such as manual data entry and generic account usage are addressed through procedural and technical controls. This study underscores the importance of data integrity by design, offering practical guidance for pharmaceutical manufacturers to achieve regulatory compliance and safeguard patient safety.
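One concrete element of this framework, tailoring electronic-signature requirements (none, single, or double) to process criticality, can be sketched as a simple policy mapping. The criticality levels and example process steps below are illustrative and are not the paper's actual risk model.

```python
# Minimal sketch of mapping process criticality to electronic-signature requirements
# (none / single / double). Criticality levels and example steps are illustrative,
# not the framework's actual risk assessment.
from enum import Enum

class Criticality(Enum):
    LOW = 1       # no impact on product quality or patient safety
    MEDIUM = 2    # indirect impact, reviewed downstream
    HIGH = 3      # direct impact on product quality or patient safety

SIGNATURE_POLICY = {
    Criticality.LOW: "none",
    Criticality.MEDIUM: "single",    # performer signs
    Criticality.HIGH: "double",      # performer signs, verifier counter-signs
}

process_steps = {
    "record ambient temperature": Criticality.LOW,
    "dispense API into blender": Criticality.HIGH,
    "print batch label": Criticality.MEDIUM,
}

for step, level in process_steps.items():
    print(f"{step}: {SIGNATURE_POLICY[level]} signature(s) required")
```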
- Research Article
- 10.15244/peai/205067
- Jun 18, 2025
- Polish Journal of Environmental Studies: Politics, Economics and Industry
- Han Zhang + 2 more
With the rapid development of metaverse technology, data security governance within its innovation ecosystem has become a critical challenge. This study explores data security collaborative governance decision-making aimed at maximizing energy efficiency and minimizing environmental impact throughout the data lifecycle. It proposes an innovative model for synergistic data security governance and green sustainability, constructing an efficient, secure, and sustainable governance framework. This provides theoretical support and practical guidance for the long-term healthy development of the metaverse ecosystem. Against the backdrop of the metaverse, this paper constructs three game models – Nash non-cooperative, Stackelberg leader-follower, and collaborative cooperative – based on complex systems theory, collaborative governance theory, and differential game theory. From a dynamic perspective, it examines the data security collaborative governance decision-making issues among three key entities: core enterprises, research institutions, and the government. Finally, numerical simulation analysis is conducted. The research findings reveal the following: (1) Government policy support and innovation subsidies can enhance the willingness of core enterprises and research institutions to engage in collaborative governance. Under government incentives and subsidies, the optimal benefits for participating entities and the overall benefits of the ecosystem are improved. (2) The three game mechanisms have heterogeneous effects on improving collaborative governance levels. When the initial level of collaborative governance is low, all three mechanisms can drive its improvement. As the level of collaborative governance increases, the leader-follower game under government incentives promotes better collaborative governance outcomes in the innovation ecosystem. When the level of collaborative governance is very high, only the collaborative cooperation mechanism can further enhance it. (3) Strategies in the cooperative game not only involve optimal decision analysis but also emphasize the promotion of ecosystem integration and optimization through synergistic mechanisms to achieve whole-process, dynamic data security governance, while promoting efficient resource utilization and environmental sustainability, and building a synergistic governance model between data security and green development.
- Research Article
- 10.37745/ejcsit.2013/vol13n49121
- Jun 15, 2025
- European Journal of Computer Science and Information Technology
- Karthik Ravva
The rapid evolution of AI-powered Business Intelligence (BI) solutions demands robust data governance frameworks that span the entire data lifecycle in cloud environments. Organizations face intensifying regulatory pressures, particularly from GDPR requirements concerning data erasure and storage limitations. The successful implementation of data governance requires integrated solutions addressing ownership, classification, ingestion, storage, and retention management. Through cloud-native tools and automated processes, enterprises can achieve both regulatory compliance and operational efficiency. The adoption of sophisticated data lifecycle management strategies, leveraging advanced capabilities from major cloud providers, enables organizations to maintain control over their data assets while supporting innovative AI-BI implementations. The integration of automated classification systems, intelligent storage management, and comprehensive audit mechanisms provides organizations with the necessary foundation to address evolving regulatory requirements while maximizing the value of their data assets. These frameworks enable seamless adaptation to changing compliance landscapes, ensuring sustainable growth and innovation in AI-powered business intelligence solutions.
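One piece of the lifecycle management discussed here, automated retention enforcement for classified records, can be sketched as a policy table applied to record metadata. The classification labels and retention periods below are illustrative assumptions; they are not GDPR text or any specific cloud provider's API.

```python
# Minimal sketch of retention-policy enforcement for classified records. Labels and
# retention periods are illustrative assumptions, not regulatory requirements or a
# cloud provider's lifecycle API.
from datetime import datetime, timedelta, timezone

RETENTION = {                      # maximum age before a record must be erased or archived
    "personal": timedelta(days=365),
    "operational": timedelta(days=3 * 365),
    "anonymized": None,            # no retention limit
}

records = [
    {"id": "r1", "classification": "personal", "created": datetime(2023, 1, 10, tzinfo=timezone.utc)},
    {"id": "r2", "classification": "operational", "created": datetime(2024, 6, 1, tzinfo=timezone.utc)},
    {"id": "r3", "classification": "anonymized", "created": datetime(2020, 3, 5, tzinfo=timezone.utc)},
]

now = datetime.now(timezone.utc)
for rec in records:
    limit = RETENTION[rec["classification"]]
    expired = limit is not None and now - rec["created"] > limit
    print(rec["id"], rec["classification"], "-> erase" if expired else "-> retain")
```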
- Research Article
- 10.1007/s11367-025-02482-3
- Jun 11, 2025
- The International Journal of Life Cycle Assessment
- Vincenzo Santucci + 3 more
Enhancing life cycle thinking in emerging sectors: the example of hydrogen technologies and the opportunities of the Life Cycle Data Network
- Research Article
- 10.55041/ijsrem49547
- Jun 10, 2025
- INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT
- Mr R Ramakrishnan
Junk files, including outdated backups, redundant document versions, and orphaned objects, accumulate in cloud storage, leading to inefficiencies in data retrieval, increased latency, and higher storage costs. As cloud applications grow in scale, managing and optimizing storage resources becomes crucial for maintaining performance and reducing operational overhead. The problem of unnecessary files taking up valuable space is especially critical in cloud environments where efficient resource management is essential for smooth operations. This project proposes a solution to optimize cloud data management by integrating automated cleanup, structured data lifecycle management, and advanced deduplication techniques. Regex algorithms will drive the cleanup process, identifying and eliminating obsolete files regularly to ensure that only relevant data is stored. Additionally, the Data Life Cycle Guard Scheme provides a framework for managing data according to predefined compliance rules, improving overall data governance and integrity. These measures aim to streamline data processes and maintain the efficiency of cloud applications. Fuzzy Matching techniques will further enhance the deduplication process, improving accuracy in identifying and removing duplicate files, thus optimizing storage space. By automating the identification of unnecessary files and improving data lifecycle management, this system helps reduce storage costs, minimize latency, and ensure that cloud applications run more efficiently. The solution is designed to set new standards in cloud data management, optimizing resource utilization and ensuring long-term sustainability for cloud-based environments.
Keywords: Cloud Storage Optimization, Junk File Removal, Automated Cleanup, Data Lifecycle Management, De-duplication, Fuzzy Matching, Regex Algorithm, Data Governance, Storage Efficiency, Resource Utilization, Cloud Performance, Latency Reduction, Obsolete File Detection, Cloud Cost Optimization, Redundant Data Elimination, Data Integrity, Structured Data Management, Cloud Resource Management, Data Cleanup Automation, File Metadata Analysis.
- Research Article
- 10.1093/comjnl/bxaf031
- Jun 9, 2025
- The Computer Journal
- Enting Guo + 2 more
Machine unlearning in the context of cybersecurity and privacy protection facilitates the removal of specific training data impacts from deep learning (DL) models, adhering to security, privacy, or compliance demands. However, traditional methods can only handle short-term, independent unlearning tasks. Conversely, real-world scenarios often involve extensive unlearning demands from users. Current methods fail to adequately address these demands due to substantial computational overhead and adverse impacts on inference accuracy, leaving the security and privacy of many users at risk. To navigate these challenges adeptly, we introduce the Multi-Agent Reinforcement Learning Data Lifecycle Management (MADLM) strategy. MADLM intricately examines the interactions between unlearning and continuous learning processes, enabling the postponement of certain tasks for combined execution to optimize computational resources. Concurrently, it employs strategic data management to maintain and enhance inference accuracy. Furthermore, by utilizing Multi-Agent Reinforcement Learning (MARL), MADLM dynamically orchestrates task scheduling to minimize computational demands, improve task response times, and bolster inference reliability, crucial for upholding stringent cybersecurity and privacy standards. Our evaluations of MADLM reveal substantial enhancements, including a 6% uplift in inference accuracy and a dramatic reduction in computational overhead to merely 12% of the original demands, effectively expanding the data security protections.
- Research Article
- 10.3390/systems13060447
- Jun 6, 2025
- Systems
- Wei Zhang + 6 more
To address the challenges of data fragmentation, inconsistent standards, and weak interactivity in oil and gas field surface engineering, this study proposes an intelligent delivery system integrated with three-dimensional dynamic modeling. Utilizing a layered collaborative framework, the system combines optimization algorithms and anomaly detection methods during data processing to enhance the relevance and reliability of high-dimensional data. The model construction adopts a structured data architecture and dynamic governance strategies, supporting multi-project secure collaboration and full lifecycle data management. At the application level, it integrates three-dimensional visualization and semantic parsing capabilities to achieve interactive display and intelligent analysis of cross-modal data. Validated through practical engineering cases, the platform enables real-time linkage of equipment parameters, documentation, and three-dimensional models, significantly improving data integration efficiency and decision-making capabilities. This advancement drives the transformation of oil and gas field engineering toward intelligent and knowledge-driven practices.