Articles published on Data Retrieval
- Research Article
- 10.3847/1538-3881/ae019a
- Nov 5, 2025
- The Astronomical Journal
- Stephen P Schmidt + 15 more
Abstract Sub-Neptunes are the most common type of planet in our galaxy. Interior structure models suggest that the coldest sub-Neptunes could host liquid water oceans underneath their hydrogen envelopes—sometimes called “hycean” planets. JWST transmission spectra of the ∼250 K sub-Neptune K2-18 b were recently used to report detections of CH₄ and CO₂, alongside weaker evidence of (CH₃)₂S (dimethyl sulfide, or DMS). Atmospheric CO₂ was interpreted as evidence for a liquid water ocean, while DMS was highlighted as a potential biomarker. However, these notable claims were derived using a single data reduction and retrieval modeling framework, which did not allow for standard robustness tests. Here, we present a comprehensive reanalysis of K2-18 b’s JWST NIRISS SOSS and NIRSpec G395H transmission spectra, including the first analysis of the second-order NIRISS SOSS data. We incorporate multiple well-tested data reduction pipelines and retrieval codes, spanning 60 different data treatments and over 250 atmospheric retrievals. We confirm the detection of CH₄ (≈4σ), with a volume mixing ratio range −2.14 ≤ log₁₀ CH₄ ≤ −0.53, but we find no statistically significant or reliable evidence for CO₂ or DMS. Finally, we assess the retrieved atmospheric composition using photochemical-climate and interior models, demonstrating that our revised composition of K2-18 b can be explained by an oxygen-poor mini-Neptune without requiring a liquid water surface or life.
- Research Article
- 10.34148/teknika.v14i3.1289
- Nov 3, 2025
- Teknika
- Gabriella Youzanna Rorong + 2 more
The escalating volume and often irregular structure of social assistance data pose significant challenges for efficient data retrieval in management systems. Traditional search algorithms, such as linear and binary search, frequently encounter limitations when handling these large-scale datasets. This research conducts a comparative study between two hybrid algorithms, Jump Binary Search (JBS) and Interpolation Extrapolation Search (IES), aiming to identify the most effective method for a web-based social assistance data management system. Evaluations were performed on a dataset comprising 480 names of social assistance recipients, measuring the number of iterations, execution time, and search accuracy. The results demonstrate IES's superiority over JBS in both iteration efficiency and execution speed. IES exhibited an execution time ranging from 0.002 to 0.006 ms, whereas JBS had an execution time ranging from 0.015 to 0.039 ms. Based on these findings, IES was successfully implemented into a Laravel-based application utilizing a MySQL database. This system is capable of executing searches in less than one second per request. This implementation significantly enhances the system's adaptability and provides an effective search solution for dynamic, large-scale data environments, offering rapid and efficient access to data.
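The probe rule that interpolation-style searches build on can be sketched in a few lines: instead of always testing the midpoint, estimate where the target would sit if the sorted keys were uniformly distributed. A minimal Python sketch, assuming numeric keys (string keys such as recipient names would first need a numeric mapping); this shows the interpolation step only, not the paper's exact JBS or IES hybrids:

```python
def interpolation_search(keys, target):
    """Search a sorted list of numbers; return the index of target or -1."""
    lo, hi = 0, len(keys) - 1
    while lo <= hi and keys[lo] <= target <= keys[hi]:
        if keys[lo] == keys[hi]:                 # constant run: direct check
            return lo if keys[lo] == target else -1
        # Probe where the target *should* sit under a uniform-key assumption.
        pos = lo + (target - keys[lo]) * (hi - lo) // (keys[hi] - keys[lo])
        if keys[pos] == target:
            return pos
        if keys[pos] < target:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1

recipient_ids = [3, 17, 42, 58, 73, 99, 120, 256]   # illustrative sorted keys
print(interpolation_search(recipient_ids, 73))       # -> 4
```

On near-uniform keys this probe needs O(log log n) iterations on average versus O(log n) for binary search, which is consistent with the iteration advantage reported for IES.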
- Research Article
- 10.3390/bdcc9110275
- Nov 1, 2025
- Big Data and Cognitive Computing
- Trung Tin Nguyen + 1 more
In this study, we present LizAI XT, an AI-powered platform designed to automate the structuring, anonymization, and semantic integration of large-scale healthcare data from diverse sources into one comprehensive table or any designated form, organized by diseases, clinical variables, and/or other defined parameters, going beyond the creation of a dashboard or visualization. We evaluate the platform’s performance on a cluster of four NVIDIA A30 24 GB GPUs, with 16 diseases, ranging from deadly cancers and COPD to common conditions such as ear infections, covering a total of 16,000 patients, ∼115,000 medical files, and ∼800 clinical variables. LizAI XT structures data from thousands of files into sets of variables for each disease in one file, achieving >95.0% overall accuracy, with exceptional results in complicated cases of cancers (99.1%), COPD (98.89%), and asthma (98.12%), without model overfitting. Retrieval of a single variable per patient is sub-second on minimal GPU power and can be improved significantly on more powerful GPUs. LizAI XT uniquely keeps data fully under client control, complying with strict regional and national data security and privacy regulations. Our advances complement the existing EMR/EHR, AWS HealthLake, and Google Vertex AI platforms for healthcare data management and AI development, with scalability and room for expansion at all levels: HMOs, clinics, pharma, and government.
- Research Article
- 10.11591/ijres.v14.i3.pp597-604
- Nov 1, 2025
- International Journal of Reconfigurable and Embedded Systems (IJRES)
- Moulai Khatir Ahmed Nassim + 1 more
Over the last thirty years, low-power field programmable gate arrays (FPGAs) have become increasingly common for implementing countless applications across the electronics industry. Thanks to their flexible design, strong compatibility, and parallel computing, and in contrast to CPU architectures, FPGAs offer high computing efficiency and are considered among the programmable devices with the lowest application risk and the shortest development cycle. This article details the design and implementation of a direct digital synthesis (DDS) signal generator on a Spartan-6 FPGA, focusing on high-quality sine wave generation. The system utilizes look-up tables (LUTs) and Block RAM (BRAM) for efficient storage and retrieval of sine wave data, while an 8-bit DAC0808 digital-to-analog converter (DAC) ensures precise waveform output. The FPGA's reconfigurable architecture allows real-time adjustments of frequency and phase, making the design suitable for various signal processing applications and modulation techniques such as binary phase shift keying (BPSK).
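As a rough illustration of the LUT-based DDS principle described above, here is a small software model in Python (a sketch of the architecture, not the authors' HDL): an N-bit phase accumulator advances by a tuning word each clock, and its top bits index a sine look-up table whose samples would drive the 8-bit DAC. The bit widths and clock rate are assumptions:

```python
import math

ACC_BITS, LUT_BITS, DAC_BITS = 32, 10, 8   # assumed widths

# Sine LUT scaled to the unsigned 8-bit DAC range, as BRAM would store it.
LUT = [round((math.sin(2 * math.pi * i / 2**LUT_BITS) + 1) / 2 * (2**DAC_BITS - 1))
       for i in range(2**LUT_BITS)]

def dds_samples(f_out_hz, f_clk_hz, n):
    """Yield n DAC codes for a sine of f_out_hz, given clock f_clk_hz."""
    tuning_word = round(f_out_hz * 2**ACC_BITS / f_clk_hz)  # f_out = M*f_clk/2^N
    phase = 0
    for _ in range(n):
        yield LUT[phase >> (ACC_BITS - LUT_BITS)]          # top bits address LUT
        phase = (phase + tuning_word) & (2**ACC_BITS - 1)  # wraps like hardware

# e.g. a 1 kHz sine synthesized from a 50 MHz clock:
samples = list(dds_samples(1_000, 50_000_000, 16))
```

Frequency is retuned simply by changing the tuning word, and a phase offset (e.g. for BPSK) is an addition into the accumulator, which is what makes the design attractive for real-time adjustment on an FPGA.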
- Research Article
- 10.24815/jr.v8i4.50002
- Oct 30, 2025
- Riwayat: Educational Journal of History and Humanities
- Abdi Ilyas + 2 more
This research, titled "Development of a Web-Based Clinic Information Service System Application Using PHP and MySQL", aims to optimize efficiency and service quality at the Atlantic Clinic through the implementation of a web-based information system. Analysis and testing indicate that the current conventional service method is no longer adequate, leading to inefficiencies in service processes. The newly designed system enhances administrative efficiency by accelerating patient data recording and retrieval. Additionally, the queue system provides better time estimates for patients, while the medical record system ensures the security and accuracy of patient data. The integrated cashier system improves payment transaction efficiency, reducing billing errors. Master data management, which includes medications, medical procedures, doctors, and users, becomes more structured and accurate. The implementation of this system is expected to bring significant improvements to the quality of service at the Atlantic Clinic, making it more modern, effective, and efficient.
- Research Article
- 10.55041/ijsrem53277
- Oct 29, 2025
- INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT
- Greeshma B + 1 more
Abstract This paper presents StockInsight, an end-to-end platform for interactive stock trend analysis and forecasting using Long Short-Term Memory (LSTM) deep learning. StockInsight integrates historical price data retrieval, preprocessing, model training, and web-based visualization. Historical stock data (Open, High, Low, Close, Volume) are obtained via the Yahoo Finance API for multiple equities over 10–15 years. Data cleaning and feature engineering (e.g., moving averages, sequence windows) are performed in Python using Pandas/NumPy. The forecasting core is a multivariate LSTM recurrent neural network with two LSTM layers followed by dense output layers, trained on 100-day rolling windows to predict next-day closing prices. We evaluate the model using standard regression metrics (RMSE, MAE) and find that it achieves substantial accuracy on test stocks. The web application (Flask-based) provides interactive charts of actual vs. predicted prices, forecast tables for the next 10 days, and trend visualizations. In experiments with tech-sector stocks, StockInsight’s LSTM forecasts closely track real price movements and improve on baseline ARIMA-like performance. Sample outputs include time-series plots of predicted vs. actual prices and tabulated multi-day forecasts. The system’s design, predictions, and user interface are discussed in detail, along with evaluation across varying market conditions, highlighting implications for retail traders. Keywords: Stock market forecasting, LSTM, time-series prediction, deep learning, data visualization, Yahoo Finance API, web analytics.
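A minimal sketch of the described pipeline, using the common yfinance wrapper for the Yahoo Finance API and a two-LSTM-layer network over 100-day windows; the ticker, layer sizes, and training settings are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np
import yfinance as yf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense

WINDOW = 100                                          # 100-day rolling window
df = yf.Ticker("AAPL").history(period="10y")          # Open/High/Low/Close/Volume
data = df[["Open", "High", "Low", "Close", "Volume"]].to_numpy()
data = (data - data.mean(axis=0)) / data.std(axis=0)  # simple standardization

# Build (samples, 100, 5) windows; the target is the next day's scaled close.
X = np.stack([data[i:i + WINDOW] for i in range(len(data) - WINDOW)])
y = data[WINDOW:, 3]

model = Sequential([
    LSTM(64, return_sequences=True, input_shape=(WINDOW, 5)),  # sizes assumed
    LSTM(32),
    Dense(16, activation="relu"),
    Dense(1),                                         # next-day closing price
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.1)
```

RMSE and MAE on a held-out split then correspond to the regression metrics the paper reports.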
- Research Article
- 10.5121/ijwest.2025.16401
- Oct 28, 2025
- International journal of Web & Semantic Technology
- Md Hasan Hafizur Rahman + 1 more
Geographic data plays a vital role in supporting modern location-based services on the World Wide Web (WWW). In Bangladesh, such data exists in structured, semi-structured, and unstructured formats, stored across government agencies, research institutions, and private organizations using diverse formats and protocols. This heterogeneity creates significant integration challenges, limiting operational efficiency and the development of applications ranging from navigation and logistics to personalized services and emergency response. Our research addresses this by transforming and integrating disparate datasets into a machine-understandable form. We modeled the complete administrative hierarchy of Bangladesh, from divisions to villages, generating 0.40 million RDF (Resource Description Framework) triples within a unified semantic repository, Geo-Bangladesh. This repository enables effortless integration and retrieval of geospatial data across all administrative levels. We further linked Geo-Bangladesh with related repositories, including educational institutions and citizen information, enabling the mapping and visualization of entities along with their locations. Using GeoSPARQL, we retrieved and inferred spatial and non-spatial data, demonstrating the repository’s usability, interoperability, and effectiveness. Unlike raw GeoSPARQL implementations or general-purpose ontologies such as GeoNames, Geo-Bangladesh is explicitly tailored to Bangladesh’s administrative structure and reconciles inconsistencies such as the 68 vs. 64 districts problem. Compared to OGC-compliant frameworks, our repository incorporates both semantic interoperability and localized reconciliation, demonstrating advantages in accuracy, query flexibility, and alignment with national data sources.
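To make the retrieval step concrete, here is a hedged sketch of querying such a repository with Python's rdflib; the file name, namespace, and property names are illustrative placeholders, not Geo-Bangladesh's actual vocabulary:

```python
from rdflib import Graph

g = Graph()
g.parse("geo-bangladesh.ttl", format="turtle")   # hypothetical RDF export

# Walk an assumed administrative hierarchy: village -> union -> upazila.
q = """
PREFIX geobd: <http://example.org/geo-bangladesh#>
SELECT ?village ?union ?upazila WHERE {
    ?village a geobd:Village ;
             geobd:partOf ?union .
    ?union   geobd:partOf ?upazila .
    ?upazila geobd:partOf geobd:district_Dhaka .
}
LIMIT 10
"""
for village, union_, upazila in g.query(q):
    print(village, union_, upazila)
```

A GeoSPARQL deployment would add spatial predicates (e.g. containment and distance functions) on top of such plain hierarchy queries.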
- Research Article
- 10.1051/0004-6361/202556838
- Oct 28, 2025
- Astronomy & Astrophysics
- D Álvarez-Ortega + 4 more
Very long baseline interferometry (VLBI) is a powerful observational technique that can achieve sub-milliarcsecond resolution. However, it requires complex and often manual post-correlation calibration to correct for instrumental, geometric, and propagation-related errors. Unlike connected-element interferometers, VLBI arrays typically deliver raw visibilities rather than science-ready data, and existing pipelines are largely semi-automated and reliant on user supervision. We aim to develop and validate a fully automated, end-to-end calibration pipeline for continuum VLBI data that operates without human intervention or prior knowledge of the dataset. The pipeline must be scalable to thousands of sources and suitable for heterogeneous archival observations, as required by initiatives such as the Search for Milli-Lenses (SMILE) project. We present the VLBI Pipeline for automated data Calibration using AIPS, or VIPCALs. Implemented in Python using ParselTongue, VIPCALs reproduces the standard AIPS calibration workflow in a fully unsupervised mode. The pipeline carries out data import, retrieval of system temperature and gain curve data, ionospheric and geometric corrections, fringe fitting, and amplitude and bandpass calibration steps. VIPCALs performs automatic reference antenna selection and calibrator identification, and it generates diagnostic outputs for inspection. It can be easily used through a simple graphical user interface. We validated VIPCALs on a representative sample of Very Long Baseline Array (VLBA) data corresponding to 1000 sources from the SMILE project. VIPCALs successfully calibrated observations of 955 of the 1000 test sources across multiple frequency bands. Over 91% of the calibrated datasets achieved successful fringe fitting on target in at least half of the solutions attempted. The median ratio of calibrated visibilities to initial total visibilities was 0.87. The average processing time was below 10 minutes per dataset when using a single-core configuration, demonstrating both efficiency and scalability. VIPCALs enables robust, reproducible, and fully automated calibration of VLBI continuum data, significantly lowering the entry barrier for VLBI science and making large-scale projects such as SMILE feasible.
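For context, here is a minimal ParselTongue fragment of the kind a pipeline like VIPCALs automates end to end (data import with FITLD, then fringe fitting with FRING); the user number, file name, and task parameters are illustrative, not VIPCALs' defaults:

```python
from AIPS import AIPS
from AIPSTask import AIPSTask
from AIPSData import AIPSUVData

AIPS.userno = 100                     # arbitrary AIPS user number

fitld = AIPSTask('FITLD')             # import correlated FITS-IDI data
fitld.datain = 'PWD:source.idifits'   # hypothetical input file
fitld.outname = 'SRC'
fitld.outclass = 'UVDATA'
fitld.go()

uvdata = AIPSUVData('SRC', 'UVDATA', 1, 1)

fring = AIPSTask('FRING')             # fringe fitting on the loaded data
fring.indata = uvdata
fring.solint = 2                      # solution interval in minutes (assumed)
fring.go()
```

VIPCALs wraps many such task calls, plus the calibration-table bookkeeping between them, behind its unsupervised logic.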
- Research Article
- 10.1177/18724981251388888
- Oct 28, 2025
- Intelligent Decision Technologies
- Dimitrios Karapiperis + 3 more
The discipline of Entity Resolution (ER), the process of identifying and linking records that refer to the same real-world entity, has been fundamentally reshaped by the adoption of high-dimensional vector embeddings. This transformation reframes ER as a large-scale Approximate Nearest Neighbor Search (ANNS) problem, making the choice of ANNS architecture a critical determinant of system performance. This paper provides a deep architectural comparison and a novel, large-scale empirical evaluation of the two dominant ANNS paradigms: graph-based methods (HNSW, DiskANN) and partition-based methods (Faiss-IVF+PQ, ScaNN). We introduce a new semi-synthetic benchmark tailored to the ER task, consisting of two one-million-vector datasets with a known ground truth. On this benchmark, we conduct a comprehensive evaluation, measuring not only total query time but also disaggregated blocking and matching times, alongside canonical ER quality metrics: precision, recall, and F1-score. Our findings reveal that partition-based methods, particularly ScaNN, offer superior performance in high-throughput, moderate-recall scenarios, while graph-based methods like HNSW and DiskANN are unequivocally superior for applications demanding the highest levels of matching quality. This work provides a nuanced, application-centric analysis that culminates in a set of actionable recommendations for practitioners designing modern data integration and retrieval systems.
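Both paradigms can be exercised in a few lines with Faiss, which provides IVF+PQ natively and ships an HNSW index comparable to the graph-based systems evaluated; the dataset and parameters (dimension, nlist, PQ size, HNSW M) below are illustrative, not the benchmark's settings:

```python
import faiss
import numpy as np

d, n = 128, 100_000
xb = np.random.rand(n, d).astype('float32')   # stand-in for record embeddings
xq = np.random.rand(10, d).astype('float32')  # query records to resolve

# Partition-based: inverted file with product quantization (IVF+PQ).
quantizer = faiss.IndexFlatL2(d)
ivfpq = faiss.IndexIVFPQ(quantizer, d, 1024, 16, 8)  # 1024 lists, 16x8-bit PQ
ivfpq.train(xb)
ivfpq.add(xb)
ivfpq.nprobe = 16                     # recall/throughput knob: lists probed

# Graph-based: hierarchical navigable small world (HNSW).
hnsw = faiss.IndexHNSWFlat(d, 32)     # M = 32 neighbors per node
hnsw.add(xb)

for index in (ivfpq, hnsw):
    distances, ids = index.search(xq, 10)   # top-10 candidates for matching
```

In ER terms, the `search` call is the blocking step; the returned candidate pairs then go to the matcher whose time the paper's disaggregated measurements capture separately.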
- Research Article
- 10.5194/nhess-25-4203-2025
- Oct 28, 2025
- Natural Hazards and Earth System Sciences
- Rónadh Cox + 16 more
Abstract. Coastal boulder deposits provide vital information on extreme wave events. They are crucial for understanding storm and tsunami impacts on rocky coasts, and for understanding long-term hazard histories. But the study of these deposits is still a young field, and growth in investigation has been rapid, without much contact between research groups. Therefore, inconsistencies in field data collection among different studies hinder cross-site comparisons and limit the applicability of findings across disciplines. This paper analyses field methodologies for coastal boulder deposit measurement based on an integrated database (ISROC-DB), and demonstrates inconsistencies in current approaches. We use the analysis as a basis for outlining protocols to improve data comparability and utility for geoscientists, engineers, and coastal planners. Using standardised and comprehensive data reporting with due attention to precision and reproducibility – including site characteristics, boulder dimensions, complete positional data, tide characteristics, and geodetic and local topographic datum information – will help ensure complete data retrieval in the field. Applying these approaches will further ensure that data collected at different times and/or locations, and by different groups, are useful not just for the study being undertaken, but for other researchers to analyse and reuse. We hope to foster development of the large, internally consistent datasets that are the basis for fruitful meta-analysis. This is particularly important given the increasing focus on long-term monitoring of coastal change. By recommending a common set of measurements, adaptable to available equipment and personnel, this work aims to support accurate and thorough coastal boulder deposit documentation, enabling broader applicability and future-proofed datasets. The field protocols described and recommended here also apply as best practices for coastal geomorphology fieldwork in general.
- Research Article
- 10.1021/acs.est.5c05955
- Oct 28, 2025
- Environmental science & technology
- Avan Kumar + 4 more
Life cycle assessment (LCA) quantifies environmental impacts from raw material extraction to end-of-life (EoL) treatment, yet its accuracy depends on reliable life cycle inventory (LCI) data. However, obtaining such data is time-consuming and requires an extensive literature review or access to databases that are often behind paywalls, which hinders transparent research. This study introduces a systematic framework leveraging a retrained large language model (LLM) to assist LCA practitioners in retrieving LCI data and insightful information about environmental impact. The framework follows a three-stage process: (i) a fine-tuned classification model identifies relevant documents, (ii) the LLaMA-2-7B model is pretrained on selected texts to inject domain knowledge into its database, and (iii) a fine-tuned Q&A model extracts LCI and environmental impact data from the scientific literature. The resulting LLM is termed "Sustain-LLaMA". We implement this framework in two cases: methanol production and plastic packaging EoL treatment. After retraining, the classification models achieve high accuracies on unseen data (0.850 for methanol, 0.952 for plastic packaging), effectively distinguishing relevant studies. The Q&A models with Retrieval-Augmented Generation (RAG) yield F1 scores of 0.823 for methanol and 0.855 for plastic studies. The Q&A models' performances are validated against the version of LLaMA-2-7B without retraining, ChatGPT-4o, and the USLCI database, demonstrating comparable or superior accuracy and efficiency. This framework enhances scalability and precision by automating LCI data retrieval, offering a promising tool for guiding the chemical and plastic industries toward sustainability.
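The Q&A stage's retrieval-augmented generation step can be sketched schematically in Python with sentence-transformers; the embedding model, passages, and prompt format are placeholders, not the Sustain-LLaMA artifacts:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Stand-in corpus snippets; real passages would come from the screened papers.
passages = [
    "Inventory entry: feedstock and energy inputs for methanol synthesis ...",
    "Inventory entry: end-of-life treatment routes for plastic packaging ...",
]
embedder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed retriever model
p_emb = embedder.encode(passages, normalize_embeddings=True)

def retrieve(question, k=2):
    """Return the k passages most similar to the question."""
    q_emb = embedder.encode([question], normalize_embeddings=True)[0]
    scores = p_emb @ q_emb                    # cosine similarity (normalized)
    return [passages[i] for i in np.argsort(-scores)[:k]]

question = "What inputs does methanol production require?"
context = "\n".join(retrieve(question))
prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
# `prompt` would then be passed to the fine-tuned Q&A model for generation.
```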
- Research Article
- 10.1002/hsr2.71415
- Oct 28, 2025
- Health Science Reports
- Binbin Yu + 6 more
Abstract Background and Aims: Epstein–Barr virus (EBV) has been implicated in autoimmune diseases (AIDs), yet a comprehensive analysis of global research trends, knowledge gaps, and translational opportunities remains lacking. Therefore, we aimed to study the global research output on EBV-associated AIDs. Methods: All publications related to EBV-associated AIDs from 1993 to 2023 were collected from the Science Citation Index-Expanded of Web of Science. The data were then evaluated using bibliometric methodology, with the Bibliometrix package in R used for data retrieval. VOSviewer and CiteSpace were used to visualize research foci and trends regarding EBV-associated AIDs. Results: We analyzed 1589 publications to explore the global scientific landscape of EBV-associated AIDs. Growth in publications exhibited two peaks, with post-2020 acceleration coinciding with increased interest in EBV's immunological role. The USA produced the most publications (543), many of which investigated molecular pathways such as lipid metabolism in EBV-associated AIDs; Italy (n = 161) and Japan (n = 140) took the second and third places, respectively. Among the institutions involved, Tel Aviv University provided the biggest nodes in each cluster of the cooperation network. The most frequently cited author in the field, according to our results, was Shoenfeld Y. Finally, keyword co-occurrence analysis showed that systemic lupus erythematosus and rheumatoid arthritis are the most extensively investigated topics in this study area. Conclusion: This study highlights pivotal milestones in EBV-AIDs research and proposes future directions, including genetic–host immune system interactions, prevention trials, and collaborative mechanisms. Prioritizing these emerging hotspots could advance therapeutic strategies and interdisciplinary synergies.
- Research Article
- 10.1093/bib/bbaf565
- Oct 28, 2025
- Briefings in Bioinformatics
- Gang Qu + 4 more
Polygenic risk scores (PRS) are widely used to assess genetic susceptibility in Alzheimer’s disease (AD) research. However, the rapid expansion of PRS studies has led to dataset-specific biases—stemming from factors such as population makeup, genotyping methods, and analysis pipelines—that result in inconsistent variant prioritization and limit generalizability and reproducibility. To address these challenges, we propose a transductive learning framework that integrates multiple PRS datasets for more robust risk variant prioritization, incorporating genome-wide association study (GWAS) priority scores as biologically informed priors. Additionally, we introduce BrainGeneBot, an AI-driven tool leveraging generative pretrained transformers with retrieval-augmented generation to streamline genomic analyses in AD, including STRING for protein interaction analysis, Enrichr for gene set enrichment, ClinVar for genetic variant interpretation, and Biopython for conducting literature searches. We apply our approach to publicly available AD datasets from the PGS Catalog and conduct further analyses to validate its efficacy. In parallel, we perform conventional unsupervised rank aggregation as a baseline. The transductive learning approach not only verifies high-risk variants identified by traditional methods but also reveals unique insights that correlate better with GWAS signals. Our framework streamlines data retrieval and interpretation, effectively prioritizing genetic variants across multiple PRS studies. Moreover, BrainGeneBot facilitates the discovery of biologically meaningful insights to enhance PRS interpretability and applicability in AD research, supporting the development of precise AD interventions and treatments. Our approach provides a robust framework for AD genetic research, improving data accessibility, accelerating discoveries, and refining genetic insights.
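Unsupervised rank aggregation can take several forms; a Borda count is one common choice and illustrates the baseline idea, sketched here over hypothetical per-dataset variant rankings (the variant IDs are illustrative):

```python
from collections import defaultdict

rankings = [                          # one prioritized list per PRS dataset
    ["rs429358", "rs7412", "rs6859"],
    ["rs7412", "rs429358", "rs11136000"],
    ["rs429358", "rs6859", "rs7412"],
]

scores = defaultdict(float)
for ranking in rankings:
    n = len(ranking)
    for pos, variant in enumerate(ranking):
        scores[variant] += n - pos            # higher rank earns more points

consensus = sorted(scores, key=scores.get, reverse=True)
print(consensus)                      # aggregated variant prioritization
```

The transductive framework replaces this purely positional scoring with dataset integration informed by GWAS priority scores.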
- Research Article
- 10.1186/s12864-025-12148-x
- Oct 27, 2025
- BMC Genomics
- Angel Garza Reyna + 2 more
Background: Z-DNA is a left-handed DNA conformation with a zigzag backbone whose formation depends on base composition, modifications, and environmental factors. Although energetically unfavorable, Z-DNA has been implicated in both normal physiology and disease. The Z-Hunt algorithm predicts Z-DNA potential from thermodynamic principles, but its command-line interface and plain-text outputs limit adoption by users without coding expertise. Results: We introduce Z-GENIE, an R/Shiny GUI that automates Z-Hunt execution, parses its output, and presents interactive visualizations. Z-GENIE accepts FASTA files, NCBI accession IDs, or manual sequences and produces CSV and BED summaries compatible with genome browsers. In benchmarks on small to medium genomes (< 20 Mb), Z-Hunt completes in minutes and the full Z-GENIE pipeline (data retrieval, parsing, visualization) finishes in under five minutes. For large genomes (> 50 Mb), Z-Hunt may require up to two hours, whereas Z-GENIE’s parsing and BED-file export take < 2 min. In a human ADAM12 case study, Z-GENIE reproduced a published Z-score (3.0 × 10⁷) and uncovered orientation-dependent Z-DNA clusters. Another case study compared predictions for Z-DNA in the rice genome (Oryza sativa) with experimental ZIP-Seq and CUT&Tag data, highlighting the complementarity between in silico and in vivo approaches. Conclusions: By encapsulating Z-Hunt within an intuitive GUI and offering flexible inputs and downstream-ready outputs, Z-GENIE democratizes genome-wide Z-DNA analysis. Its rapid performance and advanced visualization features should broaden exploration of Z-DNA’s roles in health and disease. Supplementary Information: The online version contains supplementary material available at 10.1186/s12864-025-12148-x.
- Research Article
- 10.1111/cch.70171
- Oct 26, 2025
- Child: care, health and development
- Melike Tuğrul Aksakal + 8 more
Age-appropriate preventive care and continuous health management are essential for maintaining health in children and adolescents. This study aimed to investigate the effects of regular well-child follow-up from birth on physiological and psychosocial characteristics during adolescence. Adolescents aged 9–21 years who presented to an Adolescent Health Outpatient Clinic (AHOC) for follow-up were stratified into two groups based on their longitudinal follow-up from birth until they commenced attendance at the AHOC. The first group consisted of adolescents whose child health follow-ups were conducted at the Well-Child Outpatient Clinic (WCOC) under the concept of social paediatrics (Group 1). The second group consisted of adolescents whose child health follow-ups were conducted by family physicians at family health centres but who were not followed up at the WCOC (Group 2). The groups were compared retrospectively using data recorded during their initial assessments at the AHOC. These data included anthropometric measures, psychosocial assessments (using the HEEADSSS screening tool), immunisation status, and laboratory findings. All data analyses were performed using IBM SPSS v.28, with a significance level set at p < 0.05. Group 1 comprised 51.5% (n = 138) of the sample, and Group 2 comprised 48.5% (n = 130). The study revealed no statistically significant differences in gender or parental sociodemographic characteristics. The average age at data retrieval was 10.1 years in Group 1 and 11.5 years in Group 2, indicating that Group 1 was significantly younger. Additionally, Group 2 demonstrated significantly higher weight- and BMI-based SDS and a higher prevalence of anaemia, while subsequent analysis revealed no statistically significant differences in lipid values or height SDS. Group 2 also exhibited a higher prevalence of psychosocial risks, including risks related to the home environment, educational attainment, dietary habits, and suicide risk. Structured regular child health follow-up from birth thus appears to have a positive impact on adolescent health and well-being, irrespective of parental socioeconomic status and across both physiological and psychosocial dimensions. However, the observed variations may also reflect unmeasured parental health-seeking behaviours, health literacy, and investment; consequently, the results should be interpreted with a degree of caution.
- Research Article
- 10.1016/j.brachy.2025.08.004
- Oct 22, 2025
- Brachytherapy
- Ryan Truong + 6 more
Integration of single-click, AI-based brachytherapy auto-planning for cervical cancer within a treatment planning system.
- Research Article
- 10.32028/groma-issue-9-2024-3238
- Oct 21, 2025
- GROMA: Documenting Archaeology
- Mattia Francesco Antonio Cantatore + 4 more
The Emilia-Romagna regional branch of the Italian Ministry of Culture (MiC) developed the ArcheoDB geodatabase to facilitate comprehensive, real-time mapping of archaeological sites and activities. The project, initiated in 2019 as part of a PhD research project, is based exclusively on open-source technology and has become the primary research tool for field archaeologists and Ministry officers. The regional Soprintendenze of the MiC actively participate in the retrieval of historical data stored in archives, which are progressively being digitized. Since 2023, cataloguing has been mandatory for submitting new excavation documentation. The system is fully compatible with the Geoportale Nazionale per l’Archeologia (GNA - National Archaeological Geoportal) and collects regional data. It offers real-time updates and allows citizens, government entities, professional archaeologists, and researchers to access the collected open data through a web-based platform.
- Research Article
- 10.1002/jsp2.70126
- Oct 20, 2025
- JOR Spine
- Haiyan Sun + 5 more
Abstract Background: Intervertebral disc degeneration (IDD) is a widespread issue associated with chronic lumbar pain and disability. This study aimed to identify lactate metabolism-related genes in IDD and elucidate their mechanistic roles in disease progression. Methods: IDD datasets were analyzed using the R packages GEOquery, sva, and limma for data retrieval, batch correction, and normalization. Differential gene expression analysis identified significant genes between IDD and control groups, from which lactate metabolism-related differentially expressed genes (LMRDEGs) were derived. Relationships among the LMRDEGs were assessed using Spearman's correlation analysis, and functional enrichment was conducted using clusterProfiler. Gene set enrichment analysis identified biological processes associated with IDD. Diagnostic models were assessed using receiver operating characteristic (ROC) curves. Immune cell infiltration and correlations with core genes were analyzed via the CIBERSORT algorithm. Regulatory networks were constructed, and reverse transcription quantitative polymerase chain reaction (RT-qPCR) was employed to validate the expression of hub LMRDEGs in IDD. Results: A total of 1325 differentially expressed genes were identified, yielding seven LMRDEGs: TGFβ2, GSR, MB, MMP2, SLC16A7, PER2, and STAT3, which are enriched in blood circulation regulation and the hypoxic response, as well as pathways such as AGE–RAGE signaling in diabetic complications. ROC analysis indicated potential hub genes (MMP2, MB, TGFβ2, and PER2), while immune infiltration analysis uncovered significant variations in immune cell distribution. RT-qPCR confirmed MMP2, MB, and SLC16A7 as molecular indicators reflecting lactate metabolism abnormalities in IDD. Conclusion: This study clarifies how lactate metabolism contributes to IDD through molecular mechanisms and its interplay with immunological features, providing a theoretical basis for understanding the early pathogenesis of IDD.
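The study's pipeline is in R (GEOquery/sva/limma); as a language-neutral illustration of the retrieval-plus-differential-expression idea, here is a hedged Python sketch using GEOparse, with a naive t-test standing in for the limma analysis. The accession, group labels, and cutoff are placeholders:

```python
import GEOparse
from scipy import stats

# Placeholder accession; the paper's actual GEO series IDs are not given here.
gse = GEOparse.get_GEO(geo="GSExxxxx", destdir="./data")
expr = gse.pivot_samples("VALUE")             # probes x samples matrix

idd_cols = [s for s in expr.columns if "IDD" in s]    # assumed sample labels
ctrl_cols = [s for s in expr.columns if s not in idd_cols]

t, p = stats.ttest_ind(expr[idd_cols], expr[ctrl_cols], axis=1)
degs = expr.index[p < 0.05]                   # naive cutoff, for illustration
```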
- Research Article
- 10.62306/ausdomfgrbge
- Oct 19, 2025
- Digital Science
- Xuan Ouyang
The efficient integration, management, and application of multi-source heterogeneous spatio-temporal data remain a critical challenge in forest and grassland ecological informatics. Traditional GIS-based approaches often suffer from limited scalability, poor adaptability to diverse data modalities, and inadequate support for time-space linkage. To address these limitations, this study proposes a novel spatio-temporal data organization model based on the Geographical Subdivision Grid with One-dimension-integer on Two to n-th power (GeoSOT) encoding framework. We introduce a unified three-domain identifier—composed of spatial code, temporal stamp, and semantic attributes—to support fine-grained partitioning and indexing of both structured (e.g., vector, raster) and unstructured (e.g., video, sensor logs, text) data. The organization model employs multi-level GeoSOT grid cells as spatial anchors, integrating temporal semantics and object-level identifiers to form a one-code-per-element schema, ensuring the uniqueness and traceability of each data entity. A prototype system was implemented using forest resource and fire monitoring datasets from the Asia-Pacific Forestry Center. Comprehensive experiments demonstrate that the proposed model significantly improves data fusion flexibility, retrieval efficiency, and query precision compared to conventional spatial database models. Moreover, the system enables scalable and interactive spatio-temporal queries across multi-modal data sources. This study contributes a generalized, extensible, and semantically rich data organization framework that bridges spatial and temporal dimensions in forest and grassland applications. It holds promise for large-scale ecological monitoring, forest fire early warning, and smart forestry governance. Future work will focus on extending the model to real-time streaming data and integrating intelligent analytics for enhanced decision support.
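The one-code-per-element schema can be made concrete with a toy GeoSOT-style encoder: a 2^n quadtree cell code built by bit-interleaving latitude and longitude, concatenated with a temporal stamp and an object identifier. The field layout and identifiers below are illustrative, not the paper's actual code structure:

```python
def grid_code(lat, lon, level):
    """Interleave lat/lon bits into one integer cell code (Z-order style)."""
    y = int((lat + 90) / 180 * (1 << level))    # row index at this level
    x = int((lon + 180) / 360 * (1 << level))   # column index at this level
    code = 0
    for i in range(level):
        code |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return code

def element_id(lat, lon, level, timestamp, obj_id):
    """Three-domain identifier: spatial code | temporal stamp | semantics."""
    return f"S{grid_code(lat, lon, level):08X}-T{timestamp}-O{obj_id}"

# A hypothetical fire-monitoring observation encoded at grid level 16:
print(element_id(23.81, 90.41, 16, "20240501T12", "firecam-042"))
```

Because the spatial prefix is a single integer code, range scans over code intervals approximate spatial window queries, which is what makes such grids effective as database keys for multi-modal records.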
- Research Article
- 10.1002/itl2.70142
- Oct 18, 2025
- Internet Technology Letters
- Zhiwei Yan + 2 more
Abstract The rapid advance of communication technologies has fueled significant IoT growth, with increasingly interconnected smart devices generating massive amounts of data. While various naming services and routing schemes have been proposed, existing solutions remain use-case specific: Information-Centric Networking (ICN)/Named Data Networking (NDN) faces scalability challenges in edge networks, and hybrid designs sacrifice naming flexibility, leaving no holistic architecture for future IoT networks. To address these challenges, we propose the Address Name Transfer Network (ANT-Net), a novel architecture that uniquely combines seamless integration of heterogeneous naming services with enhanced data retrieval efficiency. ANT-Net employs name-based routing at the edge network and IP-based routing within the core network, maintaining full TCP/IP compatibility for both naming and routing services. Furthermore, the distinct separation between edge and core networks in ANT-Net allows different routing strategies to be implemented, which can significantly improve data sharing and overall communication efficiency.
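As a toy model of the edge/core split described above, consider a name table at an edge gateway that resolves content names to core-network IP addresses; the names, addresses, and functions are illustrative, not ANT-Net's actual protocol:

```python
EDGE_NAME_TABLE = {
    "/sensors/room1/temp": "10.0.0.17",       # device name -> core IP address
    "/sensors/room2/humidity": "10.0.0.23",
}

def forward_over_ip(ip, payload):
    """Stand-in for ordinary TCP/IP delivery inside the core network."""
    return f"delivered {len(payload)} bytes to {ip}"

def edge_resolve_and_forward(name, payload):
    """Name-based step at the edge, then IP-based step toward the core."""
    ip = EDGE_NAME_TABLE.get(name)
    if ip is None:
        raise LookupError(f"no route for name {name!r}")
    return forward_over_ip(ip, payload)

print(edge_resolve_and_forward("/sensors/room1/temp", b"21.5C"))
```

Keeping the name lookup at the edge is what lets the core remain plain IP, preserving TCP/IP compatibility while still offering name-based access to devices.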