Articles published on Data Availability
34071 Search results
- New
- Research Article
- 10.1186/s42492-026-00214-4
- Feb 6, 2026
- Visual computing for industry, biomedicine, and art
- Berenice Montalvo-Lezama + 1 more
The limited availability of annotated data presents a major challenge in applying deep learning methods to medical image analysis. Few-shot learning methods aim to recognize new classes from only a few labeled examples. These methods are typically investigated within a standard few-shot learning paradigm, in which all classes in a task are new. However, medical applications, such as pathology classification from chest X-rays, often require learning new classes while simultaneously leveraging the knowledge of previously known ones, a scenario more closely aligned with generalized few-shot classification. Despite its practical relevance, few-shot learning has rarely been investigated in this context. This study presents MetaChest, a large-scale dataset of 479,215 chest X-rays collected from four public databases. It includes a meta-set partition specifically designed for standard few-shot classification, as well as an algorithm for generating multi-label episodes. Extensive experiments were conducted to evaluate both the standard transfer learning (TL) approach and an extension of ProtoNet across a wide range of few-shot multi-label classification tasks. The results indicate that increasing the number of classes per episode and the number of training examples per class improves the classification performance. Notably, the TL approach consistently outperformed the ProtoNet extension, even though it was not specifically tailored for few-shot learning. Furthermore, higher-resolution images improved the accuracy at the cost of additional computation, whereas efficient model architectures achieved performances comparable to larger models with significantly reduced resource requirements.
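To make the prototype idea concrete, the following is a minimal sketch (not the MetaChest code) of how a ProtoNet-style extension can score a multi-label query: class prototypes are averaged from the support embeddings and each class receives an independent sigmoid score. The embedding dimension, shapes, and toy data are assumptions.

```python
import numpy as np

def multilabel_proto_scores(support_emb, support_labels, query_emb):
    """support_emb: (n_support, d); support_labels: (n_support, n_classes) in {0, 1};
    query_emb: (d,). Returns an independent probability per class for the query."""
    n_classes = support_labels.shape[1]
    prototypes = np.zeros((n_classes, support_emb.shape[1]))
    for c in range(n_classes):
        members = support_emb[support_labels[:, c] == 1]
        if len(members):                          # skip classes with no positive support
            prototypes[c] = members.mean(axis=0)  # prototype = mean positive embedding
    dists = np.linalg.norm(prototypes - query_emb, axis=1)
    logits = -dists                               # closer prototype -> higher logit
    return 1.0 / (1.0 + np.exp(-logits))          # sigmoid: labels are not exclusive

# toy usage with random "embeddings" standing in for a CNN backbone's features
rng = np.random.default_rng(0)
support = rng.normal(size=(10, 16))
labels = rng.integers(0, 2, size=(10, 3))
print(multilabel_proto_scores(support, labels, rng.normal(size=16)))
```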
- New
- Research Article
- 10.1111/cobi.70223
- Feb 6, 2026
- Conservation biology : the journal of the Society for Conservation Biology
- Jessica R Marsh + 3 more
Following large-scale threatening events, a key challenge is to rapidly establish which species have been most affected and are in need of urgent conservation. For data-poor taxa, such assessments are challenging. In Australia, invertebrates represent over 90% of faunal diversity and are critical for ecosystem function, yet most are undescribed, and, of the described, most are poorly known. Thus, it is important to have a way to estimate susceptibility to major disturbance of data-deficient taxa. We developed a novel trait-based method for assessing the impact of a major wildfire on invertebrates. We applied it to 1220 species that showed high distributional overlap with the 2019-2020 Australian megafires. We estimated susceptibility based on the microhabitat species occupy, their life-history and ecological traits, and mechanisms that account for key data uncertainties (number of usable occurrence records, availability of traits data, and recency of taxonomic work). We found 748 species likely to be of potential conservation concern following the megafires; 169, 579, and 454 were highly, moderately, and mildly threatened by a major fire, respectively. Most species (867) were associated with poor or very poor data quality. Of the 867 poorly known species, 97 were most at risk from a major fire. Our approach is generalizable to other data-deficient taxa and to major disturbance events globally and can be used to improve representation of poorly known species in conservation assessments and threat mitigation decisions. If the uncertainties and knowledge gaps we identified are addressed, it is likely risk prediction could be improved.
- New
- Research Article
- 10.1080/01431161.2026.2625513
- Feb 6, 2026
- International Journal of Remote Sensing
- Abdulhakim M Abdi + 1 more
ABSTRACT Timely, detailed information on forest composition is essential for effective management, biodiversity protection, and understanding ecosystem dynamics. This study maps the distribution of seven dominant tree species in Swedish forests and produces spatially explicit, pixel-level estimates of classification uncertainty. The mapping framework integrates multitemporal Sentinel-1 radar and Sentinel-2 optical observations with field data from the Swedish National Forest Inventory and auxiliary predictors describing topography and canopy height. We trained a Bayesian-optimized extreme gradient boosting model on spatiotemporal metrics derived from these datasets and quantified classification confidence through entropy computed from the class-probability outputs. We applied a spatial block partitioning approach to limit the effects of spatial autocorrelation between optimization and validation data and ensure a more realistic assessment of the model’s generalization capacity. Model overall accuracy reached 85% (F1 = 0.82) using a 60 m spatial block validation. Under a more conservative 200 m block configuration, performance decreased to F1 = 0.63, reflecting reduced training data availability. The county-level species coverage derived from the classification aligned closely with published figures from the Swedish Forest Agency (Spearman’s ρ = 0.94, 95% CI: 0.89 – 0.96, p < 0.001). Variable importance analysis showed that Sentinel-2 spectral bands, particularly shortwave-infrared and red-edge captured during spring and summer, contributed most to species discrimination, while Sentinel-1 backscatter provided complementary structural information. The integration of forest inventory data, Earth observation, and machine learning to produce tree species maps and a spatially explicit measure of prediction uncertainty yields a robust and reproducible framework for large-area forest mapping. The results provide detailed, spatially continuous information on species composition along with an accompanying confidence surface. This offers practical value for ecological assessments, regional planning, and emerging legislative and environmental goals. The data are freely available for download and the maps can be interactively visualized using this link: https://ee-treespec.projects.earthengine.app/view/treespec.
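As an illustration of the uncertainty measure described (entropy computed from class-probability outputs), here is a small sketch; the class count and example probabilities are placeholders, not values from the study.

```python
import numpy as np

def prediction_entropy(probs, normalize=True):
    """probs: (..., n_classes) class probabilities per pixel.
    Returns Shannon entropy; normalized to [0, 1] by log(n_classes) if requested."""
    eps = 1e-12
    h = -np.sum(probs * np.log(probs + eps), axis=-1)
    if normalize:
        h = h / np.log(probs.shape[-1])
    return h

# toy comparison: a confident pixel vs. a maximally uncertain pixel over 7 classes
confident = np.array([0.94, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01])
uncertain = np.full(7, 1 / 7)
print(prediction_entropy(confident), prediction_entropy(uncertain))  # low vs. 1.0
```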
- New
- Research Article
- 10.4401/ag-9319
- Feb 6, 2026
- Annals of Geophysics
- Marina Pastori + 14 more
The Ionian margin of southern Italy is one of the most complex geodynamic regions in the central Mediterranean, where ongoing convergence between the African and Eurasian plates results in intense seismic activity and highly heterogeneous crustal structures. To improve seismic monitoring in this region, within the framework of the ERC Advanced Grant FOCUS (2018-2025), a temporary onshore seismic network (FXLand) was deployed along the Ionian coasts of Sicily and Calabria from December 2021 to June 2023, complementing a marine array of ocean-bottom seismometers operating during the same period. In this study we describe the deployment and performance of the 13 temporary broadband stations of FXLand. The network was integrated in real time into the Italian national seismic surveillance system, enhancing data availability and coastal network geometry. During the deployment, FXLand recorded more than 1,500 local earthquakes and more than 200 teleseismic events with magnitude M ≥ 6. We also present results from the analysis of three seismic sequences that occurred during the network operational period. By applying a Template Matching technique to the combined permanent-station and FXLand dataset, we significantly increased the number of detected low-magnitude earthquakes in the onshore area, improving catalog completeness compared to real-time surveillance and the Italian Seismic Bulletin. On the other hand, the offshore sequence highlights the main limitations of land-based networks in detecting and accurately locating submarine seismicity. The integration of marine observations from the ocean-bottom seismometer network in the Ionian Sea is expected to provide substantial improvements in the detection and location accuracy of offshore earthquakes, contributing to a more complete characterization of seismic activity along the Ionian margin.
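For readers unfamiliar with template matching, the sketch below shows the generic idea: slide a waveform template over continuous data and flag windows whose normalized cross-correlation exceeds a threshold. It is not the FOCUS/FXLand processing chain; the synthetic signals and the 0.6 threshold are assumptions.

```python
import numpy as np

def normalized_cc(continuous, template):
    """Normalized cross-correlation of `template` at every lag of `continuous`."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    cc = np.empty(len(continuous) - n + 1)
    for i in range(len(cc)):
        win = continuous[i:i + n]
        cc[i] = np.sum(t * (win - win.mean()) / (win.std() + 1e-12))
    return cc

rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 8 * np.pi, 200)) * np.hanning(200)   # toy "event"
stream = rng.normal(scale=0.3, size=5000)
stream[3100:3300] += 0.8 * template                                    # buried repeat of the event
cc = normalized_cc(stream, template)
detections = np.flatnonzero(cc > 0.6)                                  # candidate detections
print(detections[:5], cc.max())
```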
- New
- Research Article
- 10.1093/aje/kwag028
- Feb 6, 2026
- American journal of epidemiology
- David J Muscatello + 9 more
During epidemics, emergency department (ED) syndromic surveillance of patient arrivals provides timely but non-virus-specific assessment of epidemic intensity. Surveillance of severe infection outcomes (intensive care admission or death) is less timely because outcomes can take weeks to occur. Time series models can be used to estimate the frequency of severe infection outcomes due to viruses. We developed and evaluated daily time series modelling applied to linked ED, infection and outcomes data from Australia to better predict population and health system burden during acute respiratory virus epidemics. In retrospective daily surveillance emulation, generalised additive models nowcasted (produced short-term forecasts of) the frequency of ED arrivals attributable to influenza and to COVID-19 that would have a severe infection outcome within 28 days. Daily nowcasts spanned days -29 to -4 from each date for which surveillance was emulated. To validate the method, nowcasts were compared with subsequently observed severe infection outcome frequencies for December 2021 through February 2023. During this period, the mean daily day -4 nowcast error was 2.7 (34.2%), compared with 3.5 (43.8%) if outcomes known at day -1 were used. With increasing real-world data availability, this method could improve rapid, automated epidemic assessment for timely public health action.
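The following is a minimal, illustrative nowcasting sketch in the spirit of the approach: a Poisson regression with a spline trend and day-of-week terms standing in for a generalised additive model. The synthetic counts, column names, and 4-day horizon are assumptions, not the authors' specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
days = pd.date_range("2022-06-01", periods=120, freq="D")
df = pd.DataFrame({
    "t": np.arange(120),
    "dow": days.dayofweek.astype(str),
    "severe": rng.poisson(5 + 3 * np.sin(np.arange(120) / 15)),  # toy daily outcome counts
})

# spline trend via patsy's bs(); a Poisson GLM with smooth terms approximates a GAM
model = smf.glm("severe ~ bs(t, df=6) + C(dow)", data=df,
                family=sm.families.Poisson()).fit()

# "nowcast" the last 4 days as if their outcomes were not yet fully observed
print(model.predict(df.tail(4)))
```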
- New
- Research Article
- 10.3390/brainsci16020195
- Feb 6, 2026
- Brain Sciences
- Metin Kerem Öztürk + 1 more
Objectives: Decoding neural patterns for RGB colors from electroencephalography (EEG) signals is an important step towards advancing the use of visual features as input for brain–computer interfaces (BCIs). This study aims to overcome challenges such as inter-subject variability and limited data availability by investigating whether transfer learning and signal augmentation can improve decoding performance. Methods: This research introduces an approach that combines transfer learning for cross-subject information transfer and data augmentation to increase representational diversity in order to improve RGB color classification from EEG data. Deep learning models, including CNN-based DeepConvNet (DCN) and Adaptive Temporal Convolutional Network (ATCNet) using the attention mechanism, were pre-trained on subjects with representative brain responses and fine-tuned on target subjects to account for individual differences. Signal augmentation techniques such as frequency slice recombination and Gaussian noise addition improved model generalization by enriching the training dataset. Results: The combined methodology yielded a classification accuracy of 83.5% for all subjects on the EEG dataset of 31 previously studied subjects. Conclusions: The improved accuracy and reduced variability underscore the effectiveness of transfer learning and signal augmentation in addressing data sparsity and variability, offering promising implications for EEG-based classification and BCI applications.
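To illustrate the two augmentation ideas named above, here is a small sketch of additive Gaussian noise and frequency-slice recombination on EEG-like epochs; the channel count, sampling rate, and 8-13 Hz band are assumptions rather than the study's settings.

```python
import numpy as np

def add_gaussian_noise(epoch, snr_scale=0.05, rng=None):
    """epoch: (channels, samples). Adds noise scaled to each channel's std."""
    rng = rng or np.random.default_rng()
    noise = rng.normal(size=epoch.shape) * epoch.std(axis=1, keepdims=True) * snr_scale
    return epoch + noise

def recombine_frequency_slice(epoch_a, epoch_b, fs=250, band=(8.0, 13.0)):
    """Replace epoch_a's spectral content inside `band` (Hz) with epoch_b's."""
    spec_a, spec_b = np.fft.rfft(epoch_a, axis=1), np.fft.rfft(epoch_b, axis=1)
    freqs = np.fft.rfftfreq(epoch_a.shape[1], d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    spec_a[:, mask] = spec_b[:, mask]          # swap the chosen frequency slice
    return np.fft.irfft(spec_a, n=epoch_a.shape[1], axis=1)

rng = np.random.default_rng(0)
a, b = rng.normal(size=(14, 500)), rng.normal(size=(14, 500))  # two toy 2 s, 14-channel epochs
augmented = recombine_frequency_slice(add_gaussian_noise(a, rng=rng), b)
print(augmented.shape)
```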
- New
- Research Article
- 10.1108/jadee-07-2025-0309
- Feb 5, 2026
- Journal of Agribusiness in Developing and Emerging Economies
- Qiankun Liu + 3 more
Purpose This study aims to explore how cross-border e-commerce (CBEC, which enables international business through platforms such as Alibaba International and JD Worldwide) and government policy support jointly affect agricultural export performance across regions in China. It particularly focuses on the moderating role of National High-Tech Zones (NHTZs, which serve as innovation hubs offering infrastructure and policy incentives) in strengthening the impact of CBEC on agricultural exports. Design/methodology/approach Using panel data from 31 Chinese provinces from 2009 to 2021, the study employs a robust empirical strategy combining an error correction model (ECM) to capture long-run equilibrium dynamics and geographically weighted regression (GWR) to examine spatial heterogeneity. Findings The results show that CBEC significantly boosts agricultural exports. More importantly, the number of NHTZs amplifies this effect, confirming a strong and positive moderating role. This synergistic effect is particularly pronounced in provinces with strong innovation-oriented institutional capacity, many of which are located in the eastern coastal region, while NHTZs play a more critical compensatory role in central and northern provinces where market-driven forces are weaker. Compared with central and northern regions, eastern and western provinces benefit more directly from CBEC development itself, and the marginal contribution of additional NHTZ support is relatively smaller. Research limitations/implications This study is limited by the availability of CBEC data at the provincial level, which required indirect estimation using customs and logistics proxies. Additionally, the dataset covers the period from 2009 to 2021, excluding recent years affected by the COVID-19 pandemic. Methodologically, the study employs established models (ECM and GWR) without incorporating advanced machine learning or dynamic forecasting techniques. Future research could explore post-pandemic shifts in policy impact, use real-time CBEC data and apply predictive models to simulate how regional innovation policies shape agricultural exports under evolving digital trade environments. Originality/value This study contributes to the growing literature on digital trade and regional policy by uncovering the moderating role of innovation-driven policy zones in facilitating agricultural exports. It also highlights the importance of differentiated region-specific strategies to enhance the effectiveness of e-commerce policies in the agricultural sector. The findings may offer useful policy insights for other developing countries.
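As a simplified illustration of the moderation claim (the CBEC effect strengthened by NHTZ presence), the sketch below fits an interaction term on a synthetic panel; it is a plain pooled OLS, not the paper's ECM/GWR specification, and all variable names and values are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
panel = pd.DataFrame({
    "province": np.repeat([f"p{i}" for i in range(31)], 13),   # 31 provinces, 2009-2021
    "year": np.tile(range(2009, 2022), 31),
})
panel["cbec_index"] = rng.gamma(2.0, 1.0, len(panel))
panel["nhtz_count"] = rng.integers(0, 6, len(panel))
panel["log_agri_exports"] = (0.3 * panel["cbec_index"]
                             + 0.1 * panel["cbec_index"] * panel["nhtz_count"]
                             + rng.normal(0, 0.5, len(panel)))

# a positive cbec_index:nhtz_count coefficient indicates the moderating effect
fit = smf.ols("log_agri_exports ~ cbec_index * nhtz_count + C(year)",
              data=panel).fit(cov_type="HC1")
print(fit.params[["cbec_index", "cbec_index:nhtz_count"]])
```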
- New
- Research Article
- 10.47852/bonviewaaes62028359
- Feb 5, 2026
- Archives of Advanced Engineering Science
- Sihe Wang + 4 more
This study demonstrates the feasibility and potential clinical value of a rule-based expert system for optimizing blood collection in pediatric patients, a population uniquely susceptible to iatrogenic anemia due to limited circulating blood volume and frequent laboratory testing. By systematically mapping ordered laboratory tests to tube-specific analytical and dead-volume requirements and applying patient-specific safety constraints based on weight and hematocrit, the system provides quantitative decision support at the time of test ordering. Evaluation using a simulated pediatric cohort (n = 20) representative of endocrine testing workflows showed that blood draw volumes were maintained within established safety thresholds in 16 of 20 cases (80%). Across the cohort, the optimized strategy achieved a mean reduction of 9.36 mL in total blood volume compared with standard collection practices. In the remaining cases, where optimization was not feasible due to extensive test panels or severely limited allowable blood volume, the system appropriately identified threshold violations and generated warning outputs rather than unsafe recommendations. These results highlight the system’s ability to both reduce unnecessary phlebotomy and reliably flag high-risk scenarios. Overall, this work establishes a transparent and reproducible technical framework for expert system–based optimization of pediatric blood draws and supports its future integration into clinical laboratory workflows to enhance patient safety and reduce avoidable blood loss. Data Availability Statement: Data are available from the corresponding author upon reasonable request.
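A schematic re-implementation of the decision logic described might look like the sketch below: ordered tests are mapped to tube types, dead plus analytical volumes are summed, and the total is checked against a weight-based allowable-draw limit. All tube volumes, test mappings, and the 1 mL/kg limit are illustrative placeholders, not the published system's rules.

```python
TUBE_REQUIREMENTS_ML = {            # hypothetical per-tube volumes (mL)
    "EDTA": {"dead": 0.5, "per_test": 0.2},
    "SST":  {"dead": 0.8, "per_test": 0.3},
}
TEST_TO_TUBE = {"CBC": "EDTA", "TSH": "SST", "cortisol": "SST"}  # hypothetical mapping

def plan_draw(ordered_tests, weight_kg, max_ml_per_kg=1.0):
    """Return (total_mL, within_limit, warnings) for one collection event."""
    tubes = {}
    for test in ordered_tests:
        tube = TEST_TO_TUBE[test]
        tubes[tube] = tubes.get(tube, 0) + 1
    total = sum(TUBE_REQUIREMENTS_ML[t]["dead"] + n * TUBE_REQUIREMENTS_ML[t]["per_test"]
                for t, n in tubes.items())
    limit = weight_kg * max_ml_per_kg
    warnings = [] if total <= limit else [f"requested {total:.1f} mL exceeds {limit:.1f} mL limit"]
    return total, total <= limit, warnings

# toy order for a 3.2 kg infant: one hematology tube, one serum tube shared by two tests
print(plan_draw(["CBC", "TSH", "cortisol"], weight_kg=3.2))
```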
- New
- Research Article
- 10.5194/essd-18-927-2026
- Feb 5, 2026
- Earth System Science Data
- Qinren Shi + 6 more
Abstract. On-road transportation is a major contributor to CO2 emissions in cities, and high-resolution CO2 traffic emission maps are essential for analyzing emission patterns and characteristics. In this study, we developed new hourly on-road CO2 emission maps with a 100 × 100 m resolution for 20 major cities in France, Germany, and the Netherlands in 2023. We used commercial Floating Car Data (FCD) based on anonymized GPS signals periodically reported by individual vehicles, providing hourly information on mean speed and the number of GPS sample counts per street. Machine learning models were developed to fill FCD data gaps and convert sample counts into actual traffic volumes, and the COPERT model was used to estimate speed- and vehicle-type-dependent emission factors. These models were calibrated using independent traffic observations available for Paris and Berlin, and subsequently applied to the remaining 18 cities in an extrapolated manner due to data availability constraints. Hourly emissions, initially estimated at the street level, were aggregated to 100 × 100 m grid cells. Annual on-road CO2 emissions across the 20 European cities in 2023 ranged from 0.4 to 7.9 Mt CO2, with emissions strongly correlated with urban area (R2 = 0.98) and, to a lesser extent, population size (R2 = 0.74). Spatially, emissions are either highly concentrated along major highways in cities such as Paris and Amsterdam or more evenly distributed in cities such as Berlin and Bordeaux, highlighting the need for context-specific mitigation strategies. Temporally, this study shows the CO2 emission fluctuations due to holiday periods, weekly activity cycles, and distinct usage profiles of different vehicle types. Due to the low latency of FCD, this approach could support near-real-time traffic emission mapping in the future. Our approach enhances the spatial and temporal characterization of CO2 emissions in on-road transportation compared to the conventional method used in gridded inventories, indicating the potential of FCD for near-real-time urban emission monitoring and timely policy-making. The datasets generated by this study are available on Zenodo at https://doi.org/10.5281/zenodo.16600210 (Shi et al., 2025).
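The bottom-up calculation can be illustrated with a toy sketch: per street segment and hour, emissions are traffic volume times segment length times a speed-dependent emission factor. The emission-factor curve and the example segments below are placeholders, not COPERT values.

```python
import numpy as np
import pandas as pd

def emission_factor_g_per_km(speed_kmh):
    """Toy U-shaped EF curve: higher in congestion and at high speed (placeholder)."""
    return 180.0 + 0.05 * (speed_kmh - 55.0) ** 2

segments = pd.DataFrame({
    "segment_id": ["a", "b", "c"],
    "length_km": [0.8, 1.5, 0.4],
    "hourly_volume_veh": [420, 1300, 90],   # after scaling FCD samples to volumes
    "mean_speed_kmh": [22.0, 68.0, 35.0],
})
segments["co2_kg_per_hour"] = (segments["hourly_volume_veh"]
                               * segments["length_km"]
                               * emission_factor_g_per_km(segments["mean_speed_kmh"])
                               / 1000.0)
print(segments[["segment_id", "co2_kg_per_hour"]])
```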
- New
- Research Article
- 10.1002/widm.70068
- Feb 4, 2026
- WIREs Data Mining and Knowledge Discovery
- Chenlong Liu + 3 more
ABSTRACT Recommender systems are essential for information filtering but often suffer from the cold start problem caused by limited interaction data. Recent advances in deep learning (DL) and large language models (LLMs) have shown promise, yet systematic analysis of their effectiveness remains scarce. To address this gap, we introduce a paradigm-driven taxonomy that categorizes solutions by their primary source of information: content, structure, transfer, and generation. Within this framework, DL methods have matured in leveraging content and structural information from interaction logs and multimodal data, while LLMs demonstrate advantages in text-rich and data-sparse environments through transfer-based paradigms that exploit semantic understanding and pre-trained knowledge. Furthermore, emerging generative approaches show potential for synthesizing data or relations to alleviate information scarcity. No universal solution exists; effectiveness depends on the dominant paradigm of a given scenario as well as data availability and computational cost. Combining DL and LLM offers substantial opportunities, including enhanced feature representation, data augmentation, and hybrid pipelines. However, research gaps persist, particularly the lack of standardized evaluation metrics and limited exploration of integration strategies. Addressing these challenges through a paradigm-aware perspective could significantly improve the robustness and adaptability of cold-start recommendation in diverse contexts. This article is categorized under: Application Areas > Data Mining Software Tools; Technologies > Machine Learning; Technologies > Artificial Intelligence.
- New
- Research Article
- 10.65922/h77aam43
- Feb 3, 2026
- ANUK College of Private Sector Accounting Journal
- Danladi + 3 more
The study examined the effect of firm characteristics (namely firm size, profitability, leverage and liquidity) on the firm value of listed deposit money banks in Sub-Saharan Africa. A quantitative research design was adopted for the study. Secondary data were gathered from annual financial reports and other relevant disclosures covering the period from 2015 to 2024, with emphasis on banks publicly listed on stock exchanges across Sub-Saharan Africa. The study employed purposive sampling to select banks that met specific inclusion criteria, thereby ensuring the availability of reliable financial data for comprehensive analysis. The population of the study comprised 229 listed deposit money banks in Sub-Saharan Africa, from which a final sample of 141 banks was selected. Descriptive statistics, correlation analysis, and robust pooled regression techniques were employed to analyze the data, with heteroskedasticity-adjusted standard errors used to ensure reliable statistical inference. The findings revealed that firm size and profitability had positive and statistically significant effects on firm value, whereas leverage and liquidity exhibited statistically insignificant effects. The study concluded that the market valuation of deposit money banks in Sub-Saharan Africa was primarily driven by scale efficiency and sustainable profitability rather than by leverage or liquidity positions beyond regulatory thresholds. Based on these findings, the study recommended that bank management prioritize efficient asset growth and profitability enhancement strategies, while regulators continue to enforce prudential guidelines that support financial stability without encouraging excessive liquidity hoarding or leverage-driven value creation. Keywords: Firm value, Firm size, Profitability, Liquidity, and Leverage.
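A minimal sketch of the estimation strategy described (pooled OLS with heteroskedasticity-consistent standard errors) is shown below; the variable names and synthetic panel are assumptions, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 141 * 10                                   # 141 banks observed over 2015-2024
df = pd.DataFrame({
    "firm_size": rng.normal(8, 1, n),          # e.g. log total assets (illustrative)
    "roa": rng.normal(0.02, 0.01, n),
    "leverage": rng.normal(0.85, 0.05, n),
    "liquidity": rng.normal(0.3, 0.1, n),
})
df["tobins_q"] = 0.8 + 0.05 * df["firm_size"] + 4.0 * df["roa"] + rng.normal(0, 0.2, n)

fit = smf.ols("tobins_q ~ firm_size + roa + leverage + liquidity",
              data=df).fit(cov_type="HC3")     # heteroskedasticity-adjusted SEs
print(fit.summary().tables[1])
```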
- New
- Research Article
- 10.3389/fmed.2026.1700529
- Feb 3, 2026
- Frontiers in Medicine
- Hongcai Li + 10 more
Objectives Artificial intelligence (AI) is increasingly being utilized across various fields of medicine, presenting significant potential for the future of healthcare. This review aims to systematically outline the current applications of AI in the field of oral health management and to provide an in-depth analysis of the associated challenges and future opportunities. Methods The review was based on a systematic electronic literature search conducted across databases (PubMed, Web of Science, and Scopus) with keywords including “artificial intelligence,” “AI in dentistry,” “tele-dentistry,” “oral health education,” and “oral health management.” English-language studies relevant to the application of AI across various aspects of oral health management were included based on independent assessments by two reviewers. Results We concluded that in the realm of oral health management, AI technology has diverse applications, including oral health education and counseling, monitoring, screening, diagnosis, treatment, follow-up care of oral diseases, and the collection and management of oral health data. By enhancing public awareness of oral health and improving self-management capabilities, AI can increase diagnostic accuracy, facilitate personalized treatments, support tele-dentistry, optimize the allocation of dental resources, and provide early warnings for oral diseases. These advancements collectively contribute to the efficiency and quality of oral health management. While AI demonstrates considerable promise in this field, several challenges remain, including inconsistencies in oral health data, limited availability and accessibility of data, the reliability of AI-driven results, and issues of bias and fairness in AI algorithms. Addressing these challenges is essential to fully harness the transformative potential of AI in oral health management. Conclusion Oral health management encompasses the comprehensive handling of oral health risk factors in individuals, populations, and communities through a series of measures and activities aimed at maintaining and promoting oral health. The ultimate goal is to achieve the greatest societal benefit in oral health at the lowest possible cost. By addressing challenges such as data consistency, availability, and reliability, as well as issues of bias and fairness in AI algorithms, AI may play a significant role in oral health management. Clinical relevance This paper reviews the role of artificial intelligence in the prevention, diagnosis and treatment of oral diseases, providing an important reference for the future application of artificial intelligence in oral health management.
- New
- Research Article
- 10.36368/jcsh.v3i1.1250
- Feb 3, 2026
- Journal of Community Systems for Health
- Olatubosun Akinola + 8 more
Introduction: Considerable attention has been directed towards implementing and strengthening community health management information systems (c-HMIS) in low- and middle-income countries. In 2012, the Zambian Ministry of Health, with support from the Clinton Health Access Initiative, developed a c-HMIS. Guided by Atun’s framework for integrating interventions in health systems, we explored the acceptability and adoption of the c-HMIS in the community and district health system in Mpongwe District, Zambia. Methods: A qualitative case-study design was used to examine the integration process of the c-HMIS. Data were collected through phone-based in-depth interviews with 66 purposively selected participants from the community, facility, district, provincial, and national levels (including Neighborhood Health Committee leaders, community-based volunteers, community health assistants (CHAs), CHA supervisors, and Ministry of Health officials). Data were analyzed using thematic analysis. Results: The nature of the problem, which included the persistent issue of data quality deficiency, motivated the Ministry of Health and stakeholders to adopt the c-HMIS. The attributes of the c-HMIS intervention, such as the provision of data collection tools, training stakeholders in using these tools, and the perceived simplicity of the c-HMIS, facilitated the adoption process. Further, health system characteristics such as timely availability of data and improved health information feedback processes, as well as the broader adopting context such as community participation, promoted community ownership of the c-HMIS. The c-HMIS implementation barriers included challenges with data collection tools and digital platforms. Conclusion: Overall, our findings indicate that while the c-HMIS has substantial potential to strengthen health information management systems, its sustained integration within the community and district health systems depends on leveraging some of the identified enablers and carefully addressing systemic, health system, and contextual barriers.
- New
- Research Article
- 10.47852/bonviewaia62027195
- Feb 3, 2026
- Artificial Intelligence and Applications
- Hryhoriy Kravtsov + 4 more
This study explored the key aspects and risks associated with the implementation of large language models (LLMs) in the electric power sector of Ukraine. We propose a unique taxonomy of risks, along with a hierarchical structure that enables their assessment using the analytic hierarchy process (AHP) developed by T. Saaty. The LLM lifecycle is described with a focus on both human and technological factors (from knowledge selection and training to operational deployment). The study addresses critical concerns related to confabulations, sensitive information leakage, compliance with personal data protection regulations, and the safeguarding of trade secrets. The paper highlights the importance of employing tools for hallucination detection, sentiment analysis, and legal compliance monitoring. A separate section presents an in-depth analysis of LLMs’ readiness to accurately digitize graphical content—such as schematics, diagrams, and technical drawings, which are common for documentation in the energy sector worldwide. A series of experiments using the state-of-the-art generative AI systems revealed significant limitations in interpreting complex diagrams, logical structures, and semantic relationships between elements. The findings demonstrate both the potential and the critical limitations of LLMs in energy-related applications, particularly in processing graphical content, making decisions based on synthetic data, and managing risks associated with model training, operation, and upgrades. Data Availability Statement: The data that support the findings of this study are openly available in Github at https://github.com/oleksandrkravchukatpimee/LLM-risks-evaluation/blob/3443a610d9b2f9a9f8db52b280f2f4fb247525c1/AHP.xlsx and https://gist.github.com/taranowskiatpimee/174973d140a84da2b5c3b365a34f949c.
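For readers unfamiliar with Saaty's AHP step mentioned above, the sketch below derives priority weights from a pairwise comparison matrix via the principal eigenvector and checks the consistency ratio; the 3x3 example judgments are made up, not the paper's risk hierarchy.

```python
import numpy as np

RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}  # Saaty's RI

def ahp_weights(pairwise):
    """Return (weights, consistency_ratio) for a reciprocal pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)                    # principal eigenvalue/eigenvector
    w = np.abs(vecs[:, k].real)
    w = w / w.sum()
    n = pairwise.shape[0]
    ci = (vals[k].real - n) / (n - 1)           # consistency index
    cr = ci / RANDOM_INDEX[n] if RANDOM_INDEX[n] else 0.0
    return w, cr

# e.g. "data leakage" judged 3x as important as "confabulation", etc. (made-up values)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = ahp_weights(A)
print(np.round(w, 3), f"CR={cr:.3f}")           # CR < 0.1 is conventionally acceptable
```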
- New
- Research Article
- 10.3390/urbansci10020093
- Feb 3, 2026
- Urban Science
- Dalton Domingues De Carvalho Neto + 5 more
This study examines the role of Low and Zero Emission Zones (LEZ/ZEZ) as urban climate-governance instruments in Latin American cities, using Rio de Janeiro as a case study. The objective is to assess the feasibility and institutional readiness for implementing a LEZ/ZEZ in the city’s central area, taking into account its regulatory framework, urban context, and transport- and emissions-related conditions. The methodology adopts an exploratory, qualitative approach based on the ASIF (Activity-Structure-Intensity-Fuel) framework, combined with a systematic review of municipal legislation, climate action plans, emissions inventories, and international best practices. Rather than developing a mathematical or predictive model, the study organizes these policy and institutional elements into a structured decision-support framework and proposes a roadmap to guide phased implementation. The results show that Rio de Janeiro possesses a favorable legal and policy environment for LEZ/ZEZ deployment, particularly through its Climate Action Plan and the legally established District of Low Emissions, while also identifying constraints related to data availability, monitoring capacity, and inter-institutional coordination. The study concludes that the proposed framework provides a practical governance-oriented tool to support low-carbon urban transitions, whose operational effectiveness will depend on future quantitative data collection, transport-demand simulation, and stakeholder engagement to strengthen evidence-based decision-making.
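The ASIF identity the framework is organized around can be written as emissions = Activity x Structure x Intensity x Fuel carbon content; the toy numbers below only illustrate how the terms combine and are not estimates for Rio de Janeiro.

```python
# placeholder figures for one hypothetical zone; each dict key is a travel mode
activity_pkm = 2.0e9                                           # passenger-km per year (Activity)
structure = {"car": 0.55, "bus": 0.25, "rail": 0.20}           # mode shares (Structure)
intensity_mj_per_pkm = {"car": 2.0, "bus": 0.9, "rail": 0.4}   # energy intensity (Intensity)
fuel_kgco2_per_mj = {"car": 0.070, "bus": 0.068, "rail": 0.015}  # carbon content (Fuel)

emissions_t = sum(activity_pkm * structure[m] * intensity_mj_per_pkm[m]
                  * fuel_kgco2_per_mj[m] for m in structure) / 1000.0
print(f"{emissions_t:,.0f} t CO2/year")   # e.g. restricting car share lowers the Structure term
```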
- New
- Research Article
- 10.36962/etm32012026-184
- Feb 2, 2026
- ETM Equipment Technologies Materials
- Zaming Ismayilov + 1 more
The application of modern software tools for three-dimensional geological modeling plays a crucial role in the effective analysis and development of oil and gas reservoirs. By enabling the comprehensive integration and systematization of geological, geophysical, and petrophysical data, these software solutions ensure the construction of accurate and reliable three-dimensional geological models. Such models form the foundation for subsequent hydrodynamic simulations and significantly enhance the quality of reservoir characterization, reserve estimation, and development planning. The accurate interpretation of well logging data remains one of the most critical components of the modeling workflow. Both interval-based and continuous interpretation methods contribute valuable insights into reservoir properties, including filtration–storage parameters and lithological variations. While interval-based interpretation offers simplicity and efficiency, continuous interpretation provides a more detailed and realistic representation of vertical heterogeneity. The ability of modern software systems to support both approaches within a unified modeling environment allows specialists to select and combine methods in accordance with geological complexity and data availability. Furthermore, the transition from paper-based well logs to fully digitized datasets represents a key step toward improving the accuracy of layer correlation and parameter determination, particularly for mature fields with long development histories. The use of digital logging data enables the application of advanced interpretation techniques and significantly increases the reliability of three-dimensional geological and hydrodynamic models. In summary, the integration of advanced interpretation methods, high-quality digital data, and modern modeling software ensures a more realistic representation of subsurface conditions. This integrated approach ultimately leads to improved decision-making, optimized reserve management, and more efficient and sustainable development of hydrocarbon fields. Keywords: three-dimensional geological modeling; reservoir modeling; well log interpretation; petrophysical parameters; filtration–storage properties; seismic data integration; digital well logs; hydrodynamic modeling; reserve estimation; uncertainty analysis.
- New
- Research Article
- 10.1016/j.ijpp.2026.01.008
- Feb 2, 2026
- International journal of paleopathology
- Ricardo A M P Gomes + 1 more
Cribra orbitalia and cribra cranii in perspective: Rethinking etiology through life course and ONE Paleopathology approaches.
- New
- Research Article
- 10.1093/gbe/evaf234
- Feb 2, 2026
- Genome Biology and Evolution
- Lucas Anchieri + 3 more
The increased availability of genomic data from ancient humans allows estimating the strength of natural selection at a given locus using time series data. Several methods have been developed for this purpose and were originally validated through simulations using mostly large sample sizes. However, human ancient DNA (aDNA) data typically have high missingness and include only a small uneven number of individuals per time point, making estimations of selection challenging. Here, we benchmark the inference of selection with aDNA-like time series datasets using extensive simulations and four methods: ApproxWF, BMWS, Slattice, and Sr. We test several sampling schemes of time series data and selection coefficients (s) and focus on whether one can infer selection using time series datasets with sparse data and small sample sizes. We show that detecting selection with aDNA data is possible for strong s with a sample size of ~100 when assuming constant Ne. While ApproxWF performs best across simulations, the other methods present more variable results and do not perform well for typical aDNA datasets. Importantly, generally low false positive rates (<6%) highlight low risks to falsely detect selection when the loci are evolving neutrally. Moreover, relatively high power (>90%) for s ≥ 0.02 (Nes ≥ 200) shows that strong selection can generally be detected with confidence. We also show that more homogenous sampling improves the accuracy of the estimations. Finally, we provide recommendations for future research aiming to estimate selection with aDNA, noting the importance of spreading data evenly across time and avoiding time points with extremely small sample sizes.
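As a small illustration of the kind of data the benchmark targets, the sketch below simulates a Wright-Fisher allele-frequency trajectory under a selection coefficient s and then samples it at a few uneven time points with small sample sizes, as in typical aDNA time series; all parameters are illustrative.

```python
import numpy as np

def wright_fisher(ne=10_000, s=0.02, p0=0.1, generations=300, rng=None):
    """Allele-frequency trajectory under selection coefficient s and genetic drift."""
    rng = rng or np.random.default_rng()
    p, traj = p0, [p0]
    for _ in range(generations):
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))   # deterministic selection step
        p = rng.binomial(2 * ne, p_sel) / (2 * ne)      # binomial drift in 2*Ne copies
        traj.append(p)
    return np.array(traj)

rng = np.random.default_rng(3)
traj = wright_fisher(rng=rng)
sample_times = [0, 40, 120, 210, 290]          # uneven sampling times (generations)
sample_sizes = [6, 10, 4, 25, 12]              # small per-time-point sample sizes
observed = [rng.binomial(2 * n, traj[t]) / (2 * n)
            for t, n in zip(sample_times, sample_sizes)]
print(list(zip(sample_times, np.round(observed, 2))))
```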
- New
- Research Article
- 10.1016/j.jenvman.2026.128684
- Feb 1, 2026
- Journal of environmental management
- Rebecca L Morris + 3 more
Building an evidence base for living shorelines: a framework for evaluating the extent and adequacy of post-establishment monitoring programs.
- New
- Research Article
- 10.2308/tar-2023-0545
- Feb 1, 2026
- The Accounting Review
- James J Blann + 1 more
ABSTRACT The FASB recently issued ASU 2024-03, which requires disaggregation of significant expenses, like cost of goods sold (COGS) and selling, general, and administrative (SG&A) expenses. Proponents argue disaggregation will improve decision usefulness, whereas opponents suggest the information will be costly and provide little value. We provide large-sample evidence on the pre-ASU state of expense disaggregation, analyze whether it appears to provide decision-useful information, and explore differences across disaggregation components. Our findings suggest that disaggregation is relatively common, increasing over time, and correlated with demand for disclosure, disclosure incentives, and firm economics. Further, our evidence is consistent with COGS, but not SG&A, disaggregation providing decision-useful information for investors and analysts, and these benefits accrue via improved processing of expense-related news. Overall, our evidence suggests that not all disaggregation is equal. We also identify novel, large-sample expense disaggregation measures for U.S. firms, which are likely useful for evaluating other implications of disaggregation. Data Availability: Data are available from the public sources cited in the text. JEL Classifications: G18; M41; M48.