Related Topics
Articles published on Tail Of Distribution
5723 Search results
Sort by Recency
- Research Article
- 10.1002/cmdc.202501104
- Mar 13, 2026
- ChemMedChem
- Natalie Hanheiser + 3 more
Cationic surfactants, in particular quaternary ammonium compounds (QACs), represent one of the most relevant and broadly applied classes of antiseptics. Their antimicrobial activity arises from electrostatic interactions with microbial membranes, resulting in rapid disruption of the membrane structure. In this review, we summarize currently described mechanistic insights into the membrane-active behavior of QACs, focusing on the interplay between molecular architecture, supramolecular organization, and antimicrobial efficacy. Key structure-activity relationships (SARs) are discussed, including the role of hydrophobic tail length, spacer design, charge density and distribution, and counterion effects. Addressing challenges such as antimicrobial resistance and biocompatibility requires a detailed understanding of SARs and the mechanisms behind resistance development. We therefore further highlight emerging concepts such as cleavable linkers, hybrid systems integrating metal, peptide, or photodynamic modalities, supramolecular aggregates, and the integration of biodegradable materials for the design of surfactants capable of overcoming bacterial resistance and tuning selectivity toward bacterial cells. This review provides an updated framework for developing next-generation QACs that preserve antimicrobial potency while minimizing toxicity and the evolution of resistant microbial populations.
- Research Article
- 10.1111/anzs.70042
- Mar 1, 2026
- Australian & New Zealand Journal of Statistics
- Tingting Tong + 5 more
ABSTRACT Gain‐Probability (G‐P) analysis quantifies the probability that a randomly selected individual from one group scores higher or lower than an individual from another group, by varying magnitudes. While G‐P methods have been developed under normality and various skewed distributions, symmetric heavy‐tailed settings remain largely unexplored, despite their prevalence in finance, environmental science, and other applied domains. We extend the G‐P framework to the broad family of scale mixtures of normal (SMN) distributions, including the Student's t, slash, variance gamma (VG), and Pearson Type VII distributions. Analytical expressions for G‐P under SMN are derived for both independent and matched data, and parameter estimation is performed using the expectation maximisation (EM) algorithm. Simulation studies show that the proposed estimators are accurate, robust to heavy tails, and improve with sample size, with performance most sensitive to group separation and noise level. An application to daily returns of US and Chinese equity indices demonstrates how G‐P analysis captures distributional tail effects that are overlooked by traditional tests. The results support G‐P analysis under SMN as a practical, interpretable alternative to significance testing, enabling robust inference for symmetric heavy‐tailed data in diverse applied settings.
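The core G-P quantity, the probability that a random draw from one group exceeds a random draw from the other by a given margin, can be estimated empirically. A minimal sketch, assuming heavy-tailed Student-t samples purely for illustration (the SMN fitting and EM estimation described in the abstract are not reproduced):

```python
import numpy as np

def gain_probability(x, y, delta=0.0):
    """Empirical probability that a random draw from x exceeds a
    random draw from y by more than delta (independent groups)."""
    # Compare all pairs at once; fine for moderate sample sizes.
    return float(np.mean(x[:, None] > y[None, :] + delta))

rng = np.random.default_rng(0)
# Two heavy-tailed groups: Student-t with 3 df, one shifted upward.
x = rng.standard_t(df=3, size=2000) + 0.5
y = rng.standard_t(df=3, size=2000)

gp = gain_probability(x, y)            # P(X > Y)
gp_margin = gain_probability(x, y, 2)  # P(X > Y + 2), a larger gain
print(round(gp, 3), round(gp_margin, 3))
```

Varying `delta` traces out the gain-probability curve across magnitudes, which is the object the G-P framework analyzes parametrically under SMN models.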
- Research Article
- 10.1080/10618600.2026.2635637
- Feb 27, 2026
- Journal of Computational and Graphical Statistics
- Jilei Lin + 3 more
Existing methods for spatial data often struggle to capture heterogeneous patterns over complex domains or ignore heterogeneity in the tails of the response distribution. We introduce a quantile spatial model framework that accommodates both spatial nonstationarity and tail heterogeneity through constant and spatially varying coefficients. We propose a smoothed quantile bivariate triangulation (SQBiT) method based on penalized splines on triangulation and convolution smoothing of the quantile loss. The developed method can effectively capture spatial nonstationarity while preserving critical data features such as shape and smoothness across complex and irregular domains. Under some regularity conditions, we show that the proposed estimator can achieve an optimal convergence rate under the L2-norm. In addition, we establish the Bahadur representation of the estimator, which allows us to establish the asymptotic normality for the constant coefficient estimator and construct asymptotic confidence intervals. To improve finite-sample performance, we also consider a wild bootstrap method for constructing confidence intervals. Simulations highlight the numerical and computational advantages of SQBiT over existing methods. Applying SQBiT to U.S. mortality data reveals how socioeconomic factors influence mortality rates differently across spatial regions and distribution tails. An R package implementing SQBiT is available on GitHub.
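The convolution smoothing of the quantile loss that SQBiT relies on has a closed form under a Gaussian kernel. A minimal sketch of that ingredient alone (bandwidth and quantile level are arbitrary choices; the penalized-spline triangulation estimator itself is not implemented here):

```python
import math

def check_loss(u, tau):
    """Standard quantile (pinball) loss rho_tau(u) = u * (tau - 1{u<0})."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def smoothed_check_loss(u, tau, h):
    """Gaussian-kernel convolution smoothing of the check loss.
    Closed form: h*phi(u/h) + u*(tau - Phi(-u/h)); it recovers the
    check loss as the bandwidth h -> 0."""
    z = u / h
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    Phi_neg = 0.5 * (1.0 + math.erf(-z / math.sqrt(2.0)))
    return h * phi + u * (tau - Phi_neg)

tau, h = 0.9, 0.05
for u in (-2.0, -0.01, 0.0, 0.01, 2.0):
    print(f"u={u:+.2f}  check={check_loss(u, tau):.4f}  "
          f"smoothed={smoothed_check_loss(u, tau, h):.4f}")
```

As the bandwidth shrinks, the smoothed loss converges to the standard check loss while remaining differentiable at zero, which is what makes gradient-based fitting tractable.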
- Research Article
- 10.1108/imefm-10-2025-0764
- Feb 25, 2026
- International Journal of Islamic and Middle Eastern Finance and Management
- Paresh Kumar Narayan + 3 more
Purpose This study aims to examine how climate change affects economic growth in Indonesia – the world’s largest Muslim-majority country with a dual banking system – by analyzing the distribution of future growth risks rather than average outcomes. Design/methodology/approach The paper employs a Growth at Risk (GaR) framework, integrating climate variables into ordinary least squares and quantile regression models using quarterly data from 2008Q1 to 2023Q3. This approach allows the assessment of climate impacts across different states of the economic cycle and forecasting horizons. Findings The results reveal a nonlinear and state-dependent relationship between climate change and economic growth. Climate change has its strongest and statistically significant effects at the lower tail of the growth distribution, where climate-induced fiscal stimulus supports economic recovery during downturns. Research limitations/implications The analysis is conducted at the national level and does not explicitly model differential transmission channels between Islamic and conventional banks, which could be explored in future research. Practical implications The findings suggest that climate-responsive fiscal policy can play a stabilizing role during periods of economic weakness, particularly in dual-banking systems where risk-sharing financial structures may enhance resilience to climate shocks. Social implications By highlighting the role of fiscal responses and inclusive financial systems in mitigating climate-related downturns, the study informs policy strategies aimed at protecting livelihoods and supporting sustainable growth in climate-vulnerable, Muslim-majority economies.
Originality/value This study extends the GaR literature by incorporating climate change as a key predictor of growth risk and by contextualizing the analysis within a Muslim-majority, dual-banking economy, offering new insights into the interaction between climate shocks, fiscal policy and financial system structure.
- Research Article
- 10.3390/drones10020154
- Feb 23, 2026
- Drones
- Zhaohan Li + 5 more
The rapid development of the Internet of Vehicles (IoV) has significantly increased data transmission demands, frequently causing backhaul congestion and service delays in traditional static cellular networks. To address these challenges, this paper proposes a joint position deployment and hierarchical caching optimization solution for unmanned aerial vehicle (UAV)-assisted vehicle-to-vehicle (V2V) caching networks towards dynamic vehicle distribution. Firstly, a hierarchical caching architecture is proposed, where the file library is classified into core, supplementary, and infrequent layers based on file popularity, applying deterministic caching, probabilistic caching, and no-caching strategies, respectively, to achieve efficient utilization of caching resources. Secondly, the mathematical expressions for the caching hit rate and service delay are derived, and a joint optimization problem is formulated to minimize service delay, addressing the dual challenges of hierarchical caching and UAV deployment. To address this problem, a decoupled iterative method is designed, decomposing the original problem into hierarchical caching and UAV deployment subproblems. Based on this, a grid search–tail distribution function fitting-based approach and a K-means clustering-based approach are proposed to optimize these subproblems, respectively. Finally, simulation results demonstrate that, compared to existing strategies, the proposed strategy effectively reduces service latency under multi-vehicle distribution while maintaining high cache file coverage. Under typical conditions, the proposed strategy reduced average service latency by 10% to 20%, thereby validating its effectiveness and superiority.
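The three-layer caching idea can be sketched with a toy popularity model. All numbers below (Zipf exponent, layer cut-offs, caching probability) are hypothetical, not the paper's optimized values:

```python
import numpy as np

def zipf_popularity(n_files, alpha=0.8):
    """Zipf-like request popularity over a file library (illustrative)."""
    ranks = np.arange(1, n_files + 1)
    p = ranks ** (-alpha)
    return p / p.sum()

def layered_hit_rate(pop, core_frac=0.1, supp_frac=0.3, supp_cache_prob=0.5):
    """Expected cache hit rate for a three-layer policy: core files are
    always cached (deterministic), supplementary files are cached with
    probability supp_cache_prob (probabilistic), infrequent files never.
    Layer cut-offs here are hypothetical, not the paper's values."""
    n = len(pop)
    n_core = int(core_frac * n)
    n_supp = int(supp_frac * n)
    hit = pop[:n_core].sum()                                     # core layer
    hit += supp_cache_prob * pop[n_core:n_core + n_supp].sum()   # supplementary
    return float(hit)

pop = zipf_popularity(1000)
flat = layered_hit_rate(pop, core_frac=0.0, supp_frac=0.4, supp_cache_prob=1.0)
tiered = layered_hit_rate(pop)
print(f"cache-everything-top-40% hit rate: {flat:.3f}, tiered hit rate: {tiered:.3f}")
```

The tiered policy trades some hit rate for a much smaller expected cache footprint, which is the resource-efficiency argument behind the hierarchical architecture.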
- Research Article
- 10.1371/journal.pbio.3003653
- Feb 13, 2026
- PLoS biology
- Pietro Pollo + 7 more
Biological differences between males and females are pervasive. Researchers often focus on sex differences in the mean or, occasionally, in variation, although other measures can be useful for biomedical and biological research. For instance, differences in skewness (asymmetry of a distribution), kurtosis (heaviness of a distribution's tails), and correlation (relationship between two variables) might be crucial to improve medical diagnosis and to understand natural processes. Yet, there are currently no meta-analytic ways to measure differences in these metrics between two groups. We propose three effect size statistics to fill this gap: Δsk, Δku, and ΔZr, which measure differences in skewness, kurtosis, and correlation, respectively. Besides presenting the rationale for the calculation of these effect size statistics, we conducted a simulation to explore their properties and used a large dataset of mice traits to illustrate their potential. For example, in our case study, we found that females show, on average, a greater correlation between fat mass and heart weight than males. Although calculating Δsk, Δku, and ΔZr will require large sample sizes of individual data, technological advancements in data collection create increased opportunities to use these effect size statistics. Importantly, Δsk, Δku, and ΔZr can be used to compare any two groups, allowing a new generation of meta-analyses that explore such differences and potentially leading to new insights in multiple fields of study.
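Point estimates in the spirit of Δsk, Δku, and ΔZr can be sketched from moment statistics; the paper's exact estimators and their sampling variances (which a meta-analysis would need) are not reproduced, and the simulated mouse-like traits below are illustrative only:

```python
import numpy as np

def skewness(x):
    """Sample skewness (third standardized moment)."""
    x = np.asarray(x, float)
    m, s = x.mean(), x.std()
    return float(((x - m) ** 3).mean() / s ** 3)

def excess_kurtosis(x):
    """Sample excess kurtosis (fourth standardized moment minus 3)."""
    x = np.asarray(x, float)
    m, s = x.mean(), x.std()
    return float(((x - m) ** 4).mean() / s ** 4 - 3.0)

def fisher_z(r):
    """Fisher's z-transform of a correlation coefficient."""
    return float(np.arctanh(r))

rng = np.random.default_rng(1)
n = 5000
# Simulated traits: females given a right-skewed fat mass and a
# stronger fat-mass/heart-weight link than males (made-up parameters).
fat_f = rng.lognormal(0.0, 0.4, n)
heart_f = 0.8 * fat_f + rng.normal(0, 0.3, n)
fat_m = rng.normal(1.1, 0.45, n)
heart_m = 0.3 * fat_m + rng.normal(0, 0.5, n)

d_sk = skewness(fat_f) - skewness(fat_m)
d_ku = excess_kurtosis(fat_f) - excess_kurtosis(fat_m)
r_f = np.corrcoef(fat_f, heart_f)[0, 1]
r_m = np.corrcoef(fat_m, heart_m)[0, 1]
d_zr = fisher_z(r_f) - fisher_z(r_m)
print(f"d_sk={d_sk:.2f}  d_ku={d_ku:.2f}  d_zr={d_zr:.2f}")
```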
- Research Article
- 10.1038/s41598-026-37630-8
- Feb 13, 2026
- Scientific reports
- Yunzhe Wang + 3 more
The safety operation and maintenance of mega-structures in China are increasingly challenged by rare but high-impact structural failures. To address the difficulty in accurately estimating the low-probability tail of the response distribution, we propose a novel framework centered on the Tail-Sensitive Global Learning (TS-GL) algorithm. Unlike existing active learning-based Gaussian process (AL-GP) metamodels, TS-GL introduces a tail-focused search mechanism with a newly designed weight function, significantly improving the estimation of one-sided tail probabilities. To ensure computational practicality, the effect of different activation functions on iteration efficiency is also examined. The method is validated on a classical nonlinear system, the bond-slip relationship between steel and concrete, which is relevant to anchorage connections in subway tunnels. Insufficient anchorage length can cause excessive bolt slip and deformation, leading to gaps and leakage in underground structures. TS-GL outperforms AL-GP in both accuracy and efficiency when quantifying such rare events, providing a practical tool for uncertainty analysis in critical infrastructure.
- Research Article
- 10.1371/journal.pone.0338833.r006
- Feb 13, 2026
- PLOS One
- Christopher Boon Sung Teh + 4 more
Weather generators are crucial for agricultural modeling in tropical regions, where historical weather data are often scarce or incomplete. This study introduces MsiaGen, a stochastic daily weather generator for Malaysia’s tropical climate, emphasizing computational simplicity, site-specific parameterization, and practical applicability. The model was calibrated using data from 12 sites across Malaysia and validated at 11 independent sites, encompassing diverse climatic conditions from Peninsular to East Malaysia. MsiaGen uses a Skew Normal distribution for air temperatures to capture observed asymmetries, particularly in maximum temperatures, while utilizing Weibull and Gamma distributions for wind speed and rainfall, respectively. The generator incorporates first-order autoregressive processes for temporal dependencies and a two-state Markov chain for wet/dry day sequencing. Validation showed strong monthly-scale performance, with mean absolute errors below 1.2% for temperatures, 2.4% for wind speed, and 1.8% for rainfall, along with near-zero model bias and high overall model agreement scores (Kling-Gupta Efficiency metric >0.8). Daily scale validation using quantile-quantile plots revealed excellent agreement for temperature distributions, with points clustering tightly along the identity line within common ranges (21–28 °C for minimum and 25–39 °C for maximum temperatures). Empirical cumulative distribution function analysis indicated that 85 ± 10% of daily temperature errors were within ±2.0 °C, 94 ± 6% of wind speed errors were within ±1.0 m s⁻¹, and 83 ± 5% of rainfall errors were within ±20 mm. However, performance declined for extreme events, particularly rainfall exceeding 80–100 mm and wind speeds above 3–4 m s⁻¹, likely due to distribution tail limitations and short observational records (3–5 years).
Further validation using oil palm yield simulations at two independent plantation sites demonstrated that generated weather reproduced temporal dynamics across multiple planting densities. MsiaGen offers a practical and data-efficient tool for tropical agricultural research.
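The two stochastic building blocks named in the abstract, a two-state Markov chain for wet/dry occurrence and a first-order autoregressive temperature process, can be sketched as follows; all parameter values are invented, and the skewed innovation is a crude stand-in for the Skew Normal draw rather than MsiaGen's fitted form:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-state Markov chain for wet/dry day occurrence.
# Transition probabilities are illustrative, not MsiaGen's fitted values.
P_WET_GIVEN_DRY, P_WET_GIVEN_WET = 0.35, 0.65

def simulate_wet_dry(n_days):
    """Simulate a wet(1)/dry(0) day sequence from a first-order chain."""
    state, seq = 0, []
    for _ in range(n_days):
        p = P_WET_GIVEN_WET if state == 1 else P_WET_GIVEN_DRY
        state = int(rng.random() < p)
        seq.append(state)
    return np.array(seq)

def simulate_tmax(n_days, mean=32.0, rho=0.7, sigma=1.5, skew=-0.8):
    """AR(1) maximum temperature with skewed innovations (made-up
    parameters). The skewed draw here is a standardized difference of
    scaled exponentials, a stand-in for the Skew Normal."""
    innov = rng.exponential(1.0, n_days) * (1 + skew) - rng.exponential(1.0, n_days)
    innov = (innov - innov.mean()) / innov.std()
    t = np.empty(n_days)
    t[0] = mean
    for i in range(1, n_days):
        t[i] = mean + rho * (t[i - 1] - mean) + sigma * innov[i]
    return t

wet = simulate_wet_dry(3650)
tmax = simulate_tmax(3650)
wet_frac = wet.mean()
lag1 = np.corrcoef(tmax[:-1], tmax[1:])[0, 1]
print(f"wet-day fraction: {wet_frac:.2f}, tmax lag-1 autocorr: {lag1:.2f}")
```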
- Research Article
- 10.1287/mnsc.2023.03236
- Feb 12, 2026
- Management Science
- Chenxu Li + 2 more
This paper proposes and implements a novel nonparametric method for estimating the state price density (SPD) over the entire state space, including the tails. This SPD estimator achieves shape consistency properties in theory, particularly at the tails. Monte Carlo simulations demonstrate the accuracy and robustness of our method. In particular, our estimator accurately captures the risk-neutral tail distribution, which is often underestimated by existing alternative methods. In an empirical analysis based on Standard and Poor’s 500 options data, we evaluate the out-of-sample performance of our SPD estimation method and demonstrate that the estimates can serve as effective indicators for market conditions and exhibit predictive power for asset returns. Combining these perspectives, we suggest that our SPD estimator serves as a valuable tool for risk management and asset pricing. This paper was accepted by Kay Giesecke, finance. Funding: The research of C. Li was supported by the Guanghua School of Management, the Center for Statistical Science, the High-Performance Computing Platform, and the Key Laboratory of Mathematical Economics and Quantitative Finance (Ministry of Education) at Peking University as well as the National Natural Science Foundation of China [Grant 72173003]. The research of X. Song was supported by the Guanghua School of Management, the Center for Statistical Science, and the Key Laboratory of Mathematical Economics and Quantitative Finance (Ministry of Education) at Peking University as well as the National Natural Science Foundation of China [Grants 72373007 and 72333001]. The research of Y. Wan was supported by the School of Management Science and Engineering and the Coordinated Innovation Center for Computable Modeling in Management Science at Tianjin University of Finance and Economics as well as the Tianjin Municipal Education Commission [Grant 2022SK188].
Supplemental Material: The online appendix and data files are available at https://doi.org/10.1287/mnsc.2023.03236 .
- Research Article
- 10.1371/journal.pclm.0000808
- Feb 4, 2026
- PLOS Climate
- María Dolores Gadea Rivas + 1 more
Climate change exhibits substantial variability across both space and time, requiring mitigation and adaptation strategies that effectively address challenges at global and local scales. Accurately capturing this variability is essential for assessing climate impacts, attributing underlying causes, and formulating effective policies. This study introduces simple yet robust quantitative methods to detect local warming, distinguish among different types of warming, and compare warming trends across contiguous U.S. states using the concept of warming dominance. In contrast to traditional approaches that focus solely on average temperatures, our analysis rigorously and systematically examines the entire distribution of daily temperatures for the contiguous United States from 1950 to 2021. The results reveal that, while 44% of states show no statistically significant warming based on average temperature trends, a much larger proportion, 84%, exhibit warming when assessing various quantiles of the distribution. Statistical significance is evaluated using HAC-robust t-tests at the 5% significance level (95% confidence), ensuring that detected warming reflects genuine shifts rather than random variability. These findings underscore the substantial heterogeneity in warming patterns: some states, such as those located in the so-called “Warming Hole,” display no evidence of warming at any quantile; others experience more pronounced warming in either the lower or upper tails of the temperature distribution; and a few states show consistent warming across all quantiles. The study concludes by identifying which states exhibit warming dominance over others and which appear comparatively less affected. These insights are particularly important in the United States, where climate policy is formulated and implemented at both federal and state levels.
- Research Article
- 10.1098/rsos.251853
- Feb 4, 2026
- Royal Society Open Science
- Eva Viviani + 2 more
Abstract Human language is characterized by productivity, that is, the ability to use words and structures in novel contexts. How do learners acquire these productive systems? Under a discriminative learning approach, language learning involves using cues to predict and discriminate linguistic outcomes and ‘generalization’ involves dissociating idiosyncratic irrelevant cues in favour of informative, invariant cues. The current work tests the predictions of this account using the learning of spatial adpositions as a test case. Spatial adpositions describe the location of one object in relation to another (e.g. English prepositions ‘above’ and ‘below’) and may occur in reversible sentences, such as the picture is above the window; generalization involves using these terms in novel contexts, such as with unattested nouns. Computational simulations implementing an error-driven, discriminative learning process demonstrate that broadening the irrelevant cues associated with the stimuli may boost the discovery of invariant cues, i.e. the association between the adposition and the spatial relation. We explored the predictions of these models in human learners by adapting a training paradigm introduced by Hsu & Bishop (2014, PeerJ 2, e656; doi:10.7717/peerj.656) to teach typically developing 7- to 8-year-olds spatial adpositions in an unfamiliar language (Japanese) using a computerized learning game. We manipulated the cue variability by comparing groups of children trained with more variable sentences (HV) with those trained with repetition of the same sentences (LV). A third condition (skew) tested whether learning and generalization are boosted when learning from a heavy-tailed distribution that more closely resembles that of natural language. We examined the following predictions: (i) for sentences with novel nouns, participants trained with variable sentences will show better performance (i.e.
stronger generalization) than those trained with repeated sentences; (ii) in contrast, those trained with repeated sentences will show stronger performance in training itself (i.e. stronger item learning); and (iii) training with a heavy-tailed distribution (more closely resembling the natural one) will lead to the strongest item learning and generalization. In our main analyses, for (i) we found clear evidence that the HV condition outperformed the LV condition in generalization, in line with predictions of the computational model when trained on the same datasets. However, for (ii) the frequency advantage was not clearly observed, and for (iii) skewed input did not provide an additional benefit over variability (with Bayesian evidence for the null for generalization). Interestingly, the fact that the skew condition did not outperform the high-variability condition was in fact consistent with the computational modelling, although skew has been found to be supportive in other domains. Finally, exploratory analyses indicate interesting individual differences in how learners respond to variability and frequency in their input, which may reflect their current environment as well as learner characteristics.
- Research Article
- 10.12688/f1000research.172616.1
- Feb 3, 2026
- F1000Research
- Abbas Najim Salman + 1 more
Over the past decade, the statistical and reliability literature has seen an enormous rise in the number of new probability distributions, many of them designed to integrate, enhance, or extend traditional models. This rise reflects a growing emphasis on empirical integrity, flexibility, and adaptability, especially in modeling income distributions, reliability problems, and longevity data. In this paper, more general and flexible distributions are provided by employing a technique that generates the T−R {Y} family of distributions, together with a brief overview of this family. The technique can be tailored to particular data types, including strongly left- or right-skewed and thin- or heavy-tailed distributions, or used to develop new distributions that are highly applicable and flexible. In this context, the technique produces a distinct member of a new sub-family called the IDNAI distribution, referred to as the Lomax–Rayleigh {Exponential} distribution. The most important statistical and mathematical characteristics of the new family are discussed. To estimate the parameters of the new distribution, two estimation techniques are presented: the maximum likelihood method (MLE) and the least squares method (LS). Simulation studies were constructed to analyze the effectiveness of the suggested estimation techniques; the comparison indicated that the maximum likelihood method performed better than the least squares approach in terms of bias and mean squared error. However, an application to a real dataset showed that, according to several information criteria, the new distribution fit the data more effectively than competing classical and modern distributions.
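The MLE-versus-least-squares comparison can be illustrated on a much simpler model than the Lomax–Rayleigh {Exponential} distribution, which is not implemented here; the Rayleigh distribution below is a stand-in, with arbitrary sample size and replication count:

```python
import numpy as np

rng = np.random.default_rng(6)

def rayleigh_mle(x):
    """Closed-form MLE of the Rayleigh scale: sigma^2 = mean(x^2)/2."""
    return float(np.sqrt((x ** 2).mean() / 2.0))

def rayleigh_ls(x):
    """Least-squares scale estimate: regress -ln(1 - F_emp) on x^2/2
    through the origin, since F(x) = 1 - exp(-x^2 / (2 sigma^2))."""
    x = np.sort(x)
    n = len(x)
    F = (np.arange(1, n + 1) - 0.5) / n      # plotting positions
    y = -np.log1p(-F)
    z = x ** 2 / 2.0
    inv_sigma2 = (z @ y) / (z @ z)           # slope through the origin
    return float(np.sqrt(1.0 / inv_sigma2))

sigma_true, reps, n = 2.0, 300, 80
err_mle, err_ls = [], []
for _ in range(reps):
    x = rng.rayleigh(sigma_true, n)
    err_mle.append((rayleigh_mle(x) - sigma_true) ** 2)
    err_ls.append((rayleigh_ls(x) - sigma_true) ** 2)
mse_mle, mse_ls = np.mean(err_mle), np.mean(err_ls)
print(f"MSE  MLE: {mse_mle:.4f}   LS: {mse_ls:.4f}")
```

On runs of this sketch the MLE tends to show the lower mean squared error, mirroring the direction of the abstract's simulation finding, though for the paper's own distribution the comparison would need its actual likelihood.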
- Research Article
- 10.17016/feds.2021.072r1
- Feb 1, 2026
- Finance and Economics Discussion Series
- Nathan Blascak + 1 more
Using linked mortgage application and credit bureau data, we document the existence of unconditional and conditional gender gaps in the distribution of total credit card limits for sole mortgage applicants. We estimate that male borrowers have approximately $1,300 higher total credit card limits than female borrowers. This gap is primarily driven by a large gender gap in the right tail of the limit distribution. At the median and in the left tail of the total limit distribution, women’s limits are approximately $100 to $300 higher than men’s. Results from a Kitagawa-Oaxaca-Blinder decomposition show that 87 percent of the gap is explained by differences in the effect of observed characteristics, while 10 percent of the difference is explained by differences in the levels of observed characteristics. The gap is persistent across geographies but has varied over time. Overall, these gender gaps are small in economic magnitude and have shifted over time in favor of women.
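A minimal two-fold Kitagawa-Oaxaca-Blinder decomposition can be sketched with ordinary least squares; the data, variable names, and group labels below are hypothetical, not the linked mortgage-credit bureau data:

```python
import numpy as np

def ols(X, y):
    """OLS coefficients with an intercept column prepended."""
    Xc = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    return beta

def oaxaca_blinder(X_a, y_a, X_b, y_b):
    """Two-fold decomposition of mean(y_a) - mean(y_b) into an
    'endowments' part (differences in X levels, priced at group-b
    coefficients) and a 'coefficients' part (differences in returns)."""
    beta_a, beta_b = ols(X_a, y_a), ols(X_b, y_b)
    xbar_a = np.concatenate([[1.0], X_a.mean(axis=0)])
    xbar_b = np.concatenate([[1.0], X_b.mean(axis=0)])
    endow = (xbar_a - xbar_b) @ beta_b
    coeff = xbar_a @ (beta_a - beta_b)
    return float(endow), float(coeff)

rng = np.random.default_rng(3)
n = 4000
# Hypothetical groups: group A has higher credit scores (levels) AND a
# higher return to score (coefficients) than group B.
score_a = rng.normal(700, 40, n); score_b = rng.normal(690, 40, n)
limit_a = -10000 + 25 * score_a + rng.normal(0, 500, n)
limit_b = -10000 + 23 * score_b + rng.normal(0, 500, n)
endow, coeff = oaxaca_blinder(score_a[:, None], limit_a,
                              score_b[:, None], limit_b)
gap = limit_a.mean() - limit_b.mean()
print(f"gap={gap:.0f}  endowments={endow:.0f}  coefficients={coeff:.0f}")
```

The two parts sum exactly to the raw mean gap, which is the accounting identity the abstract's 87/10 percent split is based on.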
- Research Article
- 10.1063/5.0305798
- Feb 1, 2026
- Physics of Plasmas
- Alireza Ganjovi + 5 more
In this work, the influences of argon dilution on energy redistribution in a low-pressure RF-ICP methane plasma discharge are studied. A combination of Optical Emission Spectroscopy, Residual Gas Analysis, and x-ray Photoelectron Spectroscopy is employed to correlate plasma energetics with gas-phase chemistry and film composition. Argon addition is shown to increase electron density while lowering electron, excitation, and vibrational temperatures, thereby redistributing the absorbed power and reducing the high-energy tail of the electron energy distribution that drives bond scission. As a result, methane conversion and hydrogen yield decline, which is consistent with a reduction in vibrationally primed targets rather than electron scarcity. Importantly, the methane conversion rate varies nonlinearly with argon concentration: small fractions (∼15% Ar) can induce disproportionately higher conversion rates compared to simple dilution expectations. Thus, the active role of argon in shaping plasma reactivity through metastable-driven pathways is revealed. On the other hand, at the surface of the sample holder inside the RF-ICP reactor, modest argon additions improve film chemistry by lowering oxygen incorporation and reducing oxygenated functionalities, which is attributable to gentle Ar+/Ar* sputter-cleaning during growth. Taken together, these results define a practical operating window at low-to-moderate argon fractions (≈25%–30%), sufficient to stabilize the discharge and enhance film purity without excessively suppressing vibrational excitation. In addition, this study highlights the broader technological implications of low-pressure Ar/CH4 plasma discharges, offering practical guidelines for optimizing hydrogen production and advanced carbon materials in industrial plasma processes.
- Research Article
- 10.3390/risks14020026
- Jan 31, 2026
- Risks
- Zhiyong (John) Liu + 3 more
Businesses increasingly rely on algorithmic systems and machine learning models to make operational decisions about customers, employees, and counterparties. These “algorithmic operations” can improve efficiency but also concentrate liability in a small number of technically complex, drifting models. Algorithmic operations liability (AOL) risk arises when these systems generate legally cognizable harm. We develop a simple taxonomy of AOL risk sources: model error and bias, data quality failures, distribution shift and concept drift, miscalibration, machine learning operations (MLOps) and integration failures, governance gaps, and ecosystem-level externalities. Building on this taxonomy, we outline a simple analysis of AOL risk pricing using some basic actuarial building blocks: (i) a confusion-matrix-based expected-loss model for false positives and false negatives; (ii) drift-adjusted error rates and stress scenarios; and (iii) credibility-weighted rates when insureds have limited experience data. We then introduce capital and loss surcharges that incorporate distributional uncertainty and tail risk. Finally, we link the framework to AOL risk controls by identifying governance, documentation, model-monitoring, and MLOps practices that both reduce loss frequency and severity and serve as underwriting prerequisites.
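The confusion-matrix-based expected-loss building block (i) can be sketched directly; all rates, costs, and the drift multiplier below are hypothetical:

```python
def expected_loss(n_decisions, fp_rate, fn_rate, pos_prev,
                  cost_fp, cost_fn, drift_multiplier=1.0):
    """Confusion-matrix-based expected loss for an algorithmic decision
    system: false positives among negatives and false negatives among
    positives, each priced at a per-error cost. The drift multiplier
    inflates both error rates as a crude stress adjustment in the
    spirit of the drift-adjusted rates in (ii). All inputs are
    hypothetical, not calibrated values."""
    fp = n_decisions * (1 - pos_prev) * min(fp_rate * drift_multiplier, 1.0)
    fn = n_decisions * pos_prev * min(fn_rate * drift_multiplier, 1.0)
    return fp * cost_fp + fn * cost_fn

base = expected_loss(100_000, fp_rate=0.02, fn_rate=0.05, pos_prev=0.1,
                     cost_fp=50.0, cost_fn=400.0)
stressed = expected_loss(100_000, fp_rate=0.02, fn_rate=0.05, pos_prev=0.1,
                         cost_fp=50.0, cost_fn=400.0, drift_multiplier=1.5)
print(f"base expected loss: {base:,.0f}  drift-stressed: {stressed:,.0f}")
```

The asymmetric costs make the false-negative channel dominate here, which is why monitoring for drift in the miss rate matters more than in the false-alarm rate under this parameterization.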
- Research Article
- 10.1017/s0266466625100315
- Jan 30, 2026
- Econometric Theory
- Vladislav Morozov
We develop a methodology for conducting inference on extreme quantiles of unobserved individual heterogeneity (e.g., heterogeneous coefficients and treatment effects) in panel data and meta-analysis settings. Inference is challenging in such settings: only noisy estimates of heterogeneity are available, and central limit approximations perform poorly in the tails. We derive a necessary and sufficient condition under which noisy estimates are informative about extreme quantiles, along with sufficient rate and moment conditions. Under these conditions, we establish an extreme value theorem and an intermediate order theorem for noisy estimates. These results yield simple optimization-free confidence intervals (CIs) for extreme quantiles. Simulations show that our CIs have favorable coverage and that the rate conditions matter for the validity of inference. We illustrate the method with an application to firm productivity differences across areas of varying population density. By analyzing the left tails of the productivity distributions, we find no evidence of stronger firm selection in more densely populated areas.
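The paper's central question, when noisy estimates remain informative about extreme quantiles, can be illustrated with a toy simulation (distributions and noise scales are arbitrary choices, not the paper's conditions):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
# Heterogeneous "true" unit-level effects with heavy tails.
theta = rng.standard_t(df=4, size=n)
# Noisy estimates: effect plus estimation noise (toy noise scales).
theta_hat_small = theta + rng.normal(0, 0.05, n)
theta_hat_big = theta + rng.normal(0, 2.0, n)

q = 0.999
true_q = np.quantile(theta, q)
q_small = np.quantile(theta_hat_small, q)
q_big = np.quantile(theta_hat_big, q)
print(f"true q{q}: {true_q:.2f}  small-noise: {q_small:.2f}  "
      f"large-noise: {q_big:.2f}")
```

Small estimation noise barely moves the extreme quantile of the estimates, while large noise distorts it, which is a loose analogue of the informativeness condition the paper formalizes.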
- Research Article
- 10.1038/s41597-026-06633-5
- Jan 28, 2026
- Scientific data
- Keran Li + 11 more
Deep learning has become a key tool for carbonate thin-section image analysis. However, the lack of large public datasets limits reproducibility and fair model comparison. To address this, we present DeepCarbonate, a cleaned and standardized benchmark dataset. Samples were collected from the Ediacaran Dengying, Cambrian Longwangmiao, and Triassic Leikoupo and Jialingjiang Formations in the Sichuan Basin, China, and the Cretaceous Mishrif Formation in the UAE. The dataset was curated by petroleum geology experts; invalid images (blurred, low brightness, or corrupted) were removed through expert voting and 2σ filtering, and all images were reorganized in the ImageNet format. DeepCarbonate contains 22 lithological categories, hierarchically organized by optical mode (PPL, XPL, R) and split into train, validation, and test subsets, ensuring standardized benchmarking and reproducible experiments. Using PyTorch with CUDA acceleration, we evaluated ResNet, VGG, DenseNet, MobileNet, and EfficientNet models under baseline, ablation, long-tailed distribution, and balanced Top 9 subset experiments. Results highlight the dataset's value as a robust benchmark for carbonate petrography research and applications.
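The 2σ cleaning step mentioned in the abstract can be sketched on a single image-quality statistic; the brightness values below are simulated, and the real pipeline combined this with expert voting:

```python
import numpy as np

def two_sigma_filter(brightness):
    """Flag images whose mean brightness lies more than two standard
    deviations from the dataset mean, a simple version of a 2-sigma
    cleaning step for dropping too-dark or corrupted captures."""
    b = np.asarray(brightness, float)
    mu, sd = b.mean(), b.std()
    return np.abs(b - mu) <= 2 * sd

rng = np.random.default_rng(5)
good = rng.normal(120, 10, 500)   # typical brightness values (0-255 scale)
dark = np.full(5, 5.0)            # nearly black, corrupted captures
brightness = np.concatenate([good, dark])
keep = two_sigma_filter(brightness)
print(f"kept {int(keep.sum())} of {len(keep)} images; "
      f"all dark ones dropped: {not keep[-5:].any()}")
```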
- Research Article
- 10.35848/1347-4065/ae334a
- Jan 23, 2026
- Japanese Journal of Applied Physics
- Eunseok Oh + 1 more
Abstract A Long Short-Term Memory (LSTM) neural-network framework is presented to predict the temporal evolution of threshold-voltage (Vt) distributions and estimate retention-induced lifespan in 3D NAND Flash. Rather than modeling individual mechanisms, the model learns percentile-wise Vt trajectories from large-scale simulation data. The 3% Vt of PV7, which is effective in determining page-level lifespan, is predicted up to four steps ahead with high R² and negligible deviation from ground truth. Full-distribution prediction is obtained by grouping percentiles in 5% increments, enabling construction of the complete CDF. Lifespan is then computed for arbitrary failure-defining percentiles and reference voltages through error-rate evaluation. Page-level lifetime distributions from the model reproduce the simulation mean and standard deviation with high fidelity. Although accuracy modestly degrades with increasing prediction horizon and at distribution tails due to recursive rollout, performance is sufficient for practical reliability assessment.
- Research Article
- 10.1515/jiip-2025-0051
- Jan 21, 2026
- Journal of Inverse and Ill-posed Problems
- Adetokunbo I Fadahunsi + 2 more
Abstract In this work, a unified approach for evaluating European Put and Call options as well as Barrier options from exponential moments, the moment-recovered Laplace transform (MR-LT) inversion method, is introduced (see also [24] and [25]). In addition, the insurance stop-loss premium and the bivariate probability density function and corresponding tail distribution of aggregate claims are approximated. Several examples are considered to illustrate the accuracy of the newly defined approximations.