Articles published on Probability Density
47,015 search results
- New
- Research Article
- 10.1088/1361-6528/ae308c
- Jan 6, 2026
- Nanotechnology
- F R V Araújo + 6 more
We theoretically investigate the electronic transport properties of three-terminal ballistic junctions based on bilayer phosphorene nanoribbons subjected to a uniform perpendicular electric field. We exploit the intrinsic anisotropy of phosphorene by considering different edge terminations for the nanoribbons that form such junctions, namely normal armchair, normal zigzag, skewed armchair, and skewed zigzag. Unlike bilayer graphene, Bernal-stacked bilayer phosphorene exhibits a semiconductor-to-metal transition when inversion symmetry is broken, for example by an applied perpendicular electric field, whereas AB-stacked bilayer graphene shows a gap opening and a metal-to-semiconductor transition instead. By adopting this electric-field-controlled band gap strategy for bilayer phosphorene, we demonstrate the possibility of modulating the current flowing through bilayer-BP-based Y-junctions, redirecting it to one or both output terminals under specific conditions. The role played by the different Y-junction constituents in the electron conductance and probability density currents is also explored, and the results are interpreted in light of the nanoribbons' dispersion relations. In this sense, the proposed system acts as a nanoscale switching device, and its current modulation effect can be used to develop phosphorene-based logic gates with a large on/off current ratio, benefiting from the material's high carrier mobility.
- New
- Research Article
- 10.1016/j.inffus.2025.103423
- Jan 1, 2026
- Information Fusion
- Yang Jiao + 2 more
Marginal distributionally robust fusion of probability density functions
- New
- Research Article
- 10.1016/j.ymssp.2025.113749
- Jan 1, 2026
- Mechanical Systems and Signal Processing
- Fei-Fan Meng + 2 more
Stochastic dynamic response analysis based on the probability density function expression of MDOF nonlinear system response
- New
- Research Article
- 10.1016/j.marpolbul.2025.118734
- Jan 1, 2026
- Marine Pollution Bulletin
- Ningte Chen + 4 more
Statistical characterization of low-frequency seabed ambient noise on the southwest Indian ridge based on passive acoustic observations.
- New
- Research Article
- 10.1016/j.epsr.2025.112296
- Jan 1, 2026
- Electric Power Systems Research
- Wanying Zhang + 1 more
Day-ahead hybrid probability density prediction of photovoltaic power generation based on improved similar days and MCQRNN-GAQ
- New
- Research Article
- 10.1016/j.fuel.2025.136077
- Jan 1, 2026
- Fuel
- Zhiwei Huang + 2 more
A numerical toolkit for the ignition delay time and ignition probability density predictions based on instantaneous mixing fields in OpenFOAM
- New
- Research Article
- 10.1109/lsp.2025.3640068
- Jan 1, 2026
- IEEE Signal Processing Letters
- Miroslav Kárný
Occam’s Razor in Pooling of Probability Densities
- New
- Research Article
- 10.5267/j.ijiec.2025.9.006
- Jan 1, 2026
- International Journal of Industrial Engineering Computations
- Bo Wang + 7 more
Within a three-level engineering supply chain comprising the owner, the general contractor, and a subcontractor, we study the owner's optimal quality control strategy under symmetric, asymmetric, and incomplete information. Taking the quality control levels of the general contractor and the subcontractor, the quality supervision level of the general contractor, and the quality supervision level of the owner as decision variables, and modeling each party's cost as a quadratic function, the owner's optimal quality control strategy under symmetric and asymmetric information is derived using the maximum value method and the Lagrange multiplier method. Under incomplete information, the owner's optimal strategy is derived when the probability density functions of the general contractor's quality control level and quality supervision level follow a triangular distribution. The results under the different information conditions are compared through simulation.
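The abstract's incomplete-information step, optimizing the owner's decision against a triangularly distributed contractor quality level under quadratic costs, can be illustrated with a small numerical sketch. All cost forms and parameter values below are assumptions for illustration, not the paper's model:

```python
# Hypothetical sketch: the owner's expected cost when the general
# contractor's quality control level Q is private information, modeled
# (as in the abstract) with a triangular probability density.
import numpy as np
from scipy.stats import triang
from scipy.optimize import minimize_scalar

# Q ~ Triangular on [0.4, 1.0] with mode 0.7 (assumed parameters).
lo, mode, hi = 0.4, 0.7, 1.0
Q = triang(c=(mode - lo) / (hi - lo), loc=lo, scale=hi - lo)

def owner_expected_cost(s, c_s=2.0, c_q=5.0):
    """Quadratic supervision cost plus an expected quality-loss penalty.

    s   : owner's quality supervision level (decision variable)
    c_s : supervision cost coefficient (assumed)
    c_q : quality-loss coefficient (assumed)
    Supervision is assumed to offset a fraction of the quality shortfall.
    """
    expected_shortfall = Q.expect(lambda q: (1.0 - q) ** 2)
    return c_s * s**2 + c_q * (1.0 - s) * expected_shortfall

res = minimize_scalar(owner_expected_cost, bounds=(0.0, 1.0), method="bounded")
print(f"optimal supervision level s* = {res.x:.3f}, expected cost = {res.fun:.3f}")
```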
- New
- Research Article
- 10.5267/j.ijiec.2025.12.002
- Jan 1, 2026
- International Journal of Industrial Engineering Computations
- Jian Ge + 2 more
Precision quantification is a core metric in industrial engineering (e.g., production quality control, sensor data calibration, automated assembly accuracy), where the traditional assumption of isotropic (homoscedastic) error variances often fails to capture real-world heteroscedastic characteristics (e.g., uneven measurement errors in assembly lines, divergent process variations in mass production). To address this discrepancy, this study develops a rigorous probabilistic framework for precision quantification in heteroscedastic normal populations, leveraging distribution theory and numerical optimization. For the first time, the closed-form probability density function (pdf) and cumulative distribution function (cdf) of the planar precision index (PPI, defined as the modulus of a 2D heteroscedastic normal vector for industrial measurement data) are derived by combining a polar coordinate transformation with modified Bessel function theory. This resolves the long-standing absence of a strict analytical representation of this fundamental distribution, establishing a first-principles mathematical basis for industrial precision assessment. Building on this distributional foundation, a dual-tier computational framework is proposed: (1) a benchmark numerical solver that combines the bisection method (for a convergence guarantee) with Brent's algorithm (for superlinear efficiency) to yield exact precision index values, suitable for offline industrial system calibration; and (2) a theoretically grounded linear approximation derived via moment matching and small-parameter perturbation, optimized for real-time production quality monitoring. This framework advances precision quantification from ideal-assumption-dependent models to data-driven, physics-consistent computation, and extends to complex error structures in industrial scenarios (e.g., correlated sensor data, multimodal process variations). Theoretical analyses demonstrate that within the engineering-relevant variance ratio range (0.3–3.0), the average relative error of the approximation stays below 5%, with a maximum error below 10%, well within industrial acceptance thresholds. Validation via Monte Carlo simulations (100,000 trials) and field tests of automated welding processes confirms the method's accuracy (mean absolute error <0.5%) and robustness. Compared with traditional homoscedastic methods, this approach reduces systematic bias in product qualification rate prediction by up to 23%, providing a reliable tool for industrial quality control and system certification.
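Although the abstract gives no formulas, the quantity it describes, the modulus of a zero-mean 2D heteroscedastic normal vector, has a standard polar-coordinate closed form involving the modified Bessel function I0, and the benchmark-solver tier can be sketched around it. A minimal sketch, assuming zero means and illustrative variances; the function names and parameter values are mine:

```python
# pdf of R = sqrt(X^2 + Y^2) for independent X~N(0,s1^2), Y~N(0,s2^2):
#   f(r) = (r/(s1*s2)) * exp(-a*r^2) * I0(b*r^2),
# with a = (s1^2+s2^2)/(4 s1^2 s2^2) and b = (s2^2-s1^2)/(4 s1^2 s2^2).
import numpy as np
from scipy.special import i0e
from scipy.integrate import quad
from scipy.optimize import brentq

def ppi_pdf(r, s1, s2):
    a = (s1**2 + s2**2) / (4 * s1**2 * s2**2)
    b = (s2**2 - s1**2) / (4 * s1**2 * s2**2)
    # i0e is the exponentially scaled Bessel I0, used to avoid overflow.
    return (r / (s1 * s2)) * i0e(b * r**2) * np.exp(-(a - abs(b)) * r**2)

def ppi_cdf(r, s1, s2):
    return quad(ppi_pdf, 0.0, r, args=(s1, s2))[0]

def ppi_quantile(p, s1, s2):
    """'Benchmark solver' tier: invert the cdf with Brent's method."""
    return brentq(lambda r: ppi_cdf(r, s1, s2) - p, 1e-9, 20 * max(s1, s2))

# Sanity check against Monte Carlo (variance ratio within 0.3-3.0).
rng = np.random.default_rng(0)
s1, s2 = 1.0, 1.6
samples = np.hypot(rng.normal(0, s1, 100_000), rng.normal(0, s2, 100_000))
r95 = ppi_quantile(0.95, s1, s2)
print(f"analytic 95% quantile {r95:.4f} vs MC {np.quantile(samples, 0.95):.4f}")
```

For s1 = s2 the Bessel term reduces to 1 and the density collapses to the familiar Rayleigh pdf, which is a quick consistency check on the closed form.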
- New
- Research Article
- 10.1177/09544070251406328
- Dec 31, 2025
- Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering
- Mingyue Li + 5 more
In order to improve the precision and reliability of the objective evaluation of vehicle vibration, a multi-condition drivability test is designed based on the GB/T 4970-2009 standard. Three-axis vibration and acceleration data in the speed range of 30–120 km/h are collected from urban roads, national highways, and motorways. Combined with an improved empirical mode decomposition (EMD) method, a vibration signal optimization strategy with dynamic frequency band adjustment and dual-threshold screening is proposed. The improved EMD algorithm mitigates mode mixing, establishes a speed-based frequency band adjustment mechanism and a suspension natural-frequency model, and constructs a dual-threshold filtering mechanism based on instantaneous-frequency probability density (≥85%) and an energy entropy criterion, which effectively separates driver operation from noise interference. Compared with other improved methods, the EMD dynamic dual-threshold method has the best correction effect, reducing the correction error of the integrated weighted root-mean-square acceleration to 0.191%. The method overcomes the limitations of traditional test conditions and provides a highly robust analysis framework for vehicle vibration comfort evaluation.
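The dual-threshold screening step can be sketched directly from the abstract's description: an intrinsic mode function (IMF) survives only if its instantaneous-frequency samples concentrate (≥85%) in an expected band and its energy entropy stays below a cutoff. A rough sketch, assuming the IMFs have already been computed by some EMD implementation; the band limits and entropy cutoff are placeholders:

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(imf, fs):
    """Instantaneous frequency (Hz) from the Hilbert analytic signal."""
    phase = np.unwrap(np.angle(hilbert(imf)))
    return np.diff(phase) * fs / (2 * np.pi)

def energy_entropy(imf, n_seg=32):
    """Shannon entropy of the IMF's normalized energy over time windows."""
    seg = np.array_split(imf**2, n_seg)
    e = np.array([s.sum() for s in seg])
    p = e / e.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def screen_imfs(imfs, fs, band=(0.5, 25.0), freq_ratio=0.85, entropy_max=3.2):
    """Return indices of IMFs passing both thresholds (all cutoffs assumed)."""
    keep = []
    for k, imf in enumerate(imfs):
        f = instantaneous_frequency(imf, fs)
        in_band = np.mean((f >= band[0]) & (f <= band[1]))
        if in_band >= freq_ratio and energy_entropy(imf) <= entropy_max:
            keep.append(k)
    return keep
```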
- New
- Research Article
- 10.1080/07853890.2025.2550582
- Dec 31, 2025
- Annals of Medicine
- Di Luo + 4 more
Background: Positive urine cultures are common in urinary stone patients, yet tools for early infection prediction are limited. To address this gap, a user-friendly, dynamic online nomogram was developed to predict the incidence of positive urine cultures in patients with urolithiasis. Methods: A retrospective study was conducted with 3,641 patients with urinary stones at the Second Hospital of Tianjin Medical University. The cohort was split into training and validation sets. Key variables were identified using Least Absolute Shrinkage and Selection Operator (LASSO) regression, while Random Forest and SHapley Additive exPlanations (SHAP) methods were applied to assess their importance. Online nomograms were developed and evaluated through metrics such as area under the curve (AUC), calibration curve, decision curve analysis (DCA), probability density function (PDF), and clinical utility curve (CUC). Results: Multivariate logistic analysis identified four significant predictors: bacteria (BACT), C-reactive protein (CRP), nitrite, and leukocyte esterase (LEU). These were integrated into the nomogram. The AUC values for the overall, training, and validation sets were 90.53%, 91.22%, and 89.06%, respectively. Calibration curves confirmed the nomogram's accuracy, and DCA demonstrated its superior performance over individual metrics. The PDF/CUC method revealed a threshold of 0.168, which effectively distinguished 88.54% of negatives from 78.70% of positives. Conclusions: This dynamic online nomogram accurately predicts positive urine cultures in patients with urolithiasis, helping clinicians identify high-risk individuals, optimize antibiotic use, and improve patient outcomes. Further validation and biomarker exploration are needed to enhance its generalizability.
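The PDF-based threshold selection reads as finding the cut-off where the class-conditional densities of predicted risk separate the two groups. A hedged sketch with synthetic risk scores; the study's model and its 0.168 threshold are not reproduced here:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
risk_neg = rng.beta(2, 8, 2000)   # assumed predicted risks, culture-negative
risk_pos = rng.beta(6, 3, 700)    # assumed predicted risks, culture-positive

kde_neg, kde_pos = gaussian_kde(risk_neg), gaussian_kde(risk_pos)
grid = np.linspace(0, 1, 1001)

# Threshold = crossing point of the two class-conditional densities.
diff = kde_neg(grid) - kde_pos(grid)
crossings = grid[np.where(np.sign(diff[:-1]) != np.sign(diff[1:]))[0]]
thr = crossings[0] if crossings.size else 0.5

spec = np.mean(risk_neg < thr)   # negatives correctly below the cut-off
sens = np.mean(risk_pos >= thr)  # positives correctly at/above it
print(f"threshold {thr:.3f}: {spec:.1%} of negatives vs {sens:.1%} of positives")
```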
- New
- Research Article
- 10.1088/1361-6463/ae2ca0
- Dec 30, 2025
- Journal of Physics D: Applied Physics
- Duu Sheng Ong + 5 more
Accurate modelling of the exceptionally low excess noise observed in AlxGa1-xAs0.56Sb0.44 avalanche photodiodes (APDs) is crucial for optimizing device performance. In this study, the Random Path Length (RPL) model, incorporating the Weibull-Fréchet (WF) distribution function, was used to simulate electron and hole impact ionisation in APDs with non-uniform electric fields. The model successfully reproduces the experimentally measured multiplication gain ⟨M⟩ and excess noise factor F in electron-initiated APDs with compositions Al0.55Ga0.45As0.56Sb0.44, Al0.75Ga0.25As0.56Sb0.44, Al0.85Ga0.15As0.56Sb0.44, and AlAs0.56Sb0.44, while also predicting a steep increase in F for hole-initiated APDs. The results demonstrate that ionisation path length distributions are strongly influenced by electric field strength and alloy composition. The model effectively captures the probability density function (PDF) of ionisation path lengths, which is responsible for the low excess noise. The results also reveal that electron dead space increases as Al composition decreases, an inverse trend compared to the decreasing mean ionisation path length in these alloys. This behaviour is attributed to alloy scattering effects, which become more pronounced in mid-composition alloys.
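The RPL idea itself is easy to sketch: sample ionisation path lengths from a dead-space-shifted distribution and count ionisation events as carriers cross the multiplication region. A hedged Monte Carlo sketch, substituting a plain Weibull for the paper's Weibull-Fréchet distribution (whose form the abstract does not give), ignoring hole ionisation, and using purely illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_path(dead_space, lam, k):
    """Dead space plus a Weibull-distributed enabled path length."""
    return dead_space + lam * rng.weibull(k)

def multiplication(width, dead_space=0.05, lam=0.4, k=2.0):
    """Chain gain for one injected electron crossing a region of `width`.

    Each ionisation event adds one electron-hole pair; secondary electrons
    are pushed on a stack and tracked until they exit (holes ignored).
    """
    gain, stack = 1, [0.0]
    while stack:
        x = stack.pop()
        while True:
            x += sample_path(dead_space, lam, k)
            if x >= width:
                break          # this electron exits the region
            gain += 1          # impact ionisation: one extra pair
            stack.append(x)    # track the secondary electron too
    return gain

M = np.array([multiplication(1.0) for _ in range(20_000)])
mean_M = M.mean()
F = (M**2).mean() / mean_M**2  # excess noise factor F = <M^2>/<M>^2
print(f"<M> = {mean_M:.2f}, F = {F:.2f}")
```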
- New
- Research Article
- 10.3390/cli14010009
- Dec 30, 2025
- Climate
- Yihang Xing + 5 more
With global warming, tropical islands, as regions sensitive to climate change, exhibit new and significant temperature variation characteristics. Using the high-resolution Hainan Island Regional Reanalysis (HNR) dataset and multi-source data, this study analyzes temperature changes on Hainan Island from 1900 to 2022, focusing on spatiotemporal trends, diurnal patterns, and probability distribution shifts. The findings reveal distinct phases of temperature change: weak warming (0.02–0.08 °C/decade) from 1900 to 1949, a warming hiatus from 1950 to 1979, and accelerated warming (0.14–0.28 °C/decade) from 1979 to 2022. Coastal plains (0.11 °C/decade) warm faster than inland mountains (0.08 °C/decade), reflecting oceanic and topographic effects. Diurnal temperature variations show topographic dependence, with a maximum range (8–9 °C) in the north during the warm season and a southwest–northeast gradient in the cold season. Probability density function analysis indicates that the curves for the transitional and cold seasons widen noticeably and shift rightward, reflecting an increasing frequency of extreme temperature events as temperatures rise. The study also finds that the occurrence time of the daily maximum temperature over coastal plains is advancing (−0.05 to −0.1 h/decade). This study fills gaps in understanding tropical island climate responses under global warming and provides new insights into temperature changes over Hainan Island.
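The probability-density comparison behind the "widening and rightward shift" finding can be sketched with kernel density estimates of two periods. A minimal sketch with synthetic daily means standing in for the HNR data; the periods, values, and extreme-day definition are assumptions:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
early = rng.normal(23.5, 2.0, 10_000)   # assumed early-period daily means (degC)
recent = rng.normal(24.3, 2.4, 10_000)  # assumed recent period: warmer, wider

grid = np.linspace(12, 36, 500)
pdf_early, pdf_recent = gaussian_kde(early)(grid), gaussian_kde(recent)(grid)

shift = recent.mean() - early.mean()       # rightward shift of the curve
widening = recent.std() - early.std()      # broadening of the curve
# Frequency of warm extremes relative to the early-period 95th percentile.
p95 = np.quantile(early, 0.95)
print(f"shift {shift:+.2f} degC, widening {widening:+.2f} degC, "
      f"warm-extreme frequency {np.mean(recent > p95):.1%} (early: 5.0%)")
```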
- New
- Research Article
- 10.3390/math14010140
- Dec 29, 2025
- Mathematics
- Bao-Hua Liu + 2 more
This paper proposes a computationally efficient framework for estimating first-passage probabilities of nonlinear structures under stochastic seismic excitations. The methodology integrates Optimal Latinized Partially Stratified Sampling (OLPSS) with the Random Function Spectral Representation Method (RFSRM) to generate a minimal yet optimal set of samples in the low-dimensional input space. Each sample corresponds to a representative nonstationary ground motion time history, which is then used to drive nonlinear dynamic analyses. The extreme values of the structural responses are extracted, and their distribution tails are accurately modeled using the Shifted Generalized Lognormal Distribution (SGLD), whose parameters are efficiently estimated via an extrapolation method. This allows for the construction of the probability density function (PDF) and cumulative distribution function (CDF) of the extreme responses, from which failure probabilities and reliability indices are calculated. The proposed framework is validated against Monte Carlo simulation (MCS) benchmarks using two illustrative examples: a nonlinear single-degree-of-freedom (SDOF) system and a three-story shear building model. The results demonstrate that the proposed method estimates failure probabilities and reliability indices accurately while significantly reducing the number of required simulations, confirming its efficiency for rapid performance-based seismic assessment.
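The tail-modeling step can be illustrated with a simpler stand-in: fit a three-parameter (shifted) lognormal to sampled peak responses and read the failure probability from its survival function, in place of the paper's SGLD-plus-extrapolation procedure. All sample values below are synthetic:

```python
import numpy as np
from scipy.stats import lognorm, norm

rng = np.random.default_rng(4)
# Assumed peak inter-story drift ratios from, e.g., 200 nonlinear analyses.
peaks = 0.002 + rng.lognormal(mean=np.log(0.004), sigma=0.35, size=200)

# Three-parameter fit: shape s, location (shift) loc, scale.
s, loc, scale = lognorm.fit(peaks)
dist = lognorm(s, loc=loc, scale=scale)

threshold = 0.015           # assumed drift capacity
pf = dist.sf(threshold)     # first-passage (failure) probability from the tail
beta = norm.isf(pf)         # corresponding reliability index
print(f"P_f = {pf:.2e}, beta = {beta:.2f}")
```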
- New
- Research Article
- 10.31489/2025n4/53-62
- Dec 29, 2025
- Eurasian Physical Technical Journal
- A.V Chepurnyi + 1 more
Gas turbines are essential for high-power energy generation, but growing demands to reduce NOₓ and CO₂ emissions make traditional combustion chamber design increasingly complex and costly. This work proposes a new modeling paradigm that combines high-fidelity Computational Fluid Dynamics (CFD) with neural network learning to accelerate emission prediction. A CFD model was developed using the Reynolds-averaged Navier-Stokes equations with the k–ε turbulence model and a non-premixed Probability Density Function approach to simulate turbulent methane combustion. NOₓ emissions were calculated post-simulation using the Zeldovich mechanism. Model validation included varying fuel flow, excess air ratio, and wall heat loss. To speed up evaluations, a multilayer perceptron neural network was trained on the CFD results to predict NOₓ and CO₂ emissions from key inputs (fuel rate, air excess, temperature, pressure, cooling). The model achieved high accuracy, with a coefficient of determination (R²) of 0.998 for NOₓ and 0.956 for CO₂ on an independent test set. Results showed good agreement with both experimental data and a network-of-ideal-reactors model using a detailed kinetic scheme of methane combustion (Mech 3.0). This neural network serves as a fast surrogate model for emissions assessment, enabling rapid optimization of low-emission combustor designs. The approach is suitable for digital twins and combustion control systems and is adaptable to alternative fuels such as hydrogen and ammonia.
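The surrogate step is straightforward to sketch: a multilayer perceptron mapping the five inputs to the two emission outputs. A minimal sketch with synthetic stand-in data in place of the CFD results; the architecture, input ranges, and response formulas are assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 2000
X = np.column_stack([
    rng.uniform(0.5, 1.5, n),    # fuel flow rate (normalized)
    rng.uniform(1.0, 2.0, n),    # excess air ratio
    rng.uniform(1500, 2100, n),  # flame temperature (K)
    rng.uniform(5, 25, n),       # pressure (bar)
    rng.uniform(0.0, 0.2, n),    # wall cooling fraction
])
# Stand-in emission responses (loosely Zeldovich-like NOx growth with T).
nox = X[:, 0] * np.exp((X[:, 2] - 1800) / 120) / X[:, 1] + rng.normal(0, 0.05, n)
co2 = X[:, 0] * (2.0 - 0.3 * (X[:, 1] - 1.0)) + rng.normal(0, 0.02, n)
y = np.column_stack([nox, co2])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0),
)
model.fit(X_tr, y_tr)
print(f"R^2 on held-out set: {model.score(X_te, y_te):.3f}")
```

Once trained, each surrogate evaluation costs microseconds rather than a full CFD run, which is what makes design-space sweeps and control-loop use feasible.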
- New
- Research Article
- 10.3390/app16010309
- Dec 28, 2025
- Applied Sciences
- Changkun Ke + 3 more
This study aims to fill a gap in laser detection research regarding how the waveform of the outgoing laser pulse affects detection performance. Based on the mechanism of light-cone beam expansion, three different laser pulse signals are emitted to detect short-range targets. A theoretical model for short-range ranging with these lasers is established, and the effects of emission power, divergence angle, and equivalent root-mean-square noise voltage on circumferential detection accuracy are simulated and experimentally measured. As emission power decreases, both echo amplitude and detection accuracy decline for all three pulsed lasers. Additionally, except for the inverted parabolic function, both echo amplitude and detection accuracy decrease with reduced divergence angle. An increase in equivalent root-mean-square noise voltage broadens the half-width of the probability density distribution for pulsed laser detection. The mean central-position deviations between the ideal and measured detection probability density distributions are +0.01 m, +0.05 m, and +0.02 m for the three waveforms, respectively, with the heavy-tailed function laser pulses showing the best performance and highest fidelity. These results are of significance for the development of laser detection technology.
- New
- Research Article
- 10.1002/fld.70053
- Dec 28, 2025
- International Journal for Numerical Methods in Fluids
- Niklas Kühl
This paper introduces a novel method for numerically stabilizing sequential continuous adjoint flow solvers through an elliptic relaxation strategy. Unlike previous stabilization approaches, the method is formulated as a Partial Differential Equation (PDE) containing a single user-defined parameter, which analytical investigations reveal to represent the filter width of a probability density function or Gaussian kernel. Key properties of the approach include smoothing with redistribution capabilities while preserving integral properties. The technique targets explicit adjoint cross-coupling terms, such as the Adjoint Transpose Convection (ATC) term, which frequently causes numerical instabilities, especially on unstructured grids common in industrial applications. A trade-off is made by sacrificing sensitivity consistency to achieve enhanced numerical robustness. The method is validated on a two-phase, laminar, two-dimensional cylinder flow test case at fixed Reynolds and Froude numbers, focusing on minimizing resistance or maximizing lift. A range of homogeneous and inhomogeneous filter widths is evaluated. Subsequently, the relaxation method is employed to stabilize adjoint simulations during shape optimizations aimed at drag reduction of ship hulls. Two case studies are considered: a model-scale bulk carrier and a harbor ferry in full-scale conditions. Both cases, characterized by unstructured grids prone to adjoint divergence, demonstrate the effectiveness of the proposed method in overcoming stability challenges. The resulting optimizations achieve superior outcomes compared to approaches that omit problematic coupling terms, yielding stable adjoint solutions of improved consistency even for complex, unstructured, two-phase flow configurations. This demonstrates that the proposed elliptic relaxation strategy provides a practical and broadly applicable means of enhancing the numerical robustness of segregated continuous adjoint solvers in industrial CFD environments.
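The continuous form of this idea is a Helmholtz-type equation, (1 − ε²∇²) q̃ = q, whose Green's function acts as a Gaussian-like kernel with width set by the single parameter ε while leaving the integral of q unchanged. A 1D periodic sketch of such an elliptic filter, with an assumed second-order discretization and illustrative parameters:

```python
import numpy as np

def elliptic_filter(q, dx, eps):
    """Solve (I - eps^2 D2) q_s = q with a periodic second-difference D2."""
    n = q.size
    D2 = (np.roll(np.eye(n), 1, axis=1) - 2 * np.eye(n)
          + np.roll(np.eye(n), -1, axis=1)) / dx**2
    return np.linalg.solve(np.eye(n) - eps**2 * D2, q)

# Noisy field standing in for an adjoint cross-coupling term on a 1D grid.
n, L = 256, 1.0
x = np.linspace(0, L, n, endpoint=False)
rng = np.random.default_rng(6)
q = np.sin(2 * np.pi * x) + 0.5 * rng.standard_normal(n)

q_s = elliptic_filter(q, L / n, eps=0.02)
# Integral (mean) preservation check: the two values should match closely.
print(f"mean before {q.mean():+.6f}, after {q_s.mean():+.6f}")
```

Because the row and column sums of the periodic difference operator vanish, the filtered field keeps the integral of the original exactly, mirroring the "smoothing with redistribution while preserving integral properties" described above.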
- New
- Research Article
- 10.15826/chimtech.9515
- Dec 27, 2025
- Chimica Techno Acta
- Dmitry A Medvedev
In the natural sciences, scientific communication has traditionally relied on standard graphical formats such as scatter plots, bar charts, and 3D surface models. While these formats are highly effective for conveying quantitative relationships, they often fall short when it comes to visualizing complex, multidimensional data interactions, collaborative networks, or the underlying distribution of variables. This limits the analytical narrative, as it becomes difficult to illustrate deeper contextual and relational insights. To address this issue, Chimica Techno Acta promotes the strategic use of supplementary infographics and is introducing a suite of tools to assist our authors. This editorial introduces three new web applications developed by the journal. The Word Cloud Generator transforms textual frequency data into visual overviews; the World Frequency Map creates heat maps and chord diagrams to display geographical distributions and international collaboration networks; and the Data Visualization Tool with Marginal Distributions produces composite figures that reveal central trends alongside the probability density of each axis. These tools are designed to be user-friendly, allowing researchers to create publication-ready figures that enhance explanatory depth without requiring advanced programming skills. Integrating such visuals allows manuscripts to achieve greater clarity and narrative power, enabling graphics to function as integral components of scientific storytelling instead of traditional data reports. These tools support the journal's mission to promote precise, insightful, and accessible communication in the fields of chemistry and technology.
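For readers who want the third tool's layout without the web app, a scatter plot with marginal density panels takes only a few lines of matplotlib. A small sketch mirroring the idea only; the data and styling are illustrative:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
x = rng.normal(5.0, 1.2, 400)           # e.g., a reaction-temperature proxy
y = 0.8 * x + rng.normal(0, 1.0, 400)   # e.g., a yield proxy

fig = plt.figure(figsize=(6, 6))
gs = fig.add_gridspec(2, 2, width_ratios=(4, 1), height_ratios=(1, 4),
                      wspace=0.05, hspace=0.05)
ax = fig.add_subplot(gs[1, 0])
ax_top = fig.add_subplot(gs[0, 0], sharex=ax)
ax_right = fig.add_subplot(gs[1, 1], sharey=ax)

ax.scatter(x, y, s=12, alpha=0.6)
gx = np.linspace(x.min(), x.max(), 200)
gy = np.linspace(y.min(), y.max(), 200)
ax_top.fill_between(gx, gaussian_kde(x)(gx), alpha=0.5)        # marginal of x
ax_right.fill_betweenx(gy, 0, gaussian_kde(y)(gy), alpha=0.5)  # marginal of y
ax_top.axis("off"); ax_right.axis("off")
ax.set_xlabel("variable x"); ax.set_ylabel("variable y")
plt.savefig("scatter_with_marginals.png", dpi=200)
```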
- New
- Research Article
- 10.3390/e28010035
- Dec 26, 2025
- Entropy
- Demetris Koutsoyiannis + 1 more
We investigate the fundamental trade-off between entropy and the Gini index within income distributions, employing a stochastic framework to expose deficiencies in conventional inequality metrics. Anchored in the principle of maximum entropy (ME), we position entropy as a key marker of societal robustness, while the Gini index, identical to the (second-order) K-spread coefficient, captures spread but neglects dynamics in distribution tails. We recommend supplanting Lorenz profiles with simpler graphs, such as the odds and probability density functions, and a core set of numerical indicators (K-spread K₂/μ, standardized entropy Φμ, and upper and lower tail indices ξ, ζ) for deeper diagnostics. This approach fuses ME into disparity evaluation, highlighting a path to harmonize fairness with structural endurance. Drawing on percentile records in the World Income Inequality Database from 1947 to 2023, we fit flexible models (Pareto–Burr–Feller, Dagum) and extract K-moments and tail indices. The results unveil a concave frontier: moderate Gini reductions have little effect on entropy, but aggressive equalization incurs steep stability costs. Country-level analyses (Argentina, Brazil, South Africa, Bulgaria) link entropy declines to political ruptures, identifying low entropy as a precursor to instability. Analyses based on the core set of indicators, on the other hand, show that present-day geopolitical powers lie in a high-stability region.
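As a concrete anchor for the two central quantities, the sketch below computes the Gini index via the standard sorted-sample formula and a histogram estimate of differential entropy for a synthetic Pareto income sample; the paper's K-moment and tail-index machinery is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(8)
income = (rng.pareto(2.5, 100_000) + 1.0) * 20_000  # assumed income sample

def gini(x):
    """Gini index via the sorted-sample formula G = sum((2i-n-1) x_i)/(n^2 mu)."""
    x = np.sort(x)
    n = x.size
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * x) / (n * n * x.mean())

def entropy_estimate(x, bins=400):
    """Histogram estimate of differential entropy, in nats."""
    p, edges = np.histogram(x, bins=bins, density=True)
    w = np.diff(edges)
    mask = p > 0
    return -np.sum(p[mask] * np.log(p[mask]) * w[mask])

print(f"Gini = {gini(income):.3f}, entropy = {entropy_estimate(income):.2f} nats")
```

For a pure Pareto tail with index alpha the Gini index is 1/(2*alpha - 1), so alpha = 2.5 should give roughly 0.25, a quick check on the sample estimate.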
- New
- Research Article
- 10.17654/0972361726001
- Dec 25, 2025
- Advances and Applications in Statistics
- A Arul Chezhian
In this paper, we introduce a new family of probability distributions called the Modi-G family and derive its linear representation. By applying the exponential distribution within this framework, we develop a specific case termed the Sine Modi exponential (SME) distribution. This model generalizes the exponential distribution, offering enhanced flexibility for modeling lifetime and survival data. We derive fundamental properties of the SME distribution, including its probability density function, cumulative distribution function, survival function, hazard rate, quantile function, linear representation, moments, moment generating function, and order statistics. Parameter estimation is performed using the method of maximum likelihood. To evaluate the applicability of the proposed model, three real-world datasets are analyzed. The findings demonstrate that the proposed model outperforms several existing models considered in the study.
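The abstract does not state the SME density, so the estimation step can only be illustrated with a stand-in: below, the standard Sine-G transform F(x) = sin((π/2)·G(x)) of an exponential baseline is fitted by maximum likelihood. The model form and all parameter values are assumptions for illustration, not the paper's SME distribution:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_loglik(lam, x):
    """Negative log-likelihood of the Sine-exponential stand-in model."""
    if lam <= 0:
        return np.inf
    G = 1.0 - np.exp(-lam * x)                     # exponential baseline cdf
    g = lam * np.exp(-lam * x)                     # baseline pdf
    f = (np.pi / 2) * g * np.cos((np.pi / 2) * G)  # Sine-G density
    return -np.sum(np.log(f))

rng = np.random.default_rng(9)
# Simulated lifetimes via inverse transform: x = -ln(1 - (2/pi) asin(u)) / lam.
u = rng.uniform(size=500)
true_lam = 0.8
x = -np.log(1.0 - (2.0 / np.pi) * np.arcsin(u)) / true_lam

res = minimize_scalar(lambda l: neg_loglik(l, x), bounds=(1e-6, 10.0),
                      method="bounded")
print(f"true lambda = {true_lam}, MLE = {res.x:.3f}")
```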