Articles published on Probability Density Functions
32488 Search results
- Research Article
- 10.1371/journal.pcbi.1013922
- Mar 13, 2026
- PLoS computational biology
- Sarah S Ji + 3 more
Copulas, generalized estimating equations, and generalized linear mixed models promote the analysis of grouped data where non-normal responses are correlated. Unfortunately, parameter estimation remains challenging in these three frameworks. Based on prior work of Tonda, we derive a new class of probability density functions that allow explicit calculation of moments, marginal and conditional distributions, and the score and observed information needed in maximum likelihood estimation. We also illustrate how the new distribution flexibly models longitudinal data following a non-Gaussian distribution. Finally, we conduct a tri-variate genome-wide association analysis on dichotomized systolic and diastolic blood pressure and body mass index data from the UK Biobank, showcasing the modeling potential and computational scalability of the new distributional family.
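The copula construction this abstract refers to can be illustrated with a minimal sketch: a Gaussian copula couples correlated non-normal marginals through a latent multivariate normal. Poisson marginals and the function name `gaussian_copula_poisson` are assumptions for illustration only; the paper's distributional family is different.

```python
import numpy as np
from scipy.stats import norm, poisson

def gaussian_copula_poisson(n, rho, lam, rng=None):
    """Draw n pairs of correlated Poisson responses via a Gaussian copula.

    A latent bivariate normal with correlation rho is pushed through the
    normal CDF to uniforms, then through the Poisson quantile function.
    """
    g = np.random.default_rng(rng)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = g.multivariate_normal(np.zeros(2), cov, size=n)
    u = norm.cdf(z)                            # dependent uniform marginals
    return poisson.ppf(u, mu=lam).astype(int)  # Poisson marginals

y = gaussian_copula_poisson(20_000, rho=0.6, lam=4.0, rng=0)
print(y.mean(axis=0))           # both marginal means near lam = 4
print(np.corrcoef(y.T)[0, 1])   # positive dependence induced by rho
```

The key point is that dependence is specified on the latent normal scale while each marginal keeps its chosen (non-normal) distribution.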
- Research Article
- 10.1080/00207217.2026.2637982
- Mar 11, 2026
- International Journal of Electronics
- Sisira Hawaibam + 1 more
ABSTRACT Emerging wireless technologies demand high bit rates with low latency and guaranteed quality of service. Such technologies support real-time applications in fifth-generation and beyond networks under various delay constraints. Shannon's capacity, however, assumes that no delays are introduced in the system. Here, the effective bit rate (EBR) of delay-sensitive communication systems operating over shadowed Beaulieu–Xie (SBX) fading channels is analysed. SBX is a composite fading channel suitable for describing channels encountered by millimetre-wave and terahertz wireless technologies. Mathematical expressions for the EBR over SBX fading channels are derived using the probability density function method. The EBR analysis covers the influence of different fading and shadowing conditions, as well as varying delay constraints. A truncation error expression for the corresponding EBR expression is also derived. Further, asymptotic expressions for the high and low signal-to-noise ratio (SNR) regimes are derived and analysed to reveal the behaviour of the system. The analysis also includes a comparison between the capacity with no delays and the capacity with a delay constraint on the system.
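The effective-rate idea can be sketched with the standard per-block definition R(θ) = -(1/θ) ln E[exp(-θ C)], where C = log2(1 + γ) and θ is the delay exponent (block length and bandwidth normalized to one). Rayleigh fading is used here as a simple stand-in for the SBX channel of the paper, and the function name is an invention for this sketch.

```python
import numpy as np

def effective_rate(theta, snr_db, n=200_000, rng=0):
    """Monte Carlo effective bit rate (bits/s/Hz) under delay exponent theta.

    Rayleigh power fading stands in for the shadowed Beaulieu-Xie channel
    analysed in the paper; gamma is the instantaneous SNR.
    """
    g = np.random.default_rng(rng)
    snr = 10 ** (snr_db / 10)
    gamma = snr * g.exponential(1.0, n)   # Rayleigh => exponential SNR
    c = np.log2(1.0 + gamma)              # instantaneous Shannon rate
    return -np.log(np.mean(np.exp(-theta * c))) / theta

# Tighter delay constraints (larger theta) can only reduce the rate;
# theta -> 0 recovers the ergodic (Shannon) capacity E[log2(1 + gamma)].
print(effective_rate(0.01, 10.0))   # close to the ergodic capacity
print(effective_rate(5.0, 10.0))    # strictly smaller
```

This makes the abstract's delay/no-delay comparison concrete: the delay-unconstrained capacity is the θ → 0 limit of the same expression.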
- Research Article
- 10.3847/1538-4357/ae3d94
- Mar 9, 2026
- The Astrophysical Journal
- Zachary Davis + 3 more
Abstract Coherent structures created through turbulent cascades play a key role in energy dissipation and particle acceleration. In this work, we investigate both current and vorticity sheets in 3D particle-in-cell simulations of decaying relativistic turbulence in pair plasma by training a self-organizing map to recognize these structures. We subsequently carry out an extensive statistical analysis to reveal their geometric and structural properties. This analysis is systematically applied across a range of magnetizations (σ) and fluctuating-to-mean magnetic field strengths (δB₀/B₀) to assess how these parameters influence the resulting structures. We find that the structures' geometric properties form power-law distributions in their probability density functions, with the exception of the structure width, which generally exhibits an exponential distribution peaking around two electron skin depths. The measurements show a weak dependence on σ but a strong dependence on δB₀/B₀. Finally, we investigate the spatial relationship between current sheets and vorticity sheets. We find that most current sheets are directly associated with at least one vorticity sheet neighbor and are often situated between two vorticity sheets. These findings provide a detailed statistical framework for understanding the formation and organization of coherent structures in relativistic magnetized turbulence, allowing for their incorporation into updated theoretical models for structure-based energy dissipation and particle acceleration processes crucial for interpreting high-energy astrophysical observations.
- Research Article
- 10.1080/21681724.2026.2637180
- Mar 8, 2026
- International Journal of Electronics Letters
- Ledarwel Wahlang + 1 more
ABSTRACT In this paper, the performance of wireless receivers over slow, flat Hoyt–Gamma fading channels is analysed for a Selection Combining (SC) receiver. The analysis predicts the signal reception strength at the receiver, and performance metrics such as the Average Bit Error Rate (ABER) allow reception quality to be compared across diversity combiners such as SC, Maximal Ratio Combining (MRC), Switch-and-Stay Combining (SSC), and Equal Gain Combining (EGC). The analysis uses the Probability Density Function (PDF) approach for a 2-antenna diversity reception system.
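A 2-branch SC receiver's ABER can be sketched by Monte Carlo: pick the stronger instantaneous branch SNR and average the conditional BPSK error probability Q(√(2γ)). Rayleigh branches are assumed here as a stand-in for the Hoyt–Gamma composite channel of the paper, and the function name is invented for this sketch.

```python
import numpy as np
from scipy.special import erfc

def aber_sc_rayleigh(snr_db, n=500_000, rng=0):
    """Average BER of coherent BPSK with 2-branch Selection Combining.

    Rayleigh branches stand in for the Hoyt-Gamma channel; SC keeps the
    stronger instantaneous SNR, and the conditional BER is Q(sqrt(2*gamma)).
    """
    g = np.random.default_rng(rng)
    snr = 10 ** (snr_db / 10)
    g1 = snr * g.exponential(1.0, n)     # branch 1 instantaneous SNR
    g2 = snr * g.exponential(1.0, n)     # branch 2 instantaneous SNR
    gamma_sc = np.maximum(g1, g2)        # selection combining
    ber = 0.5 * erfc(np.sqrt(gamma_sc))  # Q(sqrt(2x)) = erfc(sqrt(x))/2
    return ber.mean()

# Diversity gain: SC at 10 dB average branch SNR beats a single branch.
print(aber_sc_rayleigh(10.0))
```

The same loop extends to MRC (sum the branch SNRs) or EGC by changing one line, which is exactly the combiner comparison the abstract describes.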
- Research Article
- 10.1016/j.mbs.2026.109660
- Mar 7, 2026
- Mathematical biosciences
- Y He + 1 more
Sensitivity and uncertainty analyses for model of olfaction using Dempster-Shafer theory and neural networks.
- Research Article
- 10.1021/acs.jpclett.5c04117
- Mar 5, 2026
- The journal of physical chemistry letters
- Meng Yan + 3 more
The accuracy of stochastic algorithms for electron correlation energy calculations critically depends on the proper treatment of singularities and long-range tails arising from the two-electron Coulomb operator. In this work, an enhanced Monte Carlo approach is developed that constructs a tailored sampling distribution via a progressive residual fitting (PRF) strategy within the importance-sampling framework, termed MC@PRF. Two key techniques, selective scaling for tail amplification and the Top 5 Rule for singularity regularization, enable robust and accurate construction of the sampling probability density function without additional computational cost. Comprehensive numerical tests demonstrate that MC@PRF substantially reduces statistical errors and accelerates convergence while maintaining high accuracy, from low-dimensional benchmark functions to twelve-dimensional second-order Møller-Plesset correlation energy calculations. Moreover, MC@PRF naturally supports automatic stratified sampling, adaptively allocating computational effort between singular and regular regions without prior knowledge of the integrand's structure, thereby establishing a general and efficient Monte Carlo framework for complex integrals in quantum chemistry and related fields.
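The role of a tailored sampling PDF can be shown on a toy integrand with an integrable singularity, a simple analogue of the Coulomb singularities discussed above. This is plain importance sampling, not the paper's PRF construction; the target integral and sampling density are chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Target: I = integral_0^1 x^(-1/2) dx = 2, singular at x = 0.

# Plain Monte Carlo: uniform samples, large variance near the singularity.
x = rng.uniform(size=n)
plain = np.mean(x ** -0.5)

# Importance sampling: draw from p(x) = 1/(2 sqrt(x)) via x = U^2, so the
# weighted integrand f(x)/p(x) = 2 exactly -- a zero-variance estimator here.
u = rng.uniform(size=n)
xs = u ** 2
weighted = (xs ** -0.5) / (1.0 / (2.0 * np.sqrt(xs)))
importance = np.mean(weighted)

print(plain, importance)   # both near 2; the second has zero variance
```

Matching the sampling PDF to the integrand's singular structure is the same principle MC@PRF automates by fitting the residual progressively.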
- Research Article
- 10.1090/tran/9622
- Mar 4, 2026
- Transactions of the American Mathematical Society
- Sung-Soo Byun + 2 more
The eigenvalue probability density function of the Gaussian unitary ensemble permits a q-extension related to the discrete q-Hermite weight and corresponding q-orthogonal polynomials. A combinatorial counting method is used to specify a positive sum formula for the spectral moments of this model. The leading two terms of the scaled 1/N² genus-type expansion of the moments are evaluated explicitly in terms of the incomplete beta function. Knowledge of these functional forms allows the smoothed leading eigenvalue density and its first correction to be determined analytically.
- Research Article
- 10.3390/jcp6020045
- Mar 3, 2026
- Journal of Cybersecurity and Privacy
- Sergey Davydenko + 2 more
Continuous authentication is a promising method for protecting computer systems in the event of compromise of primary authentication factors, such as passwords or tokens. Systems employing continuous authentication that rely on biometrics need not be restricted to a single biometric characteristic; rather, they can simultaneously utilize multiple characteristics and arrive at a conclusive decision based on their collective analysis outcomes. One of the significant challenges researchers encounter when investigating effective fusion in decision-making is the lack of data. At present, data generation primarily involves the creation of feature vectors or attack simulation. This paper introduces a method for directly generating distances derived from a Siamese neural network, utilizing the probability density function of an existing distribution. Through statistical analysis, we successfully generated 5000 samples that correspond to the initial distribution, which were then employed to discover the threshold values at which the false acceptance rate (FAR) and false rejection rate (FRR) were less than 1%.  The methods developed can be further applied to identify the most efficient strategies for integrating the results of continuous authentication in systems that incorporate multiple biometric characteristics.
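The threshold-finding step can be sketched directly: generate distance samples from assumed genuine and impostor distributions, then sweep a threshold until both error rates fall under 1%. The gamma shapes below are illustrative assumptions, not the distribution fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic Siamese-network distances: genuine pairs cluster at small
# distances, impostor pairs at large ones (gamma shapes are assumptions,
# not the paper's fitted distribution).
genuine = rng.gamma(shape=2.0, scale=0.05, size=n)
impostor = rng.gamma(shape=12.0, scale=0.10, size=n)

def far_frr(threshold):
    far = np.mean(impostor < threshold)   # impostor accepted
    frr = np.mean(genuine >= threshold)   # genuine rejected
    return far, frr

# Sweep thresholds and keep those with both error rates under 1%.
ts = np.linspace(0.0, 1.5, 301)
ok = [t for t in ts if max(far_frr(t)) < 0.01]
print(ok[0], ok[-1])   # the admissible threshold band
```

With well-separated distributions the admissible band is wide; as the distributions overlap, the band shrinks and eventually vanishes, which is why generating enough realistic samples matters.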
- Research Article
- 10.3842/umzh.v78i1-2.6690
- Mar 2, 2026
- Ukrains’kyi Matematychnyi Zhurnal
- Mohammad W Alomari
UDC 517.5 We extend the classical Grüss inequality from Euclidean disks in $\mathbb{R}^2$ to generalized $p$-disks and present rigorous formulations for functions satisfying Hölder-type conditions. Several new Grüss-type inequalities are established and applied to derive bounds for the covariance of random variables defined either by probability density functions or by joint distributions.
- Research Article
- 10.1080/03610926.2026.2626157
- Mar 2, 2026
- Communications in Statistics - Theory and Methods
- C Satheesh Kumar + 1 more
Compound distributions are well documented in modern statistical modeling. These families of distributions are essential in finance and economics for modeling aggregate claims, losses, risks, and queuing processes. In this work, we propose a four-parameter compound distribution constructed as the sum of N normally distributed random variables, where the count N follows the hyper-Poisson (HPD) distribution. We call it the hyper-Poisson normal (HPN) distribution. The proposed HPN distribution exhibits great flexibility in modeling: its probability density function can be positively skewed, negatively skewed, or symmetric, and it can be unimodal, bimodal, or multimodal. The normal distribution and the zero-truncated Poisson normal (ZTP-N) distribution arise as special cases. Properties of the distribution, including the moment-generating function, moments, characteristic function, and order statistics, are obtained explicitly. Because the convolution structure of the distribution precludes explicit calculation, the maximum likelihood estimators of the unknown parameters are computed using the EM algorithm. Simulation studies based on the EM algorithm investigate the effectiveness of the estimation, and real data analyses demonstrate the applicability of the distribution.
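The compound construction is easy to sample: draw a count N, then sum N normals (equivalently, draw from N(Nμ, Nσ²) given N). An ordinary Poisson stands in for the hyper-Poisson counting distribution here, and the function name is an invention for this sketch.

```python
import numpy as np

def compound_poisson_normal(n, lam, mu, sigma, rng=0):
    """Draw Y = X_1 + ... + X_N with N ~ Poisson(lam), X_i ~ N(mu, sigma^2).

    An ordinary Poisson stands in for the hyper-Poisson counting
    distribution of the paper; the construction is otherwise the same.
    """
    g = np.random.default_rng(rng)
    counts = g.poisson(lam, size=n)
    # Given N = k, the sum of the normals is N(k*mu, k*sigma^2);
    # the tiny floor keeps the scale positive when k = 0 (Y = 0 then).
    return g.normal(counts * mu, np.sqrt(np.maximum(counts, 1e-12)) * sigma)

y = compound_poisson_normal(100_000, lam=1.5, mu=5.0, sigma=0.5)
# Compound-distribution identities: E[Y] = lam*mu = 7.5,
# Var[Y] = lam*(mu^2 + sigma^2) = 37.875 for the Poisson count case.
print(y.mean(), y.var())
```

With a small mean count and a large μ relative to σ, the sampled density shows the multimodality the abstract describes, with modes near 0, μ, 2μ, and so on.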
- Research Article
- 10.1016/j.neunet.2025.108268
- Mar 1, 2026
- Neural networks : the official journal of the International Neural Network Society
- Tieliang Gong + 4 more
Nyström-aware approximations for matrix-based Rényi's entropy.
- Research Article
- 10.1016/j.ultras.2025.107865
- Mar 1, 2026
- Ultrasonics
- Huanchao Du + 4 more
Noncontact ultrasonic materials identification based on improved frequency responses.
- Research Article
- 10.3390/fractalfract10030155
- Feb 27, 2026
- Fractal and Fractional
- Nesreen M Al-Olaimat + 4 more
The Ujlayan–Dixit (UD) fractional calculus provides a powerful fractional extension of the Lomax distribution, offering a suitable framework for representing complex behaviors beyond classical approaches. In this paper, we adopt the UD fractional Lomax distribution and establish its statistical theory. Based on the adopted density, we derive closed-form expressions for the cumulative distribution, survival, and hazard functions, as well as the mode. Several UD fractional statistical measures of the Lomax random variable are derived, including the fractional moments, fractional information-theoretic measures such as the UD fractional Shannon and Tsallis entropies, and the probability density function of the kth order statistic under the UD fractional framework. Finally, a real data application concerning the time to breakdown of an insulating fluid illustrates the usefulness of the proposed distribution in modeling real data. The fitting performance of the suggested model is compared with several extensions of the Lomax distribution, and the comparative results show that the UD fractional Lomax distribution outperforms several well-known extensions. This framework provides researchers with robust tools for advanced reliability assessment, uncertainty quantification, and risk modeling, offering insights into phenomena not captured by the classical Lomax distribution. Moreover, as the fractional parameter q→1−, the proposed approach converges to the classical Lomax results, bridging the fractional and classical perspectives.
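The classical limit the abstract mentions (q → 1⁻) is the ordinary Lomax distribution, whose survival and hazard functions are available in closed form; a quick numerical check against `scipy.stats.lomax` is below. The shape and scale values are illustrative, not fitted to the breakdown-time data.

```python
import numpy as np
from scipy.stats import lomax

# Classical Lomax (the q -> 1- limit of the UD fractional family):
# survival S(x) = (1 + x/lam)^(-alpha), hazard h(x) = alpha/(lam + x),
# a decreasing hazard typical of heavy-tailed breakdown-time data.
alpha, lam = 2.5, 1.0            # illustrative shape and scale
dist = lomax(c=alpha, scale=lam)

x = np.array([0.5, 1.0, 2.0])
surv = dist.sf(x)
haz = dist.pdf(x) / dist.sf(x)

print(np.allclose(surv, (1 + x / lam) ** -alpha))  # True
print(np.allclose(haz, alpha / (lam + x)))         # True
```

Any fractional extension should reproduce these closed forms in the q → 1⁻ limit, which gives a concrete correctness check when implementing the UD variant.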
- Research Article
- 10.1088/1361-6560/ae456e
- Feb 26, 2026
- Physics in Medicine & Biology
- Seyed Amir Zaman Pour + 5 more
Objective. Quantitative imaging in positron emission tomography (PET) requires accurate, precise, and efficient scatter correction techniques. Conventional scatter estimation typically relies on single-scatter simulation (SSS) combined with a tail-fitting strategy. However, the accuracy of tail-fitted SSS is limited, for example, by mismatches between the attenuation image and the PET emission data or by the presence of activity outside the field of view (FOV). These shortcomings can be addressed using energy-based scatter estimation (EBSE), as recently proposed by Efthimiou et al (2022 Phys. Med. Biol. 67 095010) and Hamill et al (2024 Med. Phys. 51 54-69). The aims of this work are to (1) investigate the accuracy of EBSE by accounting for the line-of-response dependence of the energy spectrum of unscattered photons, (2) improve the computational speed of EBSE through better initialization and a more efficient optimization algorithm, and (3) validate and characterize EBSE using a three-basis model across different object sizes and activity distributions. Approach. The proposed improved EBSE method models the energy spectrum of scattered photons with two probability density functions (PDFs) and incorporates a position-dependent (local) energy PDF for unscattered photons. These energy PDFs form the basis of a forward model (a linear nine-parameter model) used for scatter estimation based on 2D energy histograms. The performance of EBSE was evaluated using GATE Monte Carlo simulations and a NEMA phantom acquisition on a GE SIGNA PET/MR scanner. Furthermore, we assessed the stability of EBSE across the forward model by varying the number of counts in the 2D energy histograms via data mashing. Main results. EBSE reduced artifacts caused by out-of-FOV activity and demonstrated performance comparable to tail-fitted SSS in other regions. Incorporating a local unscattered PDF improved off-center quantification, and NEGML with improved initialization plus histogram down-sampling substantially reduced computation without compromising accuracy. Limitations were observed: the proposed basis-function model for scattered-photon energy spectra lacks full generality across attenuation and activity distributions. Significance. This study improves the accuracy and computational efficiency of EBSE for clinically realistic activity and attenuation conditions, while clarifying the limitations of the scattered basis functions.
- Research Article
- 10.3390/electronics15050912
- Feb 24, 2026
- Electronics
- Shuteng Duan + 3 more
To address the problem that the detection performance of existing spectrum sensing algorithms degrades or even fails under impulsive noise, this paper proposes a generalized energy detection-based spectrum sensing algorithm. Theoretical analysis verifies that the proposed algorithm can effectively mitigate the adverse effects of impulsive noise, realize high-precision signal detection, and enhance system reliability with fewer samples. Furthermore, through statistical theoretical analysis, the probability density function of the detection statistic is provided for both scenarios where the primary user signal is absent and present. The probabilities of false alarm and missed detection are also derived, and the threshold corresponding to a prescribed false alarm probability is determined. Finally, simulation results demonstrate the effectiveness of the generalized energy detection algorithm.
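The threshold-setting logic the abstract describes can be sketched for the classical energy detector: under the noise-only hypothesis the test statistic is chi-square distributed, so the threshold for a prescribed false alarm probability is a quantile of that distribution. This is the ordinary (not generalized) detector and Gaussian rather than impulsive noise; the function name is invented for the sketch.

```python
import numpy as np
from scipy.stats import chi2

def ed_threshold(n, pfa):
    """Energy-detection threshold for n noise-only samples ~ N(0, 1).

    Under H0 the statistic T = sum_i x_i^2 is chi-square with n degrees
    of freedom, so the threshold is its (1 - pfa) quantile.  (Classical
    energy detection; the paper's generalized detector differs.)
    """
    return chi2.ppf(1.0 - pfa, df=n)

rng = np.random.default_rng(0)
n, pfa, trials = 64, 0.05, 20_000
thr = ed_threshold(n, pfa)

noise_T = (rng.normal(size=(trials, n)) ** 2).sum(axis=1)           # H0
signal_T = ((rng.normal(size=(trials, n)) + 0.8) ** 2).sum(axis=1)  # H1

print(np.mean(noise_T > thr))    # empirical Pfa, near 0.05
print(np.mean(signal_T > thr))   # detection probability, much higher
```

Replacing the squared-magnitude statistic with a more robust generalized energy measure is what allows the paper's detector to survive impulsive noise, but the quantile-based threshold calibration follows the same pattern.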
- Research Article
- 10.1103/fs8d-h2z8
- Feb 23, 2026
- Physical Review A
- Keisuke Asahara + 3 more
The weak limit theorem (WLT), the quantum analog of the central limit theorem, is foundational to quantum walk (QW) theory. Unlike the universal Gaussian limit of classical walks, deriving analytical forms of the limiting probability density function (PDF) in higher dimensions has remained a challenge since the one-dimensional (1D) Konno distribution was established. Previous explicit PDFs for two-dimensional (2D) models were limited to specific cases whose fundamental nature was unclear. This paper resolves this long-standing gap by introducing the notion of maximal speed v_max as a critical parameter. We demonstrate that all previous 2D solutions correspond to a degenerate regime where v_max = 1. We then present the first exact analytical representation of the limiting PDF for the physically richer, unexplored regime v_max < 1 of a general class of 2D two-state QWs. Our result reveals 2D Konno functions that govern these dynamics. We establish these as the proper 2D generalization of the 1D Konno distribution by demonstrating their convergence to the 1D form in the appropriate limit. Furthermore, our derivation, based on spectral analysis of the group-velocity map, analytically resolves the singular asymptotic structure: we explicitly determine the caustics loci where the PDF diverges and prove they define the boundaries of the distribution's support. By also providing a closed-form expression for the weight functions, this work offers a complete description of the 2D WLT.
- Research Article
- 10.33232/001c.158200
- Feb 23, 2026
- The Open Journal of Astrophysics
- Jan Luca Van Den Busch + 32 more
Virtually all extragalactic use cases of the Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST) require the use of galaxy redshift information, yet the vast majority of its sample of tens of billions of galaxies will lack high-fidelity spectroscopic measurements thereof, instead relying on photometric redshifts (photo-z) subject to systematic imprecision and inaccuracy best encapsulated by photo-z probability density functions (PDFs). We present the version 1 release of Redshift Assessment Infrastructure Layers (RAIL), an open source Python library for at-scale probabilistic photo-z estimation, initiated by the LSST Dark Energy Science Collaboration (DESC) with contributions from the LSST Interdisciplinary Network for Collaboration and Computing (LINCC) Frameworks team. RAIL's three subpackages provide modular tools for end-to-end stress-testing, including a forward modeling suite to generate realistically complex photometry, a unified API for estimating per-galaxy and ensemble redshift PDFs by an extensible set of algorithms, and built-in metrics of both photo-z PDFs and point estimates. RAIL serves as a flexible toolkit enabling the derivation and optimization of photo-z data products at scale for a variety of science goals and is not specific to LSST data. We thus describe, to the extragalactic science community including and beyond Rubin, the design and functionality of the RAIL software library so that any researcher may have access to its wide array of photo-z characterization and assessment tools.
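One of the standard photo-z PDF metrics the abstract alludes to is the probability integral transform (PIT): evaluate each galaxy's photo-z CDF at its true redshift, and calibrated PDFs yield values uniform on [0, 1]. The sketch below is from scratch, with Gaussian per-galaxy PDFs and an invented scatter model; RAIL's own API differs.

```python
import numpy as np
from scipy.stats import norm

# PIT check for per-galaxy photo-z PDFs (a from-scratch sketch, not RAIL).
rng = np.random.default_rng(0)
n = 50_000
z_true = rng.uniform(0.1, 2.0, n)
sigma = 0.05 * (1 + z_true)             # illustrative photo-z scatter model
z_mean = z_true + rng.normal(0, sigma)  # noisy point estimate per galaxy

# Each galaxy's photo-z PDF is N(z_mean, sigma^2); evaluate its CDF at z_true.
pit = norm.cdf(z_true, loc=z_mean, scale=sigma)
print(pit.mean(), pit.std())   # ~0.5 and ~1/sqrt(12) if well calibrated
```

Departures from uniformity diagnose over-confident (U-shaped PIT histogram) or under-confident (peaked) PDFs, which is exactly the kind of ensemble assessment a library like RAIL automates.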
- Research Article
- 10.1002/joc.70307
- Feb 22, 2026
- International Journal of Climatology
- P Rohini + 3 more
ABSTRACT This study investigates long‐term changes in wet bulb temperature (WBT) along the Indian coasts during 1981–2020, together with associated variations in dry bulb temperature (DBT) and specific humidity (q). The analysis is carried out separately for the four seasons—January–March (JFM), April–June (AMJ), July–September (JAS), and October–December (OND). The results show a significant rise in the seasonal daily maximum WBT across all seasons, with the most pronounced increase along the East Coast (EC), particularly during the AMJ season. On the West Coast (WC), WBT trends are primarily driven by DBT, whereas on the EC, both DBT and humidity contribute strongly to the observed WBT changes. Probability density function analysis indicates a persistent warming and moistening tendency across the coastal regions. Extremes of WBT, DBT, and relative humidity have intensified, with a marked escalation after 2000. The findings highlight global warming as the dominant driver of WBT changes. Notably, the percentage increase in q per 1°C warming is greater along the EC (up to 8.3%) than the WC (up to 6.5%), especially during the cooler seasons (JFM and OND). These results suggest that continued warming will likely intensify coastal heat stress across seasons, posing growing risks to communities along India's coasts.
- Research Article
- 10.1177/09576509261428888
- Feb 21, 2026
- Proceedings of the Institution of Mechanical Engineers, Part A: Journal of Power and Energy
- Yue Shu + 3 more
In this paper, the surge characteristics of a centrifugal compressor are investigated under various operating conditions through experimental analysis. The stable operating range of a centrifugal compressor at low flow rates is predominantly constrained by unstable phenomena such as stall and surge. The aerodynamic features spanning the stall stage (preceding surge) to the surge stage (following surge) are comprehensively documented within the experimental framework. The power spectral density (PSD) of mass flow rate and pressure is used to elucidate the fluid dynamics within the compressor at the deep surge (5° valve opening), moderate surge (12°), and mild surge (15°) conditions. Subsequently, a thorough comparison is conducted between the time-domain and frequency-domain distributions of pressure fluctuations in the transitional zone between the stall and surge stages. The evolution of mass flow rate and pressure is compared across the deep, moderate, and mild surge conditions, and the transition zone between the stall and surge stages, which can be detected in advance and can help prevent surge, is identified. The probability density function (PDF) distribution is used to examine in detail the differences in pressure fluctuations between the stall and surge stages, elucidating the impact of surge-induced fluctuations on flow stability within the centrifugal compressor. The characteristics of shaft vibration are also examined in depth. These comparisons provide profound insight into the physical mechanisms of the centrifugal compressor at surge conditions.
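Why a PDF separates the stall and surge stages can be sketched with toy signals: broadband low-amplitude fluctuations (stall-like) give a narrow unimodal PDF, while a large limit-cycle oscillation (surge-like) gives a wide bimodal PDF peaking near the oscillation amplitude. The signal models below are illustrative, not the experimental data.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 10, 1e-3)   # 10 s at 1 kHz

# Toy pressure signals: small broadband noise for the stall stage, a
# large 8 Hz limit-cycle oscillation plus noise for the surge stage.
p_stall = 0.02 * rng.standard_normal(t.size)
p_surge = 0.5 * np.sin(2 * np.pi * 8 * t) + 0.02 * rng.standard_normal(t.size)

def empirical_pdf(x, bins=80):
    pdf, edges = np.histogram(x, bins=bins, density=True)
    return pdf, 0.5 * (edges[:-1] + edges[1:])

# A sinusoidal limit cycle has a bimodal PDF peaking near +/- amplitude,
# while the stall-stage PDF stays unimodal and narrow.
pdf_surge, centers = empirical_pdf(p_surge)
print(p_stall.std(), p_surge.std())   # surge fluctuations are far larger
print(centers[np.argmax(pdf_surge)])  # PDF mode sits near +/- 0.5
```

A widening, flattening, or splitting of the pressure PDF is therefore a usable early indicator of the stall-to-surge transition the paper tracks.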
- Research Article
- 10.1088/1361-6560/ae3c53
- Feb 20, 2026
- Physics in Medicine & Biology
- Jingyan Xu + 1 more
Objective. We propose a new formulation for ideal observers (IOs) that incorporate stochastic object models (SOMs) for data acquisition optimization. Approach. A data acquisition system is considered as a (possibly nonlinear) discrete-to-discrete mapping from a finite-dimensional object space, x ∈ R^{n_d}, to a finite-dimensional measurement space, y ∈ R^m. For binary tasks, the two underlying SOMs, H0 and H1, are specified by two probability density functions (PDFs) p0(x), p1(x). This leads to the notion of the intrinsic likelihood ratio (LR) Λ_I(x) = p1(x)/p0(x) and intrinsic class separability (ICS); the latter quantifies the population separability that is independent of data acquisition. With respect to the ICS, the IO employs the 'extrinsic' LR Λ(y) = pr(y|H1)/pr(y|H0) of the data and quantifies the extrinsic class separability (ECS). The difference between ICS and ECS measures the efficiency of data acquisition. We show that the extrinsic LR Λ(y) is the expectation of the intrinsic LR Λ_I(x), where the expectation is taken with respect to the posterior PDF pr(x|y, H0) under H0. Main results. We use two examples, one to clarify the new IO and a second to demonstrate its potential for real-world applications. Specifically, we apply the new IO to spectral optimization in dual-energy CT projection-domain material decomposition (pMD), for which SOMs are used to describe the variability of basis material line integrals. The performance rank orders obtained by the IO agree with physics predictions. Significance. The main computation in the new IO involves sampling from the posterior PDF pr(x|y, H0), which is similar to (fully) Bayesian reconstruction. Thus our IO computation is amenable to standard techniques already familiar to CT researchers. The example of dual-energy pMD serves as a prototype for other spectral optimization problems, e.g., for photon-counting CT or multi-energy CT with multi-layer detectors.
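The likelihood-ratio observer can be made concrete on the simplest binary task: Gaussian SOMs H0 ~ N(0, I) and H1 ~ N(μ, I), for which the log LR is linear in the data and the IO performance is known in closed form (AUC = Φ(d′/√2) with d′ = |μ|). This textbook example is chosen for illustration; the paper's formulation and pMD application go well beyond it.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim = 20_000, 4

# Binary task with Gaussian SOMs: H0 ~ N(0, I), H1 ~ N(mu, I).  The ideal
# observer's log likelihood ratio is linear: log LR(y) = mu.(y - mu/2).
mu = np.full(dim, 0.5)               # gives detectability d' = |mu| = 1
y0 = rng.normal(size=(n, dim))       # data under H0
y1 = rng.normal(size=(n, dim)) + mu  # data under H1

llr0 = y0 @ mu - 0.5 * mu @ mu
llr1 = y1 @ mu - 0.5 * mu @ mu

# AUC via the Mann-Whitney U statistic on the pooled log-LR scores.
scores = np.concatenate([llr0, llr1])
ranks = scores.argsort().argsort() + 1       # 1-based ranks, ties negligible
u_stat = ranks[n:].sum() - n * (n + 1) / 2   # U for the H1 scores
auc = u_stat / (n * n)
print(auc)   # near Phi(d'/sqrt(2)) = Phi(1/sqrt(2)) ~ 0.760
```

Any data acquisition mapping applied to x can only shrink this AUC toward 0.5, which is the intrinsic-vs-extrinsic separability gap the paper uses to score acquisition designs.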