Articles published on Statistical Properties
29616 Search results
- New
- Research Article
- 10.1109/tpami.2025.3607773
- Jan 1, 2026
- IEEE transactions on pattern analysis and machine intelligence
- Jinwei Yao + 3 more
Stochastic Kriging (SK) is a generalized variant of Gaussian process regression developed to handle non-i.i.d. noise in functional responses. Although SK has achieved substantial success in various engineering applications, its intrinsic modeling strategy, which focuses on the sample mean, limits its flexibility and its capability to predict individual functional samples. Moreover, the performance of SK can be impaired under scarce-data scenarios, which are commonly encountered in engineering applications, especially for start-up or newly deployed systems. In this paper, we propose a novel transfer learning framework to address the challenges of individualization and data scarcity in traditional SK. The proposed framework features a within-process model to facilitate individualized prediction and a between-process model that leverages information from related processes to resolve the issue of data scarcity. The within- and between-process models are integrated through a tailored convolution process, which quantifies interactions within and between processes using a specially designed covariance matrix and corresponding kernel parameters. Statistical properties of the parameter estimation in the proposed framework are investigated, providing theoretical guarantees for the performance of transfer learning. The proposed method is compared with benchmark methods through various numerical and real case studies, and the results demonstrate its superiority in individualized prediction of functional responses, especially when limited data are available in the process of interest. The reproducibility code is available in the supplementary materials.
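As an illustration of the baseline that this paper extends, the following minimal sketch implements the classical stochastic-kriging posterior mean, in which the intrinsic (replication) noise variances are added to the diagonal of the spatial covariance. The squared-exponential kernel, constant trend, and parameter values are illustrative assumptions, not the paper's transfer-learning framework.

```python
import numpy as np

def sk_predict(X, y_bar, noise_var, X_new, lengthscale=1.0, tau2=1.0):
    """Minimal stochastic-kriging posterior mean (sketch).

    X         : (n, d) design points
    y_bar     : (n,) sample means of the replicated responses
    noise_var : (n,) variance of each sample mean (sigma_i^2 / n_i)
    X_new     : (m, d) prediction points
    """
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return tau2 * np.exp(-0.5 * d2 / lengthscale ** 2)

    ones = np.ones(len(X))
    K = k(X, X) + np.diag(noise_var)              # extrinsic + intrinsic covariance
    beta = float(ones @ np.linalg.solve(K, y_bar) /
                 (ones @ np.linalg.solve(K, ones)))  # GLS constant trend
    alpha = np.linalg.solve(K, y_bar - beta)       # weights on de-trended sample means
    return beta + k(X_new, X) @ alpha              # SK posterior mean at X_new
```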
- New
- Research Article
- 10.1257/aer.20231394
- Jan 1, 2026
- American Economic Review
- Jaden Yang Chen
This paper investigates a sequential social learning problem in which individuals face ambiguity about others’ signal structures and have max-min expected utility preferences, thereby exhibiting ambiguity aversion. Unlike previous findings, which suggest that learning outcomes depend on the specifics of the learning environment, this study establishes information cascades as a robust outcome under ambiguity. With sufficient ambiguity, cascades arise almost surely, regardless of the statistical properties of signal structures. Moreover, standard results predicting the absence of cascades can easily break down: Even minimal ambiguity can trigger cascades when signals are bounded and lead to incorrect herding when signals are unbounded. (JEL D81, D82, D83)
- New
- Research Article
- 10.3847/1538-4357/ae21d4
- Dec 31, 2025
- The Astrophysical Journal
- Giuseppina Nigro + 6 more
Rapidly rotating late M dwarfs are observed in two different branches of magnetic activity, although they operate in the same stellar parameter range. Current empirical evidence indicates that M dwarfs with spectral types ranging from M3/M4 to late-type M dwarfs, stellar masses smaller than 0.15 M⊙, and rotational periods shorter than 4 days display either a stable dipolar magnetic field or magnetic structures with significant time variability. The magnetic activity of fully convective M dwarfs is known to be regulated by a mechanism named the α² dynamo. To further constrain the physics of this mechanism, we use a low-dimensional model for thermally driven magnetoconvection producing an α² dynamo, specifically a modified magnetohydrodynamic shell model. Although the model neglects density stratification, it captures the essential nonlinear dynamics of an α² dynamo. Therefore, the results should be interpreted in a qualitative sense, highlighting possible trends rather than providing direct quantitative predictions for fully convective stars. The model is validated by comparing the statistical properties of magnetic polarity reversals with paleomagnetic data, since the geodynamo provides the only natural α² dynamo with sufficiently rich reversal statistics. Our findings reveal that increased convective heat transport correlates with more frequent magnetic-polarity reversals, resulting in enhanced magnetic variability. This suggests that the observed magnetic dichotomy in late M dwarfs could be interpreted in terms of differences in global heat transport efficiency. However, additional models and observations of M dwarfs are needed to further constrain this interpretation.
- New
- Research Article
- 10.30829/zero.v9i3.25692
- Dec 29, 2025
- ZERO: Jurnal Sains, Matematika dan Terapan
- Mohamad Khoirun Najib + 7 more
<span lang="EN">Modeling rainfall is crucial for hydrological studies and climate adaptation, especially in regions with complex topography such as the Toba Lake area, North Sumatra. Classical probability distributions often struggle to represent skewness, heavy tails, and variability observed in tropical rainfall. This study explores APTXL distribution as a flexible two-parameter model. Through the alpha power transformation, APTXL extends the X-Lindley distribution by introducing an additional shape parameter, allowing better accommodation of asymmetrical and extreme values while maintaining analytical tractability. Statistical properties are derived, and parameters are estimated using maximum likelihood. The model is applied to a long-term dataset from 13 meteorological stations, covering 408 monthly observations per station. Comparative analysis against Gamma, Lognormal, and Generalized Extreme Value distributions using multiple goodness-of-fit criteria indicates that APTXL provides consistently improved performance. These results suggest APTXL as a practical tool for rainfall modeling and water-resource applications in climate-sensitive regions.</span>
- New
- Research Article
- 10.1287/msom.2023.0381
- Dec 29, 2025
- Manufacturing & Service Operations Management
- Lei Guan + 3 more
Problem definition: This paper considers the operations management problems under a newly proposed choice model referred to as a focal multinomial logit (FMNL) model. It generalizes the famous multinomial logit (MNL) model and various well-studied consideration-set choice models and can effectively capture irrational choice behaviors such as the context effect, halo effect, and choice overload, as well as the effect of focality. Methodology/results: We focus on the threshold focal set and various focal parameter settings, including the constant, cardinality, and linear threshold FMNL models, as well as a broader model that satisfies certain regularity conditions and subsumes the above models. We analyze the computational complexity and propose polynomial-time exact or approximation algorithms for assortment optimization problems under different focal parameters. We then characterize the optimal strategy for the joint price and assortment optimization problem. Our investigation into the statistical properties of maximum-likelihood estimators addresses identifiability, consistency, and convergence rates, as well as their implications on operations decisions. We also present a convex mixed-integer nonlinear programming reformulation method that achieves a global optimal estimator for model calibration. Managerial implications: Through extensive numerical experiments on synthetic and real data sets, we demonstrate the efficiency of the proposed algorithms, highlight the issues of model misspecification, and reveal revenue improvement under the family of FMNL models. Our analyses suggest that retailers should consider the impact of focality to potentially improve demand estimation accuracy and operations performance. Funding: L. Guan acknowledges financial support from the Fundamental Research Funds for the Central Universities [Grant 2025CX13014]. K. Nip acknowledges financial support from the National Natural Science Foundation of China [Grant 72571183]. L. Zhang acknowledges financial support from the National Natural Science Foundation of China [Grant 72471156] and the Hong Kong Research Grant Council [Grant GRF 16209923]. Supplemental Material: The online appendices are available at https://doi.org/10.1287/msom.2023.0381 .
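For context, here is a minimal sketch of the standard multinomial logit (MNL) choice probabilities and assortment revenue that the FMNL model generalizes; the paper's focal-set machinery is not reproduced, and all utilities and prices below are illustrative assumptions.

```python
import numpy as np

def mnl_choice_probs(utilities, assortment):
    """Standard MNL choice probabilities over an offered assortment (sketch).

    utilities  : dict product -> mean utility v_i
    assortment : iterable of offered products
    The no-purchase option has utility normalised to 0 (weight exp(0) = 1).
    """
    w = {i: np.exp(utilities[i]) for i in assortment}
    denom = 1.0 + sum(w.values())              # 1.0 is the no-purchase weight
    probs = {i: wi / denom for i, wi in w.items()}
    probs["no_purchase"] = 1.0 / denom
    return probs

def expected_revenue(utilities, prices, assortment):
    """Expected revenue of an assortment under the MNL model."""
    probs = mnl_choice_probs(utilities, assortment)
    return sum(prices[i] * probs[i] for i in assortment)

utils = {"a": 1.0, "b": 0.5, "c": -0.2}
prices = {"a": 10.0, "b": 12.0, "c": 15.0}
print(expected_revenue(utils, prices, ["a", "b"]))
```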
- New
- Research Article
- 10.3390/math14010120
- Dec 28, 2025
- Mathematics
- Ammar M Sarhan + 3 more
This paper introduces a novel bivariate distribution, referred to as the Bivariate Burr XII Inverse Weibull (BBXII-IW) distribution, constructed via the Marshall–Olkin approach from the univariate Burr XII Inverse Weibull (BXII-IW) distribution. The proposed BBXII-IW model provides a flexible framework for modeling dependent bivariate data, including competing risk scenarios. The key statistical properties of the distribution are derived, and parameter estimation is conducted using the maximum likelihood method. The model’s performance is evaluated using two types of real-world datasets: (1) bivariate data and (2) dependent competing risk data related to diabetic retinopathy. The results demonstrate that the BBXII-IW distribution offers an improved fit compared to existing models, highlighting its flexibility and practical relevance in modeling complex dependent structures.
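A minimal sketch of the Marshall–Olkin construction referenced above: three independent latent shocks are combined through componentwise minima, so the shared shock induces dependence (and allows ties, as in competing-risk data). Exponential latents are used purely for illustration; the paper's BBXII-IW model draws them from Burr XII Inverse Weibull distributions.

```python
import numpy as np

rng = np.random.default_rng(0)

def marshall_olkin_sample(n, draw_u1, draw_u2, draw_u3):
    """Marshall-Olkin construction of dependent bivariate lifetimes (sketch).

    Draw independent latent shocks U1, U2, U3 and return
    (X, Y) = (min(U1, U3), min(U2, U3)); the common shock U3 creates dependence.
    """
    u1, u2, u3 = draw_u1(n), draw_u2(n), draw_u3(n)
    return np.minimum(u1, u3), np.minimum(u2, u3)

# Illustrative exponential latents (not the BXII-IW latents of the paper).
x, y = marshall_olkin_sample(
    10_000,
    lambda n: rng.exponential(1.0, n),
    lambda n: rng.exponential(1.5, n),
    lambda n: rng.exponential(0.7, n),
)
print(np.corrcoef(x, y)[0, 1])   # positive dependence induced by the common shock
```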
- New
- Research Article
- 10.62051/gdz6ev13
- Dec 25, 2025
- Transactions on Computer Science and Intelligent Systems Research
- Boyuan Sun
Single-particle escape in tri-stellar gravitational systems is investigated using large ensembles of direct integrations that sample broad ranges of initial specific energy E and angular momentum L. We construct outcome maps over (E, L) and interpret their structure with the aid of a co-rotating (synodic) frame, where the Jacobi-like integral and zero-velocity surfaces (ZVS) provide geometric diagnostics of accessible channels near the classical necks. Two outcome regimes emerge robustly across our experiments: (i) prompt ejection following a single strong passage and (ii) long-lived chaotic transients that persist for many binary periods before escaping. Ensemble statistics show an early-time peak in escape events accompanied by a heavy tail in residence times. The escape probability increases systematically with higher initial energy and lower angular momentum, reflecting the combined roles of surplus kinetic energy and a reduced centrifugal barrier. By coarse-graining the (E, L) plane, we resolve fractal-like escape basins bounded by trapped regions, consistent with sensitive dependence on initial conditions. Throughout, Jacobi/ZVS arguments are used as geometric guides rather than strict invariants, allowing a unified description that connects inertial-frame energy criteria to rotating-frame accessibility. The approach is intentionally minimal—Newtonian point masses and idealized initial families—yet yields practical summaries for evaporation, capture/escape statistics, and rapid screening of initial conditions in multi-body environments.
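For reference, the Jacobi constant of the planar circular restricted three-body problem, which the abstract invokes only as a "Jacobi-like" geometric guide rather than a strict invariant of the triple system; the mass ratio, state, and normalised units below are illustrative assumptions.

```python
import numpy as np

def jacobi_constant(state, mu):
    """Jacobi constant of the planar circular restricted three-body problem (sketch).

    state = (x, y, vx, vy) in the co-rotating (synodic) frame, normalised units;
    mu is the mass ratio of the secondary.
    """
    x, y, vx, vy = state
    r1 = np.hypot(x + mu, y)          # distance to the primary at (-mu, 0)
    r2 = np.hypot(x - 1 + mu, y)      # distance to the secondary at (1 - mu, 0)
    omega = 0.5 * (x**2 + y**2) + (1 - mu) / r1 + mu / r2
    return 2 * omega - (vx**2 + vy**2)

# Motion is energetically barred where 2*Omega(x, y) < C_J (zero-velocity surfaces),
# which is how escape channels near the necks open or close.
print(jacobi_constant((0.5, 0.1, 0.0, 0.4), mu=0.1))
```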
- New
- Research Article
- 10.3390/fire9010015
- Dec 25, 2025
- Fire
- Han Li + 3 more
The frequent occurrence of fires has prompted China to accelerate the development of community fire prevention and emergency management systems. Language, serving both communicative and affective functions by facilitating the flow of information and fostering mutual understanding, runs through the entire process of community fire emergency management. In response to the early-stage nature of this field and the lack of a systematic framework, this study constructs a dynamic capability evaluation system for urban community fire-related emergency language services (FELS) by integrating multi-source and heterogeneous data. First, by adopting a hybrid approach combining dynamic capability theory and text mining, a three-level indicator system is established. Second, based on domain knowledge, quantitative methods and scoring rules are designed for the third-level qualitative indicators to provide standardized input for the model. Third, a weighting and integration framework is developed that simultaneously considers the internal mechanism characteristics and statistical properties of indicators. Specifically, a knowledge-driven weighting approach combining FAHP and fuzzy DEMATEL is employed to characterize indicator importance and interrelationships, while the CRITIC method is used to extract data-driven weights based on data dispersion and information content. These knowledge-driven and data-driven weights are then integrated through a multi-feature fusion weighting approach. Finally, a linear weighting model is applied to combine the normalized indicator values with the integrated weights, enabling a systematic evaluation of the dynamic capabilities of community FELS. To validate the proposed framework, application tests were conducted in four representative types of urban communities, including internationally developed, aging and vulnerable, newly developed, and economically diverse communities, using fire emergency scenarios as the entry point. The external validity and internal robustness of the proposed model were verified through these tests. The results indicate that the evaluation system provides accurate, objective, and adaptive assessments of dynamic capabilities in FELS across different community contexts, offering a governance-oriented quantitative tool to support grassroots fire prevention and to enhance community resilience.
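To illustrate the data-driven half of the weighting scheme, here is a minimal sketch of the CRITIC method: an indicator's weight grows with its dispersion (contrast intensity) and with how weakly it correlates with the other indicators. The min-max normalisation and inputs are assumptions; the paper's fusion with FAHP and fuzzy-DEMATEL knowledge-driven weights is not reproduced here.

```python
import numpy as np

def critic_weights(X):
    """CRITIC weighting of indicator columns (sketch of the data-driven step).

    X : (n_alternatives, n_indicators) matrix of indicator scores.
    """
    # Min-max normalise each column so dispersion is comparable across indicators.
    Xn = (X - X.min(0)) / (X.max(0) - X.min(0) + 1e-12)
    sigma = Xn.std(0, ddof=1)                 # contrast intensity of each indicator
    r = np.corrcoef(Xn, rowvar=False)         # indicator correlation matrix
    info = sigma * (1.0 - r).sum(0)           # C_j = sigma_j * sum_k (1 - r_jk)
    return info / info.sum()                  # normalised weights

X = np.array([[0.2, 3.0, 10.0],
              [0.8, 2.5, 12.0],
              [0.5, 4.0,  9.0],
              [0.9, 3.5, 11.0]])
print(critic_weights(X))
```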
- New
- Research Article
- 10.3847/1538-4357/ae1cc8
- Dec 24, 2025
- The Astrophysical Journal
- Suziye He + 8 more
We analyze the hierarchical structure in the Rosette molecular cloud using ¹³CO J = 1–0 data from the Milky Way Imaging Scroll Painting survey with a nonbinary Dendrogram algorithm that allows multiple branches to emerge from parent structures. A total of 588 substructures are identified, including 458 leaves and 130 branches. The physical parameters of the substructures—including peak brightness temperature (T_peak), brightness temperature difference (T_diff), radius (R), mass (M), velocity dispersion (σ_v), and surface density (Σ)—are characterized. The T_peak and T_diff distributions follow exponential functions with characteristic values above 5σ_rms. The statistical properties and scaling relations—i.e., the σ_v–R, M–R, and σ_v–RΣ relations—are in general consistent with those from traditional segmentation methods. The mass and radius follow power-law distributions with exponents of 2.2–2.5, with slightly flatter slopes for substructures inside the H II region. The velocity dispersion scales weakly with radius (σ_v ∝ R^{0.45±0.03}, r = 0.58) but shows a tighter correlation with the product of surface density and size (σ_v ∝ (ΣR)^{0.29±0.01}, r = 0.73). Self-gravitating substructures are found across scales from ∼0.2 to 10 pc, and nearly all structures with peak brightness above 4 K are gravitationally bound (α_vir < 2). The fraction of bound structures increases with mass, size, and surface density, supporting the scenario of global hierarchical collapse for the evolution of molecular clouds, in which molecular clouds and their substructures are undergoing multiscale collapse.
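For reference, the boundedness criterion quoted above uses the standard virial parameter α_vir = 5σ_v²R/(GM); the sketch below evaluates it with illustrative numbers, not values from the survey.

```python
import numpy as np

G = 4.30091e-3   # gravitational constant in pc * (km/s)^2 / M_sun

def virial_parameter(sigma_v_kms, radius_pc, mass_msun):
    """Standard virial parameter alpha_vir = 5 sigma_v^2 R / (G M) (sketch).

    sigma_v_kms : 1D velocity dispersion in km/s
    radius_pc   : effective radius in pc
    mass_msun   : mass in solar masses
    Structures with alpha_vir < 2 are conventionally taken as gravitationally bound.
    """
    return 5.0 * sigma_v_kms**2 * radius_pc / (G * mass_msun)

print(virial_parameter(0.8, 1.0, 500.0))   # ~1.5 -> bound by the alpha_vir < 2 criterion
```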
- New
- Research Article
- 10.11648/j.ajmcm.20251004.13
- Dec 24, 2025
- American Journal of Mathematical and Computer Modelling
- Parthasarathy Srinivasan
One of the most pervasive applications in computing is the generation of random numbers drawn from a given probability distribution, such as the Gaussian (normal) distribution. These distributions possess statistical properties such as the expected value (mean), variance (standard deviation), p-value, and entropy, of which entropy is especially significant for quantifying the amount of (useful) information that a particular instance of a distribution embodies. This quantification of entropy is valuable as a characterizing metric that determines the amount of randomness/uncertainty and/or redundancy achievable with a particular distribution instance, which is particularly useful for communication, cryptographic, and astronomical applications. In the present work, the author introduces an alternative way to calculate an approximate value of the information entropy, as a variation on Claude Shannon's formulation, by observing that a Takens embedding of the probability distribution yields a simple measure of the entropy using only four critical/representative points of the embedding. Through comparative experimentation, the author empirically verifies that this alternative formulation is consistently valid. The baseline experiment relates to Discrete Task Oriented Joint Source Channel Coding (DT-JSCC), which uses entropy computation to perform efficient and reliable task-oriented communication (transmission and reception), as elaborated further in the paper. The comparison was performed by employing the Shannon formulation for entropy computation in the baseline DT-JSCC experiment and then repeating the experiment with the entropy formulation introduced in this work. The accuracy of the results obtained (data models generated) was almost identical, differing by only about 1% overall. Thus, the alternative formulation provides a reliable means of validating the random numbers obtained from the Shannon formulation and also potentially serves as a simpler, faster, and more computationally efficient method. This is particularly useful in applications where computational resources are constrained, such as mobile and resource-limited devices. The method is also useful for uniquely identifying and characterizing random probability sources, such as those arising from astronomical and/or optical (photonic) phenomena. The author also investigates the impact of incorporating the above notion of entropy into the Mars Rover ICER software and confirms the conclusions of the original article from the Jet Propulsion Laboratory, NASA, which describes the ICER progressive wavelet image compressor.
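For comparison, the baseline Shannon formulation against which the paper's Takens-embedding approximation is validated can be sketched as follows; the histogram estimator and bin count are illustrative choices, and the embedding-based variant itself is not reproduced here.

```python
import numpy as np

def shannon_entropy(samples, bins=64):
    """Shannon entropy (in bits) of an empirical distribution (sketch).

    H = -sum_i p_i * log2(p_i), estimated from a histogram of the samples.
    """
    counts, _ = np.histogram(samples, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
print(shannon_entropy(rng.normal(size=100_000)))   # entropy of a sampled Gaussian
```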
- New
- Research Article
- 10.54254/2755-2721/2026.tj30782
- Dec 24, 2025
- Applied and Computational Engineering
- Tianhao Huang
Massive Multiple-Input Multiple-Output (Massive MIMO) serves as a foundational enabling technology for 5G and future communication systems, markedly boosting spectral and energy efficiency through the deployment of large-scale antenna arrays. However, the scaling-up of antenna arrays has led to a substantial increase in system power consumption and hardware costs, with high-precision analog-to-digital converters (ADCs) emerging as the dominant power consumption bottleneck in the radio frequency chain. To alleviate system complexity and power consumption, low-resolution ADCs (1–3 bits) have attracted extensive research interest in recent years. Such schemes can substantially curtail hardware costs and energy consumption while retaining satisfactory system performance. Nevertheless, the severe nonlinear distortion introduced by low-precision quantization disrupts the linear Gaussian model assumption upon which traditional receiver algorithms rely, resulting in compromised channel estimation and signal detection performance. Quantization errors demonstrate non-Gaussian and input-dependent characteristics, leading to the degradation of amplitude information and thus constraining the applicability of technologies such as high-order modulation and high-precision sensing. This paper presents a systematic review of low-precision quantization techniques for Massive MIMO. It first investigates the impacts of low-bit quantization on system models and signal statistical properties. Subsequently, it elaborates on transceiver architectures and key design challenges pertaining to low-precision ADCs/DACs. The paper highlights signal processing and algorithmic strategies to overcome quantization distortion, including Bussgang decomposition linearization methods, statistical inference techniques such as approximate message passing (AMP), model-driven deep learning frameworks, and quantization architectures endowed with noise-shaping capabilities. Finally, it discusses the challenges and future directions of this technology in emerging scenarios, including terahertz communications, intelligent reflecting surfaces, and integrated sensing and communication. This paper seeks to provide researchers with a systematic technical overview, clarifying the intrinsic connections and trade-offs among different methods, and offering valuable insights for the realization of high-energy-efficiency and low-cost Massive MIMO systems.
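To make the Bussgang decomposition mentioned above concrete, here is a minimal sketch for a 1-bit (sign) quantizer with Gaussian input, where the linear gain has the closed form √(2/π)/σ; the signal statistics are illustrative assumptions, and this is not a full receiver design.

```python
import numpy as np

rng = np.random.default_rng(0)

def bussgang_gain(x, quantizer):
    """Empirical Bussgang gain A such that Q(x) = A*x + q with q uncorrelated with x."""
    y = quantizer(x)
    return float(np.mean(x * y) / np.mean(x * x))

sigma = 2.0
x = rng.normal(0.0, sigma, 1_000_000)
one_bit = lambda v: np.sign(v)                  # 1-bit ADC modelled as a sign quantizer

A_hat = bussgang_gain(x, one_bit)
A_theory = np.sqrt(2.0 / np.pi) / sigma          # closed form for zero-mean Gaussian input
print(A_hat, A_theory)                           # the two agree closely

q = one_bit(x) - A_hat * x                       # distortion term of the decomposition
print(np.corrcoef(x, q)[0, 1])                   # ~0: q is uncorrelated with x
```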
- New
- Research Article
- 10.1051/0004-6361/202453516
- Dec 23, 2025
- Astronomy & Astrophysics
- Juhan Raidal + 3 more
We investigated the statistical properties of the anisotropy in the gravitational wave (GW) background originating from supermassive black hole (SMBH) binaries. Considering scenarios that include environmental effects and eccentricities of the SMBH binaries, we derived the distribution of the GW anisotropy power spectrum coefficients, C_ℓ/C_0. Although the mean of C_ℓ/C_0 is the same for all multipoles, we show that their distributions vary, with the low-ℓ distributions being the widest. This study finds a strong correlation between spectral fluctuations and anisotropy in the GW signal and shows that the GW anisotropy can break the degeneracy between scenarios that include environmental effects or eccentric binaries. We find that existing NANOGrav constraints on GW anisotropy begin to constrain SMBH scenarios with strong environmental effects.
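For reference, the anisotropy power spectrum coefficients are built from spherical-harmonic coefficients as C_ℓ = (2ℓ+1)⁻¹ Σ_m |a_ℓm|²; the sketch below uses random coefficients purely to illustrate the C_ℓ/C_0 normalisation, not the paper's SMBH-binary populations.

```python
import numpy as np

def angular_power_spectrum(alm, lmax):
    """C_l = (1 / (2l + 1)) * sum_m |a_lm|^2 from spherical-harmonic coefficients (sketch).

    alm : dict (l, m) -> complex coefficient, with m = -l..l
    """
    C = np.zeros(lmax + 1)
    for l in range(lmax + 1):
        C[l] = sum(abs(alm[(l, m)]) ** 2 for m in range(-l, l + 1)) / (2 * l + 1)
    return C

rng = np.random.default_rng(2)
lmax = 4
alm = {(l, m): rng.normal() + 1j * rng.normal()
       for l in range(lmax + 1) for m in range(-l, l + 1)}
C = angular_power_spectrum(alm, lmax)
print(C / C[0])     # normalised multipole coefficients C_l / C_0
```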
- New
- Research Article
- 10.1017/jfm.2025.10990
- Dec 22, 2025
- Journal of Fluid Mechanics
- Yuankai Cui + 3 more
Electrical effects are known to play an important role in particle-laden flows, yet a holistic view of how they modulate turbulence remains elusive due to the complexity of multifield coupling. Here, we present a total of 119 direct numerical simulations of particle-laden turbulent channel flow that reveal a striking ability of electrical effects to induce turbulence relaminarisation and markedly alter wall drag. As expected, the transition from turbulence to laminar flow is accompanied by abrupt changes in the statistical properties of both the fluid and particulate phases. Nevertheless, with increasing electrical effects, the wall-normal profiles of the mean streamwise fluid velocity and mean local particle mass loading exhibit opposite trends in the turbulent and laminar regimes, arising from the competition between turbophoresis and electrostatic drift. We identify three distinct flow regimes resulting from the electrical effects: a drag-reduced turbulent regime, a drag-reduced laminar regime, and a drag-enhanced laminar regime. It is revealed that relaminarisation originates from the complete suppression of the streak breakdown in the near-wall self-sustaining cycle, followed by the sequential inhibition of other subprocesses in the cycle. In the turbulent regime, increasing electrical effects induce opposing trends in Reynolds and particle stress contributions to drag, yielding a non-monotonic drag response. In laminar regimes, by contrast, the drag coefficient increases monotonically as the Reynolds stress vanishes and particle-induced stress becomes dominant.
- New
- Research Article
- 10.1098/rsbm.2025.0013
- Dec 19, 2025
- Biographical Memoirs of Fellows of the Royal Society
- John A Peacock + 2 more
Nick Kaiser was a statistical cosmologist of rare creativity, who wrote many deeply influential papers concerning the study of large-scale inhomogeneities in the Universe. His most important achievements were: explaining the biased amplitude of galaxy clustering via the enhanced correlations of rare massive haloes of dark matter; diagnosing how the peculiar velocities associated with structure formation would generate anisotropic redshift-space distortions in galaxy clustering; and analysing the effect of weak gravitational lensing, in which small coherent distortions of the shape of galaxy images could be used to map the dark matter distribution and measure its statistical properties. These theoretical ideas are at the heart of new generations of large galaxy surveys, which aim to use Nick’s methods to probe fundamental aspects of the cosmological model, particularly measuring whether the vacuum density evolves with time, and testing whether Einstein’s relativistic theory of gravity is correct on cosmological scales.
- New
- Research Article
- 10.5194/bg-22-8093-2025
- Dec 19, 2025
- Biogeosciences
- Kévin Robache + 1 more
High-frequency variability of the partial pressure of CO2 (pCO2) in coastal environments reflects the complex interplay of physical, chemical and biological drivers. Multiscale statistical approaches provide a robust framework for understanding dynamics across timescales and for reliably assessing coastal carbon processes. In this study, pCO2 has been measured on the Astan cardinal buoy (Brittany, west coast of France) at 30 min intervals by Gac et al. (2020), yielding a dataset of 32 582 data points collected over a period of nearly five years. These measurements were then coupled with others of sea surface temperature and salinity, chlorophyll a, oxygen saturation and atmospheric pressure. The aim of this study was to consider the statistical properties of the thermal and non-thermal components of pCO2, based on its relation with temperature established by Takahashi et al. (2009). Using Fourier spectral analysis, it was demonstrated that all marine scalars exhibited scaling properties with power-law slopes ranging from 1.73–1.85 for timescales spanning from 12 h to at least 80–100 d. The results obtained from this analysis indicate turbulent and intermittent dynamics for all the considered scalars, including sea surface temperature and salinity, chlorophyll a, oxygen saturation, pCO2, and the pCO2 thermal and non-thermal components. A time-reversibility analysis evidenced the irreversibility of the pCO2 components above 30 d. The irreversibility exhibited by the thermal component was found to be higher than that of the non-thermal component, with an average value of the associated irreversibility index approximately 3.5 times higher than that of the non-thermal component over the period of 50–70 d. Furthermore, a methodology known as the Probability Density Function quotient, which has not been widely utilized, was employed. This approach enabled the identification of values for which there were statistical relationships between variables, facilitating the quantification of the influence of primary production on the non-thermal pCO2, or the influence of periods of depression on supersaturation due to atmospheric or terrigenous inputs. This provided new insights into the stochastic coupling between biological and physical processes when considering high-frequency pCO2 variability.
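The thermal/non-thermal split referred to above is commonly computed with the Takahashi et al. temperature-sensitivity factor of about 0.0423 °C⁻¹; a minimal sketch follows, with the caveat that the study may use a site-specific variant of this decomposition.

```python
import numpy as np

def takahashi_decomposition(pco2, sst, gamma=0.0423):
    """Thermal / non-thermal decomposition of pCO2 (sketch, following Takahashi et al.).

    pco2  : observed pCO2 time series (uatm)
    sst   : sea surface temperature (deg C)
    gamma : temperature sensitivity of pCO2, ~0.0423 per deg C
    Returns (thermal, non_thermal) components.
    """
    pco2 = np.asarray(pco2, float)
    sst = np.asarray(sst, float)
    thermal = pco2.mean() * np.exp(gamma * (sst - sst.mean()))     # temperature-driven part
    non_thermal = pco2 * np.exp(gamma * (sst.mean() - sst))        # biological/mixing part
    return thermal, non_thermal
```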
- New
- Research Article
- 10.61173/5pr54n62
- Dec 19, 2025
- MedScien
- Kun Zhang
Adaptation is an essential feature of auditory neurons, which reduces their responses to unchanging and recurring sounds and allows their response properties to be matched to the constantly changing statistics of sounds that reach the ears. As a consequence, processing in the auditory system highlights novel or unpredictable sounds and produces an efficient representation of the vast range of sounds that animals can perceive by continually adjusting the sensitivity and, to a lesser extent, the tuning properties of neurons to the most commonly encountered stimulus values. Together with attentional modulation, adaptation to sound statistics also helps to generate neural representations of sound that are tolerant to background noise and therefore plays a vital role in auditory scene analysis. In this review, we consider the diverse forms of adaptation that are found in the auditory system in terms of the processing levels at which they arise, the underlying neural mechanisms, and their impact on neural coding and perception. We also ask what the dynamics of adaptation, which can occur over multiple timescales, reveal about the statistical properties of the environment. Finally, we examine how adaptation to sound statistics is influenced by learning and experience and changes as a result of aging and hearing loss.
- New
- Research Article
- 10.1051/0004-6361/202557085
- Dec 17, 2025
- Astronomy & Astrophysics
- M Candela Cerdosino + 18 more
We investigate whether systems of multiple Lyman-alpha emitters (LAEs) can serve as a proxy for dark matter halo mass, assess how their radiative properties relate to the underlying halo conditions, and explore the physics of star formation activity in LAEs and its relation to possible physically related companions. We used data from the One-hundred-deg² DECam Imaging in Narrowbands (ODIN) survey, which targets LAEs in three narrow redshift slices. We identified physically associated LAE multiples in the COSMOS field at z = 2.4, z = 3.1, and z = 4.5, and we used a mock catalog from the IllustrisTNG100 simulation to assess the completeness and contamination affecting the resulting sample of LAE multiples. We then studied their statistical and radiative properties as a function of multiplicity, where we adopted the term "multiplicity" to refer to the number of physically associated LAEs. We find a strong correlation between LAE multiplicity and host halo mass in the mocks, with higher-multiplicity systems preferentially occupying more massive halos. In both the ODIN and the mock sample, we find indications that the mean Lyα luminosity and UV magnitude of LAEs in multiples increase with multiplicity. The halo-wide LAE surface brightness densities in Lyα and UV increase with multiplicity, reflecting more compact and actively star-forming environments. The close agreement between the model and ODIN-COSMOS observations supports the validity of the Lyα emission model in capturing key physical processes in LAE environments. Finally, a subhalo-based perturbation-induced star formation model reproduces the minimum subhalo mass distribution in simulations at z = 2.4, suggesting that local perturbations—rather than the presence of LAE companions—drive star formation activity in these systems. For the higher redshift samples, neighbor perturbations do not seem to be the main driver that triggers star formation.
- New
- Research Article
- 10.1038/s41467-025-67499-6
- Dec 15, 2025
- Nature communications
- Malcolm Hillebrand + 1 more
Active fluids exhibit chaotic flows at low Reynolds number known as active turbulence. Whereas the statistical properties of the chaotic flows are increasingly well understood, the nature of the transition from laminar to turbulent flows as activity increases remains unclear. Here, through simulations of a minimal model of unbounded and defect-free active nematics, we find that the transition to active turbulence is discontinuous. We show that the transition features a jump in the mean-squared velocity, as well as bistability and hysteresis between laminar and chaotic flows. From distributions of finite-time Lyapunov exponents, we identify the transition at a value A*≈4900 of the dimensionless activity number. Below the transition to chaos, we find subcritical bifurcations that feature bistability of different laminar patterns. These bifurcations give rise to oscillations and to chaotic transients, which become very long close to the transition to turbulence. Overall, our findings contrast with the continuous transition to turbulence in channel confinement, where turbulent puffs emerge within a laminar background. We propose that, without confinement, the long-range hydrodynamic interactions of Stokes flow suppress the spatial coexistence of different flow states, and thus render the transition discontinuous.
- Research Article
- 10.1103/mvz3-46vy
- Dec 9, 2025
- Physical Review C
- K Fujio + 3 more
Statistical properties of neutron-induced reaction cross sections using a random-matrix approach
- Research Article
- 10.1111/bmsp.70021
- Dec 8, 2025
- The British journal of mathematical and statistical psychology
- Sophie Vanbelle
Reliability is crucial in psychometrics, reflecting the extent to which a measurement instrument can discriminate between individuals or items. While classical test theory and intraclass correlation coefficients are well-established for quantitative scales, estimating reliability for binary outcomes presents unique challenges due to their discrete nature. This paper reviews and links three major approaches to estimate reliability for single ratings on binary scales: the normal approximation approach, kappa coefficients, and the latent variable approach, which enables estimation at both latent and manifest scale levels. We clarify their conceptual relationships, show conditions for asymptotic equivalence, and evaluate their performance across two common study designs, repeatability and reproducibility studies. Then, we extend the Bayesian Dirichlet-multinomial method for estimating kappa coefficients to settings with more than two replicates, without requiring Bayesian software. Additionally, we introduce a Bayesian method to estimate manifest scale reliability from latent scale reliability that can be implemented in standard Bayesian software. A simulation study compares the statistical properties of the three major approaches across Bayesian and frequentist frameworks. Overall, the normal approximation approach performed poorly, and the frequentist approach was unreliable due to singularity issues. The findings offer further refined practical recommendations.
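As a point of reference for the kappa-coefficient approach discussed above, here is a minimal sketch of Cohen's kappa for two binary ratings; the toy data are illustrative, and the paper's Bayesian Dirichlet-multinomial extension to more than two replicates is not reproduced here.

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two binary ratings (sketch).

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and
    p_e the agreement expected by chance from the raters' marginal rates.
    """
    r1, r2 = np.asarray(r1), np.asarray(r2)
    p_o = np.mean(r1 == r2)
    p1, p2 = r1.mean(), r2.mean()
    p_e = p1 * p2 + (1 - p1) * (1 - p2)
    return (p_o - p_e) / (1 - p_e)

# Toy example with two raters scoring eight subjects.
print(cohens_kappa([1, 0, 1, 1, 0, 0, 1, 0], [1, 0, 1, 0, 0, 0, 1, 1]))   # 0.5
```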