Articles published on Measurement Uncertainty
- Research Article
- 10.1515/cclm-2025-1052
- Nov 10, 2025
- Clinical chemistry and laboratory medicine
- Tobias Schierscher + 9 more
A candidate reference measurement procedure (RMP) based on isotope dilution (ID) liquid chromatography-tandem mass spectrometry (LC-MS/MS) was developed and validated to accurately measure serum and plasma concentrations of mycophenolic acid glucuronide (MPAG). Quantitative nuclear magnetic resonance (qNMR) spectroscopy was used to determine the absolute content (mass fraction; g/g) of the reference material, thereby establishing traceability to SI units. Separation of MPAG from potential interferences, whether known or unknown, was accomplished using a Phenomenex Luna C18(2) column. For sample preparation, a protocol based on protein precipitation followed by a high-dilution step was established. A multi-day validation experiment evaluated precision and accuracy, and reproducibility was determined by comparing results between two independent laboratories. Measurement uncertainty (MU) was assessed in accordance with current guidelines. The RMP demonstrated high selectivity and specificity, enabling quantification of MPAG in the range of 0.750 to 600 μg/mL. Intermediate precision (n=60 measurements) ranged from 0.9 to 3.7 % for serum samples and from 1.2 to 4.6 % for plasma samples; repeatability was less than 3.5 % for serum samples and less than 4.0 % for plasma samples. The relative mean bias ranged from −0.9 to 3.2 % for serum samples and from −0.3 to 2.9 % for plasma samples. The expanded measurement uncertainties (k=2) for single measurements ranged between 2.4 and 7.7 % and were further reduced by performing a target value assignment (n=6), resulting in expanded measurement uncertainties between 1.8 and 3.3 % (k=2). We herein present a new LC-MS/MS-based candidate RMP for MPAG in human serum and plasma, which offers a traceable and reliable platform for the standardization of routine assays and evaluation of clinically relevant samples.
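The k=2 expanded uncertainties and their reduction under target value assignment follow standard GUM conventions. A minimal sketch, assuming U = k·u_c and that averaging n independent replicates shrinks a random-dominated standard uncertainty by 1/√n; the numbers are illustrative, not the paper's data:

```python
import math

def expanded_uncertainty(u_c: float, k: float = 2.0) -> float:
    """Expanded measurement uncertainty U = k * u_c (GUM convention)."""
    return k * u_c

def averaged_uncertainty(u_single: float, n: int) -> float:
    """Standard uncertainty of the mean of n independent replicates,
    assuming the dominant contribution is random (scales as 1/sqrt(n))."""
    return u_single / math.sqrt(n)

# Illustrative values only (not the paper's data): a relative standard
# uncertainty of 3.85 % for a single measurement gives U = 7.7 % at k = 2;
# averaging n = 6 replicates brings U down to roughly 3.1 %.
u_single = 0.0385
print(f"U (single, k=2):    {expanded_uncertainty(u_single):.2%}")
print(f"U (mean of 6, k=2): {expanded_uncertainty(averaged_uncertainty(u_single, 6)):.2%}")
```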
- Research Article
- 10.1139/cjce-2025-0075
- Nov 10, 2025
- Canadian Journal of Civil Engineering
- Ahmad Altarabsheh + 4 more
Accurate prediction of runway pavement conditions is critical for aviation safety and maintenance planning. This study presents a Hierarchical Bayesian Joint Model with latent variables to simultaneously forecast key performance metrics—the International Roughness Index (IRI) and Surface Macrotexture Depth (SMTD)—while explicitly accounting for measurement uncertainties. The proposed model incorporates nonlinear quadratic relationships among axial loads, SMTD, and IRI, effectively capturing both direct and indirect load effects. Model performance was rigorously evaluated through a stratified five-fold cross-validation, achieving mean absolute errors as low as 0.98 for IRI and 0.88 for SMTD, outperforming traditional methods by approximately 15%. Posterior diagnostics confirmed robust convergence and accurate uncertainty quantification. Overall, the hierarchical Bayesian model demonstrated superior predictive accuracy, highlighting its practical utility for data-driven pavement management decisions.
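A minimal PyMC sketch of the kind of latent-variable joint model the abstract describes: quadratic load effects, a latent (error-free) SMTD driving IRI both directly and indirectly, and explicit measurement-error terms. The data, priors, and noise levels below are invented for illustration; this is not the authors' model:

```python
import numpy as np
import pymc as pm

# Hypothetical synthetic data standing in for runway records:
# axial load x, with SMTD and IRI both observed with measurement error.
rng = np.random.default_rng(0)
n = 200
load = rng.uniform(0.0, 1.0, n)
smtd_true = 0.8 - 0.3 * load - 0.2 * load**2
iri_true = 1.0 + 0.5 * load + 0.4 * load**2 - 0.6 * smtd_true
smtd_obs = smtd_true + rng.normal(0.0, 0.05, n)   # measurement error
iri_obs = iri_true + rng.normal(0.0, 0.10, n)

with pm.Model():
    # Quadratic load effects on the latent (error-free) SMTD.
    a = pm.Normal("a", 0.0, 1.0, shape=3)
    tau_smtd = pm.HalfNormal("tau_smtd", 0.5)
    smtd_lat = pm.Normal("smtd_lat",
                         mu=a[0] + a[1] * load + a[2] * load**2,
                         sigma=tau_smtd, shape=n)
    # IRI depends on load (direct) and on latent SMTD (indirect).
    b = pm.Normal("b", 0.0, 1.0, shape=4)
    iri_mu = b[0] + b[1] * load + b[2] * load**2 + b[3] * smtd_lat
    sigma_iri = pm.HalfNormal("sigma_iri", 0.5)
    # Measurement models link latent states to noisy observations.
    pm.Normal("smtd_meas", mu=smtd_lat, sigma=0.05, observed=smtd_obs)
    pm.Normal("iri_meas", mu=iri_mu, sigma=sigma_iri, observed=iri_obs)
    idata = pm.sample(1000, tune=1000, chains=2)
```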
- Research Article
- 10.1080/10291954.2025.2575655
- Nov 8, 2025
- South African Journal of Accounting Research
- Danielle Van Wyk + 2 more
Purpose: This study examines the ambiguity of financial reporting judgement disclosures of South African listed companies (including whether these disclosures are boilerplate). Motivation: Applying International Financial Reporting Standards (IFRSs) requires judgement, which may increase measurement uncertainty and decrease the decision-usefulness of financial information. However, transparent disclosures regarding financial reporting judgements could mitigate these risks. Methodology: Quantitative content analysis, using a self-developed disclosure checklist and Likert scales, was performed for 104 companies during 2020–2023 to measure the ambiguity of financial reporting judgement disclosures when applying selected IFRSs. Main findings: Companies generally displayed low ambiguity when disclosing financial reporting judgements associated with financial instruments, group-related accounting, leases, depreciation, goodwill and investment property. Average to high disclosure ambiguity was detected regarding fair value measurement, revenue recognition and provisions. Significant differences in ambiguity were identified based on year, company size and industry. Managerial impact: It is recommended that management and audit committees annually reassess their disclosure practices relating to financial reporting judgements to augment the decision-usefulness of annual report disclosures and reduce boilerplate disclosures. Novelty: The study focuses not on mere disclosure compliance but on the ambiguity of disclosures relating to financial reporting judgements in a period encompassing a global pandemic, to assess the usefulness of disclosures for decision-making.
- Research Article
- 10.1038/s41598-025-26084-z
- Nov 7, 2025
- Scientific reports
- Mengying Du + 7 more
Electronic noses (e-noses) offer a practical solution for real-time monitoring of ammonia (NH3) in agricultural environments, where NH3 often coexists with interfering gases such as CO2, CH4, and H2S. However, semiconductor-based gas sensors commonly used in e-nose systems suffer from inherent cross-sensitivity, which reduces measurement accuracy. This study investigates the cross-sensitivity of NH3 detection and introduces a mitigation strategy through convolutional neural networks (CNNs) for sensor data fusion. Experimental results show that WO3-based sensors exhibit strong NH3 selectivity, with response ratios of 7.3:1 against CH4 and 17.8:1 against H2S. Density functional theory (DFT) analysis confirmed that the WO3 sensor exhibited the strongest NH3 binding energy (−1.45 eV), compared to SnO2 (−1.10 eV), explaining the observed selectivity. Measurement uncertainties (±8 %) were quantified under varying humidity (30–90 % RH) and temperature (10–40 °C) using a weighted least squares error propagation model. A quasi-2D sensor array improved NH3 classification accuracy to 96.4 % (a 7.2 % increase) while reducing concentration errors by 50.8 %, as validated by linear discriminant analysis. Long-term stability tests demonstrated that SnO2 sensors maintained a low baseline drift of 0.18 %/day over 180 days, outperforming WO3 (0.31 %/day) and ZnO (0.42 %/day) sensors. Furthermore, the CNN model, trained on multi-sensor time-series data, achieved 91.7 % accuracy in mixed-gas environments by capturing non-linear response patterns, ensuring reliable NH3 quantification despite interferents. These findings highlight the promise of CNN-enhanced e-nose systems for precise NH3 monitoring in complex agricultural settings, addressing key challenges of cross-sensitivity and environmental stability.
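The ±8 % figure is attributed to a weighted least squares error propagation model. A generic sketch of that idea under textbook assumptions (linear sensor calibration, known per-point noise); the calibration numbers are hypothetical, not the paper's data:

```python
import numpy as np

def wls_with_uncertainty(X, y, sigma_y):
    """Weighted least squares fit with first-order error propagation.

    Weights are 1/sigma_y**2; the parameter covariance is (X^T W X)^{-1},
    the standard linear-model propagation result.
    """
    W = np.diag(1.0 / sigma_y**2)
    cov = np.linalg.inv(X.T @ W @ X)       # propagated parameter covariance
    beta = cov @ X.T @ W @ y               # WLS estimate
    return beta, cov

# Hypothetical calibration: sensor response vs. NH3 concentration, with
# humidity/temperature-dependent measurement noise sigma_y (made-up numbers).
conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
resp = np.array([0.02, 0.51, 1.03, 1.98, 4.10])
sigma_y = np.array([0.02, 0.04, 0.05, 0.08, 0.16])   # grows with humidity

X = np.column_stack([np.ones_like(conc), conc])      # intercept + slope
beta, cov = wls_with_uncertainty(X, resp, sigma_y)
print("slope = %.4f +/- %.4f" % (beta[1], np.sqrt(cov[1, 1])))
```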
- Research Article
- 10.1109/tuffc.2025.3625770
- Nov 7, 2025
- IEEE transactions on ultrasonics, ferroelectrics, and frequency control
- Daniel Sarno + 2 more
False-positive indications in breast cancer screening cause pain and anxiety for patients and waste time and money for healthcare systems. New quantitative ultrasound scanners aim to measure intrinsic acoustic properties of soft tissues to aid better clinical decision making. This study details the performance characterisation of a novel phase-insensitive ultrasound computed tomography (Q-UCT) scanner, developed at the UK's National Physical Laboratory, for quantitative acoustic attenuation coefficient mapping of the breast. Scans of multiple commercially sourced anthropomorphic breast phantoms were acquired, and the results were compared to X-ray computed tomography imagery and ground-truth attenuation coefficients obtained from measurements of the constituent phantom materials. The novel system demonstrated the ability to detect the presence of inserts as small as 4 mm in diameter and to measure the intrinsic attenuation of larger inserts and host materials with attenuation coefficients ranging from 0.7 dB cm⁻¹ to 4.1 dB cm⁻¹ at 3.2 MHz. For the host materials, agreement with the ground-truth attenuation values lies within the expanded measurement uncertainties of those values.
- Research Article
- 10.3390/sym17111892
- Nov 6, 2025
- Symmetry
- Irina Georgescu + 1 more
This paper proposes a fuzzy copula-based optimization framework for modeling dependence structures and financial risk under parameter uncertainty. The parameters of selected copula families are represented as trapezoidal fuzzy numbers, and their α-cut intervals capture both the support and core ranges of plausible dependence values. This fuzzification transforms the estimation of copula parameters into a fuzzy optimization problem, enhancing robustness against sampling variability. The methodology is empirically applied to gold and oil futures (1 January 2015–1 January 2025), comparing symmetric copulas (Gaussian, Frank) with asymmetric copulas (Clayton, Gumbel, Student-t). The results show that the fuzzy copula framework provides richer insights than classical point estimation by explicitly expressing uncertainty in dependence measures (Kendall's τ, Spearman's ρ) and risk indicators (Value-at-Risk, Conditional Value-at-Risk). Rolling-window analyses reveal that fuzzy VaR and fuzzy CVaR effectively capture temporal dependence shifts and tail severity, with fuzzy CVaR consistently producing more conservative risk estimates. This study highlights the potential of fuzzy optimization and fuzzy dependence modeling as powerful tools for quantifying uncertainty and managing extreme co-movements in financial markets.
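For intuition on the α-cut machinery: a trapezoidal fuzzy parameter (a, b, c, d) yields nested intervals as α runs from 0 (support) to 1 (core), and a monotone dependence measure maps those intervals endpoint-to-endpoint. A small sketch assuming a hypothetical fuzzy Clayton parameter, using the exact relation τ = θ/(θ+2):

```python
def alpha_cut(a, b, c, d, alpha):
    """Alpha-cut interval of a trapezoidal fuzzy number (a, b, c, d):
    support [a, d], core [b, c]."""
    return a + alpha * (b - a), d - alpha * (d - c)

def clayton_tau(theta):
    """Kendall's tau for the Clayton copula: tau = theta / (theta + 2)."""
    return theta / (theta + 2.0)

# Hypothetical fuzzy Clayton parameter theta ~ (0.8, 1.1, 1.4, 1.9):
# the point estimate is blurred into support/core ranges.
for alpha in (0.0, 0.5, 1.0):
    lo, hi = alpha_cut(0.8, 1.1, 1.4, 1.9, alpha)
    # tau is increasing in theta, so interval endpoints map directly.
    print(f"alpha={alpha:.1f}: theta in [{lo:.2f}, {hi:.2f}], "
          f"tau in [{clayton_tau(lo):.3f}, {clayton_tau(hi):.3f}]")
```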
- Research Article
- 10.1088/1361-6420/ae16cd
- Nov 6, 2025
- Inverse Problems
- Hamza Ammar + 2 more
In this paper, we elucidate the question of how Carleman estimate constants depend on the size of the observation boundary. This allows us to quantify the effect of measurement uncertainty with respect to the size of the observation boundary in many inverse and control problems. We take the example of an inverse source problem for a parabolic equation and explicitly calculate the Lipschitz stability constants that appear in the L²-estimate of the source function. We deliberately construct a class of space-dependent weight functions that depend on the measurement boundary size, and then identify the optimal weight function that minimizes the stability constant. Using our approach, we found that when the measurement boundary covers 80 % or more of the domain boundary, we can explicitly provide the formula for the optimal constant. When the measurement boundary covers less than 80 %, we are not able to find an explicit formula for the optimal expression, but we can numerically approximate the optimal constant. This paper presents meticulous calculations that require rigorous and careful reading to fully comprehend the technical intricacies of the analysis.
- Research Article
- 10.1364/oe.578603
- Nov 6, 2025
- Optics Express
- Hao Ye + 6 more
Motion blur hampers optical navigation to non-cooperative spacecraft. We present an uncertainty-aware stereo imaging approach that removes motion blur and recovers relative pose with calibrated confidence. The method couples physics-guided deblurring with edge-preserving regularization, adaptive detection of elliptical structures on manufactured surfaces, and stereo geometric refinement that propagates measurement uncertainty to the final estimates. Comprehensive laboratory experiments on a space-like optical bench, supported by matched synthetic trials spanning mild, moderate, and severe blur, show lower position and orientation errors and higher success rates than five representative baselines, with the largest gains under severe blur, while keeping runtime practical. Sensitivity studies and coverage–confidence analyses confirm well-behaved uncertainty estimates for risk-aware guidance and docking.
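The pipeline "propagates measurement uncertainty to the final estimates"; the standard first-order tool for that is delta-method propagation through the measurement function. A generic sketch on a toy stereo-depth model (all numbers hypothetical, not the paper's pipeline):

```python
import numpy as np

def propagate_cov(f, x, cov_x, eps=1e-6):
    """First-order (delta-method) propagation: cov_y ~ J cov_x J^T,
    with the Jacobian J of f estimated by central differences."""
    x = np.asarray(x, dtype=float)
    y0 = np.asarray(f(x), dtype=float)
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.asarray(f(x + dx)) - np.asarray(f(x - dx))) / (2 * eps)
    return J @ cov_x @ J.T

# Toy stereo depth: Z = f_px * B / d (focal length in px, baseline, disparity).
# Hypothetical numbers; pixel-level disparity noise dominates the depth error.
def depth(params):
    f_px, B, d = params
    return np.array([f_px * B / d])

cov_in = np.diag([1.0**2, 0.0005**2, 0.25**2])   # var(f_px), var(B), var(d)
cov_z = propagate_cov(depth, [800.0, 0.12, 4.0], cov_in)
print(f"depth = {800.0 * 0.12 / 4.0:.2f} m, sigma = {np.sqrt(cov_z[0, 0]):.2f} m")
```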
- Research Article
- 10.3390/metrology5040067
- Nov 5, 2025
- Metrology
- Atilla Barna Vandra + 1 more
This study presents a novel error model that distinguishes between constant and variable components of systematic error (bias) in measurement systems, particularly within clinical laboratory settings. Traditional approaches often conflate these components, resulting in miscalculations of total error and measurement uncertainty. Through mathematical deduction and computer simulations, the authors demonstrate that the standard deviation derived from long-term quality control (QC) data includes both random error and the variable bias component, challenging its use as a sole estimator of random error. The proposed model defines the constant component of systematic error (CCSE) as a correctable term, while the variable component (VCSE(t)) behaves as a time-dependent function that cannot be efficiently corrected. The study further reveals that long-term QC data are not normally distributed, contradicting prevailing assumptions in metrology. It advocates for revised definitions in the International Vocabulary of Metrology (VIM3), emphasizing the need to distinguish between bias types determined under different measurement conditions. By applying this refined model, laboratories can improve decision-making and estimate measurement error and uncertainty more accurately. The findings have implications beyond clinical laboratories, suggesting a paradigm shift in how systematic error is conceptualized and managed across all domains of metrology.
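The central claim — that long-term QC standard deviation mixes random error with the variable bias VCSE(t) — is easy to reproduce in a toy simulation. A sketch with invented parameters, using a sinusoidal drift as a stand-in for VCSE(t):

```python
import numpy as np

# Minimal illustration: daily QC results contain random error PLUS a
# time-varying bias component, so their SD overstates pure random error.
rng = np.random.default_rng(1)
days = 365
sigma_random = 1.0                        # true random-error SD
ccse = 2.0                                # constant bias: shifts the mean only
vcse = 0.8 * np.sin(2 * np.pi * np.arange(days) / 90)  # slow drift, VCSE(t)

qc = 100.0 + ccse + vcse + rng.normal(0.0, sigma_random, days)

print(f"true random SD:    {sigma_random:.2f}")
print(f"long-term QC SD:   {qc.std(ddof=1):.2f}")   # inflated by VCSE(t)
print(f"expected combined: {np.sqrt(sigma_random**2 + vcse.var()):.2f}")
```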
- Research Article
- 10.18502/ijml.v11i4.20092
- Nov 5, 2025
- International Journal of Medical Laboratory
- Sabrina Belmahi + 5 more
Introduction: Creatinine is a key parameter for evaluating renal function. This study aims to assess the repeatability, reproducibility, measurement uncertainty, and method comparison for creatinine measurement by the Jaffé kinetic method on the Architect ci-2800, relying on three levels of controls (low, medium, high). Materials and Methods: Thirty measurements per level were carried out on fresh serum covering the 7.17–74.6 mg/L range to determine repeatability, and a monitoring period of several days made it possible to evaluate reproducibility. Measurement uncertainty was estimated by combining internal quality control and external quality assessment data. Method comparison, used to estimate bias and the correlation coefficient, was assessed by Bland–Altman analysis. The results were interpreted according to limits set by the reference values (SFBC and RICOS), as well as the manufacturer. Results: Repeatability coefficients of variation (CVs) were 1.13 % (low), 1.05 % (medium), and 0.50 % (high), all below RICOS targets. Intermediate CVs over 30 days were 2.91 %, 2.02 %, and 1.75 %. Expanded uncertainty (k = 2) ranged from 6.4 % to 7.2 % (RICOS ≤ 8.2 %). Regression gave y = 1.005x − 0.365, r = 0.999, with a mean bias of 2.15 %. Conclusion: Creatinine measurement by the Jaffé kinetic method on the Architect ci-2800 shows excellent performance in terms of precision (repeatability, reproducibility, measurement uncertainty) and inter-instrument correlation. Quality criteria are generally satisfied, ensuring reliable use for routine clinical monitoring of renal function.
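The IQC + EQA route to expanded uncertainty typically combines the intermediate-precision CV with a bias-related standard uncertainty in quadrature. A sketch of that common top-down recipe (in the style of, e.g., Nordtest TR 537); the paper's exact computation may differ, and the bias uncertainty below is an assumed value:

```python
import math

def expanded_uncertainty_top_down(cv_intermediate, u_bias, k=2.0):
    """Top-down relative expanded uncertainty from IQC reproducibility and
    an EQA-derived bias uncertainty: U = k * sqrt(u_Rw^2 + u_bias^2).
    (A common recipe, e.g. Nordtest TR 537; details vary by laboratory.)"""
    return k * math.sqrt(cv_intermediate**2 + u_bias**2)

# Illustrative numbers in the ballpark of the abstract (not the actual data):
# 2.91 % intermediate CV at the low level, ~1.5 % standard uncertainty of bias.
print(f"U(k=2) = {expanded_uncertainty_top_down(2.91, 1.5):.1f} %")
```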
- Research Article
- 10.1088/1681-7575/ae1bae
- Nov 5, 2025
- Metrologia
- Samuel Bilson + 3 more
Machine learning (ML) classification models are increasingly being used in a wide range of applications where it is important that predictions are accompanied by uncertainties, including in climate and earth observation, medical diagnosis and bioaerosol monitoring. The output of an ML classification model is a type of categorical variable known as a nominal property in the International Vocabulary of Metrology (VIM). However, concepts related to uncertainty evaluation for nominal properties are not defined in the VIM, nor is such evaluation addressed by the Guide to the Expression of Uncertainty in Measurement (GUM). In this paper we propose a metrological conceptual uncertainty evaluation framework for nominal properties. This framework is based on probability mass functions and summary statistics thereof, and it is applicable to ML classification. We also illustrate its use in the context of two applications that exemplify the issues and have significant societal impact, namely, climate and earth observation and medical diagnosis. Our framework would enable an extension of the GUM to uncertainty for nominal properties, which would make both applicable to ML classification models.
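A tiny illustration of the PMF-plus-summary-statistics idea: treat a classifier's probability vector as the measured PMF, report the mode as the nominal "value", and a dispersion summary such as Shannon entropy as one possible uncertainty statement. The class labels and probabilities below are made up, and entropy is only one candidate summary:

```python
import math

def pmf_summaries(pmf: dict):
    """Summary statistics of a probability mass function over class labels:
    the mode (the 'measured value' analogue for a nominal property) and the
    Shannon entropy in bits (one candidate uncertainty summary)."""
    mode = max(pmf, key=pmf.get)
    entropy = -sum(p * math.log2(p) for p in pmf.values() if p > 0.0)
    return {"mode": mode, "entropy_bits": entropy}

# Hypothetical classifier output (softmax probabilities) for one input:
pmf = {"cirrus": 0.70, "cumulus": 0.25, "clear": 0.05}
print(pmf_summaries(pmf))   # mode 'cirrus', entropy ~1.08 bits
```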
- Research Article
- 10.1017/s0269964825100132
- Nov 5, 2025
- Probability in the Engineering and Informational Sciences
- Radhakumari Maya + 3 more
Measures of uncertainty in past lifetime distributions play an important role in information theory, forensic science, and related fields. In the present work, we propose a non-parametric kernel-type estimator for the generalized past entropy function introduced by Gupta and Nanda [9], under α-mixing samples. The resulting estimator is shown to be weakly and strongly consistent and asymptotically normally distributed under certain regularity conditions. The performance of the estimator is validated through a simulation study and a real data set.
- Research Article
- 10.1515/cclm-2025-1053
- Nov 4, 2025
- Clinical chemistry and laboratory medicine
- Tobias Schierscher + 10 more
An isotope dilution liquid chromatography-tandem mass spectrometry (ID-LC-MS/MS)-based candidate reference measurement procedure (cRMP) was developed and validated to measure serum and plasma concentrations of the total and free forms of valproic acid. Quantitative nuclear magnetic resonance spectroscopy was used to determine the absolute content (g/g) of the reference material, ensuring traceability to SI units. Separation of valproic acid from potential unknown interferences was achieved with reversed-phase chromatography. A protein precipitation protocol was established for sample preparation for total valproic acid, while the free form was separated by ultrafiltration. Assay validations and measurement uncertainties were aligned with guidelines from the Clinical and Laboratory Standards Institute, the International Conference on Harmonization, and the Guide to the Expression of Uncertainty in Measurement. The cRMPs were highly selective and specific with no evidence of matrix effects, allowing quantification of total and free valproic acid in ranges of 2.40–145 μg/mL and 1.60–42.0 μg/mL, respectively. Intermediate precision was <4.0 % and repeatability CVs ranged from 0.9 to 3.5 % for all concentrations of free and total valproic acid. The relative mean bias ranged from −0.4 to 4.1 % for native serum and from −0.3 to 3.5 % for Li-heparin plasma for total valproic acid. Free valproic acid showed mean biases between −2.9 and 4.0 % for native serum and ultrafiltrates. Measurement uncertainties for single measurements and target value assignment ranged from 1.7 to 3.4 % and from 0.9 to 1.3 %, respectively, for total valproic acid; for free valproic acid, they ranged from 2.0 to 4.1 % and from 0.8 to 1.5 %, respectively. We present novel ID-LC-MS/MS-based cRMPs for total and free valproic acid in human serum and plasma which provide a traceable and reliable platform for the standardization of routine assays and evaluation of clinically relevant samples.
- Research Article
- 10.1051/0004-6361/202555898
- Nov 4, 2025
- Astronomy & Astrophysics
- Davide Tornotti + 5 more
We present a hierarchical Bayesian framework designed to infer the luminosity function of any class of object by jointly modelling data from multiple surveys with varying depth, completeness, and sky coverage. Our method explicitly accounts for selection effects and measurement uncertainties (e.g. in luminosity) and can be generalized to any extensive quantity, such as mass. We validated the model using mock catalogues; from this we determined that deep data reaching ≳1.5 dex below a characteristic luminosity L* are essential to reducing biases at the faint end (≲0.1 dex) and that wide-area data help constrain the bright end. As a proof of concept, we considered a combined sample of 1176 Lyman α emitters at redshift 3 < z < 5 drawn from several MUSE surveys, ranging from ultra-deep (≳90 hr) and narrow (≲1 arcmin²) fields to shallow (≲5 hr) and wide (≳20 arcmin²) fields. With this complete sample, we constrain the luminosity function parameters log Φ*, log L*, and the faint-end slope α = −1.81, where the uncertainties represent the 90 % credible intervals. These values are in agreement with the results of studies based on gravitational lensing that reach log L ≈ 41, although differences in the faint-end slope underscore how systematic errors are starting to dominate. In contrast, wide-area surveys represent the natural extension needed to constrain the brightest Lyman α emitters (log L ≳ 43), where statistical uncertainties still dominate.
- Research Article
- 10.1177/14759217251381259
- Nov 3, 2025
- Structural Health Monitoring
- Guoqing Li + 3 more
This study proposes a novel structural damage detection method that integrates Mel-frequency cepstral coefficients (MFCCs) and deep autoencoder (DAE) networks to enhance robustness against measurement noise and uncertainties. MFCCs are extracted from power spectrum ratios derived from vibration signals to serve as noise-resilient features representing the dynamic characteristics of structures. A DAE is trained using healthy-state MFCCs to learn their intrinsic patterns, and reconstruction errors on testing data are subsequently analyzed. To account for uncertainties, multiple measurements are performed, and the resulting mean absolute error (MAE) distributions are modeled using Gaussian processes. The Bhattacharyya distance is then employed to quantify the differences between MAE distributions under healthy and potentially damaged states, leading to the definition of a damage indicator. Two case studies, including laboratory-controlled experiments on simply supported beams and field testing on a steel bridge, are conducted to validate the method. The results demonstrate that the proposed approach effectively identifies structural damage and exhibits strong resilience to varying noise levels, outperforming conventional MFCC-based techniques. This method shows significant potential for practical applications in structural health monitoring under uncertain environments.
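The damage indicator rests on the Bhattacharyya distance between MAE distributions. Assuming Gaussian fits to those distributions (a simplification; the paper models them with Gaussian processes), the distance has a closed form; the MAE numbers below are hypothetical:

```python
import math

def bhattacharyya_gaussian(mu1, s1, mu2, s2):
    """Bhattacharyya distance between two univariate Gaussians
    N(mu1, s1^2) and N(mu2, s2^2) (closed form)."""
    v1, v2 = s1**2, s2**2
    return (0.25 * (mu1 - mu2) ** 2 / (v1 + v2)
            + 0.5 * math.log((v1 + v2) / (2.0 * s1 * s2)))

# Hypothetical MAE distributions (Gaussian fits): healthy vs. test state.
d_healthy = bhattacharyya_gaussian(0.020, 0.004, 0.021, 0.004)  # overlapping
d_damaged = bhattacharyya_gaussian(0.020, 0.004, 0.035, 0.006)  # separated
print(f"healthy vs healthy-like: {d_healthy:.4f}")
print(f"healthy vs damaged-like: {d_damaged:.4f}")   # larger => damage flag
```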
- Research Article
- 10.1785/0220250263
- Nov 3, 2025
- Seismological Research Letters
- Doug Bloomquist + 2 more
Calibration of seismometers used in networks such as the International Monitoring System (IMS) is important for ensuring confidence in the measurements of ground motion and the resulting analysis that is performed on the waveform data. In this study, six models of seismometers widely used at IMS stations were calibrated using high-precision vertical and horizontal shake tables. The calibration procedure followed the ISO 16063-11 standard for primary vibration calibration by laser interferometry. By following this standard and using shake tables with up-to-date calibrations that are traceable to the International System of Units (SI), the obtained calibrations are themselves fully traceable to the SI, something that many traditional seismometer calibration procedures lack. The sensors were evaluated from 0.1 to 50 Hz, and the calibration results are directly compared to the sensors’ nominal response models provided by the manufacturers. The calibrations were repeated multiple times to allow us to quantify the uncertainty associated with measurement repeatability. We combined these uncertainty estimates with the uncertainties associated with the shake tables and laser vibrometer to obtain an overall estimate for the measurement uncertainty. We find that the calibrations are highly repeatable and the uncertainties that arise from repeated measurements are orders of magnitude smaller than the uncertainties specified by the manufacturer of the shake tables. Electrical calibrations were then performed on the same sensors, which mimics how fielded IMS seismometers are calibrated. The sensors’ response to ground motion, measured with the shake table, was removed from the electrical calibration data so that the performance of the electrical calibration system itself could be determined. This measured electrical calibration system response is then directly compared to manufacturers’ specifications. Results from this study offer insight into the higher-frequency performance of seismometers used in the IMS and demonstrate how primary ground-motion calibrations can be used to verify sensor performance and identify issues that electrical calibration methods may not detect.
- Research Article
- 10.52152/d11459
- Nov 1, 2025
- DYNA
- Maricarmen Manjabacas + 3 more
The Monte Carlo method (MCM), applied to uncertainty calculation in metrology, is well-suited for nonlinear functions. It also provides more accurate solutions in certain complex linear models. Although MCM is extensively covered in the ISO Guide to the Expression of Uncertainty in Measurement (GUM) and in metrological guides from reference institutions in various countries, industrial engineering curricula typically only introduce concepts related to uncertainty propagation using the traditional analytical method. MCM requires programming to generate random numbers within the expected range for each variable involved in the metrological system, combining computing tools with metrology. This methodological proposal is based on a basic system for constructing an angle using a sine bar and gauge blocks. The problem progressively incorporates the temperature variable under different behavioural hypotheses and the influence of other factors, such as the roundness tolerance of the sine bar supports. The results obtained using MCM are compared with those from the classical GUM method, and the analysis demonstrates the robustness of MCM from a scientific perspective. Based on this methodology, further challenges could be explored, such as introducing the flatness tolerance of the surface plate used. Keywords: Monte Carlo, dimensional metrology, angle construction, influence magnitudes
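The core of the exercise is easy to sketch: sample each input quantity from its assumed distribution, push the samples through the model sin(angle) = H/L, and read off the mean, standard uncertainty, and a 95 % coverage interval. Values and uncertainties below are illustrative; the full exercise would add temperature and roundness-tolerance terms as further inputs:

```python
import numpy as np

# Monte Carlo evaluation of the angle built with a sine bar and gauge blocks:
# sin(angle) = H / L. Illustrative values and standard uncertainties only.
rng = np.random.default_rng(42)
N = 1_000_000

H = rng.normal(50.0, 0.0005, N)        # gauge block stack height / mm
L = rng.normal(100.0, 0.001, N)        # sine bar roller centre distance / mm
angle = np.degrees(np.arcsin(H / L))   # propagate samples through the model

mean = angle.mean()
u = angle.std(ddof=1)                           # standard uncertainty
lo, hi = np.percentile(angle, [2.5, 97.5])      # 95 % coverage interval
print(f"angle = {mean:.6f} deg, u = {u:.6f} deg")
print(f"95 % coverage interval: [{lo:.6f}, {hi:.6f}] deg")
```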
- Research Article
- 10.1016/j.jenvrad.2025.107815
- Nov 1, 2025
- Journal of environmental radioactivity
- Christopher Martin + 2 more
The appropriate environmental sample to educate novice students in environmental radioactivity measurements using gamma ray spectroscopy.
- Research Article
- 10.3390/mca30060120
- Nov 1, 2025
- Mathematical and Computational Applications
- Asmaa S Al-Moisheer + 2 more
This paper proposes a new statistical framework combining the Fav-Jerry distribution (FJD) with a joint type-II censoring scheme (JT-II-CS) to examine heterogeneous and censored data. The FJD offers analytical tractability through its closed-form quantile function, while the JT-II-CS enables multi-sample comparisons when data are missing or incomplete. Bayesian estimation is based on Markov chain Monte Carlo procedures, while maximum likelihood estimation is obtained via a Newton–Raphson method. Both estimation strategies provide parameter estimates along with corresponding measures of uncertainty. The proposed methodology is applied to coded survey data on the knowledge of autism in Hong Kong and Canada, illustrating its potential for measuring cultural variance. Beyond this application, the framework highlights the potential of integrating more complex distributional modeling with censoring methods for general applications in engineering, the natural sciences, and the social sciences.
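Newton–Raphson MLE is a scalar iteration on the score function, θ_{k+1} = θ_k − ℓ′(θ_k)/ℓ″(θ_k). Since the FJD likelihood is not given here, the sketch below uses the gamma shape parameter as a stand-in target whose MLE genuinely requires iteration (hypothetical data, no censoring):

```python
import numpy as np
from scipy.special import digamma, polygamma

def gamma_shape_mle(x, alpha0=1.0, tol=1e-10, max_iter=50):
    """Newton-Raphson MLE of the gamma shape parameter (scale profiled out).
    Solves log(alpha) - digamma(alpha) = log(mean(x)) - mean(log(x))."""
    s = np.log(x.mean()) - np.log(x).mean()
    alpha = alpha0
    for _ in range(max_iter):
        f = np.log(alpha) - digamma(alpha) - s        # score-type equation
        fprime = 1.0 / alpha - polygamma(1, alpha)    # its derivative
        step = f / fprime
        alpha -= step
        if abs(step) < tol:
            break
    return alpha, x.mean() / alpha   # (shape, scale) estimates

rng = np.random.default_rng(7)
x = rng.gamma(shape=2.5, scale=1.2, size=5000)   # hypothetical sample
print(gamma_shape_mle(x))   # approximately (2.5, 1.2)
```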
- Research Article
- 10.1016/j.jpba.2025.116994
- Nov 1, 2025
- Journal of pharmaceutical and biomedical analysis
- Monique Silva Dos Santos + 5 more
Analytical quality by design and measurement uncertainty in the development of discriminative dissolution method for pediatric fixed-dose combination dispersible tablets of isoniazid and rifampicin.