Bootstrap methods for quantifying the uncertainty of binding constants in the hard modeling of spectrophotometric titration data


Similar Papers
  • Research Article
  • Citations: 25
  • 10.1097/md.0000000000032953
The Charlson comorbidity index and short-term readmission in patients with heart failure: A retrospective cohort study.
  • Feb 10, 2023
  • Medicine
  • Dongmei Wei + 4 more

The relationship between the Charlson comorbidity index (CCI) and short-term readmission is as yet unknown. Therefore, we aimed to investigate whether the CCI was independently related to short-term readmission in patients with heart failure (HF) after adjusting for other covariates. From December 2016 to June 2019, 2008 patients with HF were enrolled in the study to determine the relationship between CCI and short-term readmission. Patients with HF were divided into 2 categories based on the predefined CCI (low, CCI < 3; high, CCI ≥ 3). The relationships between CCI and short-term readmission were analyzed in multivariable logistic regression models and a 2-piece linear regression model. In the high CCI group, the risk of short-term readmission was higher than that in the low CCI group. A curvilinear association was found between CCI and short-term readmission, with a saturation effect predicted at 2.97. In patients with HF who had CCI scores above 2.97, the risk of short-term readmission increased significantly (OR, 2.66; 95% confidence interval, 1.566–4.537). A high CCI was associated with increased short-term readmission in patients with HF, indicating that the CCI could be useful in estimating the readmission rate and has significant predictive value for clinical outcomes in patients with HF.

  • Research Article
  • Citations: 10
  • 10.3390/app112311376
Flutter Derivatives Identification and Uncertainty Quantification for Bridge Decks Based on the Artificial Bee Colony Algorithm and Bootstrap Technique
  • Dec 1, 2021
  • Applied Sciences
  • Zhouquan Feng + 1 more

This paper presents a novel parameter identification and uncertainty quantification method for flutter derivatives estimation of bridge decks. The proposed approach is based on free-decay vibration records of a sectional model in wind tunnel tests, which consists of parameter identification by a heuristic optimization algorithm in the sense of weighted least squares and uncertainty quantification by a bootstrap technique. The novel contributions of the method are on three fronts. Firstly, weighting factors associated with vertical and torsional motion in the objective function are determined more reasonably using an iterative procedure rather than being preassigned. Secondly, flutter derivatives are identified using a hybrid heuristic and classical optimization method, which integrates a modified artificial bee colony algorithm with Powell's algorithm. Thirdly, a statistical bootstrap technique is used to quantify the uncertainties of flutter derivatives. The advantages of the proposed method with respect to other methods are faster and more accurate achievement of the global optimum, and refined uncertainty quantification in the identified flutter derivatives. The effectiveness and reliability of the proposed method are validated through noisy data of a numerically simulated thin plate and experimental data of a bridge deck sectional model.

  • Research Article
  • Citations: 88
  • 10.1029/2011wr011289
Analysis of regression confidence intervals and Bayesian credible intervals for uncertainty quantification
  • Sep 1, 2012
  • Water Resources Research
  • Dan Lu + 2 more

Confidence intervals based on classical regression theories augmented to include prior information and credible intervals based on Bayesian theories are conceptually different ways to quantify parametric and predictive uncertainties. Because both confidence and credible intervals are used in environmental modeling, we seek to understand their differences and similarities. This is of interest in part because calculating confidence intervals typically requires tens to thousands of model runs, while Bayesian credible intervals typically require tens of thousands to millions of model runs. Given multi‐Gaussian distributed observation errors, our theoretical analysis shows that, for linear or linearized‐nonlinear models, confidence and credible intervals are always numerically identical when consistent prior information is used. For nonlinear models, nonlinear confidence and credible intervals can be numerically identical if parameter confidence regions defined using the approximate likelihood method and parameter credible regions estimated using Markov chain Monte Carlo realizations are numerically identical and predictions are a smooth, monotonic function of the parameters. Both occur if intrinsic model nonlinearity is small. While the conditions of Gaussian errors and small intrinsic model nonlinearity are violated by many environmental models, heuristic tests using analytical and numerical models suggest that linear and nonlinear confidence intervals can be useful approximations of uncertainty even under significantly nonideal conditions. In the context of epistemic model error for a complex synthetic nonlinear groundwater problem, the linear and nonlinear confidence and credible intervals for individual models performed similarly enough to indicate that the computationally frugal confidence intervals can be useful in many circumstances. Experiences with these groundwater models are expected to be broadly applicable to many environmental models. 
We suggest that for environmental problems with lengthy execution times that make credible intervals inconvenient or prohibitive, confidence intervals can provide important insight. During model development when frequent calculation of uncertainty intervals is important to understanding the consequences of various model construction alternatives and data collection strategies, strategic use of both confidence and credible intervals can be critical.
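
The paper's theoretical claim — that confidence and credible intervals are numerically identical for linear models with consistent prior information — can be checked in a minimal sketch. Everything below is illustrative (synthetic data, known variance, flat prior), not the paper's groundwater models:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simplest linear model: y ~ N(mu, sigma^2) with known sigma.
sigma = 2.0
y = rng.normal(10.0, sigma, 25)
n, ybar = y.size, y.mean()

# Classical 95% confidence interval (known-variance z interval).
z = 1.959963984540054
ci = (ybar - z * sigma / np.sqrt(n), ybar + z * sigma / np.sqrt(n))

# Bayesian credible interval: under a flat prior the posterior is
# mu | y ~ N(ybar, sigma^2 / n), sampled here instead of MCMC.
post = rng.normal(ybar, sigma / np.sqrt(n), 1_000_000)
cred = tuple(np.percentile(post, [2.5, 97.5]))
```

For this linear, Gaussian case the two intervals agree to sampling precision, which is exactly why the computationally frugal confidence interval can stand in for the expensive credible interval when model nonlinearity is small.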

  • Research Article
  • Citations: 9
  • 10.5075/epfl-thesis-7118
Uncertainty quantification in unfolding elementary particle spectra at the Large Hadron Collider
  • Jan 1, 2016
  • Infoscience (Ecole Polytechnique Fédérale de Lausanne)
  • Mikael Kuusela


  • Book Chapter
  • Citations: 3
  • 10.1007/978-3-030-77256-7_16
Surrogate Model-Based Uncertainty Quantification for a Helical Gear Pair
  • Jan 1, 2021
  • Thomas Diestmann + 3 more

Competitive industrial transmission systems must perform most efficiently with reference to complex requirements and conflicting key performance indicators. This design challenge translates into a high-dimensional multi-objective optimization problem that requires complex algorithms and evaluation of computationally expensive simulations to predict physical system behavior and design robustness. Crucial for the design decision-making process is the characterization, ranking, and quantification of relevant sources of uncertainties. However, due to the strict time limits of product development loops, the overall computational burden of uncertainty quantification (UQ) may even drive state-of-the-art parallel computing resources to their limits. Efficient machine learning (ML) tools and techniques emphasizing high-fidelity simulation data-driven training will play a fundamental role in enabling UQ in the early-stage development phase. This investigation surveys UQ methods with a focus on noise, vibration, and harshness (NVH) characteristics of transmission systems. Quasi-static 3D contact dynamic simulations are performed to evaluate the static transmission error (TE) of meshing gear pairs under different loading and boundary conditions. TE indicates NVH excitation and is typically used as an objective function in the early-stage design process. The limited system size allows large-scale design of experiments (DoE) and enables numerical studies of various UQ sampling and modeling techniques where the design parameters are treated as random variables associated with tolerances from manufacturing and assembly processes. The model accuracy of generalized polynomial chaos expansion (gPC) and Gaussian process regression (GPR) is evaluated and compared. The results of the methods are discussed to conclude efficient and scalable solution procedures for robust design optimization.

  • Research Article
  • 10.1093/annonc/mdu357.2
A New Prognostic Index for Overall Survival in Malignant Pleural Mesothelioma
  • Sep 1, 2014
  • Annals of Oncology
  • Y Kataoka + 7 more


  • Research Article
  • Citations: 16
  • 10.1080/00949655.2014.932791
Comparing confidence intervals for Goodman and Kruskal's gamma coefficient
  • Jun 30, 2014
  • Journal of Statistical Computation and Simulation
  • L Andries Van Der Ark + 1 more

This study was motivated by the question of which type of confidence interval (CI) one should use to summarize sample variance of Goodman and Kruskal's coefficient gamma. In a Monte Carlo study, we investigated the coverage and computation time of the Goodman–Kruskal CI, the Cliff-consistent CI, the profile likelihood CI, and the score CI for Goodman and Kruskal's gamma, under several conditions. The choice of Goodman and Kruskal's gamma was based on results of Woods [Consistent small-sample variances for six gamma-family measures of ordinal association. Multivar Behav Res. 2009;44:525–551], who found relatively poor coverage for gamma for very small samples compared to other ordinal association measures. The profile likelihood CI and the score CI had the best coverage, close to the nominal value, but those CIs could often not be computed for sparse tables. The coverage of the Goodman–Kruskal CI and the Cliff-consistent CI was often poor. Computation time was fast to reasonably fast for all types of CI.
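
For readers unfamiliar with the statistic, here is a minimal sketch of Goodman and Kruskal's gamma together with a plain nonparametric percentile bootstrap CI — not the profile-likelihood or score intervals studied in the paper, and the ordinal data are illustrative:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)

def gk_gamma(x, y):
    """Goodman-Kruskal gamma: (C - D) / (C + D), where C and D count
    concordant and discordant pairs; tied pairs are ignored."""
    C = D = 0
    for (xi, yi), (xj, yj) in combinations(list(zip(x, y)), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            C += 1
        elif s < 0:
            D += 1
    return (C - D) / (C + D)

# Ordinal ratings with a built-in positive association.
n = 80
x = rng.integers(1, 5, n)
y = np.clip(x + rng.integers(-1, 2, n), 1, 5)

# Percentile bootstrap: resample cases, recompute gamma each time.
B = 500
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)
    boot[b] = gk_gamma(x[idx], y[idx])
ci = np.percentile(boot, [2.5, 97.5])
```

This simple resampling interval is the kind of baseline against which the more elaborate CIs in the paper are compared.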

  • Research Article
  • Citations: 12
  • 10.3390/su13116417
Bootstrapped Ensemble of Artificial Neural Networks Technique for Quantifying Uncertainty in Prediction of Wind Energy Production
  • Jun 4, 2021
  • Sustainability
  • Sameer Al-Dahidi + 3 more

The accurate prediction of wind energy production is crucial for an affordable and reliable power supply to consumers. Prediction models are used as decision-aid tools for electric grid operators to dynamically balance the energy production provided by a pool of diverse sources in the energy mix. However, different sources of uncertainty affect the predictions, providing the decision-makers with inaccurate and possibly misleading information for grid operation. In this regard, this work aims to quantify the possible sources of uncertainty that affect the predictions of wind energy production provided by an ensemble of Artificial Neural Network (ANN) models. The proposed Bootstrap (BS) technique for uncertainty quantification relies on estimating Prediction Intervals (PIs) for a predefined confidence level. The capability of the proposed BS technique is verified, considering a 34 MW wind plant located in Italy. The obtained results show that the BS technique provides a more satisfactory quantification of the uncertainty of wind energy predictions than a technique adopted by the wind plant owner and the Mean-Variance Estimation (MVE) technique from the literature. The PIs obtained by the BS technique are also analyzed in terms of different weather conditions experienced by the wind plant and time horizons of prediction.
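
The general BS idea — fit an ensemble of models to bootstrap resamples of the training data and read interval bounds off the spread of member predictions — can be sketched as follows. A cubic polynomial stands in for the paper's ANN members, and the data are synthetic, not the 34 MW plant's:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy power-curve data: production vs. wind speed, with noise.
x = rng.uniform(0.0, 25.0, 300)
y = np.clip(0.05 * x**2, 0.0, 20.0) + rng.normal(0.0, 1.0, 300)

B = 200
x_grid = np.linspace(0.0, 25.0, 50)
preds = np.empty((B, x_grid.size))
for b in range(B):
    # Each ensemble member is trained on a bootstrap resample of
    # the data; a cubic polynomial stands in for an ANN here.
    idx = rng.integers(0, x.size, x.size)
    coef = np.polyfit(x[idx], y[idx], 3)
    preds[b] = np.polyval(coef, x_grid)

# The spread across members quantifies model uncertainty;
# percentiles over the ensemble give the interval bounds.
mean_pred = preds.mean(axis=0)
pi_lo, pi_hi = np.percentile(preds, [2.5, 97.5], axis=0)
```

Note that these bounds capture model (epistemic) uncertainty only; a full PI of the kind the paper estimates would also add the residual noise variance.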

  • Dissertation
  • 10.25148/etd.fidc001904
A Comparison of Some Confidence Intervals for Estimating the Kurtosis Parameter
  • Jan 31, 2018
  • Guensley Jerome

Several methods have been proposed to estimate the kurtosis of a distribution. The three common estimators are g2, G2 and b2. This thesis addressed the performance of these estimators by comparing them under the same simulation environments and conditions. The performance of these estimators is compared through confidence intervals by determining the average width and the probability of capturing the kurtosis parameter of a distribution. We considered and compared classical and non-parametric methods in constructing these intervals. The classical method assumes normality to construct the confidence intervals, while the non-parametric methods rely on bootstrap techniques. The bootstrap techniques used are: Bias-Corrected Standard Bootstrap, Efron's Percentile Bootstrap, Hall's Percentile Bootstrap and Bias-Corrected Percentile Bootstrap. We have found significant differences in the performance of classical and bootstrap estimators. We observed that the parametric method works well in terms of coverage probability when data come from a normal distribution, while the bootstrap intervals struggled to consistently reach the 95% confidence level. When sample data are from a distribution with negative kurtosis, both parametric and bootstrap confidence intervals performed well, although we noticed that bootstrap methods tend to have smaller intervals. When it comes to positive kurtosis, bootstrap methods perform slightly better than classical methods in coverage probability. Among the three kurtosis estimators, G2 performed better. Among bootstrap techniques, Efron's Percentile intervals had the best coverage.
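
Efron's percentile bootstrap, one of the four techniques compared, is the simplest to state: resample the data with replacement, recompute the estimator, and take empirical quantiles of the bootstrap replicates. A minimal sketch for the g2 estimator (sample size, seed, and replicate count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def g2(x):
    """Sample excess kurtosis g2 = m4 / m2^2 - 3 (moment estimator)."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    m2 = ((x - m) ** 2).mean()
    m4 = ((x - m) ** 4).mean()
    return m4 / m2**2 - 3.0

data = rng.normal(size=100)   # true excess kurtosis is 0

# Efron's percentile bootstrap: quantiles of the replicates
# are used directly as the confidence limits.
B = 2000
boot = np.array([g2(rng.choice(data, size=data.size, replace=True))
                 for _ in range(B)])
lo, hi = np.percentile(boot, [2.5, 97.5])
```

The other variants in the thesis (Hall's percentile, the bias-corrected forms) differ only in how these raw quantiles are shifted or studentized before being reported.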

  • Research Article
  • 10.1134/s0006350917030137
Analysis of calorimetric data for the binding of monomeric bis-benzimidazole, an analog of the Hoechst 33258 dye, to poly(dA) · poly(dT)
  • May 1, 2017
  • Biophysics
  • Yu D Nechipurenko + 6 more

Monomeric bis-benzimidazole (MB) is an analog of the Hoechst 33258 dye. The enthalpy and entropy of MB binding were evaluated by analyzing the calorimetric data on MB reverse titration with poly(dA) · poly(dT). A mathematical model was developed to estimate the thermodynamic parameters of binding on the basis of calorimetric data. The results agree well with spectrophotometric data on the binding of analogous compounds. The model was used to estimate the parameters of binding with poly(dA) · poly(dT) for dimeric bis-benzimidazole (DB), which consists of two bis-benzimidazole monomers linked via a flexible chain. The ligand was assumed to produce different types of complexes with the polymer.

  • Research Article
  • Citations: 2
  • 10.1002/cem.3619
On the Replicability of the Thermodynamic Modeling of Spectroscopic Titration Data in the Nickel(II) En System
  • Oct 23, 2024
  • Journal of Chemometrics
  • Fenton C Lawler + 10 more

Characterizing complicated solution phase systems in situ requires advanced modeling techniques to capture the intricate balances between the many chemical species. Due to the error inherent in any scientific measurement, a spectrophotometric titration experiment with nickel(II) and ethylenediamine (en) was repeated six times using an autotitrator to test the replicability of the data and the consistency of the resulting thermodynamic model. All six datasets could be modeled very tightly (R² > 99.9999%) with the following eight complexes: [Ni]²⁺, [Ni₂en]⁴⁺, [Nien]²⁺, [Ni₂en₃]⁴⁺, [Nien₂]²⁺, [Ni₂en₅]⁴⁺, [Nien₃]²⁺, and [Nien₆]²⁺. The logK values for the stepwise associative reactions agree with existing literature values for the majority species ([Nienₙ]²⁺, n = 1–3) and match expectations for the minority species; 95% confidence intervals for each logK value were determined via bootstrapping, which quantifies the variability in the binding constant value that is supported by a given dataset. The repeated experiments, which could not be successfully concatenated together, demonstrate that replication is crucial to capturing all the variability in the logK values. Conversely, bootstrapped confidence intervals across multiple experiments can be readily combined to generate an appropriate range for an experimentally determined binding constant.
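
The core idea — bootstrapping a hard-model titration fit to get a CI on a binding constant — can be sketched for the simplest possible case, a 1:1 isotherm with a residual bootstrap around a grid-search fit. This is illustrative only: the data are synthetic, free ligand concentration is treated as known, and the Ni(II)/en system above requires a multi-species equilibrium model rather than this single-K form:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated 1:1 isotherm: signal = s_max * K*[L] / (1 + K*[L]).
true_logK = 4.0
L_tot = np.logspace(-6, -2, 20)       # ligand concentrations (M)
frac = (10**true_logK * L_tot) / (1 + 10**true_logK * L_tot)
y = 1.0 * frac + rng.normal(0.0, 0.01, L_tot.size)

def fit_logK(L, y, grid=np.linspace(2.0, 6.0, 801)):
    """Hard-model fit: grid-search logK, profiling out the
    amplitude s_max by linear least squares at each candidate."""
    best_sse, best_logK = np.inf, grid[0]
    for logK in grid:
        f = (10**logK * L) / (1 + 10**logK * L)
        s = (f @ y) / (f @ f)
        sse = np.sum((y - s * f) ** 2)
        if sse < best_sse:
            best_sse, best_logK = sse, logK
    return best_logK

logK_hat = fit_logK(L_tot, y)
f_hat = (10**logK_hat * L_tot) / (1 + 10**logK_hat * L_tot)
s_hat = (f_hat @ y) / (f_hat @ f_hat)
resid = y - s_hat * f_hat

# Residual bootstrap: refit on fitted values plus resampled
# residuals; percentiles give a 95% CI on logK.
B = 200
boot = np.array([
    fit_logK(L_tot, s_hat * f_hat + rng.choice(resid, resid.size))
    for _ in range(B)
])
ci = np.percentile(boot, [2.5, 97.5])
```

The resulting interval quantifies how much variability in logK this one dataset supports, which is exactly the per-experiment quantity the paper then combines across replicates.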

  • Research Article
  • Citations: 20
  • 10.1289/ehp.1102453
Statistical Methods to Study Timing of Vulnerability with Sparsely Sampled Data on Environmental Toxicants
  • Dec 8, 2010
  • Environmental Health Perspectives
  • Brisa Ney Sánchez + 3 more


  • Research Article
  • Citations: 21
  • 10.1185/030079906x100230
Guidelines for selecting among different types of bootstraps
  • Mar 21, 2006
  • Current Medical Research and Opinion
  • Onur Baser + 2 more

Background: The bootstrap has become very popular in health economics. Its success lies in the ease of estimating the sampling distribution, standard error, and confidence intervals with few or no assumptions about the distribution of the underlying population. Objective: The purpose of this paper is three-fold: (1) to provide an overview of four common bootstrap techniques for readers who have little or no statistical background; (2) to suggest a guideline for selecting the most applicable bootstrap technique for your data; and (3) to connect guidelines with a real-world example, to illustrate how different bootstraps behave in one model, or in different models. Results: The assumptions of homoscedasticity and normality are key to selecting the best bootstrapping technique. These assumptions should be tested before applying any bootstrapping technique. If homoscedasticity and normality hold, then parametric bootstrapping is consistent and efficient. Paired and wild bootstrapping are consistent under heteroscedasticity and non-normality. Conclusion: Selecting the correct type of bootstrapping is crucial for arriving at efficient estimators. Our example illustrates that if we selected an inconsistent bootstrapping technique, results could be misleading. An insignificant effect of controller treatment on total health expenditures among asthma patients would have been found significant and negative by an improperly chosen bootstrapping technique, regardless of the type of model chosen.
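
The paired-versus-wild distinction the guideline draws can be sketched for a simple regression slope under heteroscedasticity. The data and settings below are illustrative, not the paper's asthma example; the wild bootstrap here uses the common Rademacher (random sign-flip) weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic heteroscedastic data: noise spread grows with x.
n = 200
x = rng.uniform(0.0, 10.0, n)
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.2 * x)

coef = np.polyfit(x, y, 1)          # OLS fit: [slope, intercept]
slope, fitted = coef[0], np.polyval(coef, x)
resid = y - fitted

B = 1000
paired = np.empty(B)
wild = np.empty(B)
for b in range(B):
    # Paired bootstrap: resample (x, y) pairs with replacement.
    idx = rng.integers(0, n, n)
    paired[b] = np.polyfit(x[idx], y[idx], 1)[0]
    # Wild bootstrap: keep x fixed and flip each residual's sign
    # at random, preserving the heteroscedastic error structure.
    signs = rng.choice([-1.0, 1.0], n)
    wild[b] = np.polyfit(x, fitted + signs * resid, 1)[0]

ci_paired = np.percentile(paired, [2.5, 97.5])
ci_wild = np.percentile(wild, [2.5, 97.5])
```

A residual bootstrap that reshuffles residuals across observations would break the variance-grows-with-x structure here, which is why the guideline steers heteroscedastic data toward the paired or wild schemes.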

  • Research Article
  • Citations: 4
  • 10.1016/j.wneu.2023.04.008
Development of a Prediction Model for Cranioplasty Implant Survival Following Craniectomy
  • Apr 8, 2023
  • World Neurosurgery
  • Vita M Klieverik + 3 more

Cranioplasty after craniectomy can result in high rates of postoperative complications. Although determinants of postoperative outcomes have been identified, a prediction model for predicting cranioplasty implant survival does not exist. Thus, we sought to develop a prediction model for cranioplasty implant survival after craniectomy. We performed a retrospective cohort study of patients who underwent cranioplasty following craniectomy between 2014 and 2020. Missing data were imputed using multiple imputation. For model development, multivariable Cox proportional hazards regression analysis was performed. To test whether candidate determinants contributed to the model, we performed backward selection using the Akaike information criterion. We corrected for overfitting using bootstrapping techniques. The performance of the model was assessed using discrimination and calibration. A total of 182 patients were included (mean age, 43.0 ± 19.7 years). Independent determinants of cranioplasty implant survival included the indication for craniectomy (compared with trauma-vascular disease: hazard ratio [HR], 0.65 [95% confidence interval (CI), 0.36-1.17]; infection: HR, 0.76 [95% CI, 0.32-1.80]; tumor: HR, 1.40 [95% CI, 0.29-6.79]), cranial defect size (HR, 1.01 per cm2 [95% CI, 0.73-1.38]), use of an autologous bone flap (HR, 1.63 [95% CI, 0.82-3.24]), and skin closure using staples (HR, 1.42 [95% CI, 0.79-2.56]). The concordance index of the model was 0.60 (95% CI, 0.47-0.73). We have developed the first prediction model for cranioplasty implant survival after craniectomy. The findings from our study require external validation and deserve further exploration in future studies.

  • Research Article
  • Citations: 31
  • 10.1177/2396987318754591
Development and validation of the Dutch Stroke Score for predicting disability and functional outcome after ischemic stroke: A tool to support efficient discharge planning.
  • Jun 1, 2018
  • European Stroke Journal
  • Inger R De Ridder + 11 more

Introduction: We aimed to develop and validate a prognostic score for disability at discharge and functional outcome at three months in patients with acute ischemic stroke based on clinical information available on admission. Patients and methods: The Dutch Stroke Score (DSS) was developed in 1227 patients with ischemic stroke included in the Paracetamol (Acetaminophen) In Stroke study. Predictors for Barthel Index (BI) at discharge ('DSS-discharge') and modified Rankin Scale (mRS) at three months ('DSS-3 months') were identified in multivariable ordinal regression. The models were internally validated with bootstrapping techniques. The DSS-3 months was externally validated in the PRomoting ACute Thrombolysis in Ischemic StrokE study (1589 patients) and the Preventive Antibiotics in Stroke Study (2107 patients). Model performance was assessed in terms of discrimination, expressed by the area under the receiver operating characteristic curve (AUC), and calibration. Results: At model development, the strongest predictors of Barthel Index at discharge were age per decade over 60 (odds ratio = 1.55, 95% confidence interval (CI) 1.41–1.68), National Institutes of Health Stroke Scale (odds ratio = 1.24 per point, 95% CI 1.22–1.26) and diabetes (odds ratio = 1.62, 95% CI 1.32–1.91). The internally validated AUC was 0.76 (95% CI 0.75–0.79). The DSS-3 months, additionally consisting of previous stroke and atrial fibrillation, performed similarly at internal (AUC 0.75, 95% CI 0.74–0.77) and external validation (AUC 0.74 in PRomoting ACute Thrombolysis in Ischemic StrokE (95% CI 0.72–0.76) and 0.69 in Preventive Antibiotics in Stroke Study (95% CI 0.69–0.72)). Observed outcome was slightly better than predicted. Discussion: The DSS had satisfactory performance in predicting BI at discharge and mRS at three months in ischemic stroke patients. Conclusion: If further validated, the DSS may contribute to efficient stroke unit discharge planning alongside patients' contextual factors and therapeutic needs.
