Abstract
We discuss the effect of large positive correlations in the combination of several measurements of a single physical quantity using the Best Linear Unbiased Estimate (BLUE) method. We suggest a new approach for comparing the relative weights of the different measurements in their contributions to the combined knowledge about the unknown parameter, using the well-established concept of Fisher information. We argue, in particular, that one contribution to the information comes from the collective interplay of the measurements through their correlations, and that this contribution cannot be attributed to any of the individual measurements alone. We show that negative coefficients in the BLUE weighted average invariably indicate the presence of a regime of high correlations, where the effect of further increasing some of these correlations is that of reducing the error on the combined estimate. In these regimes, we stress that assuming fully correlated systematic uncertainties is not a truly conservative choice, and that the correlations provided as input to BLUE combinations need instead to be assessed with extreme care. In situations where the precise evaluation of these correlations is impractical, or even impossible, we provide tools to help experimental physicists perform more conservative combinations.
Highlights
To quantify the “relative importance” of each measurement in its contribution to the combined knowledge about the measured physical quantity, its coefficient in the Best Linear Unbiased Estimate (BLUE) weighted average is traditionally used
We show that negative coefficients in the BLUE weighted average invariably indicate the presence of very high correlations, whose marginal effect is that of reducing the error on the combined estimate, rather than increasing it
We stress that taking systematic uncertainties to be fully (i.e. 100 %) correlated is not a conservative assumption, and we argue that the correlations provided as inputs to BLUE combinations need to be assessed with extreme care
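The behaviour described in the highlights can be illustrated with a minimal numerical sketch (the uncertainties below are hypothetical, chosen only for illustration, and are not taken from the paper). For two measurements with errors σ_A < σ_B and correlation ρ, the BLUE coefficient of the less precise measurement turns negative once ρ exceeds σ_A/σ_B, and beyond that point increasing ρ further *reduces* the combined error:

```python
import numpy as np

def blue_combine(y, cov):
    """BLUE coefficients w = C^-1 1 / (1^T C^-1 1), estimate w.y, variance w.C.w."""
    ones = np.ones(len(y))
    cinv = np.linalg.inv(cov)
    w = cinv @ ones / (ones @ cinv @ ones)
    return w, w @ y, w @ cov @ w

def cov2(sa, sb, rho):
    """Covariance matrix of two measurements with errors sa, sb and correlation rho."""
    return np.array([[sa**2, rho * sa * sb],
                     [rho * sa * sb, sb**2]])

# Hypothetical inputs: sigma_A = 1, sigma_B = 2, so the sign flip occurs at rho = 0.5
for rho in (0.3, 0.6, 0.9, 0.95):
    w, est, var = blue_combine(np.array([10.0, 11.0]), cov2(1.0, 2.0, rho))
    print(f"rho={rho:.2f}  w={w.round(3)}  sigma={np.sqrt(var):.3f}")
```

Running this shows the combined error first approaching σ_A as ρ grows towards σ_A/σ_B, then shrinking below σ_A once the second coefficient goes negative, which is why treating such correlations as 100 % is not conservative.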
Summary
To quantify the “relative importance” of each measurement in its contribution to the combined knowledge about the measured physical quantity, its coefficient in the BLUE weighted average is traditionally used. The relative importances RI_i of the n measurements sum up to 1 by definition, Σ_{i=1}^{n} RI_i = 1. In our opinion, this procedure is an artefact that is conceptually wrong and suffers from two important limitations: first, it is not internally self-consistent and may lead to numerical conclusions that go against common sense; second, it does not help to understand in which way the results with negative coefficients contribute to reducing the uncertainties on the combined estimates.

What is rather surprising is that the “relative importance” RI_A of y_A, computed using the normalised absolute values of the BLUE coefficients, is very different in the two cases. In our opinion, this is an internal inconsistency of Eq. 2, as common sense suggests that the relative contribution of y_A to the knowledge about Y is the same in both combinations. We will propose and discuss our definitions of intrinsic and marginal information weights, using the well-established concept of Fisher information.
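As a small numerical sketch of the quantities involved (again with hypothetical uncertainties, not the paper's worked example): the Fisher information that a set of measurements carries about Y is I = 1ᵀC⁻¹1, which coincides with the inverse variance of the BLUE estimate, whereas the traditional “relative importance” is built from the normalised absolute values |w_i|/Σ_j|w_j| of the BLUE coefficients:

```python
import numpy as np

# Hypothetical two-measurement setup: sigma_A = 1, sigma_B = 2, rho = 0.9
cov = np.array([[1.0, 1.8],
                [1.8, 4.0]])
ones = np.ones(2)
cinv = np.linalg.inv(cov)

w = cinv @ ones / (ones @ cinv @ ones)  # BLUE coefficients (sum to 1, one negative here)
var_blue = w @ cov @ w                  # variance of the combined estimate
info = ones @ cinv @ ones               # Fisher information about Y

# Fisher information is exactly the inverse of the BLUE variance:
assert np.isclose(info, 1.0 / var_blue)

# Traditional "relative importance": normalised absolute coefficients
ri = np.abs(w) / np.abs(w).sum()
print("weights:", w.round(3), " RI:", ri.round(3), " combined var:", round(1 / info, 3))
```

Note that because the RI_i are built from absolute values, a large negative coefficient still registers as a large “importance”, which is one facet of the inconsistency discussed above.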