Abstract

Sensitivity plots are meant to answer the question, “If the experiment were to see what is expected, what kind of confidence limits could the experiment be expected to set?” In practice this may be done naively by assuming that the experiment will produce the data vector V which is the most likely data vector predicted for parameters X (for example, the parameters may be oscillation parameters, and the data vector may be the number of events in angular bins), and drawing the C.L. curve in parameter space for experimental result V. It is often the case that some other parameters (that are “close” to X in parameter space) will predict a most likely data vector that is extremely similar to V. In that case, if the experiment produced exactly the data vector V, such a nearby hypothesis would be excluded only at a very small C.L. (e.g., perhaps 5%). However, in real experiments with limited statistics, if the true parameters of nature were X, the experiment would measure a data vector that is fluctuated from V, and the nearby hypothesis would probably be excluded at a much higher C.L. (typically greater than 50%). Thus, the naive method does not do a good job of answering the essential question. I will present a simple algorithm that takes fluctuations into account when drawing sensitivity curves, and illustrate it with an example from atmospheric neutrino oscillations.

As far as I know, the concept of sensitivity was first formally defined in the 1998 paper of Feldman and Cousins [1], as “the average upper limit that would be obtained by an ensemble of experiments with the expected background and no true signal”. Informally, however, the usage is broader and includes sensitivity to non-null hypotheses. Perhaps a good statement of the broader informal usage of the word sensitivity is “the limit that an experiment would set if it saw what was expected”, where the expectation may refer to a null or a non-null hypothesis. For example, a proposal for a new neutrino oscillations experiment may include a plot labeled “sensitivity” which shows the limit that would be set if the experiment saw the data predicted by the Super-Kamiokande best fit parameters. There is also at least one case in which a paper presenting an experimental result [2] included a graph labeled “sensitivity”, which showed the limit that would have been expected a priori assuming the true parameters of nature were the best fit parameters obtained by that very experiment. (In that case, the sensitivity was shown because the data vector that was actually obtained had a fairly poor fit to all hypotheses, including the best fit. The 90% C.L. curve was thus much more restrictive than the a priori expectation, and the experimenters felt it was only honest to point this out on the results plot.)

The purpose of this paper is to point out a simple trap the experimentalist may fall into when computing so-called sensitivity curves for complicated experiments. Let us consider an atmospheric neutrino oscillations experiment in which events are accumulated in a number of angular bins. Using a model for neutrino flux and cross sections, and for a particular oscillations hypothesis (i.e. a particular choice of oscillations parameters), the experimentalist may predict the number of events that will show up in each bin. Then she may say to herself, “Let me assume that the experiment measures exactly the predicted number in each bin. What confidence limit curve would I then draw in parameter space?” Figure 1 shows a possible result. (A toy numerical sketch contrasting this naive construction with the effect of fluctuations is given below.)
The figure is based on a rough approximation to the MACRO experiment, with a fairly short running period of just a couple of years. The “data” from which the graph
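
To make the contrast concrete, the following minimal Python sketch compares the two constructions described above: the naive sensitivity, in which the “data” are taken to be exactly the predicted bin contents, and an estimate that Poisson-fluctuates the prediction and looks at the typical exclusion over an ensemble of pseudo-experiments. This is not the algorithm presented in this paper; the five-bin toy model, the single suppression parameter x standing in for the oscillation parameters, the particular bin contents, and the use of the one-parameter asymptotic chi-squared approximation to convert the likelihood-ratio difference into a confidence level are all illustrative assumptions.

import math
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical no-oscillation expectations in a few angular bins (made up).
NO_OSC = np.array([40.0, 35.0, 30.0, 25.0, 20.0])
# Bin-dependent strength of the (made-up) oscillation effect.
WEIGHTS = np.linspace(0.2, 1.0, NO_OSC.size)
# Scan grid for the single toy parameter x (x = 0 means no oscillations).
GRID = np.linspace(0.0, 0.9, 181)

def predict(x):
    # Most likely data vector V for the toy suppression parameter x.
    return NO_OSC * (1.0 - x * WEIGHTS)

def poisson_chi2(data, mu):
    # Baker-Cousins Poisson chi^2 of prediction mu given the observed bin counts.
    log_term = data * np.log(np.where(data > 0, data / mu, 1.0))
    return 2.0 * np.sum(mu - data + log_term)

def exclusion_cl(data, x):
    # C.L. at which hypothesis x is excluded, using delta chi^2 with respect
    # to the best fit and the one-parameter asymptotic chi^2 distribution.
    best = min(poisson_chi2(data, predict(xp)) for xp in GRID)
    dchi2 = poisson_chi2(data, predict(x)) - best
    return math.erf(math.sqrt(max(dchi2, 0.0) / 2.0))

TRUE_X, NEARBY_X = 0.50, 0.48   # "true" parameters X and a nearby hypothesis

# Naive sensitivity: assume the experiment measures exactly V = predict(TRUE_X).
naive_cl = exclusion_cl(predict(TRUE_X), NEARBY_X)

# Fluctuation-aware estimate: Poisson-fluctuate V and take the median exclusion
# of the nearby hypothesis over an ensemble of pseudo-experiments.
cls = [exclusion_cl(rng.poisson(predict(TRUE_X)), NEARBY_X) for _ in range(1000)]

print(f"naive exclusion of the nearby hypothesis:  {100 * naive_cl:5.1f}% C.L.")
print(f"median exclusion over pseudo-experiments:  {100 * np.median(cls):5.1f}% C.L.")

With the numbers chosen here, the naive construction excludes the nearby hypothesis only at a modest confidence level, while the median exclusion over the fluctuated pseudo-experiments comes out close to or above 50%, which is the qualitative behaviour described in the abstract.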
