Abstract

In statistical analysis of measurement results it is often necessary to compute the range $$[\underline{V},\,\overline{V}]$$ of the population variance $$V = \frac{1}{n}\sum\limits_{i = 1}^n (x_i - E)^2 \quad\left({\rm where}\ E = \frac{1}{n}\sum\limits_{i = 1}^n x_i\right)$$ when we only know the intervals $$[\tilde x_i - \Delta_i,\,\tilde x_i + \Delta_i]$$ of possible values of the $x_i$. While $$\underline{V}$$ can be computed efficiently, the problem of computing $$\overline{V}$$ is, in general, NP-hard. In our previous paper "Population Variance under Interval Uncertainty: A New Algorithm" (Reliable Computing 12 (4) (2006), pp. 273–280) we showed that in a practically important case we can use constraint techniques to compute $$\overline{V}$$ in time O(n · log(n)). In this paper we provide new algorithms that compute $$\underline{V}$$ (in all cases) and $$\overline{V}$$ (for the above case) in linear time O(n). Similar linear-time algorithms are described for computing the range of the entropy $$S = - \sum\limits_{i = 1}^n p_i \cdot \log(p_i)$$ when we only know the intervals $${\bf P}_i = [\underline{p}_i,\,\overline{p}_i]$$ of possible values of the probabilities $p_i$. In general, a statistical characteristic $f$ can be more complex, so that even a single evaluation of $f$ can take much longer than linear time. For such $f$, the question is how to compute the range $$[\underline{y},\,\overline{y}]$$ in as few calls to $f$ as possible. We show that for convex symmetric functions $f$, we can compute $\overline{y}$ in n calls to $f$.
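To make the two quantities concrete, here is a minimal Python sketch of the interval-variance problem. It is *not* the paper's linear-time method: the upper bound $\overline{V}$ is computed by exponential vertex enumeration (valid because $V$ is convex in $(x_1,\dots,x_n)$, so its maximum over a box is attained at a vertex), and the lower bound $\underline{V}$ uses the standard clamp-to-the-mean characterization over candidate zones between sorted endpoints, which takes O(n · log(n)) time.

```python
from itertools import product


def variance(xs):
    """Population variance V = (1/n) * sum_i (x_i - E)^2."""
    n = len(xs)
    e = sum(xs) / n
    return sum((x - e) ** 2 for x in xs) / n


def v_max_bruteforce(intervals):
    """Upper bound V-bar: V is convex in (x_1, ..., x_n), so its maximum
    over the box is attained at a vertex.  Enumerating all 2^n endpoint
    combinations is exponential -- for illustration only."""
    return max(variance(xs) for xs in product(*intervals))


def v_min(intervals):
    """Lower bound V-underline: at a minimum of the convex function V,
    each x_i is the clamp of the resulting mean m to [lo_i, hi_i].
    Try each zone between adjacent sorted endpoints and solve for m."""
    n = len(intervals)
    los = [lo for lo, _ in intervals]
    his = [hi for _, hi in intervals]
    if max(los) <= min(his):          # intervals share a common point
        return 0.0                    # so the variance can be zero
    best = float("inf")
    cuts = sorted(set(los + his))
    for a, b in zip(cuts, cuts[1:]):  # candidate zone [a, b]
        s, k = 0.0, 0
        for lo, hi in intervals:
            if hi <= a:
                s += hi               # interval lies left of the zone
            elif lo >= b:
                s += lo               # interval lies right of the zone
            else:
                k += 1                # interval covers the zone: x_i = m
        if k == n:                    # cannot happen after the check above,
            continue                  # guards the division regardless
        m = s / (n - k)               # solves m = (s + k*m) / n
        if a <= m <= b:
            xs = [min(max(m, lo), hi) for lo, hi in intervals]
            best = min(best, variance(xs))
    return best
```

For example, for the intervals [0, 1] and [2, 3], `v_min` clamps both values toward the common mean 1.5 (giving 1 and 2), while `v_max_bruteforce` finds the extreme vertex (0, 3).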
