Abstract

The mathematical theory of probabilities does not refer to the notion of an individual random object. For example, when we toss a fair coin n times, all \(2^n\) bit strings of length n are equiprobable outcomes, and none of them is more “random” than the others. However, when testing a statistical model, e.g., the fair-coin hypothesis, we necessarily have to distinguish between outcomes that contradict this model, i.e., the outcomes that convince us to reject this model with some level of certainty, and all other outcomes. The same question arises when we apply randomness tests to some hardware random bit generator.

A similar distinction between random and non-random objects appears in algorithmic information theory, which defines the notion of an individual random sequence and therefore splits all infinite bit sequences into random and non-random ones. For finite sequences there is no sharp boundary. Instead, the notion of randomness deficiency can be defined, and sequences with greater deficiency are considered “less random”. This definition can be given in terms of randomness tests that are similar to the practical tests used for checking (pseudo)random bit generators. However, these two kinds of randomness tests are rarely compared and discussed together.

In this survey we discuss current methods of producing and testing random bits, with algorithmic information theory as a reference point. We also suggest an approach to constructing robust practical tests for random bits.
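The randomness deficiency mentioned above is defined via Kolmogorov complexity and is therefore uncomputable; a common practical proxy (not part of this survey's formal definitions, just an illustration) is to measure how much a general-purpose compressor shortens a string. A minimal sketch using Python's `zlib`:

```python
import os
import zlib

def compression_deficiency(data: bytes) -> int:
    """Crude proxy for randomness deficiency: how many bytes shorter
    the zlib-compressed form is than the original. A compressor gives
    only an upper bound on Kolmogorov complexity, so this is a rough
    heuristic, not the formal deficiency of the theory."""
    return len(data) - len(zlib.compress(data, 9))

# A highly regular string compresses well -> large "deficiency".
regular = b"01" * 5000
# Bytes from the OS random source should be nearly incompressible,
# so the value is near zero (slightly negative due to zlib overhead).
random_data = os.urandom(10000)

print(compression_deficiency(regular))      # large positive value
print(compression_deficiency(random_data))  # near zero or negative
```

This mirrors the "randomness as incompressibility" viewpoint of the survey's later sections: strings with short descriptions are exactly the ones a test should flag as non-random.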

Highlights

  • The mathematical theory of probabilities does not refer to the notion of an individual random object

  • Probability theory is nowadays considered as a special case of measure theory: a random variable is a measurable function defined on some probability space that consists of a set Ω, some σ-algebra of the subsets of Ω, and some σ-additive measure defined on this σ-algebra

  • The set \(B^n\) can be considered as a probability space, and the \(i\)th coin tossing is represented by a random variable \(\xi_i\) defined on this space: \(\xi_i(x_1 x_2 \ldots x_n) = x_i\)
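The last highlight can be checked directly for a small n: enumerate the probability space \(B^n\) of all \(2^n\) equiprobable bit strings and verify that each coordinate random variable \(\xi_i\) behaves like a fair coin toss. A small sketch (illustrative only, not from the paper):

```python
from itertools import product

n = 3
# The probability space B^n: all 2^n equiprobable bit strings.
space = list(product((0, 1), repeat=n))
assert len(space) == 2 ** n

def xi(i: int, outcome: tuple) -> int:
    """The i-th coordinate function: xi_i(x_1 x_2 ... x_n) = x_i."""
    return outcome[i]

# Under the uniform measure, each xi_i equals 1 with probability 1/2,
# i.e., each individual toss is a fair coin.
for i in range(n):
    p_one = sum(xi(i, x) for x in space) / len(space)
    print(f"P(xi_{i + 1} = 1) = {p_one}")  # 0.5 for every i
```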


Summary

Testing a statistical hypothesis

Probability theory is nowadays considered as a special case of measure theory: a random variable is a measurable function defined on some probability space that consists of a set Ω, some σ-algebra of subsets of Ω, and some σ-additive measure defined on this σ-algebra. The first problem, mentioned earlier, is that in most cases every individual outcome x ∈ X has negligible probability, so the singleton {x} is a test that can be used to reject P. The rejection of the null hypothesis means that we do not consider the data as a random fluctuation, and we claim that the new drug has at least some effect. In this approach, the choice of the threshold value obviously should depend on the importance of the question under consideration. Others point out that fixing a threshold, whatever it is, is a bad practice [1].
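The threshold-based rejection described above can be made concrete with a worked example (our own, assuming the usual exact two-sided binomial p-value; the specific numbers are not from the paper): for the fair-coin null hypothesis, compute the probability of an outcome at least as extreme as the observed one and compare it to a chosen threshold.

```python
from math import comb

def binomial_two_sided_p(n: int, k: int) -> float:
    """Exact two-sided p-value for observing k heads in n tosses of a
    hypothetically fair coin: the probability, under the null, of an
    outcome at least as far from n/2 as the observed one."""
    dev = abs(k - n / 2)
    extreme = sum(comb(n, j) for j in range(n + 1)
                  if abs(j - n / 2) >= dev)
    return extreme / 2 ** n

# 60 heads in 100 tosses gives p ~ 0.057, so at the conventional 0.05
# threshold we would *not* reject the fair-coin hypothesis -- and the
# conclusion would flip at a 0.1 threshold, illustrating why the choice
# of threshold matters.
p = binomial_two_sided_p(100, 60)
print(round(p, 3))
```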

Randomness tests
Randomness as incompressibility
Families of tests and continuous tests
Where do we get randomness tests?
Secondary tests
Diehard
Dieharder
NIST test suite
How to make a robust test
Hardware random generators
Random source and post-processing
What we would like to have
What we have
Final remarks
