Abstract

In statistics, the statistical hypothesis test is one of the most widely used methods for evaluating the significance of a result. From this point of view, the fundamental problem of statistics can be expressed as follows: a statistical datum does not in itself represent useful information; it becomes useful information only when it is shown that it was not obtained by chance. Consequently, among the hypotheses that make the same prediction, we must choose the one with the lower probability of being produced randomly. The fundamental aspect of this approach is therefore to calculate this probability correctly. We address this problem by redefining what is meant by a hypothesis. The traditional approach considers a hypothesis to be the set of rules that actively participate in the forecast. Instead, we consider as the hypothesis the sum of all hypotheses made, including those preceding the one currently used. Each time a prediction is made, our hypothesis therefore grows in complexity and, consequently, in its ability to fit a random data set. In this way, the complexity of a hypothesis can be determined precisely only if all previous attempts are known. As a result, Occam's razor no longer has general validity; its application depends on the information we have about the hypotheses that have been tested.
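
The quantitative point of the abstract can be illustrated with a small simulation. This is a minimal sketch, not part of the paper: the per-hypothesis chance probability `p_single`, the independence of attempts, and all numeric parameters are assumptions chosen only to show how the probability of a purely random fit grows with the number of hypotheses tried.

```python
import numpy as np

# Minimal illustrative sketch (assumed parameters, not from the paper):
# the probability that at least one of k independently tried hypotheses
# "fits" a purely random data set grows with k, so the chance probability
# of a successful prediction is meaningful only if all previous attempts
# are counted as part of the hypothesis.

rng = np.random.default_rng(0)

n_trials = 100_000   # number of simulated random data sets (assumption)
p_single = 0.05      # chance that one fixed hypothesis fits a random data set (assumption)

for k in (1, 5, 20, 100):            # number of hypotheses tried so far
    # each hypothesis independently "fits" a random data set with prob. p_single
    fits = rng.random((n_trials, k)) < p_single
    p_any = fits.any(axis=1).mean()  # empirical P(at least one fit by chance)
    print(f"hypotheses tried: {k:>3}  "
          f"P(random fit) ≈ {p_any:.3f}  "
          f"(analytic: {1 - (1 - p_single) ** k:.3f})")
```

Under these assumptions the chance of a spurious fit rises from about 0.05 for a single hypothesis to nearly 1 after a hundred attempts, which is why the abstract argues that a hypothesis's complexity, and hence the significance of its prediction, can only be assessed when all previous attempts are known.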
