Abstract

In statistics, one of the most widely used methods for evaluating the significance of a result is the statistical hypothesis test. From this point of view, the fundamental problem of statistics can be expressed as follows: a statistical datum does not in itself represent useful information; it becomes useful information only when it is shown that it was not obtained randomly. Consequently, among hypotheses that make the same prediction, we must choose the one that has the lower probability of being produced randomly. The fundamental aspect of this approach is therefore to calculate this probability correctly. We address this problem by redefining what is meant by a hypothesis. The traditional approach considers a hypothesis to be the set of rules that actively participate in the forecast. Instead, we take the hypothesis to be the sum of all the hypotheses made, including those that preceded the one finally used. Each time a prediction is made, the hypothesis therefore grows in complexity and, consequently, in its ability to fit a random data set. In this way, the complexity of a hypothesis can be determined precisely only if all previous attempts are known. As a result, Occam's razor no longer has general validity; its application depends on the information we have about the hypotheses that were tested.
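To make the point concrete, the following minimal sketch (Python, not from the paper) illustrates why repeated hypothesis attempts inflate the chance of fitting random data. It assumes, purely for illustration, that each independent attempt fits a random data set with probability 0.05; the numbers and the independence assumption are hypothetical.

```python
import random


def prob_random_match(p_single: float, n_hypotheses: int) -> float:
    """Probability that at least one of n independent hypotheses fits a
    random data set, when each fits with probability p_single (assumed)."""
    return 1.0 - (1.0 - p_single) ** n_hypotheses


def simulate(p_single: float, n_hypotheses: int, trials: int = 100_000) -> float:
    """Monte Carlo estimate of the same quantity, as a sanity check."""
    hits = sum(
        any(random.random() < p_single for _ in range(n_hypotheses))
        for _ in range(trials)
    )
    return hits / trials


if __name__ == "__main__":
    for n in (1, 5, 20, 100):
        print(f"{n:>3} hypotheses: analytic={prob_random_match(0.05, n):.3f} "
              f"simulated={simulate(0.05, n):.3f}")
```

Under these assumptions, a single attempt fits random data 5% of the time, while a sequence of 100 attempts yields at least one chance fit over 99% of the time, which is why the probability assigned to a hypothesis should account for all previous attempts, not only the final one.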
