Abstract

Increasingly encompassing models have been suggested for our world. Theories range from generally accepted to increasingly speculative to apparently bogus. The progression of theories from ego- to geo- to helio-centric models to universe and multiverse theories and beyond was accompanied by a dramatic increase in the sizes of the postulated worlds, with humans being expelled from their center to ever more remote and random locations. Rather than leading to a true theory of everything, this trend faces a turning point after which the predictive power of such theories decreases (actually to zero). Incorporating the location and other capacities of the observer into such theories avoids this problem and allows one to distinguish meaningful from predictively meaningless theories. This also leads to a truly complete theory of everything consisting of a (conventional objective) theory of everything plus a (novel subjective) observer process. The observer localization is neither based on the controversial anthropic principle, nor has it anything to do with the quantum-mechanical observation process. The suggested principle is extended to more practical (partial, approximate, probabilistic, parametric) world models (rather than theories of everything). Finally, I provide a justification of Ockham’s razor, and criticize the anthropic principle, the doomsday argument, the no free lunch theorem, and the falsifiability dogma.

Highlights

  • This paper uses an information-theoretic and computational approach to address the philosophical problem of judging theories in physics

  • Since we essentially identify a Theory of Everything (ToE) with a program generating a universe, we need to fix some general purpose programming language on a general purpose computer

  • UTM(s, u^q_{1:∞}) = o^{sq}, where we have extended the definition of the Universal Turing Machine (UTM) to allow access to an extra infinite input stream u^q_{1:∞}
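The second and third highlights can be illustrated with a toy sketch (my own illustration, not the paper's formalism): a ToE is identified with a program on some fixed general-purpose machine, and a deterministic ToE generates a universe history outright, while a stochastic ToE additionally consumes bits from an extra random input stream, in the spirit of the extended UTM above. The cellular-automaton "universe" below is an arbitrary choice for concreteness.

```python
import random

def deterministic_toe(steps, width=64):
    """Toy deterministic 'ToE': a fixed program that generates a
    universe history (here: elementary cellular automaton rule 110).
    Any fixed program on a fixed universal machine would serve."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells[:]]
    rule = 110
    for _ in range(steps):
        # Each cell's next state is looked up from the rule number,
        # indexed by the 3-cell neighborhood (periodic boundary).
        cells = [(rule >> (4 * cells[(i - 1) % width]
                           + 2 * cells[i]
                           + cells[(i + 1) % width])) & 1
                 for i in range(width)]
        history.append(cells[:])
    return history

def stochastic_toe(steps, rng, width=64):
    """Toy stochastic 'ToE': the same idea, but the program also
    consumes bits from an extra random input stream (the u-stream
    fed to the extended UTM in the highlight above)."""
    cells = [rng.getrandbits(1) for _ in range(width)]
    history = [cells[:]]
    for _ in range(steps):
        # Occasionally flip a cell using bits from the input stream.
        cells = [c ^ rng.getrandbits(1) if rng.random() < 0.01 else c
                 for c in cells]
        history.append(cells[:])
    return history
```

Rerunning `deterministic_toe` always reproduces the same history, whereas `stochastic_toe` produces one sample universe per random input stream, which is the distinction the extended UTM definition is meant to capture.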



Introduction

This paper uses an information-theoretic and computational approach to address the philosophical problem of judging theories (of everything) in physics. The discussion of the (physical, universal, random, and à-la-carte) multiverse theories has shown that pushing this progression too far at some point harms predictive power. We saw that this has to do with the increasing difficulty of localizing the observer. One could specify the (x, y, z, t) coordinates of the observer, which requires more, but still only very few, bits. These localization penalties are tiny compared to the differences in predictive power (to be quantified later) of the various theories (ego/geo/helio/cosmo). I also introduce more realistic observers with limited perception ability.
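To see why the localization penalty is "only very few bits", one can make a rough order-of-magnitude estimate (my own back-of-the-envelope sketch; the constants and Planck-scale resolution are illustrative assumptions, not figures from the paper): count the bits needed to single out one (x, y, z, t) coordinate inside the observable universe.

```python
import math

# Rough physical constants (order of magnitude only; my assumptions):
PLANCK_LENGTH = 1.6e-35      # metres
PLANCK_TIME = 5.4e-44        # seconds
UNIVERSE_RADIUS = 4.4e26     # metres (comoving radius, observable universe)
UNIVERSE_AGE = 4.35e17       # seconds (~13.8 Gyr)

def localization_bits(radius=UNIVERSE_RADIUS, age=UNIVERSE_AGE):
    """Bits needed to specify an observer's (x, y, z, t) coordinate
    down to Planck resolution within the given space-time volume."""
    spatial = 3 * math.log2(radius / PLANCK_LENGTH)   # x, y, z
    temporal = math.log2(age / PLANCK_TIME)           # t
    return spatial + temporal

print(f"{localization_bits():.0f} bits")
```

Even at absurdly fine Planck resolution this comes out to under a thousand bits, which is negligible next to the description-length differences between the competing theories.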

