Abstract

We study Martin-Löf random (ML-random) points with respect to computable probability measures on sample and parameter spaces (Bayes models). We consider variants of conditional randomness defined via ML-randomness on Bayes models, as well as variants of conditional blind randomness. We show that the variants of conditional blind randomness are ill-defined from the Bayesian statistical point of view. We prove that if the sets of random sequences of uniformly computable parametric models are pairwise disjoint, then there is a consistent estimator for the model. Finally, we give an algorithmic characterization of a classical problem in Bayesian statistics: the posterior distributions converge weakly at almost all parameters if and only if they converge weakly at all ML-random parameters.
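As an informal illustration of the posterior-convergence phenomenon the abstract's final result concerns (a sketch, not from the paper itself): for a Bernoulli model with a Beta(1, 1) prior, conjugacy gives a Beta posterior that concentrates around the data-generating parameter as the sample size grows. The true parameter value 0.3 below is an arbitrary choice for the demonstration.

```python
import random

def posterior_params(data, a=1.0, b=1.0):
    """Beta(a, b) prior + Bernoulli likelihood -> Beta(a + k, b + n - k) posterior."""
    k = sum(data)
    return a + k, b + len(data) - k

def beta_mean_var(a, b):
    """Mean and variance of a Beta(a, b) distribution."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

random.seed(0)
theta = 0.3  # hypothetical "true" parameter, chosen for the example
data = [1 if random.random() < theta else 0 for _ in range(10_000)]

a, b = posterior_params(data)
mean, var = beta_mean_var(a, b)
print(mean, var)  # posterior mean near 0.3; variance shrinking like 1/n
```

The paper's result is much finer than this almost-sure picture: it pins down the exceptional null set of parameters algorithmically, as exactly the non-ML-random ones.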
