Abstract

It has recently been argued that a non-Bayesian probabilistic version of inference to the best explanation (IBE*) has a number of advantages over Bayesian conditionalization (Douven [2013]; Douven and Wenmackers [2017]). We investigate how IBE* can be generalized to uncertain evidential situations and formulate a novel updating rule, IBE**. We then examine how it performs in comparison with its Bayesian counterpart, Jeffrey conditionalization (JC), in a number of simulations in which two agents, updating by IBE** and JC respectively, try to detect the bias of a coin while being only partially certain which side the coin landed on. We show that IBE** assigns high probability to the actual bias more often than JC does, that it does so considerably faster, that it passes higher thresholds for high probability, and that it in general leads to more accurate probability distributions than JC.

Contents
1 Introduction
2 Generalizing Inference to the Best Explanation to Uncertain Evidential Situations
3 Detecting the Bias of a Coin
4 Overall Performance of IBE** versus Jeffrey Conditionalization
5 Speed of Convergence
6 The Threshold for High Subjective Probability
7 Epistemic Inaccuracy
8 Conclusions
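The coin-bias setting summarized above can be sketched in code. The following is a minimal illustration, not the authors' implementation: the grid of candidate biases, the uniform prior, and the size and form of the explanatory bonus in the IBE**-style rule are all assumptions made here for concreteness. Jeffrey conditionalization mixes the two Bayesian posteriors, weighted by the agent's partial certainty about which side the coin landed on; the IBE-style variant additionally boosts the hypothesis that best explains the more probable outcome.

```python
import numpy as np

# Hypothetical setup: eleven candidate biases H_i with P(heads | H_i) = i/10,
# starting from a uniform prior over them.
biases = np.linspace(0.0, 1.0, 11)
prior = np.full(len(biases), 1.0 / len(biases))

def jeffrey_update(probs, q_heads):
    """Jeffrey conditionalization: the agent is only partially certain
    (to degree q_heads) that the coin landed heads."""
    post_heads = probs * biases / np.dot(probs, biases)            # P(H_i | heads)
    post_tails = probs * (1 - biases) / np.dot(probs, 1 - biases)  # P(H_i | tails)
    return q_heads * post_heads + (1.0 - q_heads) * post_tails

def ibe_update(probs, q_heads, bonus=0.1):
    """An IBE**-style rule (illustrative assumption): Jeffrey
    conditionalization plus a small bonus, before renormalizing, for the
    hypothesis that best explains the more probable outcome."""
    new = jeffrey_update(probs, q_heads).copy()
    outcome_lik = biases if q_heads >= 0.5 else 1.0 - biases
    new[np.argmax(outcome_lik)] += bonus
    return new / new.sum()

# One uncertain observation, e.g. 80% confidence that the coin landed heads:
p_jc = jeffrey_update(prior, 0.8)
p_ibe = ibe_update(prior, 0.8)
```

With q_heads set to 1.0, jeffrey_update reduces to ordinary Bayesian conditionalization on "heads", which is a useful sanity check when experimenting with the sketch.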
