Abstract

The logic of scientific justification is a central problem in the philosophy of science, and Bayesianism is usually taken to be the leading theory in this area. After a brief review of the Bayesian account of scientific justification and of learning theorists' objections to Bayesianism, this paper proposes an argument in defense of Bayesianism. It is shown that Bayesian conditionalization has the equipment needed to capture the idea of elimination induction, which functions as an indispensable component of any satisfactory account of scientific justification. Bayesianism has two central ideas: scientists' degrees of belief in scientific theories should be quantified in a way that satisfies the principles of the probability calculus, and learning from experience proceeds by conditionalization, which makes essential use of Bayes' theorem. Learning theorists have objected that certain crucial characteristics of scientific justification can be captured by their account of logical reliability, or means-ends analysis, but not by the Bayesian strategy of explicating the confirmation relation (Kelly and Glymour, 2004). The characteristics they list concern the intrinsic difficulty of inductive problems and the efficiency of Bayesian conditionalization in approaching the truth. They conclude that Bayesian updating is just one method or strategy among many, one that may or may not be justified depending on how efficiently it answers the question at hand. My response to this line of objection is that Bayesian conditionalization can also answer certain crucial questions about the justification of scientific theories that formal learning theory cannot. First, the degree to which a given piece of evidence E supports a theory can be measured by how many of the possible models in which the theory holds, among those admitted by our knowledge before the discovery of E, are eliminated by E. The idea of explicating evidential support in terms of the elimination of the circumstances in which a theory would fail can be called 'elimination induction'. Bayesianism has its own criterion for determining the degree of elimination: if the proportion of the theory's eliminated models among all the models eliminated under our knowledge is smaller than the prior probability of the theory, then the piece of evidence confirms the theory, and vice versa. Formal learning theory cannot provide such a degree of elimination in cases where the hypothesis should not be changed in light of a new piece of evidence. Second, people sometimes care more about the support a given theory receives from a single piece of evidence than about long-run convergence to the truth; this is especially true in scientific disciplines in which evidence is hard to obtain. If a theory survives the elimination effected by evidence E, it is legitimate to place more confidence in the theory, because the theory turns out not to be wrong in circumstances where it could have been wrong. The evidential support of E for the theory does not necessarily mean that the theory contains more truth or has more true content. In cases where the truth lies in a theory that receives little support from a specific piece of evidence E, the theory itself is subject to significant modification, because it also contains many falsehoods that could be eliminated by E.
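
The confirmation criterion stated above admits a short formal gloss. The following is a minimal sketch, under the assumption (not spelled out in the abstract) that the prior probability of a hypothesis H is the proportion of possible models, admitted by background knowledge, in which H holds, and that a piece of evidence E eliminates exactly the models incompatible with E:

\[
P(H \mid E) = \frac{P(H \wedge E)}{P(E)}, \qquad
P(H) = P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E),
\]

so that, whenever $0 < P(E) < 1$,

\[
P(H \mid E) > P(H) \iff P(H \mid \neg E) < P(H),
\]

where $P(H \mid \neg E) = P(H \wedge \neg E)/P(\neg E)$ is the proportion of the eliminated ($\neg E$) models in which the theory holds. On this reading, E confirms H under conditionalization just in case that proportion falls below the prior probability of H, which is the eliminative criterion described in the abstract.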