Abstract

Suppose your credence is divided between two moral theories – Theory T and Theory U. According to T, you have more reason to do Action A than you have to do Action B. According to U, you have more reason to do B than you have to do A. What is it rational to do in a situation in which A and B are the two possible actions? Many have argued that what it’s rational to do depends on two things: (a) how your credence is distributed between the theories, and (b) how the difference in moral value between A and B if T is true compares to the difference in moral value between B and A if U is true. But this answer prompts a further question: How do we make the intertheoretic comparisons of value differences mentioned in (b)? The theories themselves seem not to provide the resources required to do so. In Moral Uncertainty and Its Consequences, Ted Lockhart argues that intertheoretic comparisons of value differences are possible if we adopt a principle he calls the “Principle of Equity among Moral Theories”. I argue on several grounds that this principle is untenable, consider some rejoinders on Lockhart’s behalf, and conclude that these rejoinders do not succeed.

Full Text

Suppose your credence is divided between two moral theories – Theory T and Theory U. According to T, you have more reason to do Action A than you have to do Action B. According to U, you have more reason to do B than you have to do A. Many real-life cases fall under this schema. For example, I might have some credence in a retributive theory of punishment and some credence in a nonretributive theory of punishment. According to the first theory, it may be better to subject a criminal to very harsh treatment than to rehabilitate him. According to the second theory, the reverse may be true. Or I might have some credence in a traditional consequentialist theory, and some credence in a non-consequentialist theory. The first theory might recommend killing one person to save five people, while the second theory might recommend against it.

What is it rational for you to do when you’re uncertain between conflicting moral theories?[1] This is not the old question of what you should do, given some moral theory, when you are uncertain about the non-moral facts. The question I’m asking takes us back one step: what is it rational to do when you’re uncertain regarding the theories themselves?[2] In this paper, I’m going to consider a problem that arises when we try to answer this question, and then evaluate one solution to that problem.

One possible answer to the question is: Act in accordance with the theory in which you have the highest credence. That is, if your degree of belief is highest in a theory according to which Action A is better than Action B, then you should do A rather than B. But we should be suspicious of this answer, since the parallel answer in the non-moral case seems so clearly mistaken. For suppose that I am deciding whether to drink a cup of coffee. I have a degree of belief of .2 that the coffee is mixed with a deadly poison, and a degree of belief of .8 that it’s perfectly safe. If I act on the hypothesis in which I have the highest credence, I will drink the coffee. But this seems like a bad call, since the downside of death is so much greater than the upside of enjoying the coffee.
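To make the expected-value reasoning in the coffee case explicit, here is a minimal sketch; the utility figures are illustrative assumptions, not values the paper commits to:

\[
EV(\text{drink}) = 0.8\,u(\text{safe}) + 0.2\,u(\text{death}); \qquad
\text{with } u(\text{safe}) = 1,\; u(\text{death}) = -1000,\; u(\text{abstain}) = 0,
\]
\[
EV(\text{drink}) = 0.8(1) + 0.2(-1000) = -199.2 \;<\; 0 = EV(\text{abstain}),
\]

so abstaining wins even though safety is the more probable hypothesis. The moral analogue of this reasoning, per (a) and (b) above, would have one prefer A to B just in case

\[
p(T)\,\big[V_T(A) - V_T(B)\big] \;>\; p(U)\,\big[V_U(B) - V_U(A)\big],
\]

and this inequality is well-defined only if the two value differences can be placed on a common scale – which is exactly the intertheoretic comparison problem the paper takes up.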
[1] This is a very under-addressed problem in moral philosophy. The only recent publications to address the issue are Hudson (1989), Oddie (1995), Lockhart (2000), Weatherson (2002), Sepielli (2006), Ross (2006), Guerrero (2007), and Sepielli (2009).

[2] A very similar debate – about so-called ‘reflex principles’ – occupied a central place in Early Modern Catholic moral theology. The most notable contributors to this debate were Bartolome de Medina (1577), Blaise Pascal (1656-57), and St. Alphonsus Liguori (1755). The various positions are helpfully summarized in Prummer (1957), The Catholic Encyclopedia (1913), and The New Catholic Encyclopedia.

