Abstract

Many people think that if you're uncertain about which moral theory is correct, you ought to maximize the expected choice-worthiness of your actions. This idea presupposes that the strengths of our moral reasons are comparable across theories – for instance, that our reasons to create new people, according to total utilitarianism, can be stronger than our reasons to benefit an existing person, according to a person-affecting view. But how can we make sense of such comparisons? In this article, I introduce a constructivist account of intertheoretic comparisons. On this account, such comparisons don't hold independently of facts about morally uncertain agents. They're simply the output of an ideal deliberation, governed by certain epistemic norms, about what you ought to do in light of your uncertainty. If I'm right, this account is metaphysically more parsimonious than some existing proposals, and yet has plausible and strong implications.
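
For readers who want the opening idea in symbols, here is a minimal sketch of the expected choice-worthiness rule the abstract refers to; the notation (credences p, choice-worthiness functions CW, option A, theories T_i) is my own gloss, not drawn from the paper itself:

$$
\mathrm{EC}(A) \;=\; \sum_{i} p(T_i)\,\mathrm{CW}_{T_i}(A)
$$

where $p(T_i)$ is the agent's credence in moral theory $T_i$ and $\mathrm{CW}_{T_i}(A)$ is how choice-worthy option $A$ is according to $T_i$. The sum is well defined only if the $\mathrm{CW}_{T_i}$ values lie on a common scale across theories; that comparability assumption is exactly what the article's constructivist account is meant to make sense of.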
