Abstract

McGee (1985) argues that it is sometimes reasonable to accept both x and x->(y->z) without accepting y->z, and that modus ponens is therefore invalid for natural language indicative conditionals. Here, we examine McGee's counterexamples from a Bayesian perspective. We argue that the counterexamples are genuine insofar as the joint acceptance of x and x->(y->z) at time t does not generally imply constraints on the acceptability of y->z at t, but we use the distance-based approach to Bayesian learning to show that applications of modus ponens are nevertheless guaranteed to be successful in an important sense. Roughly, if an agent becomes convinced of the premises of a modus ponens argument, then she should likewise become convinced of the argument's conclusion. Thus we take McGee's counterexamples to disentangle and reveal two distinct ways in which arguments can convince. Any general theory of argumentation must take stock of both.
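The structure of McGee's counterexamples can be illustrated numerically. The sketch below uses his well-known 1980 election example with assumed (hypothetical) probabilities, reading the acceptability of an indicative conditional "if A then B" as the conditional probability P(B | A): both premises come out highly acceptable while the conclusion does not.

```python
# McGee's 1980 election example, with illustrative (assumed) probabilities.
# Outcomes: Reagan (Republican) favored, Carter (Democrat) close behind,
# Anderson (Republican) a distant third.
p = {"reagan": 0.55, "carter": 0.44, "anderson": 0.01}

# Premise 1 (x): "A Republican will win."
p_premise1 = p["reagan"] + p["anderson"]  # 0.56 -- acceptable

# Premise 2 (x -> (y -> z)): "If a Republican wins, then if it's not
# Reagan who wins, it will be Anderson."
# Conditioning on "a Republican wins and it's not Reagan" leaves only
# the Anderson outcome, so the embedded conditional gets probability 1.
p_premise2 = p["anderson"] / (p_premise1 - p["reagan"])  # 1.0 -- acceptable

# Conclusion (y -> z): "If it's not Reagan who wins, it will be Anderson."
# Conditioning only on "not Reagan" leaves Carter as the likely winner.
p_conclusion = p["anderson"] / (1 - p["reagan"])  # ~0.022 -- not acceptable

print(p_premise1, p_premise2, p_conclusion)
```

Both premises score at or near certainty while the conclusion scores near zero, which is the sense in which accepting x and x->(y->z) at a time places no general constraint on the acceptability of y->z at that time.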
