Abstract

McGee (1985) argues that it is sometimes reasonable to accept both x and x->(y->z) without accepting y->z, and that modus ponens is therefore invalid for natural language indicative conditionals. Here, we examine McGee's counterexamples from a Bayesian perspective. We argue that the counterexamples are genuine insofar as the joint acceptance of x and x->(y->z) at time t does not generally imply constraints on the acceptability of y->z at t, but we use the distance-based approach to Bayesian learning to show that applications of modus ponens are nevertheless guaranteed to be successful in an important sense. Roughly, if an agent becomes convinced of the premises of a modus ponens argument, then she should likewise become convinced of the argument's conclusion. Thus we take McGee's counterexamples to disentangle and reveal two distinct ways in which arguments can convince. Any general theory of argumentation must take stock of both.
