John of Holland on Logical Consequence. The Late-Fourteenth-Century Development of the British Logical Tradition
The treatise on logical consequence attributed to John of Holland and composed around 1370 is preserved in two currently known copies: Kraków, Biblioteka Jagiellońska, ms. 2660, fols. 24r–36r, and Wien, Österreichische Nationalbibliothek, ms. 4698, fols. 138v–145v. While neither copy is complete, the missing parts do not overlap, and thus the content of the treatise can be reconstructed. The treatise presents an account of validity based on the containment of conclusions in premises, also incorporating the substitutional account of validity and modal intuitions regarding truth-preservation as a necessary condition of validity. From the perspective of the medieval literature, it can be viewed as a collection of sophisms relating to various rules of inference, including paradoxical contexts such as semantic paradoxes and ex contradictione quodlibet.
- Research Article
1
- 10.1007/s13218-019-00597-y
- May 31, 2019
- KI - Künstliche Intelligenz
This paper considers the possibility of designing AI that can learn logical or non-logical inference rules from data. We first provide an abstract framework for learning logics. In this framework, an agent $\mathcal{A}$ provides training examples that consist of formulas S and their logical consequences T. Then a machine $\mathcal{M}$ builds an axiomatic system that makes T a consequence of S. Alternatively, in the absence of an agent $\mathcal{A}$, a machine $\mathcal{M}$ seeks an unknown logic underlying given data. We next consider the problem of learning logical inference rules by induction. Given a set S of propositional formulas and their logical consequences T, the goal is to find deductive inference rules that produce T from S. We show that an induction algorithm LF1T, which learns logic programs from interpretation transitions, successfully produces deductive inference rules from input data. Finally, we consider the problem of learning non-logical inference rules. We address three case studies for learning abductive inference, frame axioms, and conversational implicature. Each case study uses machine learning techniques together with metalogic programming.
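The abstract's core idea — inducing one-step rules from interpretation transitions — can be sketched in a few lines of Python. This toy is not the LF1T algorithm itself: the generalization strategy (start from a maximally specific body, greedily drop literals while avoiding negative examples) and all names are illustrative assumptions, assuming deterministic Boolean transitions.

```python
def covers(body, I):
    # A body (a set of (atom, polarity) literals) covers interpretation I
    # when every literal agrees with I's truth assignment.
    return all((a in I) == pol for a, pol in body)

def learn_transition_rules(transitions, atoms):
    """Toy induction of one-step rules from transitions (I, J), where I is
    the set of atoms true now and J the set true at the next step."""
    rules = {}
    for head in atoms:
        pos = [I for I, J in transitions if head in J]
        neg = [I for I, J in transitions if head not in J]
        bodies = []
        for I in pos:
            # Most specific body: one literal per atom, matching I exactly.
            body = {(a, a in I) for a in atoms}
            # Greedily drop literals while no negative example is covered.
            for lit in sorted(body):
                trial = body - {lit}
                if not any(covers(trial, N) for N in neg):
                    body = trial
            if body not in bodies:
                bodies.append(body)
        rules[head] = bodies
    return rules

# Transitions of the Boolean system q(t+1) = p(t), p(t+1) = false:
transitions = [({'p'}, {'q'}), (set(), set()),
               ({'p', 'q'}, {'q'}), ({'q'}, set())]
rules = learn_transition_rules(transitions, ['p', 'q'])
print(rules)  # learns the rule q <- p, and no rule for p
```

On this data the sketch recovers the single rule q ← p (the body {('p', True)}), mirroring how learning from transitions yields a deductive rule that reproduces T from S.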
- Book Chapter
1
- 10.1007/978-3-319-40566-7_13
- Jan 1, 2016
This paper studies learning inference by induction. We first consider the problem of learning logical inference rules. Given a set S of propositional formulas and their logical consequences T, the goal is to find deductive inference rules that produce T from S. We show that an induction algorithm LF1T, which learns logic programs from interpretation transitions, successfully produces deductive inference rules from input transitions. Next we consider the problem of learning non-logical inference rules. We address three case studies for learning abductive inference, frame axioms and conversational implicature by induction. The current study provides a preliminary approach to the problem of learning inference to which little attention has been paid in machine learning and ILP.
- Single Book
60
- 10.1016/s0049-237x(97)x8001-2
- Jan 1, 1997
Admissibility of Logical Inference Rules
- Research Article
45
- 10.1016/j.apal.2008.03.001
- May 2, 2008
- Annals of Pure and Applied Logic
Linear temporal logic with until and next, logical consecutions
- Book Chapter
- 10.1007/978-3-0348-0862-0_5
- Jan 1, 2014
From the viewpoint of logic there are two types of knowledge about a specific domain. One is the given knowledge, the axiom system, and the other consists of the logical consequences that can be derived from the axioms. The logical consequences are propositions deduced from the axioms by using inference rules, which are independent of the domain. Therefore, the question whether a given proposition is a logical consequence only depends on the axioms.
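The point that consequence depends only on the axioms (the inference machinery being domain-independent) can be made concrete with a brute-force propositional check. The following Python sketch is an illustrative toy, not taken from the chapter; the function and encoding are assumptions for the example.

```python
from itertools import product

def is_consequence(axioms, goal, atoms):
    """Brute-force propositional consequence: the goal follows from the
    axioms iff it is true in every valuation satisfying all axioms."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(ax(v) for ax in axioms) and not goal(v):
            return False  # counter-model found
    return True

# Axioms: p -> q and p.  Whether q is a consequence depends only on these.
axioms = [lambda v: (not v['p']) or v['q'], lambda v: v['p']]
print(is_consequence(axioms, lambda v: v['q'], ['p', 'q']))      # True
print(is_consequence(axioms[:1], lambda v: v['q'], ['p', 'q']))  # False
```

Dropping the axiom p changes the answer, while nothing else in the checker refers to the domain, which is exactly the independence the chapter describes.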
- Book Chapter
6
- 10.1093/oso/9780195104271.003.0010
- Jul 18, 1996
One of the goals of logical analysis is to construct mathematical models of various practices of deductive inference. Traditionally, this is done by means of giving semantics and rules of inference for carefully specified formal languages. While this has proved to be an extremely fruitful line of analysis, some facets of actual inference are not accurately modeled by these techniques. The example we have in mind concerns the diversity of types of external representations employed in actual deductive reasoning. Besides language, these include diagrams, charts, tables, graphs, and so on. When the semantic content of such non-linguistic representations is made clear, they can be used in perfectly rigorous proofs. A simple example of this is the use of Venn diagrams in deductive reasoning. If used correctly, valid inferences can be made with these diagrams, and if used incorrectly, they can be the source of invalid inferences; there are standards for their correct use. To analyze such standards, one might construct a formal system of Venn diagrams where the syntax, rules of inference, and notion of logical consequence have all been made precise and explicit, as is done in the case of first-order logic. In this chapter, we will study such a system of Venn diagrams, a variation of Shin’s system VENN formulated and studied in Shin [1991] and Shin [1991a] (see Chapter IV of this book). Shin proves a soundness theorem and a finite completeness theorem (if ∆ is a finite set of diagrams, D is a diagram, and D is a logical consequence of ∆, then D is provable from ∆). We extend Shin’s completeness theorem to the general case: if ∆ is any set of diagrams, D is a diagram, and D is a logical consequence of ∆, then D is provable from ∆. We hope that the fairly simple diagrammatic system discussed here will help motivate closer study of the use of more complicated diagrams in actual inference.
- Book Chapter
12
- 10.1007/978-94-017-9673-6_23
- Jan 1, 2015
The paper concerns transparent theories of truth, i.e. theories treating “‘ϕ’ is true” as fully intersubstitutable with ϕ, and examines what the prospects are of maintaining a suitably refined version of transparency in view of the problem posed by the semantic paradoxes. In particular, three kinds of transparent theories—theories denying the law of excluded middle, theories denying the law of non-contradiction and theories denying the metarule of contraction—are compared with respect to the two most prominent semantic paradoxes: the Liar and Curry’s. It is argued that there are versions of the Liar paradox that do not rely on the law of excluded middle or the law of non-contradiction, and that such versions are blocked by the first two kinds of theories only by (implausibly) severing important connections between logical consequence and negation. Similarly, it is argued that Curry’s paradox does not rely on the law of excluded middle or the law of non-contradiction, and that it is blocked by the first two kinds of theories only by (implausibly) severing important connections between logical consequence and the conditional. All the paradoxes discussed are shown however to rely on the metarule of contraction, and so the third kind of theory is revealed to have the advantage of offering a unified solution to such paradoxes.
- Book Chapter
24
- 10.1007/11753728_33
- Jan 1, 2006
As specifications and verifications of concurrent systems employ Linear Temporal Logic (LTL), it is increasingly likely that logical consequence in LTL will be used in the description of computations and parallel reasoning. We consider the linear temporal logic \(\mathcal{LTL^{U,B}_{N,N^{-1}} (Z)}\) extending the standard LTL by the operations B (before) and N⁻¹ (previous). Two sorts of problems are studied: (i) satisfiability and (ii) description of logical consequence in \(\mathcal{LTL^{U,B}_{N,N^{-1}} (Z)}\) via admissible logical consecutions (inference rules). Model checking for LTL is a traditional way of studying such logics; the most popular technique, based on automata, was developed by M. Vardi (cf. [39, 6]). Our paper uses a reduction of logical consecutions and formulas of LTL to consecutions of a uniform form consisting of formulas of temporal degree 1. Based on the technique of Kripke structures, we find necessary and sufficient conditions for a consecution to be not admissible in \(\mathcal{LTL^{U,B}_{N,N^{-1}} (Z)}\). This provides an algorithm recognizing consecutions (rules) admissible in \(\mathcal{LTL^{U,B}_{N,N^{-1}} (Z)}\) by Kripke structures of size linear in the reduced normal forms of the initial consecutions. As an application, this algorithm also solves the satisfiability problem for \(\mathcal{LTL^{U,B}_{N,N^{-1}} (Z)}\).
- Research Article
6
- 10.1111/j.1468-0114.1989.tb00386.x
- Dec 1, 1989
- Pacific Philosophical Quarterly
A CONCEPTION OF TARSKIAN LOGIC* BY GILA SHER. Which logic is the right logic? In a paper so titled, Leslie Tharp poses the question: What properties should a logical system have? In particular: Is standard 1st-order logic the right logic? The question asked in this paper is somewhat less general: Which logic is Tarski’s logic? More precisely: Are the basic principles of Tarskian logic exhausted by the standard 1st-order system, or does it take a new, extended logic to fully realize them? (By ‘Tarskian logic’ I here understand the modern semantic conception of logic as it evolved out of Tarski’s theory.) To answer questions on the adequacy of a system of logic, Tharp says, it is essential that we acquire first an idea of “the role logic is expected to play.” I think Tharp’s point is important, and with this guideline in mind I will turn to Tarski’s early work on the foundations of semantics. I. The Task of Logic, the Origins of Semantics. In “The Concept of Truth in Formalized Languages”, “On the Concept of Logical Consequence” and other writings, Tarski presents logical semantics as providing (i) a definition of the general concept of truth for formalized languages, and (ii) definitions of the logical concepts ‘logical truth’, ‘logical consequence’, ‘consistency’, etc., for such languages. The main purpose of (i) is to secure metalogic against semantic paradoxes. Tarski worried lest the uncritical use of semantic concepts prior to his work concealed an inconsistency: a hidden fallacy would undermine the entire venture. He therefore sought precise, materially as well as formally correct definitions for ‘truth’ and related notions which would serve as a hedge against paradox. This aspect of Tarski’s work is well known.
In “Model Theory Before 1945”, Robert Vaught puts Tarski’s enterprise in a slightly different light:
- Research Article
64
- 10.2178/jsl/1129642119
- Dec 1, 2005
- Journal of Symbolic Logic
We investigate logical consequence in temporal logics in terms of logical consecutions, i.e., inference rules. First, we discuss the question: what does it mean for a logical consecution to be ‘correct’ in a propositional logic. We consider both valid and admissible consecutions in linear temporal logics and discuss the distinction between these two notions. The linear temporal logic LDTL, consisting of all formulas valid in the frame ⟨Z, ≤, ≥⟩ of all integer numbers, is the prime object of our investigation. We describe consecutions admissible in LDTL in a semantic way, via consecutions valid in special temporal Kripke/Hintikka models. Then we state that any temporal inference rule has a reduced normal form which is given in terms of uniform formulas of temporal degree 1. Using these facts and enhanced semantic techniques we construct an algorithm which recognizes consecutions admissible in LDTL. Also, we note that using the same technique it follows that the linear temporal logic L(N) of all natural numbers is also decidable w.r.t. inference rules. So, we prove that both logics LDTL and L(N) are decidable w.r.t. admissible consecutions. In particular, as a consequence, they both are decidable (a known fact), and the given deciding algorithms are explicit.
- Book Chapter
- 10.1093/oso/9780195147209.003.0003
- May 31, 2001
If from Frege’s formal system presented in 1879 we excise quantification over functions, we obtain a set of formal axioms and rules of inference that is complete with respect to quantificational validity. In the opening paragraph of his 1930 (below, page 103) Gödel writes that, when a formal system is introduced, “the question at once arises whether the initially postulated system of axioms and principles of inference is complete, that is, whether it actually suffices for the derivation of every logico-mathematical proposition”. Frege, however, never saw completeness as a problem, and indeed almost fifty years elapsed between the publication of Frege 1879 and that of Hilbert and Ackermann 1928, where the question of the completeness of quantification theory was raised explicitly in print for the first time. Why? Because neither in the tradition in logic that stemmed from Frege through Russell and Whitehead, that is, logicism, nor in the tradition that stemmed from Boole through Peirce and Schröder, that is, algebra of logic, could the question of the completeness of a formal system arise.
- Research Article
4
- 10.1007/s11229-004-6270-y
- Feb 1, 2006
- Synthese
The proof-theoretic analysis of logical semantics undermines the received view of proof theory as being concerned with symbols devoid of meaning, and of model theory as the sole branch of logical theory entitled to access the realm of semantics. The basic tenet of proof-theoretic semantics is that meaning is given by some rules of proofs, in terms of which all logical laws can be justified and the notion of logical consequence explained. In this paper an attempt will be made to unravel some aspects of the issue and to show that this justification as it stands is untenable, for it relies on a formalistic conception of meaning and fails to recognise the fundamental distinction between semantic definitions and rules of inference. It is also briefly suggested that the profound connection between meaning and proofs should be approached by first reconsidering our very notion of proof.
- Research Article
2
- 10.1163/15685349-12341304
- Sep 16, 2015
- Vivarium
While agreeing with Professor d’Ors’ thesis that the notion of logical consequence cannot be exhaustively characterized (though not with his grounds for it), I depart from Professor d’Ors’ conclusion that the very notion of good consequence is primitive and can only be identified with the (incompletable) set of acceptable rules of inference, and from his conviction that modal notions such as necessity and impossibility are equivocal and gain such clarity as they have by their interaction with rules of inference. Inspired by this picture, Professor d’Ors undertook an examination of a number of medieval attempts to analyze the notion of consequence and tried to show how certain developments in the medieval history of logic made sense in the light of debate over such analyses. This paper examines a small fragment of Professor d’Ors’ programme and its relation to some aspects of Jean Buridan’s account of the consequence relation.
- Preprint Article
- 10.31234/osf.io/5mkt6_v1
- Aug 1, 2025
Stoicism was one of the major Hellenistic philosophies, renowned not only for its ethical teachings but also for pioneering work in logic and epistemology. The Stoic school, particularly under Chrysippus of Soli (c. 279–206 BCE), developed a formal propositional logic that in some ways foreshadowed aspects of modern logical systems. This paper provides a rigorous comparative examination of Stoic reasoning and logic vis-à-vis modern logic traditions. We focus on the structure of Stoic logic (including its syllogistic forms and inference rules); the Stoics’ treatment of logical paradoxes such as the Sorites (the “heap” paradox) and the Liar; and the philosophical presuppositions underlying Stoic logic (notably its integration with ontology and determinism). In contrast, modern logic—particularly as developed by Frege, Russell, and the logical positivists of the 20th century—emphasizes formal abstraction and symbolic rigor. The analysis highlights how Stoic logic, though formulated over two millennia ago using ordinary language, anticipates certain ideas found in modern symbolic logic, while nevertheless differing significantly in purpose and context. In doing so, we shall see how key Stoic logicians like Chrysippus innovated logical theory, and how these innovations compare to the work of modern figures such as Gottlob Frege, Bertrand Russell, and the logical positivist school. Throughout the paper, a formal academic tone is maintained. Where appropriate, we include footnote references for further explication or sources, and a complete bibliography is provided at the end. It should be noted that our knowledge of Stoic logic comes from fragmentary sources and later reports, since no complete Stoic logical texts survive from antiquity. Nevertheless, scholarship has reconstructed a coherent picture of Stoic logical theory, which we draw upon.
The goal is to illuminate both the continuities and divergences between the Stoic logical tradition and the modern logical paradigms that dominate contemporary philosophical logic.
- Research Article
- 10.1387/theoria.423
- Nov 25, 2008
- THEORIA
...