Exploring Essential Occurrence

  • Abstract
  • Literature Map
  • Similar Papers
Abstract

An expression occurs essentially in a formula (or sentence) when it occurs in every formula equivalent to the given formula, taking equivalence as logical equivalence relative to the logic in play in the discussion. Setting aside various niceties, this amounts to provable equivalence if that logic is presented via some proof system, and to valid equivalence if the salient characterization is couched in semantic terms. This notion of essential occurrence, or an informal analog thereof, has found its way into numerous philosophical discussions over the past seventy or more years, and here we tease out some issues of specifically logical interest it presents, stretching that description somewhat so as to subsume under it the frequently mooted connection between the essential occurrence of a singular term in a sentence and that sentence’s being genuinely about what the term denotes. This connection, stressed originally by Nelson Goodman, is touched on in several sections in the main body of the paper, but especially in §4, where it is contrasted with an alternative suggestion due to R. Demolombe and L. Fariñas del Cerro. Some issues raised by this and other parts of the discussion are also treated in several longer notes (referred to by means of letters A, B, . . . , K) which are postponed to an Appendix (§5) of roughly the same length as the main body of the paper. This enables readers with a special interest in one or more topics to consult them selectively, while allowing those with no such interest to avoid involvement with the further details supplied in the associated longer note(s).

Similar Papers
  • Conference Article
  • Cited by 14
  • 10.21437/interspeech.2011-278
Speech indexing using semantic context inference
  • Aug 27, 2011
  • Chien-Lin Huang + 3 more

This study presents a novel approach to spoken document retrieval based on semantic context inference for speech indexing. Each recognized term in a spoken document is mapped onto a semantic inference vector containing a bag of semantic terms through a semantic relation matrix. The semantic context inference vector is then constructed by summing up all the semantic inference vectors. Such semantic term expansion and re-weighting make the semantic context inference vector a suitable representation for speech indexing. The experiments were conducted on 1550 anchor news stories collected from 198 hours of Mandarin Chinese broadcast news. The experimental results indicate that the proposed speech indexing using semantic context inference contributes to a substantial performance improvement in spoken document retrieval.

Index Terms: speech indexing, semantic context inference, spoken document retrieval

1. Introduction. Speech is the most convenient medium for human-to-human and human-to-machine interaction. Applications of spoken document retrieval in education, business and entertainment are growing rapidly. Recent attempts include multilingual oral history archive access [1], MIT lecture browsing [2], the management of the National Gallery consisting of speeches, news broadcasts and recordings [3], voice search for spoken dialog and call-routing systems [4], etc. All of them focus on retrieving information that meets users' requirements. It is not straightforward to compare a speech query directly with the spoken documents in the database. In order to construct an efficient and effective retrieval system, state-of-the-art spoken document retrieval (SDR) technologies adopt the transcription obtained from automatic speech recognition for indexing.
The vector space model [5] and probabilistic models (HMM [6], GMM [7], KL-divergence [8]) rely on similarity functions that assume a document is more likely to be relevant to a query if it contains more occurrences of the query terms. The indexing techniques of text-based information retrieval have been widely adopted in spoken document retrieval. However, due to imperfect speech recognition results, out-of-vocabulary words, and ambiguity in homophones and word tokenization, conventional text-based indexing techniques are not always appropriate for spoken document retrieval. Transcription errors may yield undesired semantic and syntactic expressions, thus resulting in inadequate indexing. Several approaches have been proposed to address these problems with various indexing units such as words, sub-words, phones, and so on. The multi-level knowledge indexing approach considers three information sources: the speech transcription, keywords extracted from spoken documents, and hypernyms of the extracted keywords [9]. Hui et al. applied the
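The term-expansion step the abstract describes can be sketched in a few lines. The matrix, vocabulary, and weights below are toy illustrations, not data from the paper: each recognized term indexes a row of a semantic relation matrix, and the rows are summed into one context vector.

```python
import numpy as np

# Illustrative vocabulary of recognized terms and bag of semantic terms.
vocab = ["speech", "index", "news"]
sem_terms = ["audio", "retrieval", "media"]

# Hypothetical semantic relation matrix: M[t, s] weights how strongly
# recognized term t implies semantic term s.
M = np.array([[0.9, 0.3, 0.5],   # "speech"
              [0.1, 0.8, 0.2],   # "index"
              [0.2, 0.4, 0.9]])  # "news"

def semantic_context_vector(doc_terms):
    # Map each recognized term to its semantic inference vector (a row of M),
    # then sum them all to form the semantic context inference vector.
    rows = [M[vocab.index(t)] for t in doc_terms if t in vocab]
    return np.sum(rows, axis=0)

doc = ["speech", "news", "speech"]
v = semantic_context_vector(doc)  # expanded, re-weighted index representation
```

Because repeated terms contribute their rows repeatedly, the sum both expands each term into related semantic terms and re-weights them by frequency, which is the property the abstract claims makes the vector a suitable speech index.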

  • Research Article
  • 10.1075/ijcl.7.1.05eum
The contribution of verbal semantic content towards term recognition
  • Oct 18, 2002
  • International Journal of Corpus Linguistics
  • Eugenia Eumeridou

Automatic term recognition is a natural language processing technology which is gaining increasing prominence in our information-overloaded society. Apart from its use for quick and efficient updating of terminologies and thesauri, it has also been used for machine translation, information retrieval, document indexing and classification as well as content representation. Until very recently, term identification techniques rested solely on the mapping of term linguistic properties onto computational procedures. However, actual terminological practice has shown that context is also important for term identification and interpretation as terms may appear in different forms depending on the situation of use. The aim of this article is to show the importance of contextual information for automatic term recognition by exploiting the relation between verbal semantic content and term occurrence in three subcorpora drawn from the British National Corpus.

  • Book Chapter
  • Cited by 12
  • 10.1007/11548133_12
Using Proofs by Coinduction to Find “Traditional” Proofs
  • Jan 1, 2005
  • Clemens Grabmayer

In the specific situation of formal reasoning concerned with “regular expression equivalence” we address instances of more general questions such as: how can coinductive argumentation be formalised logically and be applied effectively, as well as how is it linked to traditional forms of proof. For statements expressing that two regular expressions are language equivalent, we demonstrate that proofs by coinduction can be formulated in a proof system based on equational logic, where effective proof-search is possible. And we describe a proof-theoretic method for translating derivations in this proof system into a “traditional” axiom system: namely, into a “reverse form” of the axiomatisation of “regular expression equivalence” due to Salomaa. Hereby we obtain a coinductive completeness proof for the traditional proof system.
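The coinductive idea can be sketched independently of the paper's equational proof system: two regular expressions are language-equivalent iff a bisimulation relates them, where the "transitions" are Brzozowski derivatives and the "observations" are nullability. The sketch below is a minimal illustration of that idea, not the paper's formalisation; ACI-normalising alternation keeps the set of derivatives finite so the search terminates.

```python
# Regular expressions as nested tuples:
#   ("empty",) | ("eps",) | ("chr", c) | ("alt", frozenset_of_res)
#   ("cat", r, s) | ("star", r)
EMPTY, EPS = ("empty",), ("eps",)

def alt(*rs):
    # Smart constructor: flatten, drop EMPTY, dedupe (ACI normalisation).
    opts = set()
    for r in rs:
        if r[0] == "alt":
            opts |= r[1]
        elif r != EMPTY:
            opts.add(r)
    if not opts:
        return EMPTY
    if len(opts) == 1:
        return next(iter(opts))
    return ("alt", frozenset(opts))

def cat(r, s):
    if r == EMPTY or s == EMPTY:
        return EMPTY
    if r == EPS:
        return s
    if s == EPS:
        return r
    return ("cat", r, s)

def nullable(r):
    # Does r accept the empty word?
    t = r[0]
    if t in ("eps", "star"):
        return True
    if t == "alt":
        return any(nullable(x) for x in r[1])
    if t == "cat":
        return nullable(r[1]) and nullable(r[2])
    return False  # "empty", "chr"

def deriv(r, a):
    # Brzozowski derivative: the language { w : a.w in L(r) }.
    t = r[0]
    if t == "chr":
        return EPS if r[1] == a else EMPTY
    if t == "alt":
        return alt(*(deriv(x, a) for x in r[1]))
    if t == "cat":
        d = cat(deriv(r[1], a), r[2])
        return alt(d, deriv(r[2], a)) if nullable(r[1]) else d
    if t == "star":
        return cat(deriv(r[1], a), r)
    return EMPTY  # "empty", "eps"

def equivalent(r, s, alphabet):
    # Coinduction: grow a candidate bisimulation of derivative pairs;
    # r and s are equivalent iff no reachable pair disagrees on nullability.
    seen, todo = set(), [(r, s)]
    while todo:
        p, q = todo.pop()
        if (p, q) in seen:
            continue
        if nullable(p) != nullable(q):
            return False
        seen.add((p, q))
        todo.extend((deriv(p, a), deriv(q, a)) for a in alphabet)
    return True
```

For example, `a*` and `a*a*` are related by a bisimulation with only two pairs of derivatives, so `equivalent` certifies them equal after two iterations; this finite fixed-point search is what the paper's proof system makes effective.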

  • Research Article
  • Cited by 2
  • 10.1007/bf02936099
Reduction of Hilbert-type proof systems to the if-then-else equational logic
  • Mar 1, 2004
  • Journal of Applied Mathematics and Computing
  • Joohee Jeong

We present a construction of the linear reduction of Hilbert type proof systems for propositional logic to if-then-else equational logic. This construction is an improvement over the same result found in [4] in the sense that the technique used in the construction can be extended to the linear reduction of first-order logic to if-then-else equational logic.

  • Book Chapter
  • 10.1093/acprof:oso/9780198240778.003.0011
Singular Terms in Belief and in Fictional Contexts
  • Feb 17, 1994
  • Kent Bach

One of the themes of this book is that the notion of denotation is tangential to the semantics of singular terms. This thesis has been supported by defending Russell's theory of descriptions, by developing a version of the description theory of names, and by arguing that pronouns do not denote, not even relatively to contexts of utterance. This chapter suggests that occurrences of singular terms in belief (or other attitude) contexts do not pose the problems that arise for those views that rely on the notion of denotation. The account of the occurrence of singular terms in such contexts will be rather straightforward. That will not prevent it from being controversial, however, for the distinction between referentially transparent and opaque occurrences will be interpreted pragmatically. The theoretical benefit of drawing this distinction at the level of speaker intention rather than of sentence grammar is that, contrary to popular opinion, belief sentences are not systematically ambiguous. As for occurrences of singular terms in fictional contexts, in the final section a pragmatic account will be given of them as well.

  • Research Article
  • Cited by 28
  • 10.1109/tse.1985.232484
Completeness of Proof Systems for Equational Specifications
  • May 1, 1985
  • IEEE Transactions on Software Engineering
  • D.B Macqueen + 1 more

Contrary to popular belief, equational logic with induction is not complete for initial models of equational specifications. Indeed, under some regimes (the Clear specification language and most other algebraic specification languages) no proof system exists which is complete even with respect to ground equations. A collection of known results is presented along with some new observations.

  • Book Chapter
  • Cited by 4
  • 10.1007/978-3-540-24698-5_56
A Proof System and a Decision Procedure for Equality Logic
  • Jan 1, 2004
  • Olga Tveretina + 1 more

Equality logic with uninterpreted functions is used for proving the equivalence or refinement between systems (hardware verification, compiler translation, etc.). Current approaches for deciding this type of formula transform an equality formula into a propositional one of larger size, after which any standard SAT checker can be applied. We give an approach for deciding satisfiability of equality logic formulas (E-SAT) in conjunctive normal form. Central to our approach is a single proof rule called ER. For this single rule we prove soundness and completeness. Based on this rule we propose a complete procedure for E-SAT and prove its correctness. Applying our procedure to a variation of the pigeonhole formula yields polynomial complexity, contrary to earlier approaches to E-SAT.

Keywords: equality logic, satisfiability, resolution

  • Book Chapter
  • Cited by 26
  • 10.1007/978-3-642-32759-9_32
From Hoare Logic to Matching Logic Reachability
  • Jan 1, 2012
  • Grigore Roşu + 1 more

Matching logic reachability has been recently proposed as an alternative program verification approach. Unlike Hoare logic, where one defines a language-specific proof system that needs to be proved sound for each language separately, matching logic reachability provides a language-independent and sound proof system that directly uses the trusted operational semantics of the language as axioms. Matching logic reachability thus has a clear practical advantage: it eliminates the need for an additional semantics of the same language in order to reason about programs, and implicitly eliminates the need for tedious soundness proofs. What is not clear, however, is whether matching logic reachability is as powerful as Hoare logic. This paper introduces a technique to mechanically translate Hoare logic proof derivations into equivalent matching logic reachability proof derivations. The presented technique has two consequences: first, it suggests that matching logic reachability has no theoretical limitation over Hoare logic; and second, it provides a new approach to prove Hoare logics sound.

Keywords: transition system, operational semantics, proof system, reduction rule, proof rule

  • Research Article
  • Cited by 5
  • 10.1111/1467-9973.00119
Style in Philosophy: Part I
  • Jul 1, 1999
  • Metaphilosophy
  • Manfred Frank

In this article, I attempt to restore the philosophical significance of that nonformalizable, noniterable, “singular” element of natural language that I call “style.” I begin by critically addressing the exclusion of such instances of natural language by both semantics-oriented logical analysis and a restricted variation of structuralist linguistics. Despite the obvious advantages – with regard to style – of “pragmatic” approaches to language, such pragmatism merely returns to rule-determination in the guise of “normativity.” Although style by definition resists any kind of rule-determination – whether posed in terms of semantics or intersubjective regulations of speech-acts – there can be no consideration of language that ignores the persistence of style in natural language. In terms of cognition, any discursive agent understands more than allowed by either semantics or speech-act theory. I ascribe this element of excessive signification to the role of style. My principal thesis is twofold: (1) a hermeneutic approach (exemplified by Schleiermacher) to literature should reveal the heuristically decisive role played by style in philosophy; and, more radically still, (2) style, in fact, may be crucially determinative of philosophical discourse in general. I suggest that a closer scrutiny of the lesser-known works of Ludwig Wittgenstein, conventionally regarded as having dreamt of a “philosophy without style,” may consolidate the restoration of style's philosophical import.

  • Research Article
  • Cited by 5
  • 10.1016/s1571-0661(04)80904-6
Deforestation, program transformation, and cut-elimination
  • May 1, 2001
  • Electronic Notes in Theoretical Computer Science
  • Robin Cockett


  • Research Article
  • Cited by 5
  • 10.7146/brics.v2i22.19924
A Semantic Theory for Value–Passing Processes Late Approach Part II: A Behavioural Semantics and Full Abstractness
  • Jan 22, 1995
  • BRICS Report Series
  • Anna Ingólfsdóttir

This is the second of two companion papers on a semantic theory for communicating processes with values based on the late approach. In the first one, [Ing95], we explained the general idea of the late semantic approach. Furthermore we introduced a general syntax for value-passing process algebra based on the late approach and a general class of denotational models for these languages in the Scott-Strachey style. Then we defined a concrete language, CCSL, which is an extension of the standard CCS with values according to the late approach. We also provided a denotational model for it, which is an instantiation of the general class. This model is a direct extension of the model given by Abramsky [Abr91] to model the pure calculus SCCS. Furthermore we gave an axiomatic semantics by means of a proof system based on inequations and proved its soundness and completeness with respect to the denotational semantics.

In this paper we will give a behavioural semantics to the language CCSL in terms of a Plotkin-style operational semantics and a bisimulation-based preorder. Our main aim is to relate the behavioural view of processes we present here to the domain-theoretical one developed in the companion paper [Ing95]. In the Scott-Strachey approach an infinite process is obtained as a chain of finite and possibly partially specified processes. The completely unspecified process is given by the bottom element of the domain. An operational interpretation of this approach is to take divergence into account and give the behavioural semantics in terms of a prebisimulation or bisimulation preorder [Hen81, Wal90] rather than by the standard bisimulation equivalence [Par81, Mil83].

One of the results in the pure case presented in [Abr91] is that the denotational model given in that reference is fully abstract with respect to the "finitely observable" part of the bisimulation preorder, but not with respect to the bisimulation preorder itself, which turns out to be too fine. Intuitively this is due to the algebraicity of the model and the fact that the finite elements in the model are denotable by syntactically finite terms. The algebraicity implies that the denotational semantics of a process is completely decided by the semantics of its syntactically finite approximations, whereas the same cannot be said about the bisimulation preorder. In fact we need experiments of infinite depth to investigate bisimulation, while this is not the case for the preorder induced by the model as explained above. An obvious consequence of this observation is that, in general, a bisimulation preorder cannot be expected to be modelled by an algebraic cpo, given that the compact elements are denotable by syntactically finite elements.

In [Hen81] Hennessy defined a term model for SCCS. This model is ω-algebraic and fails to be fully abstract with respect to the strong bisimulation preorder. In the same paper the author introduces the notions of "the finitary part of a relation" and "a finitary relation". The finitary part of a relation R over processes, denoted by R^F, is defined by

  p R^F q  iff  ∀d: d R p ⇒ d R q,

where d ranges over the set of syntactically finite processes. A relation R is finitary if R^F = R. Intuitively this property may be interpreted as algebraicity at the behavioural level, provided that syntactically finite terms are interpreted as compact elements in the denotational model; if a relation is finitary then it is completely decided by the syntactically finite elements.

In both [Hen81] and [Abr91] the full abstractness of the respective denotational semantics with respect to ⊑F is shown. In [Abr91] it is also shown that if the language is sort-finite and satisfies a kind of finite branching condition, then ⊑F = ⊑ω, where ⊑ω is the strong bisimulation preorder induced by experiments of finite depth, i.e. the preorder obtained by iterated application of the functional that defines the bisimulation. Note that in general the preorder ⊑ is strictly finer than the preorder ⊑ω. However, if the transition system is image finite, i.e. if the number of arcs leading from a fixed state and labelled with a fixed action is finite, then these two preorders coincide.

As mentioned above, the main aim of this paper is to give a bisimulation-based behavioural semantics for our language CCSL from [Ing95]. To reflect the late approach, the operational semantics will be given in terms of an applicative transition system, a concept that is a modification of that defined in [Abr90]. We generalize the notion of bisimulation [Par81, Mil83] to apply to applicative transition systems and introduce a preorder motivated by Abramsky's applicative bisimulation [Abr90]. For this purpose we first introduce the notion of strong applicative prebisimulation and the corresponding strong applicative bisimulation preorder. Following standard practice, this preorder is obtained as the largest fixed point of a suitably defined monotonic functional. We show by an example that this preorder is not finitary in the sense described above and is strictly finer than the preorder induced by the model.

Next we define the strong applicative ω-bisimulation preorder in the standard way, by iterative application of the functional that induces the bisimulation preorder. This yields a preorder which is still too fine to match the preorder induced by the denotational model; this will be shown by an example. Intuitively the reason is that we still need infinite experiments to decide the operational preorder, now because of an infinite breadth due to the possibly infinite number of values that have to be checked.

Then we give a suitable definition of the "finitary part" of the bisimulation preorder so as to meet the preorder induced by the denotational model. We recall that in [Ing95] we defined the so-called compact terms as the syntactically finite terms which only use a finite number of values in a non-trivial way. We also showed that these terms correspond exactly to the compact elements in the denotational model, in the sense that an element in the model is compact if and only if it can be denoted by a compact term. This motivates a definition of the finitary part, ⊑F, of the bisimulation preorder ⊑ by

  p ⊑F q  iff  ∀c: c ⊑ p ⇒ c ⊑ q,

where c ranges over the set of syntactically compact terms. We also define yet another preorder, ⊑fω, a coarser version of ⊑ω in which we only consider a finite number of values at each level in the iterative definition of the preorder. Here it is vital that the set of values is countable and can be enumerated as Val = {v1, v2, ...}. Thus in the definition of ⊑f1 we only test whether the defining constraints of the preorder hold when the only possible input and output value is v1, and in general in the definition of ⊑fn we test the constraints for the first n values only. (We would like to point out that this idea originally appears in [HP80].) It turns out that ⊑fω is the finitary part of ⊑ in our new sense and that the model is fully abstract with respect to ⊑fω. We will prove both these results in this paper using techniques similar to those used by Hennessy in the above-mentioned reference [Hen81].

The structure of the paper is as follows: In Section 2 we give a short survey of the results from the companion paper [Ing95] needed in this study. The definition of the operational semantics and the notion of applicative bisimulation are the subject of Section 3. Section 4 is devoted to the analysis of the preorder and the definition of the value-finitary preorder ⊑fω. In Section 5 we define the notions of the finitary part of a relation and of a finitary relation over processes. In the same section we prove that the preorder ⊑fω is finitary and that it coincides with the finitary part of the preorder ⊑. Finally we prove the soundness and the completeness of the proof system with respect to the resulting preorder. The full abstractness of the denotational semantics for CCSL, given in [Ing95], then follows from the soundness and the completeness of the proof system with respect to the denotational semantics. In Section 6 we give some concluding remarks.

  • Research Article
  • Cited by 66
  • 10.1016/0304-3975(84)90112-9
Synchronization trees
  • Jan 1, 1984
  • Theoretical Computer Science
  • Glynn Winskel


  • Conference Article
  • 10.1109/icnc.2008.174
Probabilistic Modal Kleene Algebra and Hoare-Style Logic
  • Jan 1, 2008
  • Rui Qiao + 2 more

Modal Kleene algebras (MKA) formalize the behavior of regular programs. However, MKA is incapable of verifying regular programs with probabilistic information, which have richer and more powerful expressiveness than ordinary regular programs. We define an extension of MKA, called probabilistic modal Kleene algebra (PMKA), for verifying regular programs with probability in a purely algebraic approach. We give a relational semantics for regular programs with probability. Then we modify the existing probabilistic Hoare-style logic into a proof system named PHLnp for probabilistic regular programs without iteration, and prove the soundness of the modified system in terms of the relational semantics. Finally, we show that PHLnp is subsumed by PMKA.

  • Research Article
  • Cited by 4
  • 10.1007/s10992-020-09577-2
What is the Meaning of Proofs?
  • Oct 22, 2020
  • Journal of Philosophical Logic
  • Sara Ayhan

The origins of proof-theoretic semantics lie in the question of what constitutes the meaning of the logical connectives and its response: the rules of inference that govern the use of the connective. However, what if we go a step further and ask about the meaning of a proof as a whole? In this paper we address this question and lay out a framework to distinguish sense and denotation of proofs. Two questions are central here. First of all, if we have two (syntactically) different derivations, does this always lead to a difference, firstly, in sense, and secondly, in denotation? The other question is about the relation between different kinds of proof systems (here: natural deduction vs. sequent calculi) with respect to this distinction. Do the different forms of representing a proof necessarily correspond to a difference in how the inferential steps are given? In our framework it will be possible to identify denotation as well as sense of proofs not only within one proof system but also between different kinds of proof systems. Thus, we give an account to distinguish a mere syntactic divergence from a divergence in meaning and a divergence in meaning from a divergence of proof objects analogous to Frege’s distinction for singular terms and sentences.

  • Conference Article
  • Cited by 2
  • 10.29007/vr7n
The Potential of Interference-Based Proof Systems
  • Nov 8, 2017
  • Marijn Heule + 1 more

In our extended abstract, we try to motivate researchers to investigate the potential of proof systems that modify a given set of formulas (e.g., a set of clauses in propositional logic) in a way that preserves satisfiability but not necessarily logical equivalence. We call such modifications interferences, because they can change the models of a given set of formulas.
