Abstract
The maximum entropy principle is widely used to determine non-committal probabilities on a finite domain, subject to a set of constraints, but its application to continuous domains is notoriously problematic. This paper concerns an intermediate case, where the domain is a first-order predicate language. Two strategies have been put forward for applying the maximum entropy principle on such a domain: (i) applying it to finite sublanguages and taking the pointwise limit of the resulting probabilities as the size n of the sublanguage increases; (ii) selecting a probability function on the language as a whole whose entropy on finite sublanguages of size n is not dominated by that of any other probability function for sufficiently large n. The entropy-limit conjecture says that, where these two approaches yield determinate probabilities, they yield the same probabilities. If the conjecture is true, it would provide a boost to the project of seeking a single canonical inductive logic—a project which faltered when Carnap's attempts in this direction succeeded only in determining a continuum of inductive methods. The truth of the conjecture would also boost the project of providing a canonical characterisation of normal or default models of first-order theories. Hitherto, the entropy-limit conjecture has been verified for languages which contain only unary predicate symbols, and also for the case in which the constraints can be captured by a categorical statement of Σ1 quantifier complexity. This paper shows that the entropy-limit conjecture also holds for categorical statements of Π1 complexity, for various non-categorical constraints, and in certain other general situations.
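The finite-domain case the abstract starts from can be illustrated numerically. The sketch below (not from the paper; the four-state domain and the single constraint P({w1, w2}) = 0.8 are illustrative assumptions) shows that the maximum entropy distribution spreads probability uniformly within each cell of the constraint, and checks this by comparing its entropy against randomly sampled feasible distributions.

```python
# Illustrative sketch of the maximum entropy principle on a finite domain.
# Assumed setup: four states w1..w4 and one constraint P({w1, w2}) = 0.8.
# Maxent then gives the uniform-within-cells distribution (0.4, 0.4, 0.1, 0.1).
import math
import random

def entropy(p):
    """Shannon entropy (natural log) of a probability vector."""
    return -sum(x * math.log(x) for x in p if x > 0)

maxent = [0.4, 0.4, 0.1, 0.1]

# Sample other distributions satisfying the constraint and confirm that
# none of them has higher entropy than the maxent solution.
random.seed(0)
for _ in range(1000):
    a = random.uniform(0, 0.8)   # split 0.8 between w1 and w2
    b = random.uniform(0, 0.2)   # split 0.2 between w3 and w4
    q = [a, 0.8 - a, b, 0.2 - b]
    assert entropy(q) <= entropy(maxent) + 1e-12

print(round(entropy(maxent), 3))  # ≈ 1.194
```

Strategy (i) in the abstract amounts to solving such a finite optimisation on each sublanguage of size n and taking the pointwise limit of the solutions as n grows.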
Highlights
Inductive logic seeks to determine how much certainty to attach to a conclusion proposition ψ, given premiss propositions φ1, …, φk to which are attached measures of certainty X1, …, Xk respectively.
The second is to consider inference processes other than the maximum entropy principle, which might be relevant to questions other than the search for a canonical inductive logic or a canonical characterisation of normal models.
Several such inference processes have been proposed and studied in the literature, for example Centre of Mass, Minimum Distance, and the spectrum of inference processes based on generalised Rényi entropies [45].
Summary
Inductive logic seeks to determine how much certainty to attach to a conclusion proposition ψ, given premiss propositions φ1, …, φk to which are attached measures of certainty X1, …, Xk respectively. Some approaches take the default model to be the ‘average’ model and try to characterise this in terms of the distribution of models (see for example [5,6,18,19,20]); this leads to the limiting centre of mass assignment (see for example [34,38,39]). Another way of interpreting this question was posed in [39] and studied further in [40,42,43]: given a finite (consistent) set T of first-order axioms from a language L and a structure M with domain {t1, t2, …}, when should M be regarded as a normal or default model of T? Another approach, followed in [40,42,43] and with which we will be concerned here, characterises a default model as being maximally uninformative with respect to the sentences of the language not implied by T. The results of this paper are relevant to the characterisation of normal models.