Kratzerian ‘want’, decision theory, and upward entailment
Abstract: Kyle Blumberg has recently argued that (i) the ideal-worlds account of desire, according to which for S to want p is for all of S's top-ranked worlds to be p-worlds, has difficulty accounting for certain cases involving the ascribee's ignorance. He takes these cases to be (ii) a reason to disprefer the Kratzerian account of 'want' to its rivals, and (iii) a reason to doubt that desire ascriptions are upward entailing. I challenge all three claims. Along the way, I motivate and develop a Kratzer-style account, according to which what a subject wants depends not just on their information and preferences, but also on the decision rule they embrace and the salient decision problem.
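For concreteness, the ideal-worlds account under discussion can be stated schematically; the notation below (Dox_S for the subject's belief worlds, maximization over a preference order) is my gloss, not the paper's own:

```latex
% Ideal-worlds account of desire (a sketch; notation is assumed, not the paper's):
% Dox_S  = the set of worlds compatible with S's beliefs,
% \geq_S = S's preference order over worlds.
\[
  \text{$S$ wants $p$} \iff
  \max_{\geq_S}\!\bigl(\mathrm{Dox}_S\bigr) \subseteq
  \{\, w : p \text{ is true at } w \,\}
\]
```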
- Research Article
20
- 10.1093/jee/79.6.1421
- Dec 1, 1986
- Journal of Economic Entomology
Pest control specialists construct and use decision rules to guide pest control actions. These decision rules usually consist of a sampling protocol for use in estimating or classifying pest densities and guidelines for actions based on these estimates. The bases for these guidelines are economic or action thresholds. The behavior of a pest control decision rule is rarely appraised in any formal manner. Decision theory is an avenue for appraising and comparing decision rules and a method whereby sampling intensity and guidelines for taking pest control actions can be considered jointly. The value of sample information, from the perspective of improved pest management, is calculated using decision theory. This value is used as an objective function for decision rule formulation and appraisal. Methods for calculating the value of sample information are presented and illustrations are provided.
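The abstract's central quantity, the value of sample information, can be made concrete with a toy preposterior calculation; the prior, payoffs and sampling likelihoods below are invented for illustration and are not from the paper:

```python
# Toy "value of sample information" calculation in the spirit of the abstract.
# All numbers (prior, payoffs, sampling likelihoods) are invented.

states = ["high", "low"]            # pest density
prior = {"high": 0.3, "low": 0.7}
payoff = {                          # payoff[act][state]
    "spray":    {"high": -10, "low": -10},
    "no_spray": {"high": -50, "low":   0},
}
likelihood = {                      # P(sample result | state)
    "positive": {"high": 0.8, "low": 0.2},
    "negative": {"high": 0.2, "low": 0.8},
}

def best_value(belief):
    """Expected payoff of the best act under a belief over states."""
    return max(sum(belief[s] * payoff[a][s] for s in states)
               for a in payoff)

# Value of acting on the prior alone.
v_prior = best_value(prior)

# Preposterior value: observe the sample, update, then act optimally.
v_sample = 0.0
for result, lik in likelihood.items():
    p_result = sum(prior[s] * lik[s] for s in states)
    posterior = {s: prior[s] * lik[s] / p_result for s in states}
    v_sample += p_result * best_value(posterior)

print(f"value of sample information = {v_sample - v_prior:.2f}")  # 3.20
```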
- Book Chapter
19
- 10.1016/b978-1-4831-8253-7.50005-4
- Jan 1, 1967
- Mathematical Statistics
CHAPTER 1 - Game Theory and Decision Theory
- Research Article
5
- 10.1016/j.jtbi.2015.07.031
- Aug 11, 2015
- Journal of Theoretical Biology
On the Bayesness, minimaxity and admissibility of point estimators of allelic frequencies
- Research Article
45
- 10.1215/00318108-2009-024
- Dec 9, 2009
- The Philosophical Review
It is a platitude among decision theorists that agents should choose their actions so as to maximize expected value. But exactly how to define expected value is contentious. Evidential decision theory (henceforth EDT), causal decision theory (henceforth CDT), and a theory proposed by Ralph Wedgwood that this essay will call benchmark theory (BT) all advise agents to maximize different types of expected value. Consequently, their verdicts sometimes conflict. In certain famous cases of conflict (medical Newcomb problems), CDT and BT seem to get things right, while EDT seems to get things wrong. In other cases of conflict, including some recent examples suggested by Andy Egan, EDT and BT seem to get things right, while CDT seems to get things wrong. In still other cases, EDT and CDT seem to get things right, while BT gets things wrong. It's no accident, this essay claims, that all three decision theories are subject to counterexamples. Decision rules can be reinterpreted as voting rules, where the voters are the agent's possible future selves. The problematic examples have the structure of voting paradoxes. Just as voting paradoxes show that no voting rule can do everything we want, decision-theoretic paradoxes show that no decision rule can do everything we want. Luckily, the so-called “tickle defense” establishes that EDT, CDT, and BT will do everything we want in a wide range of situations. Most decision situations, this essay argues, are analogues of voting situations in which the voters unanimously adopt the same set of preferences. In such situations, all plausible voting rules and all plausible decision rules agree.
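The voting-paradox structure the essay appeals to is easy to exhibit; below is a standard Condorcet profile (a textbook toy example, not one drawn from the essay), with the three voters read as an agent's possible future selves:

```python
# Condorcet's voting paradox: pairwise majority preferences can cycle.
# The three "voters" stand in for the agent's possible future selves.
from itertools import permutations

# Each voter's ranking, best first (a standard textbook profile).
voters = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a majority of voters rank x above y."""
    wins = sum(v.index(x) < v.index(y) for v in voters)
    return wins > len(voters) / 2

for x, y in permutations("ABC", 2):
    if majority_prefers(x, y):
        print(f"majority prefers {x} over {y}")
# Output shows A > B, B > C and C > A: a cycle, so no option is stable.
```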
- Book Chapter
8
- 10.1007/978-3-319-06257-0_16
- Jan 1, 2014
Making the right decisions in time is one of the key tasks in every business. In this context, decision theory fosters decision-making based on well-defined decision rules. The latter evaluate a given set of input parameters and utilize evidenced data in order to determine an optimal alternative out of a given set of choices. In particular, decision rules are relevant in the context of business processes as well. Contemporary process modeling languages, however, have not incorporated decision theory yet, but mainly consider rather simple, guard-based decisions that refer to process-relevant data. To remedy this drawback, this paper introduces an approach that allows embedding decision problems in business process models and applying decision rules to deal with them. As a major benefit, it becomes possible to automatically determine optimal execution paths at run time.
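As a minimal sketch of the idea, a decision point in a process model can be resolved by a decision rule (here, maximizing expected value over evidenced outcome data) rather than by a hard-coded guard; the branch names and numbers are hypothetical:

```python
# Hypothetical decision point in a business process: choose the execution
# branch maximizing expected value, instead of following a fixed guard.

branches = {
    # branch -> list of (probability, value) outcomes, from evidenced data
    "ship_express":  [(0.9, 120.0), (0.1, -40.0)],
    "ship_standard": [(0.7,  90.0), (0.3,  10.0)],
}

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

best = max(branches, key=lambda b: expected_value(branches[b]))
print(best)  # ship_express: 0.9*120 + 0.1*(-40) = 104, versus 66
```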
- Research Article
3
- 10.5282/ubm/epub.74222
- Nov 17, 2020
Kruschke [2018] proposes the so-called HDI+ROPE decision rule for accepting or rejecting a parameter null value for practical purposes, using a region of practical equivalence (ROPE) around the null value and the posterior highest density interval (HDI) in the context of Bayesian statistics. Further, he mentions the so-called ROPE-only decision rule in his supplementary material, which is based on the ROPE but uses the full posterior information instead of the HDI. Of course, if it is about formalizing and guiding decisions, then statistical decision theory is the framework to rely on, and this technical report elaborates the decision-theoretic foundations of both decision rules. It appears that the foundation of the HDI+ROPE decision rule is rather artificial compared to that of the ROPE-only decision rule, such that the latter might be characterized as being closer to the underlying practical purpose than the former. Still, the ROPE-only decision rule employs a truly arbitrary, and thus debatable, choice of loss values.
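Both rules are easy to state operationally. The sketch below is a minimal rendering of the descriptions above; the 95% mass, the ROPE bounds and the ROPE-only thresholds are conventional defaults I have assumed, not prescriptions from the report:

```python
import numpy as np

def hdi(samples, mass=0.95):
    """Narrowest interval containing `mass` of the posterior samples."""
    s = np.sort(samples)
    n = len(s)
    k = int(np.ceil(mass * n))
    widths = s[k - 1:] - s[: n - k + 1]
    i = int(np.argmin(widths))
    return s[i], s[i + k - 1]

def hdi_rope_rule(samples, rope=(-0.1, 0.1)):
    """HDI+ROPE rule: decide by where the HDI falls relative to the ROPE."""
    lo, hi = hdi(samples)
    if rope[0] <= lo and hi <= rope[1]:
        return "accept null"
    if hi < rope[0] or lo > rope[1]:
        return "reject null"
    return "undecided"

def rope_only_rule(samples, rope=(-0.1, 0.1), threshold=0.95):
    """ROPE-only rule: decide by the posterior mass inside the ROPE."""
    inside = np.mean((samples >= rope[0]) & (samples <= rope[1]))
    if inside >= threshold:
        return "accept null"
    if inside <= 1 - threshold:
        return "reject null"
    return "undecided"

rng = np.random.default_rng(0)
posterior = rng.normal(0.02, 0.03, size=10_000)  # toy posterior sample
print(hdi_rope_rule(posterior), "|", rope_only_rule(posterior))
```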
- Book Chapter
25
- 10.1093/acprof:oso/9780198717928.003.0003
- Jun 23, 2016
This essay makes the case for, in the phrase of Angelika Kratzer, packing the fruits of the study of rational decision-making into our semantics for deontic modals: specifically, for parametrizing the truth-condition of a deontic modal to things like decision problems and decision theories (and ultimately also things like moral and epistemological views). It then knocks that case down. While the fundamental relation of the semantic theory must relate deontic modals to things like decision problems and theories, this semantic relation cannot be intelligibly understood as representing the conditions under which a deontic modal is true. Rather, it represents the conditions under which it is accepted by a semantically competent agent. This in turn motivates a reorientation of the whole of semantic theorizing, away from the truth-conditional paradigm, toward a form of Expressivism.
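The parametrization at issue can be put schematically as follows; this formulation is my gloss on the proposal under discussion, not the essay's own notation:

```latex
% Schematic truth-condition parametrized to a decision problem D and a
% decision theory T (my gloss, not the essay's notation):
\[
  \llbracket \text{ought } \phi \rrbracket^{w,\,\mathcal{D},\,T} = 1
  \iff
  \forall a \in T(\mathcal{D}) : a \models \phi
\]
% where T(D) is the set of acts T deems permissible in D.
```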
- Book Chapter
- 10.1017/cbo9780511800917.003
- May 14, 2009
Before you make a decision you have to determine what to decide about. Or, to put it differently, you have to specify what the relevant acts, states and outcomes are. Suppose, for instance, that you are thinking of taking out fire insurance on your home. Perhaps it costs $100 to take out insurance on a house worth $100,000 and you ask, “Is it worth it?” Before you decide, you have to get the formalization of the decision problem right. In this case, it seems that you face a decision problem with two acts, two states, and four outcomes. It is helpful to visualize this information in a decision matrix; see Table 2.1. Modeling one's decision problem in a formal representation is essential in decision theory, because decision rules are only defined relative to a formal representation. For example, it makes no sense to say that the principle of maximizing expected value recommends one act rather than another unless there is a formal listing of the available acts, the possible states of the world and the corresponding outcomes. However, instead of visualizing information in a decision matrix it is sometimes more convenient to use a decision tree. The decision tree in Figure 2.1 is equivalent to the matrix in Table 2.1. The square represents a choice node, and the circles represent chance nodes. At the choice node the decision maker decides whether to go up or down in the tree. If there are more than two acts to choose from, we simply add more lines. At the chance nodes nature decides which line to follow. The rightmost boxes represent the possible outcomes. Decision trees are often used for representing sequential decisions, i.e., decisions that are divided into several separate steps. (Example: In a restaurant, you can either order all three courses before you start to eat, or divide the decision-making process into three separate decisions taken at three points in time. If you opt for the latter approach, you face a sequential decision problem.) To represent a sequential decision problem in a tree, one simply adds new choice and chance nodes to the right of the existing leaves.
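The insurance example can be worked through directly. The sketch below mirrors the two-act, two-state matrix of Table 2.1; since the excerpt fixes no fire probability, the value used is an illustrative assumption:

```python
# Decision matrix for the fire-insurance example: two acts, two states,
# four outcomes (net positions in dollars).
matrix = {
    "insure":      {"fire": -100,     "no_fire": -100},  # premium only
    "dont_insure": {"fire": -100_000, "no_fire": 0},     # lose the house
}
p_fire = 0.0005  # illustrative assumption; the excerpt gives no probability

for act, outcomes in matrix.items():
    ev = p_fire * outcomes["fire"] + (1 - p_fire) * outcomes["no_fire"]
    print(f"{act}: expected value = {ev:.2f}")
# insure: -100.00; dont_insure: -50.00. Maximizing expected value thus
# recommends not insuring at this (assumed) fire probability.
```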
- Book Chapter
2
- 10.1007/978-3-319-32283-4_11
- Aug 12, 2016
We draw together methodologies from game theory, agent-based modelling, decision theory, and uncertainty analysis to explore the process of decision making in the context of pregnant women disclosing their drinking behaviour to their midwives. We employ a game-theoretic framework to define a signalling game. The game represents a scenario where pregnant women decide the extent to which they disclose their drinking behaviours to their midwives, and midwives employ the information provided to decide whether to refer their patients for costly specialist treatment. This game is then recast as two games played against “nature”, to permit the use of a decision-theoretic approach where both classes of agent use simple rules to decide their moves. Four decision rules are explored: a lexicographic heuristic which considers only the link between moves and payoffs, a Bayesian risk minimisation agent that uses the same information, a more complex Bayesian risk minimiser with full access to the structure of the decision problem, and a Cumulative Prospect Theory (CPT) rule. In simulation, we recreate two key qualitative trends described in the midwifery literature for all the decision models, and investigate the impact of introducing a simple form of social learning within agent groups. Finally, a global sensitivity analysis using Gaussian Emulation Machines (GEMs) is conducted to compare the response surfaces of the different decision rules in the game.
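Two of the four rules can be caricatured on a toy version of the referral decision; the losses and probabilities below are invented, and the lexicographic rule is reduced to "act on the single most likely state", which is cruder than the paper's actual heuristic:

```python
# Toy referral decision for the midwife: refer (costly) or not, given a
# belief that the patient is a drinker. Numbers are invented.

# loss[act][true_state]
loss = {
    "refer":    {"drinker": 1.0, "non_drinker": 3.0},  # treatment cost
    "no_refer": {"drinker": 9.0, "non_drinker": 0.0},  # missed case
}

def lexicographic(p_drinker):
    """Crude heuristic: look only at the single most likely state."""
    state = "drinker" if p_drinker >= 0.5 else "non_drinker"
    return min(loss, key=lambda a: loss[a][state])

def bayes_risk_minimizer(p_drinker):
    """Minimize expected loss over both states."""
    def risk(a):
        return (p_drinker * loss[a]["drinker"]
                + (1 - p_drinker) * loss[a]["non_drinker"])
    return min(loss, key=risk)

for p in (0.1, 0.3, 0.6):
    print(p, lexicographic(p), bayes_risk_minimizer(p))
# At p = 0.3 the rules diverge: the heuristic ignores the asymmetric
# losses and does not refer, while the Bayesian rule already refers.
```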
- Research Article
167
- 10.1037/0096-3445.121.2.177
- Jan 1, 1992
- Journal of Experimental Psychology: General
This article describes a general model of decision rule learning, the rule competition model, composed of 2 parts: an adaptive network model that describes how individuals learn to predict the payoffs produced by applying each decision rule for any given situation and a hill-climbing model that describes how individuals learn to fine-tune each rule by adjusting its parameters. The model was tested and compared with other models in 3 experiments on probabilistic categorization. The first experiment was designed to test the adaptive network model using a probability learning task, the second was designed to test the parameter search process using a criterion learning task, and the third was designed to test both parts of the model simultaneously by using a task that required learning both category rules and cutoff criteria. Probabilistic categorization is an important class of decision problems in which stimuli are sampled from a number of categories and the decision maker must decide the category from which each stimulus was sampled. Payoffs depend on both the true category membership and the decision maker's response for each stimulus. Examples are found in all areas of psychology: In perception, auditory or visual stimuli are categorized as signal or noise, and in memory recognition, verbal items are categorized as old or new. In cognition, exemplar patterns are assigned to conceptual categories, and in industrial psychology, job applicants are categorized as acceptable or unacceptable. Finally, in clinical psychology, patient symptom patterns are assigned to disease categories. For the past 35 years, the general theory of signal detection (Peterson, Birdsall, & Fox, 1954) has served as the most prominent model of probabilistic categorization. It has been successfully applied to all of the areas of psychology mentioned (see Green & Swets, 1966, for perception; Bernbach, 1967, and Wickelgren & Norman, 1966, for memory recognition; Ashby & Gott, 1988, for conceptual categorization; Cronbach & Gleser, 1965, for industrial psychology; and Swets & Pickett, 1982, for medical diagnosis). The core idea is that (a) each stimulus is represented as a point within a multidimensional stimulus space, (b) this multidimensional space is partitioned into response regions, and (c) a stimulus is categorized according to the region within which it lies. Simple decision rules are normally used to describe how the stimulus space is partitioned. For example, unidimensional stimuli can be divided into two categories by either a cutoff rule (all points above a cutoff go into one category) or by an interval rule (all points inside an interval go into one category). Two-dimensional stimuli can be partitioned into
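The two unidimensional rules mentioned at the end can be written down directly; a minimal sketch, with arbitrary cutoff and interval values:

```python
# Simple decision rules for unidimensional stimuli, as described above.

def cutoff_rule(x, cutoff=0.0):
    """All points above the cutoff go into category A."""
    return "A" if x > cutoff else "B"

def interval_rule(x, low=-1.0, high=1.0):
    """All points inside the interval go into category A."""
    return "A" if low < x < high else "B"

for x in (-2.0, 0.5, 3.0):
    print(x, cutoff_rule(x), interval_rule(x))
# -2.0 -> B, B;  0.5 -> A, A;  3.0 -> A, B
```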
- Conference Article
11
- 10.1109/cvprw.2019.00180
- Jun 1, 2019
Neural networks for semantic segmentation can be seen as statistical models that provide, for each pixel of an image, a probability distribution over predefined classes. The predicted class is then usually obtained by the maximum a-posteriori probability (MAP), which is known as the Bayes rule in decision theory. From decision theory we also know that the Bayes rule is optimal with respect to the simple symmetric cost function. It therefore weights each type of confusion between two different classes equally; e.g., given images of urban street scenes, there is no distinction in the cost function between the network confusing a person with a street and confusing a building with a tree. Intuitively, there might be confusions of classes that are more important to avoid than others. In this work, we want to raise awareness of the possibility of explicitly defining confusion costs, and of the associated ethical difficulties when it comes down to providing numbers. We define two cost functions from different extreme perspectives, an egoistic and an altruistic one, and show how safety-relevant quantities like precision / recall and (segment-wise) false positive / negative rate change when interpolating between MAP, egoistic and altruistic decision rules.
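The move from MAP to a cost-sensitive Bayes rule amounts to one matrix product over the per-pixel posteriors; a minimal NumPy sketch, where the 3-class cost matrix is invented rather than being the paper's egoistic or altruistic one:

```python
import numpy as np

# probs: per-pixel posterior over classes, shape (H, W, C).
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=(4, 4))   # toy 4x4 image, 3 classes

# cost[i, j] = cost of predicting class j when the true class is i.
# MAP is the special case of the symmetric 0-1 cost function.
cost_01 = 1.0 - np.eye(3)
cost_custom = np.array([[0.0, 1.0, 1.0],   # invented asymmetric costs:
                        [5.0, 0.0, 5.0],   # missing true class 1 is
                        [1.0, 1.0, 0.0]])  # penalized heavily

def bayes_decision(probs, cost):
    """Per pixel, pick the class minimizing expected cost."""
    expected_cost = probs @ cost             # (H, W, C): E[cost | predict j]
    return expected_cost.argmin(axis=-1)

map_pred = probs.argmax(axis=-1)             # ordinary MAP prediction
assert (bayes_decision(probs, cost_01) == map_pred).all()
print(bayes_decision(probs, cost_custom))    # predictions shift toward class 1
```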
- Research Article
5
- 10.2307/41164608
- Jul 1, 1975
- California Management Review
The tremendous increase in international business over the past decade has required a substantial number of corporations to deal with the risks arising from fluctuations in currency values. To do this, corporate treasurers have developed a number of decision rules to guide them in hedging against these risks. Based on the author's experience with many financial officers involved in conducting these activities, several major alternative decision rules have been identified. Using decision theory as a reference point, this article discusses these practices in terms of their benefits and costs to the firm. It also describes how many of the practical problems can be handled with decision theory and suggests some changes in corporate control procedures in this area.
- Book Chapter
- 10.1007/978-3-642-47027-1_23
- Jan 1, 1998
This contribution proposes the notions of information structure and information value as a tool for judging the rationality of decision rules in decision theory under uncertainty. Traditionally, sets of axioms which consider the ranking of alternatives or the choice set are used for this purpose. But some intuitively unfavourable criteria, for instance the maximax rule, are not rejected decidedly enough with the help of the known axioms alone. With the concept of information resistance, newly introduced here into the discussion, so-called “global decision rules” can be criticized in an adequate manner.
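The maximax rule cited as an intuitively unfavourable criterion is easy to state alongside maximin; the payoff matrix below is invented for illustration:

```python
# Two "global" decision rules under uncertainty, applied to a payoff
# matrix payoff[act][state]. The numbers are invented for illustration.
payoff = {
    "risky": [100, -90, -90],   # great in one state, terrible otherwise
    "safe":  [  5,   5,   5],
}

maximax = max(payoff, key=lambda a: max(payoff[a]))  # best best-case
maximin = max(payoff, key=lambda a: min(payoff[a]))  # best worst-case

print(maximax, maximin)  # risky safe: maximax chases the single best
                         # outcome and ignores all other information.
```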
- Research Article
73
- 10.1093/bjps/44.2.357
- Jun 1, 1993
- The British Journal for the Philosophy of Science
The purpose of this paper is to fill in what appears to be a gap in the decision theory literature. The gap concerns Keynes's 'conventional coefficient of risk and weight' model, which he developed in Chapter 26 of A Treatise on Probability (TP) and applied informally in The General Theory (GT) in Chapters 12-13, 17 and 22 (see Brady [1983]). This paper will be organized as follows. First, a review of the decision theory literature covering the assessments made by philosophers, psychologists, and economists with respect to Keynes's approach will be presented. Second, the 'nuts and bolts' of Keynes's decision rule will be examined and its operational capability demonstrated. Third, Keynes's decision rule will be used to solve the Popper and Ellsberg paradoxes, as well as some other selected problems taken from various contributions to decision theory. A final section will incorporate my conclusions.
- Book Chapter
2
- 10.1007/978-94-009-7772-3_33
- Jan 1, 1982
An exciting one and a half hour discussion followed the opening remarks made by G.T. Toussaint in which he put forth the following motion: “We should stop doing research on statistical decision theory and divert our energy to the design of efficient algorithms.” This motion was followed by a short historical account of the development of pattern recognition in order to argue in favour of the motion. The central idea was that with the crude approach to classification taken in the mid-50s, decision theory, introduced in a seminal paper by C.K. Chow [1], was a most welcome and needed tool. It led us to understand that any decision rule can perform well for some underlying distribution. Anyone familiar with “real world” problems realises that Gaussian distributions are not easy to come across and thus non-parametric or distribution-free procedures were what we really needed. The next historical milestone was the paper by T. Cover and P. Hart [2] on nearest neighbour (NN) decision rules. They showed under mild conditions on the underlying distributions that the asymptotic error rate of the NN-rule is never more than twice the Bayes error. Here at last was the tool we always needed. Simple to understand and to program, essentially distribution-free, and powerful in terms of performance. Applied researchers, however, soon blasted the NN-rule with two criticisms, abandoned it, and joined in the search for new and better rules. The criticism centered on the following two claims with regard to computation and storage.
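The NN rule's celebrated simplicity is easy to verify; a minimal sketch of the 1-NN decision rule on invented toy data:

```python
# 1-nearest-neighbour decision rule (the NN rule of Cover & Hart): classify
# a point by the label of its closest training example. Toy data invented.
import math

train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
         ((4.0, 4.2), "B"), ((3.8, 4.0), "B")]

def nn_classify(x):
    """Label of the training point nearest to x (Euclidean distance)."""
    return min(train, key=lambda t: math.dist(x, t[0]))[1]

print(nn_classify((1.1, 0.9)))   # A
print(nn_classify((4.1, 3.9)))   # B
```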
- Research Article
- 10.1093/analys/anaf080
- Nov 18, 2025
- Analysis
- Research Article
- 10.1093/analys/anaf083
- Nov 12, 2025
- Analysis
- Research Article
- 10.1093/analys/anaf092
- Nov 6, 2025
- Analysis
- Research Article
- 10.1093/analys/anaf093
- Nov 6, 2025
- Analysis
- Addendum
- 10.1093/analys/anaf079
- Nov 4, 2025
- Analysis
- Research Article
- 10.1515/anly-2025-0066
- Oct 28, 2025
- Analysis
- Research Article
- 10.1093/analys/anaf086
- Oct 8, 2025
- Analysis
- Research Article
- 10.1093/analys/anaf076
- Oct 8, 2025
- Analysis
- Research Article
- 10.1093/analys/anaf085
- Oct 8, 2025
- Analysis
- Research Article
- 10.1093/analys/anaf056
- Oct 6, 2025
- Analysis