Reasons, rationality, and opaque sweetening: Hare's “No Reason” argument for taking the sugar
Abstract: Caspar Hare presents a compelling argument for “taking the sugar” in cases of opaque sweetening: you have no reason to take the unsweetened option, and you have some reason to take the sweetened one. I argue that this argument fails—there is a perfectly good sense in which you do have a reason to take the unsweetened option. I suggest a way to amend Hare's argument to overcome this objection. I then argue that, although the improved version fares better, there is still room to resist Hare's argument—in a way that raises interesting questions about rational agency. In short, rationality is not about doing what one has the most reason to do; rather, it is about aiming to do what there is most reason to do.
- Research Article
10
- 10.1016/j.engappai.2023.106478
- Jun 3, 2023
- Engineering Applications of Artificial Intelligence
Rational software agents with the BDI reasoning model for Cyber–Physical Systems
- Book Chapter
- 10.1093/oso/9780195125498.003.0001
- Jan 7, 1999
This collection began as a conference on Modeling Rational and Moral Agents that combined two themes. First is the problematic place of morality within the received theory of rational choice. Decision theory, game theory, and economics are unfriendly to crucial features of morality, such as commitment to promises. But since morally constrained agents seem to do better than rational agents - say by co-operating in social situations like the Prisoner’s Dilemma - it is difficult to dismiss them as simply irrational. The second theme is the use of modeling techniques. We model rational and moral agents because problems of decision and interaction are so complex that there is much to be learned even from idealized models. The two themes come together in the most obvious feature of the papers: the common use of games, like the Prisoner’s Dilemma (PD), to model social interactions that are problematic for morality and rationality.
- Research Article
- 10.1086/700898
- Jan 1, 2019
- NBER Macroeconomics Annual
Comment
- Research Article
2
- 10.1016/j.dsp.2015.02.016
- Mar 2, 2015
- Digital Signal Processing
Social learning with heterogeneous agents and sequential decision making
- Research Article
- 10.3233/web-150327
- Nov 23, 2015
- Web Intelligence
Today’s complex online applications often require the interaction of multiple (web) services that belong to potentially different business entities. Interoperability is a core element of such an environment, yet not a straightforward one due to the lack of common data semantics. The problem is often approached by means of standardization procedures in a top-down manner with limited adoption in practice. (De facto) standards for semantic interoperability most commonly emerge in a bottom-up approach, i.e., involving the interaction and information exchange among self-interested industrial agents. In this paper, we argue that the emergence of semantic interoperability can be seen as an economic process among rational agents and, although interoperability can be mutually beneficial for the involved parties, it may also be costly and might fail to emerge. As a sample scenario, we consider the emergence of semantic interoperability among rational web service agents in service-oriented architectures (SOAs), and we analyze their individual economic incentives with respect to utility, risk and cost. We model this process as a positive-sum game and study its equilibrium and evolutionary dynamics. According to our analysis, which is also experimentally verified, certain conditions on the communication cost, the cost of technological adaptation, the expected mutual benefit from interoperability, as well as the expected loss from isolation, drive the process.
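The positive-sum game described in this abstract can be illustrated with a minimal replicator-dynamics sketch. The payoff parameters below (mutual benefit `B`, adaptation cost `C`, isolation loss `L`) are illustrative assumptions, not values from the paper:

```python
# Replicator dynamics for a two-strategy game: "adopt" shared semantics
# vs "stay isolated". Payoff parameters are illustrative assumptions.
B, C, L = 4.0, 1.0, 0.5   # mutual benefit, adaptation cost, isolation loss

def adopt_payoff(x):
    """Expected payoff of adopting, where x is the adopter share."""
    return x * (B - C) - (1 - x) * C

def isolate_payoff(x):
    """Isolated agents lose when the rest interoperate."""
    return -x * L

def run(x0, steps=5000, dt=0.01):
    x = x0
    for _ in range(steps):
        diff = adopt_payoff(x) - isolate_payoff(x)
        x += dt * x * (1 - x) * diff   # replicator equation (Euler step)
    return x

# Below a critical mass interoperability fails to emerge; above it, it spreads.
print(round(run(0.1), 3), round(run(0.3), 3))  # → 0.0 1.0
```

The two runs illustrate the abstract's point that interoperability, although mutually beneficial, may fail to emerge when the initial share of adopters is too small relative to the adaptation cost.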
- Research Article
31
- 10.1016/j.jfi.2010.02.003
- Jun 23, 2004
- Journal of Financial Intermediation
Overconfidence and delegated portfolio management
- Research Article
48
- 10.1080/08839510290030408
- Aug 1, 2002
- Applied Artificial Intelligence
In this article, we expose some of the issues raised by the critics of the neoclassical approach to rational agent modeling and we propose a formal approach for the design of artificial rational agents that includes some of the functions of emotions found in the human system. We suggest that emotions and rationality are closely linked in the human mind (and in the body, for that matter) and, therefore, need to be included in architectures for designing rational artificial agents, whether these agents are to interact with humans, to model humans' behaviors and actions, or both. We describe an Affective Knowledge Representation (AKR) scheme to represent emotion schemata, which we developed to guide the design of a variety of socially intelligent artificial agents. Our approach focuses on the notion of "social expertise" of socially intelligent agents in terms of their external behavior and internal motivational goal-based abilities. AKR, which uses probabilistic frames, is derived from combining multiple emotion theories into a hierarchical model of affective phenomena useful for artificial agent design. AKR includes a taxonomy of affect, mood, emotion, and personality, and a framework for emotional state dynamics using probabilistic Markov Models.
- Research Article
- 10.1017/s095382080900346x
- Jun 1, 2009
- Utilitas
This article argues for a certain picture of the rational formation of conditional intentions, in particular deterrent intentions, that stands in sharp contrast to accounts on which rational agents are often not able to form such intentions because of what these enjoin should their conditions be realized. By considering the case of worthwhile but hard-to-form ‘non-apocalyptic’ deterrent intentions (the threat to leave a cheating partner, say), the article argues that rational agents may be able to form such intentions by first simulating psychological states in which they have successfully formed them and then bootstrapping themselves into actually forming them. The article also discusses certain limits imposed by this model. In particular, given the special nature of ‘apocalyptic’ deterrent intentions (e.g. the ones supposedly involved in nuclear deterrence), there is good reason to think that these must remain inaccessible to fully rational and moral agents.
- Conference Article
- 10.1109/cec.2011.40
- Sep 1, 2011
Today's complex online applications often require the interaction of multiple services that potentially belong to different business entities. Interoperability is a core element of such an environment, yet not a straightforward one. In this paper, we argue that the emergence of interoperability is an economic process among rational agents and, although interoperability can be mutually beneficial for the involved parties, it is also costly and may fail to emerge. As a sample scenario, we consider the emergence of semantic interoperability among rational service agents in service-oriented architectures (SOA) and analyze their individual economic incentives with respect to utility, risk and cost. We model this process as a positive-sum game and study its equilibrium and evolutionary dynamics. According to our analysis, which is also experimentally verified, certain conditions on the communication cost, the cost of technological adaptation, the expected mutual benefit from interoperability, as well as the expected loss from isolation, drive the process.
- Book Chapter
1
- 10.1007/978-3-540-75867-9_4
- Feb 12, 2007
Fuzzy logic breaks the logical equivalence of statements such as (A∧B)∨(¬A∧B)∨(A∧¬B) and A∨B, and with it the symmetry in the use of such classically equivalent statements. There is a controversy about this property: it has been called a paradox (Elkan's paradox) and interpreted as a logical weakness of fuzzy logic; on the opposite view, it is not a paradox but a fundamental postulate of fuzzy logic and one of the sources of its success in applications. There has been no explanatory model to resolve this controversy. This paper provides such a model using a vector/matrix logic of rational and irrational agents that covers scalar classical and fuzzy logics. It is shown that classical logic models rational agents, while fuzzy logic can model irrational agents. Rational agents, in contrast with irrational agents, do not break logical equivalence. We resolve the paradox by showing that the classical and fuzzy logics have different domains: rational and irrational agents, respectively.
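The broken equivalence is easy to reproduce under the standard min/max (Gödel) connectives, which this sketch assumes (the paper's vector/matrix logic is not reproduced here):

```python
# Min/max (Gödel) fuzzy semantics: AND = min, OR = max, NOT = 1 - x.
def f_and(x, y): return min(x, y)
def f_or(*xs): return max(xs)
def f_not(x): return 1.0 - x

def lhs(a, b):  # (A∧B)∨(¬A∧B)∨(A∧¬B)
    return f_or(f_and(a, b), f_and(f_not(a), b), f_and(a, f_not(b)))

def rhs(a, b):  # A∨B
    return f_or(a, b)

# On classical truth values {0, 1} the equivalence holds.
for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        assert lhs(a, b) == rhs(a, b)

# On intermediate degrees it breaks.
print(lhs(0.3, 0.8), rhs(0.3, 0.8))  # → 0.7 0.8
```

The last line is the phenomenon Elkan's paradox turns on: two classically equivalent formulas take different truth degrees once intermediate values are allowed.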
- Research Article
5
- 10.3390/g7040039
- Dec 7, 2016
- Games
We introduce a model for studying the evolutionary dynamics of Poker. Notably, despite its wide diffusion and the scientific interest it has raised, Poker still represents an open challenge. Recent attempts at uncovering its real nature, based on statistical physics, showed that Poker can, under some conditions, be considered a skill game. In addition, preliminary investigations reported a neat difference between tournaments and ‘cash game’ challenges, i.e., between the two main configurations for playing Poker. Notably, these previous models analyzed populations composed of rational and irrational agents, identifying the former with those who play Poker using a mathematical strategy and the latter with those who play randomly. Remarkably, tournaments require very few rational agents to make Poker a skill game, while ‘cash game’ challenges may require several rational agents to avoid being classified as gambling. In addition, when the agent interactions are based on the ‘cash game’ configuration, the population shows an interesting bistable behavior that deserves further attention. In the proposed model, we aim to study the evolutionary dynamics of Poker using the framework of Evolutionary Game Theory, in order to gain further insight into its nature and to clarify points left open in previous works (such as the mentioned bistable behavior). In particular, we analyze the dynamics of an agent population composed of rational and irrational agents that modify their behavior driven by two possible mechanisms: self-evaluation of the gained payoff, and social imitation. The results allow us to identify a relation between the mechanisms for updating the agents’ behavior and the final equilibrium of the population. Moreover, the proposed model provides further details on the bistable behavior observed in the ‘cash game’ configuration.
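The social-imitation mechanism mentioned in the abstract can be sketched minimally. The fixed mean payoffs below are illustrative assumptions (the paper's actual payoff structure, derived from Poker hands, is richer):

```python
import random
random.seed(1)

# Illustrative imitation dynamics: "rational" (strategic) players are
# assumed to earn a higher mean payoff per hand than "irrational"
# (random) players. These numbers are hypothetical.
PAYOFF = {"rational": 1.0, "irrational": -0.2}

pop = ["rational"] * 20 + ["irrational"] * 80
for _ in range(2000):
    a, b = random.sample(range(len(pop)), 2)
    # Social imitation: the lower earner copies the higher earner.
    if PAYOFF[pop[a]] < PAYOFF[pop[b]]:
        pop[a] = pop[b]
    elif PAYOFF[pop[b]] < PAYOFF[pop[a]]:
        pop[b] = pop[a]
print(pop.count("rational"))  # imitation spreads rational play through the population
```

With payoffs held fixed, pairwise imitation alone drives the population toward the all-rational equilibrium; the bistability the paper reports requires the richer, configuration-dependent payoffs it models.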
- Research Article
3
- 10.1108/17563781111136676
- Jun 7, 2011
- International Journal of Intelligent Computing and Cybernetics
Purpose: As agent-based systems are increasingly used to model real-life applications such as the internet, electronic markets or disaster management scenarios, it is important to study the computational complexity of such usually combinatorial systems with respect to some desirable properties. The purpose of this paper is to consider two computational models: graphical games encoding the interactions between rational and selfish agents; and weighted directed acyclic graphs (DAGs) for evaluating derivatives of numerical functions. The author studies the complexity of a number of search problems in both models. Design/methodology/approach: The approach is essentially theoretical, studying the problem of verifying game-theoretic properties for graphical games representing interactions between self-motivated and rational agents, as well as the problem of searching for an optimal elimination ordering in a weighted DAG for evaluating derivatives of functions represented by computer programs. Findings: A class of games is identified for which Nash or Bayesian Nash equilibria can be verified in polynomial time; it is then shown that verifying a dominant strategy equilibrium is non-deterministic polynomial (NP)-complete even for normal-form games. Finally, it is shown that finding the optimal vertex elimination ordering for weighted DAGs is NP-complete. Originality/value: This paper presents a general framework for graphical games. The presented results are novel and illustrate how modeling real-life scenarios involving intelligent agents can lead to computationally hard problems, while also showing interesting cases that are tractable.
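The kind of equilibrium verification discussed here can be sketched by brute force on a tiny two-player normal-form game (the Prisoner's Dilemma payoffs below are the textbook values, used only for illustration; for graphical games the paper's structure-exploiting methods replace this exhaustive check):

```python
# Brute-force check that a pure strategy profile is a Nash equilibrium
# of a two-player normal-form game (toy Prisoner's Dilemma payoffs).
PAYOFFS = {  # (row, col) -> (row payoff, col payoff)
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
STRATS = ("C", "D")

def is_nash(row, col):
    """True iff neither player gains by deviating unilaterally."""
    u_row, u_col = PAYOFFS[(row, col)]
    no_row_dev = all(PAYOFFS[(r, col)][0] <= u_row for r in STRATS)
    no_col_dev = all(PAYOFFS[(row, c)][1] <= u_col for c in STRATS)
    return no_row_dev and no_col_dev

print([(r, c) for r in STRATS for c in STRATS if is_nash(r, c)])  # → [('D', 'D')]
```

Verification is cheap for a fixed profile; the hardness results in the paper concern richer questions (dominant strategy equilibria, compactly represented graphical games) where this naive enumeration blows up.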
- Book Chapter
1
- 10.1007/978-3-662-55665-8_37
- Jan 1, 2017
Richard Pettigrew [13, 14] defends the following theses: (1) epistemic disutility can be measured with strictly proper scoring rules (like the Brier score) and (2) at the beginning of their credal lives, rational agents ought to minimize their worst-case epistemic disutility (Minimax). This leads to a Principle of Indifference for ignorant agents. However, Pettigrew offers no argument in favour of Minimax, suggesting that the epistemic conservatism underlying it is a “normative bedrock.” Is there a way to test Minimax? In this paper, we argue that, since Pettigrew’s Minimax is impermissive, an argument against credence permissiveness constitutes an argument in favour of Minimax, and that arguments for credence permissiveness are arguments against Minimax.
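The Minimax reasoning behind the Principle of Indifference can be checked numerically with the Brier score. The two-state credence functions below are illustrative examples, not from the paper:

```python
def brier(cred, true_idx):
    """Brier score: sum of squared distances from the truth's indicator."""
    return sum((c - (1.0 if i == true_idx else 0.0)) ** 2
               for i, c in enumerate(cred))

def worst_case(cred):
    """Worst-case epistemic disutility over the possible true states."""
    return max(brier(cred, i) for i in range(len(cred)))

uniform = [0.5, 0.5]       # the indifferent credence
opinionated = [0.7, 0.3]   # an arbitrary non-uniform credence
print(worst_case(uniform))       # → 0.5
print(worst_case(opinionated))   # ≈ 0.98
```

Any departure from the uniform credence raises the worst-case Brier score, which is why minimizing worst-case disutility at the start of one's credal life yields indifference.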
- Book Chapter
- 10.4324/9780429202131-11
- Dec 7, 2021
This chapter considers whether animals are agents. It examines some arguments for thinking they could not be – together with a range of possible responses to those arguments – and also reasons for thinking, on the contrary, that it is much more plausible that animals should be accorded agential status. I argue, on the basis of these reasons, that theories of action which make it seem possible or probable that animals are not agents are likely to be mistaken. I then go on to consider the question of which animals are agents and what criteria we might use to include or exclude particular categories of creature from the class. Finally, I turn to two questions about the kind of agency animals might be accorded: first, whether animals are rational agents; and second, whether they are morally responsible agents. Though there are interpretations of both these ideas according to which animals might meet the requisite criteria, I conclude that it may be less misleading to endorse the traditional view that animals are neither rational nor moral agents – provided their agency itself has been recognised and accounted for. And this recognition alone, I argue, should have consequences for the question of what duties are owed to animals; I suggest that we may owe to agents a respect for their agency which creates a pro tanto obligation, in so far as it is in our power, to allow them to lead their lives in as natural a way as possible, unobstructed by our interference.
- Conference Article
12
- 10.1109/acc.2015.7171992
- Jul 1, 2015
This work investigates the case of a network of agents that attempt to learn some unknown state of the world among finitely many possibilities. At each time step, all agents receive random, independently distributed private signals whose distributions depend on the unknown state of the world. However, it may be the case that some or all of the agents cannot distinguish between two or more of the possible states based only on their private observations, as when several states result in the same distribution of the private signals. In our model, the agents form some initial belief (probability distribution) about the unknown state and then refine their beliefs in accordance with their private observations, as well as the beliefs of their neighbors. An agent learns the unknown state when her belief converges to a point mass concentrated at the true state. A rational agent would use Bayes' rule to incorporate her neighbors' beliefs and her own private signals over time. While such repeated applications of Bayes' rule in networks can become computationally intractable, in this paper we show that in the canonical cases of directed star, circle or path networks and their combinations, one can derive a class of memoryless update rules that replicate that of a single Bayesian agent but replace the self beliefs with the beliefs of the neighbors. In this way, one can realize an exponentially fast rate of learning similar to the case of Bayesian (fully rational) agents. The proposed rules are a special case of Learning without Recall.
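The single-Bayesian-agent update that the memoryless rules replicate can be sketched as follows; the two-state Bernoulli signal model and its biases are hypothetical, chosen only to make the convergence visible:

```python
import random

# Two possible states of the world; under each, the private signal is
# Bernoulli with a different bias (a hypothetical signal model).
LIKELIHOOD = {0: 0.3, 1: 0.7}   # P(signal = 1 | state)

def bayes_update(belief, signal):
    """One Bayes-rule step on the belief (P(state=0), P(state=1))."""
    post = []
    for state, prior in enumerate(belief):
        p1 = LIKELIHOOD[state]
        post.append(prior * (p1 if signal == 1 else 1.0 - p1))
    z = sum(post)
    return [p / z for p in post]

random.seed(0)
true_state = 1
belief = [0.5, 0.5]   # ignorant prior
for _ in range(200):
    signal = 1 if random.random() < LIKELIHOOD[true_state] else 0
    belief = bayes_update(belief, signal)
print(belief)   # the belief concentrates near the true state
```

Because the signal distributions differ across the two states, the posterior converges exponentially fast to a point mass at the true state; the paper's contribution is achieving a comparable rate in networks without the intractable bookkeeping that fully Bayesian group updating requires.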