Solving a decision theory problem usually involves finding the actions, among a set of possible ones, that optimize the expected reward, while possibly accounting for the uncertainty of the environment. In this paper, we introduce the possibility of encoding decision theory problems with Probabilistic Answer Set Programming under the credal semantics via decision atoms and utility attributes. To solve the task, we propose an algorithm based on three layers of Algebraic Model Counting, which we test on several synthetic datasets against an algorithm that adopts answer set enumeration. Empirical results show that our algorithm can manage non-trivial instances of programs in a reasonable amount of time.
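To make the underlying decision task concrete, here is a minimal Python sketch of expected-utility maximization by brute-force enumeration over worlds and decision assignments, in the spirit of the enumeration baseline mentioned above. All facts, probabilities, and utilities are hypothetical; this is not the paper's three-layer Algebraic Model Counting algorithm or its encoding syntax.

```python
from itertools import product

# Toy illustration of the decision task: independent probabilistic facts
# induce "worlds", each assignment to decision atoms is scored by its
# expected utility, and the best assignment is returned.
# All names and numbers are hypothetical.

prob_facts = {"rain": 0.3, "traffic": 0.6}   # probabilistic facts with probabilities
decision_atoms = ["umbrella", "bike"]        # atoms the agent controls

def utility(world, decisions):
    """Hypothetical utility attributes attached to combinations of atoms."""
    u = 0.0
    if world["rain"] and decisions["umbrella"]:
        u += 10
    if world["rain"] and decisions["bike"]:
        u -= 5
    if not world["traffic"] and decisions["bike"]:
        u += 3
    return u

def expected_utility(decisions):
    """Sum utility over all worlds, weighted by each world's probability."""
    eu = 0.0
    for values in product([True, False], repeat=len(prob_facts)):
        world = dict(zip(prob_facts, values))
        p = 1.0
        for fact, holds in world.items():
            p *= prob_facts[fact] if holds else 1 - prob_facts[fact]
        eu += p * utility(world, decisions)
    return eu

best = max(
    (dict(zip(decision_atoms, vals))
     for vals in product([True, False], repeat=len(decision_atoms))),
    key=expected_utility,
)
print(best, expected_utility(best))
```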
Economists have long studied policy choice by social planners aiming to maximize population welfare. Whether performing theoretical studies or applied analyses, researchers have generally assumed that the planner knows enough about the choice environment to be able to determine an optimal action. However, the consequences of decisions are often highly uncertain. Discourse on Social Planning under Uncertainty addresses the failure of research to come to grips with this uncertainty. Combining research across three fields – welfare economics, decision theory, and econometrics – this impressive study offers a comprehensive treatment that fleshes out a 'worldview' and juxtaposes it with other viewpoints. Building on multiple case studies, ranging from medical treatment to climate policy, the book explains analytical methods and how to apply them, providing a foundation on which future interdisciplinary work can build.
In this chapter, we discuss the decision-theoretic framework of statistical estimation and introduce several important examples. Section 28.1 presents the basic elements of a statistical experiment and of statistical estimation. Section 28.3 introduces the Bayes risk (average-case) and the minimax risk (worst-case) as the respective fundamental limits of statistical estimation in the Bayesian and frequentist settings, with the latter being our primary focus in this part. We discuss several versions of the minimax theorem (and prove a simple one) that equate the minimax risk with the worst-case Bayes risk. Two variants are introduced next that extend a basic statistical experiment to either large sample size or large dimension: Section 28.4 on independent observations and Section 28.5 on tensorization of experiments. Throughout this chapter the Gaussian location model (GLM), introduced in Section 28.2, serves as a running example, with a different focus in different places (such as the role of loss functions, parameter spaces, low versus high dimensions, etc.). In Section 28.6, we discuss a key result known as Anderson’s lemma for determining the exact minimax risk of the (unconstrained) GLM in any dimension for a broad class of loss functions, which provides a benchmark for the more general techniques introduced in later chapters.
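For reference, here is a schematic statement of the quantities the chapter works with, written out under the standard definitions; the notation is ours and need not match the chapter's.

```latex
% Standard definitions (notation ours). A statistical experiment: observe
% X ~ P_theta for some theta in Theta; an estimator \hat\theta(X); a loss \ell.
\[
  R^*(\Theta) \;=\; \inf_{\hat\theta}\,\sup_{\theta\in\Theta}\,
      \mathbb{E}_\theta\,\ell\bigl(\theta,\hat\theta(X)\bigr)
  \qquad \text{(minimax risk)},
\]
\[
  R_\pi \;=\; \inf_{\hat\theta}\,\mathbb{E}_{\theta\sim\pi}\,
      \mathbb{E}_\theta\,\ell\bigl(\theta,\hat\theta(X)\bigr)
  \qquad \text{(Bayes risk under prior } \pi\text{)},
\]
\[
  \text{minimax theorem (under regularity conditions):}\qquad
  R^*(\Theta) \;=\; \sup_{\pi}\, R_\pi .
\]
% Gaussian location model (GLM): X = \theta + Z with Z ~ N(0, \sigma^2 I_d),
% \theta \in \mathbb{R}^d.
```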
A number of mathematical models for overcoming intransitive choice have been proposed and tested in the literature of decision theory. This article presents the development of a new stochastic choice model based on multidimensional scaling. This allows decision-makers to have multiple viewpoints, whereas current multidimensional scaling models are based on the assumption that a subject or group of subjects has only one viewpoint. The implication of our model is that subjects make an intransitive choice because they are able to shift their viewpoint. This paper also presents the maximum likelihood estimation of the proposed model, and reanalyzes Tversky’s gamble experiment data.
We investigate the implications of penalizing incorrect answers to multiple-choice tests, from the perspective of both test-takers and test-makers. To do so, we use a model that combines a well-known item response theory model with prospect theory (Kahneman and Tversky, Prospect theory: An analysis of decision under risk, Econometrica 47:263–91, 1979). Our results reveal that when test-takers are fully informed of the scoring rule, the use of any penalty has detrimental effects for both test-takers (they are always penalized in excess, particularly those who are risk averse and loss averse) and test-makers (the bias of the estimated scores, as well as the variance and skewness of their distribution, increase as a function of the severity of the penalty).
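As a hedged illustration of the mechanism, consider classical formula scoring on a k-option item combined with the Tversky–Kahneman value function; the specific scoring rule and parameterization below are assumptions for exposition, not the authors' exact model, which also includes an item response theory component.

```latex
% Illustrative only: formula scoring with k options, and a prospect-theory
% value function with loss-aversion parameter \lambda.
\[
  \text{score} =
  \begin{cases}
    +1 & \text{correct}\\[2pt]
    -\tfrac{1}{k-1} & \text{incorrect}\\[2pt]
    0 & \text{omitted,}
  \end{cases}
  \qquad
  v(x) =
  \begin{cases}
    x^{\alpha} & x \ge 0\\
    -\lambda\,(-x)^{\beta} & x < 0 .
  \end{cases}
\]
% A test-taker who judges the probability of answering correctly to be p
% answers rather than omits only if the prospect of answering is positive:
\[
  p\, v(1) \;+\; (1-p)\, v\!\bigl(-\tfrac{1}{k-1}\bigr) \;>\; v(0) \;=\; 0 ,
\]
% so a larger penalty or stronger loss aversion (larger \lambda) raises the
% threshold on p, inducing risk- and loss-averse examinees to omit items.
```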
If a loss function is available specifying the social cost of an error of measurement in the score on a unidimensional test, an asymptotic method, based on item response theory, is developed for optimal test design for a specified target population of examinees. Since in the real world such loss functions are not available, it is more useful to reverse this process; thus a method is developed for finding the loss function for which a given test is an optimally designed test for the target population. An illustrative application is presented for one operational test.
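Schematically, the asymptotic objective can be written as follows; the notation is assumed for illustration rather than taken from the paper.

```latex
% Schematic objective (notation assumed). Under standard IRT asymptotics the
% maximum-likelihood ability estimate satisfies \hat\theta \approx
% N(\theta, 1/I(\theta)), where I(\theta) = \sum_i I_i(\theta) is the test
% information function. For a loss L on the measurement error and a target
% population with ability density g, a test design is evaluated by
\[
  r \;=\; \int_{\Theta} \mathbb{E}_\theta\!\left[\, L\bigl(\hat\theta - \theta\bigr) \right] g(\theta)\, d\theta .
\]
% The forward problem chooses the design to minimize r for a given L; the
% reverse problem asks for which L a given test's information profile
% already minimizes r.
```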
A Bayesian approach for simultaneous optimization of test-based decisions is presented using the example of a selection decision for a treatment followed by a mastery decision. A distinction is made between weak and strong rules where, as opposed to strong rules, weak rules use prior test scores as collateral data. Conditions for monotonicity of optimal weak and strong rules are presented. It is shown that under mild conditions on the test score distributions and utility functions, weak rules are always compensatory by nature.
For assigning subjects to treatments the point of intersection of within-group regression lines is ordinarily used as the critical point. This decision rule is criticized and, for several utility functions and any number of treatments, replaced by optimal monotone, nonrandomized (Bayes) rules. Treatments both with and without mastery scores are considered. Moreover, the effect of unreliable criterion scores on the optimal decision rule is examined, and it is illustrated how qualitative information can be combined with aptitude measurements to improve treatment assignment decisions. Although the models in this paper are presented with special reference to the aptitude-treatment interaction problem in education, it is indicated that they apply to a variety of situations in which subjects are assigned to treatments on the basis of some predictor score, as long as there are no allocation quota considerations.
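For concreteness, the classical critical point criticized here is simply where the two within-group regression lines intersect; the following is the standard computation in our own notation, not the paper's Bayes-rule derivation.

```latex
% Two treatments with within-group regressions of the criterion Y on the
% aptitude score X (notation ours):
\[
  \widehat{Y}_1 = \beta_{0,1} + \beta_{1,1} X,
  \qquad
  \widehat{Y}_2 = \beta_{0,2} + \beta_{1,2} X .
\]
% The traditional rule assigns a subject to the treatment with the higher
% predicted criterion, so the cut-off is the intersection point
\[
  X^{*} \;=\; \frac{\beta_{0,2} - \beta_{0,1}}{\beta_{1,1} - \beta_{1,2}}
  \qquad (\beta_{1,1} \neq \beta_{1,2}),
\]
% which the paper replaces by monotone (Bayes) rules that maximize expected
% utility under an explicit utility function.
```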
This chapter deals with how microeconomics can provide insights into the key challenge that artificial intelligence (AI) scientists face. This challenge is to create intelligent, autonomous agents that can make rational decisions. In this challenge, they confront two questions: what decision theory to follow and how to implement it in AI systems. This chapter provides answers to these questions and makes three contributions. The first is to discuss how economic decision theory – expected utility theory (EUT) – can help AI systems with utility functions to deal with the problem of instrumental goals, the possibility of utility function instability, and coordination challenges in multiactor and human–agent collective settings. The second contribution is to show that using EUT restricts AI systems to narrow applications, which are “small worlds” where concerns about AI alignment may lose urgency and be better labeled as safety issues. The chapter’s third contribution points to several areas where economists may learn from AI scientists as they implement EUT.
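For reference, the decision rule that expected utility theory prescribes, in its textbook form (notation ours, not the chapter's):

```latex
% Expected utility theory in textbook form (notation ours): an agent with
% utility u over outcomes and beliefs p over states s in S chooses
\[
  a^{*} \;=\; \arg\max_{a \in A}\; \sum_{s \in S} p(s)\, u\bigl(o(a,s)\bigr),
\]
% where o(a, s) is the outcome of taking action a when the state is s.
```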
Is Artificial Intelligence a more significant invention than electricity? Will it result in explosive economic growth and unimaginable wealth for all, or will it cause the extinction of all humans? Artificial Intelligence: Economic Perspectives and Models provides a sober analysis of these questions from an economics perspective. It argues that to better understand the impact of AI on economic outcomes, we must fundamentally change the way we think about AI in relation to models of economic growth. It describes the progress that has been made so far and offers two ways in which current modelling can be improved: firstly, to incorporate the nature of AI as providing abilities that complement and/or substitute for labour, and secondly, to consider demand-side constraints. Outlining the decision-theory basis of both AI and economics, this book shows how this, and the incorporation of AI into economic models, can provide useful tools for safe, human-centered AI.
In this paper, we show how to represent a non-Archimedean preference over a set of random quantities by a nonstandard utility function. Non-Archimedean preferences arise when some random quantities have no fair price. Two common situations give rise to non-Archimedean preferences: random quantities whose values must be greater than every real number, and strict preferences between random quantities that are deemed closer in value than every positive real number. We also show how to extend a non-Archimedean preference to a larger set of random quantities. The random quantities that we consider include real-valued random variables, horse lotteries, and acts in the theory of Savage. In addition, we weaken the state-independent utility assumptions made by the existing theories and give conditions under which the utility that represents preference is the expected value of a state-dependent utility with respect to a probability over states.
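Schematically, and in our own notation rather than the paper's, the target representation has the following shape; the paper's contribution is to allow the utilities (and hence the represented preferences) to take nonstandard, infinitesimally separated values.

```latex
% Shape of a state-dependent expected-utility representation (notation ours):
\[
  X \succsim Y
  \quad\Longleftrightarrow\quad
  \sum_{s \in S} p(s)\, u_s\bigl(X(s)\bigr)
  \;\ge\;
  \sum_{s \in S} p(s)\, u_s\bigl(Y(s)\bigr),
\]
% where p is a probability over states and u_s is a utility that may depend
% on the state s.
```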
In Chapter 10 we discuss feedback and control as an advanced topic. We introduce how measurement results can be used to control the quantum system by applying conditional unitary operators. A number of experimental systems are discussed, including active qubit phase stabilization, adaptive phase measurements, and continuous quantum error correction.
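A minimal numpy sketch of the basic feedback primitive (measure, then apply a unitary conditioned on the outcome) is given below; it is a generic single-qubit bit-flip correction written for illustration, not one of the experimental systems discussed in the chapter.

```python
import numpy as np

# Minimal sketch of measurement-based feedback on a single qubit:
# measure in the computational basis, then apply a conditional unitary
# (here a Pauli-X "correction") depending on the outcome.

rng = np.random.default_rng(0)
X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli-X
I2 = np.eye(2, dtype=complex)                   # identity (no correction)

def measure_z(state):
    """Projective measurement of a pure qubit state in the computational basis."""
    p0 = abs(state[0]) ** 2
    outcome = 0 if rng.random() < p0 else 1
    post = np.zeros(2, dtype=complex)
    post[outcome] = state[outcome]
    return outcome, post / np.linalg.norm(post)

# Start in a superposition; the feedback goal here is to steer the qubit to |0>.
state = np.array([np.sqrt(0.3), np.sqrt(0.7)], dtype=complex)

outcome, state = measure_z(state)
correction = X if outcome == 1 else I2          # conditional unitary
state = correction @ state

print("outcome:", outcome, "post-feedback state:", np.round(state, 3))
```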
In this article, we re-examine Pascal's Mugging, and argue that it is a deeper problem than the St. Petersburg paradox. We offer a way out that is consistent with classical decision theory. Specifically, we propose a “many muggers” response analogous to the “many gods” objection to Pascal's Wager. When a very tiny probability of a great reward becomes a salient outcome of a choice, such as in the offer of the mugger, it can be discounted on the condition that there are many other symmetric, non-salient rewards that one may receive if one chooses otherwise.
Risk is inherent to many, if not all, transformative decisions. The risks of regret, of turning into a person you presently consider to be morally objectionable, and of value change are all risks of choosing to transform. This aspect of transformative decision-making has thus far been ignored, but carries important consequences for those wishing to defend decision theory from the challenge posed by transformative decision-making. I contend that a problem lies in a common method used to cardinalise utilities – the von Neumann and Morgenstern (vNM) method – which measures an agent's utility function over sure outcomes. I argue that the risks involved in transformative experiences are constitutively valuable, and hence their value cannot be accurately measured by the vNM method. In Section 1, I outline what transformative experiences are and the problem they pose to decision theory. In Section 2, I outline Pettigrew's (2019, Choosing for Changing Selves) decision-theoretic response, and in Section 3, I present the case for thinking that risks can carry value. In Section 4, I argue for the claim that at least some transformative experiences involve constitutive risk. I argue that this causes a problem for decision-theoretic responses within the vNM framework in Section 5.
This paper presents some impossibility results for certain views about what you should do when you are uncertain about which moral theory is true. I show that under reasonable and extremely minimal ways of defining what a moral theory is, it follows that the concept of expected moral choiceworthiness is undefined, and more generally that any theory of decision-making under moral uncertainty must generate pathological results.
Traditional theistic arguments conclude that God exists. Pragmatic theistic arguments, by contrast, conclude that you ought to believe in God. The two most famous pragmatic theistic arguments are put forth by Blaise Pascal (1662) and William James (1896). Pragmatic arguments for theism can be summarized as follows: believing in God has significant benefits, and these benefits are not available for the unbeliever. Thus, you should believe in, or “wager on,” God. This chapter distinguishes between various kinds of theistic wagers, including finite vs. infinite wagers, premortem vs. postmortem wagers, and doxastic vs. acceptance wagers. Then, it turns to the epistemic–pragmatic distinction and discusses the nuances of James’ argument, and how views like epistemic permissivism and epistemic consequentialism provide unique “hybrid” wagers. Finally, it covers outstanding objections and responses.
The nature of evidence is a problem for epistemology, but I argue that this problem intersects with normative decision theory in a way that I think is underappreciated. Among some decision theorists, there is a presumption that one can always ignore the nature of evidence while theorizing about principles of rational choice. In slogan form: decision theory only cares about the credences agents actually have, not the credences they should have. I argue against this presumption. In particular, I argue that if evidence can be unspecific, then an alleged counterexample to causal decision theory fails. This implies that what theory of decision we think is true may depend on our opinions regarding the nature of evidence. Even when we are theorizing about subjective theories of rationality, we cannot set aside questions about the objective nature of evidence.
A wise decider D uses the contents of his mind fully, accurately and efficiently. D’s ideal decisions, i.e., those that best serve his interests, would be embedded in a comprehensive set of totally coherent judgments lodged in his mind. They would conform to the norms of statistical decision theory, which extracts quantitative judgments of fact and value from D’s mind contents and checks them for coherence. However, the most practical way for D to approximate his ideal may not be with models that embody those norms, i.e., with applied decision theory (ADT). In practice, ADT can represent only some of D’s judgments, and those imperfectly. Quite different decision aids, including intuition, pattern recognition and cognitive vigilance (especially combined), typically outperform feasible ADT models—with some notable exceptions. However, decision theory training benefits D’s informal decisions. ADT, both formal and informal, should become increasingly useful and widespread, as technical, cultural and institutional impediments are overcome.
Encounters with art can change us in ways both big and small. This paper focuses on one of the more dramatic cases. I argue that works of art can inspire what L. A. Paul calls transformations, classic examples of which include getting married, having a child, and undergoing a religious conversion. Two features distinguish transformations from other changes we undergo. First, they involve the discovery of something new. Second, they result in a change in our core preferences. These two features make transformations hard to motivate. I argue, however, that art can help on both fronts. First, works of art can guide our attempt to imagine unfamiliar ways of living. Second, they can attract us to values we currently reject. I conclude by observing that what makes art powerful also makes it dangerous. Transformations are not always for the good, and art's ability to inspire them can be put to immoral ends.
Frames and framing make one dimension of a decision problem particularly salient. In the simplest case, frames prime responses (as in, e.g., the Asian disease paradigm, where the gain frame primes risk-aversion and the loss frame primes risk-seeking). But in more complicated situations frames can function reflectively, by making salient particular reason-giving aspects of a thing, outcome, or action. For Shakespeare's Macbeth, for example, his feudal commitments are salient in one frame, while downplayed in another in favor of his personal ambition. The role of frames in reasoning can give rise to rational framing effects. Macbeth can prefer fulfilling his feudal duty to murdering the king, while also preferring bravely taking the throne to fulfilling his feudal duty, knowing full well that bravely taking the throne just is murdering the king. Such patterns of quasi-cyclical preferences can be correct and appropriate from the normative perspective of how one ought to reason. The paper explores three less dramatic types of rational framing effects: (1) Consciously framing and reframing long-term goals and short-term temptations can be important tools for self-control. (2) In the prototypical social interactions modeled by game theory, allowing for rational framing effects solves longstanding problems, such as the equilibrium selection problem and explaining the appeal of non-equilibrium solutions (e.g., the cooperative solution in the Prisoner's Dilemma). (3) Processes for resolving interpersonal conflicts and breaking discursive deadlock, because they involve internalizing multiple and incompatible ways of framing actions and outcomes, in effect create rational framing effects.