A scenario of general economic equilibrium may be articulated according to various degrees of detail and complexity. The essential feature of a general equilibrium, however, is the existence of economic agents gifted with endowments and the will to appear on the market either as consumers or as producers in order to take advantage of economic opportunities expressed in the form of offers to either buy or sell.
In this chapter, we discuss a series of general equilibrium models characterized by an increasing degree of articulation and generality. The minimal requirement for a genuine general equilibrium model is the presence of demand and supply functions for some of the commodities.
Model 1: Final Commodities
A general market equilibrium requires consumers and producers. We assume that consumers have already maximized their utility functions subject to their budget constraints and have expressed their decisions by means of an aggregate set of demand functions for final commodities. On the producers' side, the industry is atomistic in the sense that there are many producers of final commodities, none of whom can affect overall market behavior through his individual decisions. It is the typical environment of a perfectly competitive industry.
Hence, consider the following scenario. There exists a set of inverse demand functions for final commodities expressed by p = c – Dx, where D is a symmetric positive semidefinite matrix of dimension (n × n), p is an n-vector of prices, c is an n-vector of intercept coefficients, and x is an n-vector of quantities.
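The inverse demand system above can be made concrete with a small numerical sketch. The values of c, D and x below are invented for illustration; the checks simply verify the properties the model requires of D (symmetry and positive semidefiniteness) before computing the implied prices.

```python
import numpy as np

# Hypothetical two-commodity instance of the inverse demand system p = c - D x.
# All numbers are illustrative only, not taken from the text.
c = np.array([10.0, 8.0])            # intercept coefficients
D = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # symmetric positive semidefinite
x = np.array([2.0, 1.0])             # quantities demanded

# D must be symmetric with non-negative eigenvalues, as the model requires.
assert np.allclose(D, D.T)
assert np.all(np.linalg.eigvalsh(D) >= 0)

p = c - D @ x                        # prices implied by the demand system
print(p)                             # [5. 4.]
```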
Game theory deals with conflicts of interest among persons. A game is a situation of conflict in which two or more persons interact by choosing from an admissible set of actions while knowing the reward associated with each action. The persons who interact are called players, the sets of actions are called strategies, and the rewards are called payoffs. Hence, a game is a set of rules describing all the possible actions available to each player together with the associated payoffs. In a game, it is assumed that each player will attempt to optimize his/her (expected) payoff.
In this chapter, we discuss two categories of games that involve two players, player 1 and player 2. The first category includes zero-sum games, in which the total payoff awarded the two players is equal to zero. In other words, the “gain” of one player is equal to the “loss” of the other player. This type of game assumes the structure of a dual pair of linear programming problems. The second category includes games for which the total payoff is not equal to zero and each player may have a positive payoff. This type of game requires the structure of a linear complementarity problem in what is called a bimatrix game.
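The linear programming structure of a zero-sum game can be sketched as follows. The payoff matrix below (matching pennies) is just an example; the LP formulation, in which player 1 maximises the game value v subject to the mixed strategy x being a probability vector with A^T x ≥ v componentwise, is the standard one.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative zero-sum game: matching pennies. A[i, j] is player 1's payoff
# when player 1 plays row i and player 2 plays column j.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
m, n = A.shape

# Variables: (x_1, ..., x_m, v). Player 1 maximises the game value v subject
# to (A^T x)_j >= v for every column j and x being a probability vector.
c_obj = np.r_[np.zeros(m), -1.0]                 # linprog minimises, so use -v
A_ub = np.c_[-A.T, np.ones(n)]                   # v - (A^T x)_j <= 0
b_ub = np.zeros(n)
A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)     # sum of x_i = 1
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]        # x_i >= 0, v free

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x_opt, value = res.x[:m], res.x[m]
print(x_opt, value)   # optimal mixed strategy for player 1 and game value
```

For matching pennies the optimal strategy mixes the two rows equally and the value of the game is zero, reflecting its symmetry.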
The notion of strategy is fundamental in game theory. A strategy is the specification of all possible actions that a player can take for each move of the other player.
Drawing together data relevant to different parts of a complex model is a challenge for a number of reasons. First, even if the prior density has a simple and interpretable form before any sampling has taken place, sampling may well introduce dependences across large sections of the model. If this happens then the salient features needed for inference can become much more difficult to calculate, and this can be critical. An even greater problem arises when the values of certain variables remain unsampled.
However sometimes this is not the case. It is not unusual for the DM to be able to assume that different functions of the data sets she has at hand inform only certain factors in the credence decomposition she chooses. However the circumstances when such assumptions are transparent – or failing that plausible – are closely linked to how sampling schemes, observational studies and experiments are designed. In the last chapter we focused on decision models that could be structured round a BN. We showed how the decomposition of a problem into smaller explanatory components not only made a dependence structure more explicit but also provided a framework for the fast propagation of evidence using local structure in the large joint probability space. Now hierarchical models have been a bedrock of Bayesian modelling for some time and these are usually expressible as a BN.
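The kind of decomposition a BN provides can be sketched with a toy chain A → B → C, whose joint density factorises as P(a, b, c) = P(a) P(b | a) P(c | b). All the numbers below are invented; the point is that a marginal such as P(C) can be computed by passing local messages through the factorisation rather than by summing over the full joint table.

```python
# Made-up conditional probability tables for the chain A -> B -> C.
p_a = {0: 0.7, 1: 0.3}
p_b_given_a = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}   # p_b_given_a[a][b]
p_c_given_b = {0: {0: 0.6, 1: 0.4}, 1: {0: 0.5, 1: 0.5}}   # p_c_given_b[b][c]

# Local propagation: marginalise A out to get P(B), then B out to get P(C).
p_b = {b: sum(p_a[a] * p_b_given_a[a][b] for a in p_a) for b in (0, 1)}
p_c = {c: sum(p_b[b] * p_c_given_b[b][c] for b in (0, 1)) for c in (0, 1)}
print(p_c)
```

Each step works only with a local table and its immediate parent's marginal, which is what makes propagation fast in large joint probability spaces.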
In Chapter 2 we discussed the decision tree which extended the semantics of an event tree so that the full decision problem could be expressed. In this section we discuss how a similar extension can be made to the semantics of a BN. This diagram is called an influence diagram (ID) and is very useful for representing a decision problem and for providing a framework to discover optimal decision rules.
An influence diagram cannot be effectively used to represent all decision problems, since its usefulness depends to a significant degree on a certain type of symmetry being present. However the conditions in which it is a useful tool are met quite often in practice and Gomez (2004) catalogues over 250 documented practical applications of the framework before 2003.
Unlike the decision tree whose topology represents relationships between events and particular decisions taken, the influence diagram represents the existence of relationships between random variables – represented by ∘ vertices, decision spaces – denoted by □ vertices and a utility variable – denoted by a ◊ vertex. When appropriate they have many advantages over the decision tree. First they are usually much simpler to draw. Second like the BN they represent qualitative dependences exhibited by a problem and so the structure they express can be quite general, transparent and easy to elicit early in an analysis. Third we have seen how useful and intrinsic the conditional independence relationships between variables can be and the influence diagram expresses these directly through its topology.
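The three vertex types can be encoded directly as a data structure. The diagram below is entirely invented (the node names are hypothetical), but it shows how an ID's topology — each vertex's type and parent set — can be recorded and checked for basic well-formedness.

```python
# A minimal, hypothetical encoding of an influence diagram's topology. Each
# vertex records its type -- 'chance' (circle), 'decision' (square) or
# 'utility' (diamond) -- and its parents. The example itself is invented.
influence_diagram = {
    "market_state": {"type": "chance",   "parents": []},
    "forecast":     {"type": "chance",   "parents": ["market_state"]},
    "invest":       {"type": "decision", "parents": ["forecast"]},
    "utility":      {"type": "utility",  "parents": ["market_state", "invest"]},
}

# Basic well-formedness checks: exactly one utility vertex, no dangling parents.
types = [v["type"] for v in influence_diagram.values()]
assert types.count("utility") == 1
for v in influence_diagram.values():
    assert all(p in influence_diagram for p in v["parents"])
print("well-formed ID with", len(influence_diagram), "vertices")
```

The parent sets here play the same role as the arcs of the drawn diagram: they express directly which conditional independence statements the DM is asserting.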
In the last section we considered how domain knowledge could be expressed probabilistically to provide the basis for coherent acts. We now turn our attention to how the Bayesian DM can draw into her analyses evidence from other sources and so make it more compelling both to herself and an external auditor.
In many situations factual evidence in the form of data can be collected and drawn on to support the DM's inferences. We have seen several examples already in this book where such evidence might be available. It is important to accommodate such information on two counts. First by using such supporting evidence the DM herself will have more confidence in her probability statements and will be able to explain herself better. We saw in the previous chapter that probabilities can rarely be elicited with total accuracy. By refining these judgements and incorporating evidence from data whose sample distribution can be treated as known the DM can help improve her judgement and minimise unintentional biases she introduces. Second if she supports her judgements by accommodating evidence from well designed experiments and sample surveys, generally accepted as genuinely related to the case in hand, then this will often make her stated inferences more compelling. Although expert judgements about the probability of an event or the distribution of a random variable are often open to question and psychological biases, it is usually possible to treat data from a well-designed experiment as facts agreed by the DM and any auditor.
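A standard way such data refines an elicited judgement is a conjugate update. The sketch below uses a Beta-Binomial model with invented numbers: the prior Beta(a, b) stands in for the DM's elicited probability judgement, and observing s successes in n trials gives the posterior Beta(a + s, b + n − s).

```python
# Illustrative only: refining an elicited probability with experimental data
# via a conjugate Beta-Binomial update. All numbers are assumed for the sketch.
a, b = 2.0, 2.0          # elicited prior Beta(2, 2), prior mean 0.5
s, n = 7, 10             # observed data: 7 successes in 10 trials

a_post, b_post = a + s, b + (n - s)      # posterior is Beta(9, 5)
prior_mean = a / (a + b)
post_mean = a_post / (a_post + b_post)   # 9/14
print(prior_mean, post_mean)
```

The data pull the DM's stated probability from 0.5 towards the observed frequency, and because the sampling distribution is agreed, the revision itself is uncontentious in the way the paragraph above describes.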
So far this book has given a systematic methodology that can be used to address and solve some simple decision problems. However some of the most interesting and challenging real decision problems can have many facets. It is therefore necessary to extend the Bayesian methodology described earlier in the book so that it is a genuinely operational tool for addressing the types of complex decision problems regularly encountered. Even for moderately sized problems we have seen the advantages of disaggregating a problem into smaller components and then using the rules of probability and expectation within a suitable qualitative framework to draw the different features of a problem into a coherent whole. Although the appropriate decomposition to use depends on the problem addressed, there are nevertheless some well-studied decomposition methods that are appropriate for a wide range of decision problems which the analyst is likely to encounter frequently. The remainder of this book will focus on the justification, description and enaction of some of these different methodologies.
When addressing the formal development of simpler models we began by developing a methodology for constructing a justifiable articulation and quantification of a DM's preferences. In particular in Chapter 3 a formal rationale was developed describing when and why a DM should be guided into choosing a utility maximising decision rule. But techniques are needed to apply these methods effectively when the vector of attributes of the DM's utility function is moderately large.
The results and analyses in this book have demonstrated the following points:
A Bayesian decision analysis delivers a subjective but defensible representation of a problem that guides wise decision making, and provides a compelling supporting narrative for why the chosen action was taken. By crystallising the reasons behind a chosen action it can be used as a platform for creative, innovative thinking about the problem at hand and so is always open to re-evaluation and reformulation.
A DM can use the framework above to address not only simple decision problems but also highly structured, high-dimensional multifaceted problems.
The DM will usually need guidance to identify both the structure of her utility function and an appropriate credence decomposition over the features of the problem she believes might influence her decision. We have seen that it is extremely helpful if these elicitation processes are supported by graphs. Detailed discussions of several of these have been given above but there are many more. Graphs are important because they can not only describe evocatively consensual thinking about underlying processes but also provide a conduit into faithful and computationally feasible probabilistic models.
The quantification of a decision model's utility functions and probabilities will usually be the most contentious and most difficult features to elicit faithfully. However if the underlying credence decomposition and the structure of the utility function have been faithfully elicited then analyses are usually surprisingly robust to moderate mis-specification of these functions.
So far we have taken the concept of a subjective probability as a given. But what exactly should someone mean by a quoted probability? There are three criteria that such a definition needs to satisfy if we are not to subvert the term “probability” for another use:
(1) In circumstances when a probability value can be taken as “understood” by a typical rational and educated person, our definition must correspond to this value.
(2) The definition of subjective “probability” on collections of events should satisfy the familiar rules of probability, at least for finite collections of events.
(3) The magnitude of a person's subjective probability of an event in a decision problem must genuinely link to her strength of belief that the event might occur. For consistency with the development given so far in this book it would be convenient if this strength of belief were measured in terms of the betting preferences of the owner of the probability judgement.
To satisfy the first point above recall that there are various scenarios where the assignment of probabilities to events is uncontentious to most rational people in this society. It is therefore reasonable to assume that the DM's subjective probabilities agree with such commonly held probabilities. For example most people would be happy to assign a probability of one half to the toss of a fair coin resulting in a head. Two slightly more general standard probabilistic scenarios where common agreement exists are as follows.
This book will assume that the reader has a familiarity with an undergraduate mathematical course covering discrete probability theory and a first statistics course including the study of inference for continuous random variables. I will also assume a knowledge of basic mathematical proof and notation.
All observable random variables, that is all random variables whose values could at some point in the future be discovered, will be denoted by an upper case Roman letter (e.g. X) and its corresponding value by a lower case letter (e.g. x). In Bayesian inference parameters – which are usually not directly observable – are also random variables. I will use the common abuse of notation here and denote both the random variable and its value by a lower case Greek letter (e.g. θ). This is not ideal but will allow me to reserve the upper case Greek symbols (e.g. Θ) for the range of values a parameter can take. All vectors will be row vectors and denoted by bold symbols and matrices by upper case Roman symbols. I will use = to symbolise a deduced equality and denote that a new quantity or variable is being defined as equal to something via the symbol ≜.
Bayesian decision analysis and the scope of this book
This book is about Bayesian decision analysis. Bayesian decision analysis intersects substantially with Bayesian inference, but the two disciplines are distinct.
Some simple decision problems can be transparently solved using only descriptors like a decision table and some supplementary simple belief structure like a naive Bayes model. However for most moderately sized problems the analyst will often discover that the explanation of the underlying process, the consequences and the space of possible decisions in a problem has a rich and sometimes complex structure. Whilst it is possible to follow an EMV strategy in such domains, the elicitation of the description of the whole decision problem is more hazardous. The challenge is therefore to have ways of encapsulating the problem that are transparent enough for the DM, domain experts and auditors to check the faithfulness of the description of a problem but which can also be used as a framework for the calculations the DM needs to make to discover good and defensible policies.
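An EMV strategy on a decision table can be sketched in a few lines. The decisions, states and payoffs below are invented for illustration: for each decision the expected monetary value is the probability-weighted sum of its payoffs across the states of nature, and the EMV strategy picks the decision maximising that expectation.

```python
# Toy decision table; rows are decisions, columns are states of nature.
# All names and numbers are invented for illustration.
p_state = {"good": 0.6, "bad": 0.4}                  # DM's probabilities
payoff = {
    "invest": {"good": 100.0, "bad": -50.0},
    "hold":   {"good": 20.0,  "bad": 10.0},
}

# Expected monetary value of each decision, and the EMV-optimal choice.
emv = {d: sum(p_state[s] * payoff[d][s] for s in p_state) for d in payoff}
best = max(emv, key=emv.get)
print(emv, best)
```

Here "invest" wins (EMV 40 against 16) even though it risks a loss, which is exactly the kind of conclusion the supplementary belief structure must make defensible.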
One of the most established encompassing frameworks is a picture called a decision tree depicting, in an unambiguous way, an explanation of how events might unfold. Over the years such trees have been used to convey the sorts of causal relationships which populate many scientific and social theories and hypotheses. These hypotheses about what might happen – represented by the root to leaf paths of the tree – describe graphically how one situation might lead to another and are often intrinsic to a DM's understanding of how she might influence events advantageously.