The purpose of this paper is to study the effect of a change in an individual's degree of risk aversion on the perfect Bayesian Nash equilibrium in a simple model of bargaining. I find that, contrary to the results in the axiomatic model with riskless outcomes due to Nash, an opponent may be made worse off by such a change. Further, an individual may want to take an action that identifies him as more, rather than less, risk averse than he really is. In the course of the analysis, I fully characterize the equilibria of a class of “wars of attrition” with incomplete information, and single out one as “perfect” in a certain sense; this result may be of independent interest.
Introduction
The role of risk aversion in bargaining has been widely studied within the axiomatic framework of Nash (1950) (see, for example, Roth (1979), Perles and Maschler (1981)). It has been found that if the negotiation concerns riskless outcomes, then the more risk averse an individual is, the higher the payoff of his opponent. Related results show that in this case it is to the advantage of an individual to “pretend” to be less risk averse than he really is (Kurz (1977, 1980), Thomson (1979), Sobel (1981)). These results have some intuitive appeal: Given any (probabilistic) beliefs about the behavior of his opponent, it seems that an individual should behave more cautiously, the more risk averse he is.
This chapter discusses the role of a third party in settling disputes. A judge is responsible for deciding which of two bargainers should win a dispute. There is a social value associated with giving the disputed item to a particular bargainer. This is the bargainer's claim to the item. Each bargainer knows the strength of his claim and can provide evidence that proves his claim to the judge. Presenting proof is costly and distortion is impossible. However, it is possible to refuse to present evidence. The judge has a prior distribution about the strength of the claims, but does not know them exactly. The judge uses the information provided by the bargainers' disclosures (or decision not to provide evidence) and then rules in favor of the bargainer who has the best expected claim. When the bargainers must decide whether to provide evidence simultaneously, there are typically two types of equilibria. In one, each bargainer has a positive probability of winning if he does not provide evidence. In the other, one of the bargainers wins only if he proves his claim. Thus, rules about which bargainer has the burden of proof, that is, who must provide evidence in order to win the dispute, serve to select an equilibrium outcome. This discussion compares the welfare obtainable from different rules. The results are ambiguous.
This essay serves as an introduction to recent work on noncooperative game-theoretic models of two-player bargaining under incomplete information. The objective is to discuss some of the problems that motivated formulation of these models, as well as cover some of the issues that still need to be addressed. I have not set out to provide a detailed survey of all the existing models, and I have therefore discussed only certain specific aspects of the models that I believe to be especially important. The reader will find here, however, a guide to the relevant literature.
The title of this chapter was chosen to emphasize the phenomenon of disagreement in bargaining, which occurs almost as a natural consequence of rational behavior (i.e., equilibrium behavior) in some of these models and is difficult to explain on the basis of equilibrium behavior using the established framework of bargaining under complete information. Disagreement, of course, is only one reflection of the problem of inefficient bargaining processes. I also spend some time on the general question of efficiency and its attainment. Whereas in most models classical Pareto-efficiency is not attainable in equilibrium, it may be obtained by players who deviate from equilibrium, as will be shown.
The chapter is organized as follows. Section 2.2 lays out the problem and discusses the important modeling approaches available. Section 2.3 focuses on a particular group of models, each of which specifies a strategic (i.e., extensive) form of the bargaining process.
The axiomatic approach to bargaining may be viewed as an attempt to predict the outcome of a bargaining situation solely on the basis of the set of pairs of utilities that corresponds to the set of possible agreements and to the nonagreement point.
The strategic approach extends the description of a bargaining situation. The rules of bargaining are assumed to be exogenous, and the solution is a function not only of the possible agreements but also of the procedural rules and the parties' time preferences.
The aim of this chapter is to show that in the case of incomplete information about the time preferences of the parties, the bargaining solution depends on additional elements, namely, the players' methods of making inferences when they reach a node in the extensive form of the game that is off the equilibrium path.
The solution concept commonly used in the literature on sequential bargaining models with incomplete information is one of sequential equilibrium (see Kreps and Wilson (1982)). Essentially, this concept requires that the players' strategies remain best responses at every node of decision in the extensive form of the game, including nodes that are not expected to be reached. The test of whether a player's strategy is a best response depends on his updated estimation of the likelihood of the uncertain elements in the model. For nodes of the game tree that are reachable, it is plausible to assume that the players use the Bayesian formula.
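The Bayesian updating step at reachable nodes can be sketched as follows. The types, priors, and action probabilities below are hypothetical illustrations, not drawn from any particular bargaining model in the literature:

```python
# Minimal sketch of Bayesian belief updating at a reachable decision node.
# Types ("tough"/"weak") and all probabilities are hypothetical examples.

def bayes_update(prior, likelihood):
    """Posterior over opponent types after observing an action.

    prior: dict mapping type -> prior probability
    likelihood: dict mapping type -> probability that this type
                takes the observed action
    """
    joint = {t: prior[t] * likelihood[t] for t in prior}
    total = sum(joint.values())
    if total == 0:
        # Zero-probability (off-path) event: Bayes' rule is silent here;
        # a sequential equilibrium must specify beliefs by other means.
        return None
    return {t: p / total for t, p in joint.items()}

prior = {"tough": 0.5, "weak": 0.5}
# Suppose a "tough" type rejects the current offer with probability 0.9
# and a "weak" type with probability 0.2; rejection is then evidence of
# toughness, and the posterior shifts accordingly.
posterior = bayes_update(prior, {"tough": 0.9, "weak": 0.2})
```

The `None` branch is exactly where the Bayesian formula gives no guidance, which is why off-path beliefs become an additional element of the solution, as discussed above.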
In this paper, I consider an approach to modeling certain kinds of game situations that is somewhat different from the standard noncooperative-game approach. Roughly speaking, the situations have the following features in common: (1) a large number of players; (2) repeated partitioning of the player set over time into small, randomly selected groups; (3) gamelike interaction of the members of each group over a brief span of time; and (4) extensive knowledge by each player about the past history of actions taken by aggregates from the population, but limited information about the past history of actions taken by identifiable individuals in the population. I have already applied this approach to an election model (Rosenthal (1982)) and, jointly with Henry Landau, to two bargaining models (Rosenthal and Landau (1979, 1981)). (In addition, Shefrin (1981) has worked on a related approach, with a view toward economic applications. An early version of that paper antedated and stimulated my work in this general area.) My goals in this chapter are to describe the approach (first spelled out in Rosenthal (1979)), its applicability, and its advantages and disadvantages relative to alternative approaches, and also to discuss some side issues. In keeping with the spirit of this volume, however, I concentrate on the bargaining models.
Because the actions of individuals in the situations under consideration have little influence on the population aggregates, the approach assumes implicitly that individuals neglect this influence in making their decisions.
Most empirical analyses of discrete choice processes in economics utilize classes of models which are partially based on economic theory but which also contain disturbance terms and unobserved individual-specific sources of heterogeneity that are described by ad hoc parametric families of distributions. The primary focus of interest in such studies is usually the estimation and interpretation of structural parameters, some of which may be associated with unobserved variables. The fine-grained characteristics of error terms are not generally of central importance, and their principal role is to facilitate estimation of what are interpreted as structural parameters. Unobserved variables are incorporated in models of choice processes for at least two reasons: (1) to reduce bias in estimates of parameters associated with observed variables and (2) because some characteristic, such as the variance, of an unobserved variable plays a critical role in behavioral interpretations of a system of equations.
Strong a priori parametric assumptions about error distributions or the distributions of unobserved variables which are not grounded in economic theory or previous empirical studies can lead to a variety of incorrect conclusions. Among the more prominent negative consequences of this form of model misspecification are the following:
Estimates of structural parameters in a heterogeneous population model can be very sensitive to the choice of a distribution of unobserved variables (Heckman and Singer, 1982a,b). In particular, even the signs of key parameters can be a consequence of these choices.
Rejection of a class of heterogeneous population models based on a test in which a "flexible" parametric family of distributions for an unobserved variable is imposed on the models can be an erroneous conclusion. For example, mixtures of Bernoulli trials in which a beta distribution represents the heterogeneity in binary choice probabilities may fail to describe observed data on actual choices. Nevertheless, these data may be representable by some other mixture of Bernoulli trials. This possibility is a consequence of the fact that beta mixtures of Bernoulli trials are not dense in the set of all such mixtures.
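The objects involved here can be made concrete with a short sketch: the distribution of binary choices under a beta mixture of Bernoulli trials, versus a two-point (discrete) mixture that is itself a valid mixture of Bernoulli trials but need not be well approximated by any beta mixing distribution. All parameter values are illustrative.

```python
import math

def beta_binomial_pmf(k, n, a, b):
    """P(k | n) when the choice probability p ~ Beta(a, b):
    a beta mixture of Bernoulli trials (the beta-binomial pmf)."""
    log_beta = lambda x, y: math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)
    log_comb = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
    return math.exp(log_comb + log_beta(k + a, n - k + b) - log_beta(a, b))

def two_point_mixture_pmf(k, n, p1, p2, w):
    """P(k | n) under a discrete mixture: p = p1 with prob. w, else p = p2."""
    binom = lambda p: math.comb(n, k) * p ** k * (1 - p) ** (n - k)
    return w * binom(p1) + (1 - w) * binom(p2)

# A sharply bimodal population (p near 0.1 or near 0.9) is a legitimate
# mixture of Bernoulli trials, but its choice-count distribution need not
# be reproduced by any single beta mixing distribution.
bimodal = [two_point_mixture_pmf(k, 10, 0.1, 0.9, 0.5) for k in range(11)]
```

Rejecting the beta family against such data says nothing about mixtures of Bernoulli trials in general, which is the non-denseness point made above.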
In Part I of our study we set out the structure of the general equilibrium model of the UK economy and tax/subsidy system that we have constructed. We also present the data used to calibrate the model and generate parameter values, along with a description of the methods used to obtain parameter values and solve the model for counter-factual equilibria.
In Chapter 1 we outline our approach to general equilibrium tax analysis stressing the basic analytical structure within which we work. In Chapter 2 we make this structure more concrete by specifying our model of the UK economy. We list the industry, commodity, and consumer groups we consider; and describe our functional forms, our treatment of government, external trade, savings and investment, and other features of the model. In Chapter 3 we discuss the UK tax/subsidy system, detailing its major distorting impacts and outlining our model treatment of each component of this system.
Chapter 4 describes the approach we have taken to calibration of the model in the generation of parameter values, and in Chapter 5 we give a complete description of the benchmark equilibrium data set which we have constructed for 1973 for use in calibration. Chapter 6 discusses the elasticity values which we have chosen in our functions. The appendices present an algebraic representation of the model, provide notes on data set construction, and report on our programming and computational experience.
This chapter discusses the selection of parameter values for the equations of our model. The approach followed is to use the equilibrium solution concept of the model and adopt a simple calibration procedure. This calculates parameter values consistent with an assumed equilibrium contained in observed data after adjustments are made to it to ensure all equilibrium conditions hold. We term this a ‘benchmark equilibrium’.
The size of the model and its integrated structure make it impossible to simultaneously estimate all parameter values using conventional simultaneous equation econometric techniques. The number of exogenous variables is small, and extensive use of excluded variables as identifying restrictions is not possible because of the general equilibrium interdependence which the model captures. If, as an alternative, single equation estimation is used, parameter estimates will be obtained which do not necessarily generate an equilibrium consistent with observed data. To achieve this consistency, parameter values for equations are calculated from observed data (after adjustments) using the equilibrium conditions of the model. We utilize this data set along with extraneous elasticity estimates required in our calibration procedure.
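The logic of calibration can be illustrated with the simplest possible case: under Cobb-Douglas preferences, the observed benchmark expenditure share on each commodity identifies the corresponding utility-function share parameter exactly. The commodity names and figures below are hypothetical, not taken from the UK 1973 data.

```python
# Hypothetical calibration of Cobb-Douglas expenditure-share parameters
# from a benchmark equilibrium data set. All names and figures are
# illustrative stand-ins, not the actual model's data.

def calibrate_shares(benchmark_expenditures):
    """Return share parameters consistent with the benchmark equilibrium.

    Under Cobb-Douglas preferences each share parameter equals the
    observed expenditure share, so the calibrated model reproduces the
    benchmark as an equilibrium by construction.
    """
    total = sum(benchmark_expenditures.values())
    return {good: x / total for good, x in benchmark_expenditures.items()}

shares = calibrate_shares({"food": 30.0, "manufactures": 50.0, "services": 20.0})
```

Richer functional forms (such as CES) are not fully identified by the benchmark alone, which is why extraneous elasticity estimates enter the calibration procedure.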
Since the data used must simultaneously satisfy all model equilibrium conditions, a large amount of work is involved in the construction of a consistent equilibrium data set. In addition, since this data set only yields observations on expenditures, a time-dependent units convention is used to separate price and quantity observations.
A benchmark equilibrium data set for the UK economy and tax system for use in calibrating the model for the year 1973 has been constructed along the lines outlined in Chapter 4. In this chapter the main features of this data set are described.
A substantial volume of data has been drawn on from different sources and reorganized in a consistent manner for our use here. In the process a data base has been generated which has value outside of the immediate study, and so a substantial amount of detail is provided in the tables which follow to provide accessibility for other potential users of the data set.
A benchmark equilibrium data set, as described in Chapter 4, is an adjusted set of basic data which meets all of the equilibrium conditions of the model. These are that demands equal supplies for all goods; no industry makes any abnormal profit; external sector transactions balance; the government budget is balanced; and lastly, household incomes equal household expenditures.
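The five conditions just listed can be expressed as a schematic consistency check on a candidate data set. The accounts dictionary below is a hypothetical stand-in for the adjusted 1973 data, not the actual numbers.

```python
# Schematic check that a candidate benchmark data set satisfies the five
# equilibrium conditions: market clearing, zero abnormal profits, external
# balance, government budget balance, and household budget balance.
# All entries in the toy accounts are hypothetical.

def is_benchmark_consistent(d, tol=1e-6):
    return all([
        # demands equal supplies for all goods
        all(abs(d["demand"][g] - d["supply"][g]) < tol for g in d["demand"]),
        # no industry earns abnormal profit: revenue equals cost
        all(abs(d["revenue"][i] - d["cost"][i]) < tol for i in d["revenue"]),
        # external sector transactions balance
        abs(d["exports"] - d["imports"]) < tol,
        # government budget balances
        abs(d["tax_revenue"] - d["gov_spending"]) < tol,
        # household incomes equal household expenditures
        all(abs(d["income"][h] - d["expenditure"][h]) < tol for h in d["income"]),
    ])

toy = {
    "demand": {"food": 10.0}, "supply": {"food": 10.0},
    "revenue": {"agriculture": 10.0}, "cost": {"agriculture": 10.0},
    "exports": 3.0, "imports": 3.0,
    "tax_revenue": 2.0, "gov_spending": 2.0,
    "income": {"h1": 8.0}, "expenditure": {"h1": 8.0},
}
```

Raw National Accounts data typically fail one or more of these checks, which is precisely why the adjustments described below are needed.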
Adjustments are necessary to basic data because the information available in the National Accounts and related sources is primarily macro and production oriented. Thus, in household expenditure data reporting expenditures and incomes of individual household types, the total sum of household factor incomes after scaling to an economy-wide basis will not match the sum of factor rewards reported by industry of origin in the National Accounts.
This appendix gives notes on the derivation of the data tables reported in Chapter 5. The notes are not meant to be comprehensive in reporting all the details involved but do aim to provide readers with the main sources used along with a brief outline of major adjustments.
Table 5.1
Summary Production and Demand Transactions, UK, 1973.
Definition of Terms
Profit Type Return: This comprises the net-of-tax, gross-of-subsidy, net-of-depreciation returns on capital use. Major differences between the concept employed here and that used in the National Accounts include:
(i) the allocation of some self-employment income as a return to capital;
(ii) the subtraction of a portion of interest payments attributed to a charge for financial services;
(iii) the addition of hire and rental expenses;
(iv) the subtraction of corporation tax payments;
(v) the addition of capital type subsidy payments;
(vi) the subtraction of stock appreciation.
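The adjustments (i)-(vi) amount to a simple arithmetic recipe, sketched below with hypothetical names and figures rather than the actual 1973 data.

```python
# Illustrative arithmetic for the profit-type return, applying adjustments
# (i)-(vi) to a National Accounts style operating surplus. All arguments
# and figures are hypothetical stand-ins.

def profit_type_return(operating_surplus,
                       self_employment_capital_share,  # (i) added
                       financial_service_charge,       # (ii) subtracted
                       hire_and_rental,                # (iii) added
                       corporation_tax,                # (iv) subtracted
                       capital_subsidies,              # (v) added
                       stock_appreciation):            # (vi) subtracted
    return (operating_surplus
            + self_employment_capital_share
            - financial_service_charge
            + hire_and_rental
            - corporation_tax
            + capital_subsidies
            - stock_appreciation)

example = profit_type_return(100.0, 10.0, 5.0, 3.0, 20.0, 4.0, 2.0)
```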
Labour Costs: These are the net-of-tax, gross-of-subsidy returns to labour. Major differences between the concept employed here and that used in the National Accounts include:
(i) the allocation of some self-employment income as a return to labour;
(ii) the subtraction of National Insurance payments;
(iii) the addition of labour subsidies.
Net Capital Tax Payments: These estimates represent corporation tax payments and rates, less capital type subsidy payments.
Net Labour Tax Payments: These estimates represent National Insurance payments, less regional employment premiums.
The general equilibrium analysis of taxes and subsidies in the preceding chapter gives little indication as to how such a framework can be made operational. For a model of an actual economy to be used to analyze policy alternatives, a specific structure along with functional forms must be chosen, and parameter values selected. In this chapter the model of the UK is discussed. It is calibrated (or benchmarked) to a 1973 ‘equilibrium’ data set, the latest year available for much of the data at the time of its construction (1975-1977). The methods used to determine parameter values are discussed in Chapter 4.
The basic variant is a fixed factor supply, static general equilibrium model which abstracts from intertemporal tax distortions and distortions of labour supply. In Chapter 9 we describe a number of model extensions which analyze tax distortions of labour supply and savings. Chapter 9 also includes a model variant where government expenditures reflect household preferences towards public goods.
Results portray tax distortions of factor supplies as relatively mild (in terms of distorting costs) compared with some of the commodity and industry tax distortions. Welfare-loss estimates from savings distortions, however, depend on assumptions about inflation and savings elasticities, and recent literature (such as Summers [1980]) reports large effects from this distortion. Convenience, together with these model results, motivates partitioning our model presentation in this way, with the basic structure presented here and the various model extensions later.
Despite the recent flowering of the literature concerned with methods to analyze human life history segments, comparatively little attention has been given to the particular problems of survey samples of such data. This chapter addresses selected issues of their analysis and provides solutions in a natural manner by combining elements of the relevant individual-based theory of stochastic processes with suitable parts of superpopulationist survey methodology. There are considerable divergences about some of these issues among current analysts, in particular about whether or when one should weight individual responses by means of reciprocal inclusion probabilities. There seems to be a standing dispute between those who would really like to see conventional weights applied in “most” circumstances and others who feel that the case for weighting is much weaker if the analyst wishes to use the data to estimate a properly specified model, since the model presumably “controls for” the effects of the factors which lead to the need for weights in the first place, except perhaps for particular dependent variables in the model. (Formulation essentially taken from PSID, 1983, p. A-13.) Still others may feel that weights are no advantage in model-based analyses.
A recurring theme in the literature on labor market structure is that different labor markets are characterized by different patterns of job mobility. For example, Doeringer and Piore (1971, p. 40) regard stability of employment as “the most salient feature of the internal labor market.” Kerr (1954, pp. 95-6) contrasts “structureless” markets that lack “barriers to the mobility of workers” with institutional markets in which entrance, movement, and exit are constrained by rules. Spilerman (1977) emphasizes career lines, noting how these may depend not only on personal characteristics but also on the occupation, industry, and firm of a person's port of entry.
Not everyone agrees that job shift patterns reflect differences in labor market structure. Some attribute these differences to various labor market imperfections: search costs (Oi, 1962), specific investments (Becker, 1964), uncertainty (Becker et al., 1977), and so forth. Others (e.g., Heckman and Willis, 1977; Doeringer and Piore, 1971, pp. 175-6) associate differences in job shift patterns with differences in workers: in nonmarket productivity, in preferences for leisure versus money and prestige, and so forth. Even those who attribute differences in job shift patterns to labor market structure do not agree on the boundaries of labor markets or on the reasons why occupants of certain kinds of jobs have similar job shift patterns.
An empirical regularity in most societies is that a young man's likelihood of holding a job increases with age. In 1980, for example, the U.S. Department of Labor classified as “employed” 39.7, 55.9, 70.0, 83.5, and 88.3 percent of men aged 16-17, 18-19, 20-24, 25-29, and 30-34, respectively (U.S. Department of Labor, 1981). Employment remains stable at between 85 and 90 percent for men through midlife and declines after 50 as retirements become prevalent. Although the level of employment varies with its precise definition and among demographic groups, rapidly rising employment with age among men under age 30 is a fundamental pattern.
The age pattern of employment among young men is important for understanding the transition from youth to adult. For men, employment is generally a prerequisite for moving from family of origin to establishment of a family of procreation. The age pattern of employment reflects this transition and concomitant age-related changes in school enrollment, living arrangements, financial dependence, and marital and fertility status.
Employment is also an important source of age variation in the distribution of social and economic welfare. It is a precondition of access to occupational status, earnings, and, for most men, general economic security, as well as a determinant of perceived self-worth (Cohn, 1978). Differential rates of employment, therefore, are one cause of economic inequality between the old and the young (Coleman et al., 1974; Winsborough, 1978).