The getting of information begins with an act of response, and for good reasons, statisticians have long been concerned about problems of response to surveys and polls. The most elementary source of concern is simply that people may refuse to respond or may neglect to do so. In the early years of social research and political polling, the novelty and even gratification at being asked an opinion on some issue may well have been sufficient to induce participation. Such times are passing. Many authors (e.g., Hawkins, 1977; Brooks and Bailar, 1978; Martin, 1983) have noted the secular decline in the response rate to surveys. Hawkins notes that nonresponse rates, largely due to refusals, have increased secularly at annual rates approaching 1 percent. The most immediately recognizable consequence of a low response rate is a corresponding loss of precision with increased sampling variability of estimators and loss of degrees of freedom in hypothesis testing. We have already remarked on a further problem associated with a low response rate as such. Even if the sample size is nevertheless formally sufficient to accept or reject the appropriate null hypothesis at apparently reasonable levels of significance, the statistician may experience considerable difficulties in getting the conclusions accepted by his or her readership. The precise difficulty may vary according to the survey subject and the readership. It may arise in connection with perceived problems of nonneutrality of a kind to be considered shortly.
In the last two chapters we have explored the nature of classic rational-expectations equilibria and the kinds of circumstances in which such equilibria could possibly arise. As we have seen, the idea of an equilibrium characterized by agents acting according to mathematical expectations is at its most convincing in a world of many agents, each with an individually insignificant impact upon the outcome. To date, however, we have not explored a further question that has a bearing on the existence problem: From some initial starting point that may not be an equilibrium state, can the system find its way to a rational-expectations equilibrium? From the game-theoretic point of view this is the problem of solution by real-time (rather than, say, fictitious) play. In the economics literature, it is often referred to as a learning problem. For reasons to be outlined shortly, it is doubtful whether this usage is capable of representing the full complexity of the problem. However, if we agree to adopt a somewhat teleological viewpoint and ascribe learning to the system as a whole rather than to the individuals whose behavior drives the system, the usage refers to the way in which the system as a whole gropes its way toward a full rational-expectations equilibrium, in which every individual participant is in his or her personal state of rational expectations conditional upon the individual information sets. Of course, this assumes that the system is capable of doing this.
In this and the next chapter, we shall study different aspects of rational-expectations equilibria, a notion that lies at the heart of invariance phenomena in socioeconomic systems. The basic idea originated in economics with the work of Muth (1961) and is concerned with the way in which individuals form their expectations or predictions of future variables. One of the drawbacks of the several popular expectational schemes used in empirical work at the time was that predictions formed by using such schemes were in general biased. Muth was actually concerned to show that under certain circumstances one such scheme, namely, the adaptive expectations scheme, could result in unbiased forecasts if the parameter of this scheme was correctly chosen. Later authors, however, seized on and developed the methodology of Muth's paper, dispensing altogether with adaptive or other simple expectational schemes. Such methods could be used to develop forecasting formulas that were inherently model based and unbiased. In this way, an awkward and unappealing implication of the mechanistic schemes could be avoided: For if such schemes were recognized to be biased, it would surely pay individuals to improve things or even to take private advantage of the bias and in doing so change the way in which in the aggregate expectations were formed.
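For concreteness (the notation here is ours, introduced only to fix ideas), the adaptive expectations scheme revises the expectation x*_t held of a variable x_t by a fixed fraction λ of the most recent forecast error:

x*_t = x*_{t−1} + λ(x_{t−1} − x*_{t−1}),  0 < λ ≤ 1.

Muth's point, restated in this notation, was that for particular stochastic processes generating x_t there is a choice of λ for which such forecasts are unbiased.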
We should be a little clearer about the invariance aspect of such equilibria. Individuals are assumed to form subjective probability distributions of future variables and to formulate decisions based upon those distributions.
If we were better at forecasting human affairs than we are, world's fairs would never go bankrupt, political pollsters would never get egg on their faces, and the perennial snake-oil merchants of economic and financial forecasting would become only a curiosum of economic history. Although the consequences may be less dramatic than failures in prediction, inference in the economic sciences has been dogged by the disquieting shadow of indefiniteness and more recently by dispiriting claims of outright failure: After half a century of intensive effort we have learned from economic data less than what we might have expected and far less than what we should have liked. With respect to another discipline, Whyte (1969) remarks that the eminent sociologist Louis Wirth “used to terrify Ph.D. candidates by requiring them to name one proposition that has been reasonably well supported by research data,” a state of affairs that, according to Phillips (1973), has not changed much in more recent times. In this book, we shall be concentrating on just one of a multitude of possible reasons why things have gone wrong. From the methodological point of view, however, we think that it is important. It may be summarized thus: In our adaptations of statistical methods from the natural or engineering sciences, we have tended to forget an important difference, namely, that our sample space is cognitive and that as statisticians we are cognate participants. Knowledge of human affairs is neither established nor disseminated in a vacuum.
Some of the fringe benefits of experience are memory and the opportunity to indulge that faculty in writing the preface to a book of this nature. A few decades ago the hope was that social scientists would, by their mastery of statistical methodology, succeed in laying bare the facts and forces that drive social systems. The economists set the pace, for these were the years of the great macroeconometric models – in the later years of their evolution, gargantuan structures with hundreds of equations tended by a small army of priests and acolytes. We had high ambitions in those days, even if reality all too often had to be uncomfortably bought off. Thus the aim was to produce an explanation (in, e.g., a regression context) in which all systematic influences were to be accounted for and the residual to be unstructured white noise; but if the latter were not immediately available, one simply transformed the equation to get it, invoking ritual incantations of habit formation, partial adjustment mechanisms, and the like. Later it was held to be unrealistic to attempt to capture every possible systematic influence. Perhaps serial correlation or heteroscedasticity in our residuals might after all be allowable if one recognized it, hopefully could justify it, and certainly could design one's regression methodology to cope with it.
Throughout this study we have stressed the idea that the statistician, whether he (or she) is a dominant player or one insignificant player among many, is part of the system that he is studying; that his is a view from within. In this, the last chapter, we shall review some of the implications of this principle as we have established them in previous chapters and take the opportunity to add a little here and there in the interests of rounding off. Section 8.2 is concerned with the statistical problems arising in the identification of equilibrium structures. The existence of phases of disequilibrium, perhaps during learning or temporary breakdowns of cooperative behavior, will result in argument instability, which implies that the specified disturbances (in, say, a regression context) will not exhibit invariant or stable behavior. The most that one can then hope for is the applicability of conclusions based upon large-sample theory. In such circumstances, the statistical profession's passion for small-sample (or “exact”) results may be misplaced. Section 8.3 looks at the problem of structure, reviewing the conclusions that we have arrived at concerning the existence of a stable invariant structure and the pitfalls of a naive positivist methodology in hoping to identify such a structure if it does exist. Section 8.4 turns to the problem of observer-dependent systems, both in the structural aspect (as in rational-expectations models with heterogeneous information) and in the dominant-player mode, the characteristic modus operandi of the professional statistician.
Tests for misspecification play an important role in the evaluation of econometric models, and the need for such tests has been stressed by several authors. For example, Hendry (1980, p. 403) suggests that the three golden rules of econometrics are 'test, test and test,' and Malinvaud (1981, p. 1370) argues that the research aims of econometricians should include the special requirement that more emphasis be put on the testing of the specification. One purpose of this chapter is to consider some basic issues in testing for specification error. These issues, which are discussed in Sections 1.2 and 1.3, include the arguments for and against employing an alternative specification when testing the model under scrutiny, the interpretation of test statistics, and their potential value in respecification following the rejection of a tentatively entertained model.
Some of the available statistical procedures and the relationships between them are also discussed at a rather general level in preparation for the detailed examination of tests of particular interest to economists and econometricians which appears in subsequent chapters. A large part of this discussion is presented in Section 1.4, which focuses on the likelihood ratio, Wald and Lagrange multiplier tests (sometimes referred to collectively as the trinity of classical tests). The emphasis on these three tests is justified in part by the fact that they are widely used in econometrics. Also a detailed discussion of these procedures makes it simpler to examine related tests that are potentially useful, but less familiar to applied workers; several such procedures are considered in Section 1.5.
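For reference, and ahead of the detailed treatment, the three statistics can be written in a generic notation of our own. Suppose the null hypothesis imposes q restrictions g(θ) = 0 on a parameter vector θ with log-likelihood ℓ(θ), score s(θ) = ∂ℓ/∂θ and information matrix I(θ), and let θ̂ and θ̃ denote the unrestricted and restricted maximum likelihood estimators. Then

LR = 2[ℓ(θ̂) − ℓ(θ̃)],
W = g(θ̂)′[G(θ̂)I(θ̂)⁻¹G(θ̂)′]⁻¹g(θ̂),  where G(θ) = ∂g(θ)/∂θ′,
LM = s(θ̃)′I(θ̃)⁻¹s(θ̃),

each of which is asymptotically distributed as χ²(q) under the null hypothesis, with I(·) replaced by a consistent estimate in practice.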
The LM principle deserves special consideration when discussing tests for misspecification because, unlike the asymptotically equivalent W and LR methods, it does not require the estimation of the more complex alternative in which the original model of interest has been embedded. The purpose of this chapter is to provide a detailed analysis of LM tests in the context of detecting specification errors and deciding how to respond to significant evidence of model inadequacy.
In Section 3.2, it is shown that several alternatives can lead to the same value of the LM statistic for a given null specification. Consequently, only a class of alternative hypotheses need be selected in order to determine the form of the LM statistic. The members of such a class will, however, correspond to quite different types of specification. It is, therefore, natural to ask whether the LM test based upon the selection of the correct class of alternative hypotheses is inferior to the LR and W tests when the latter tests are derived using the correct member of this class. This issue is examined, and some Monte Carlo evidence on the relative performance of the LM test is summarized.
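As an illustration of the LM principle at work, the following sketch (ours, not taken from the text; the simulated data and the familiar n·R² auxiliary-regression form of the statistic are chosen purely for exposition) computes an LM test for serially correlated errors in a linear regression. The same statistic is obtained whether the alternative is specified as AR(1) or MA(1) errors, which is exactly the point that only a class of alternatives needs to be selected.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=n)   # generated under the null

# Step 1: estimate the null model by OLS and keep the residuals.
e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

# Step 2: auxiliary regression of e_t on the regressors and e_{t-1} (e_0 set to 0).
Z = np.column_stack([X, np.concatenate(([0.0], e[:-1]))])
e_fit = Z @ np.linalg.lstsq(Z, e, rcond=None)[0]
r2 = 1.0 - np.sum((e - e_fit) ** 2) / np.sum(e ** 2)

# Step 3: LM = n * R^2, asymptotically chi-squared(1) under the null of serially
# uncorrelated errors, against either an AR(1) or an MA(1) alternative.
lm = n * r2
print(f"LM = {lm:.3f}, p-value = {stats.chi2.sf(lm, df=1):.3f}")

Only the null model is estimated; the alternative enters solely through the extra regressor in the auxiliary regression, which is what makes the LM approach attractive for misspecification testing.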
Models in which the dependent variable is qualitative or has its range limited in some way are being increasingly used in applied work. Amemiya (1981, 1984) and Maddala (1983) have given detailed accounts of estimation techniques for such models, but unfortunately few results on misspecification tests were available when these authors published their works. This chapter presents checks of model adequacy for some well-known and widely used specifications: the logit and probit models of binary choice and the Tobit (censored regression) model.
Classical likelihood-based procedures and Hausman-type tests are considered in the following sections. Section 6.2 deals with tests for binary choice models. Checks for limited dependent variable (LDV) models are discussed in Section 6.3. In both of these sections, the effects of certain misspecifications are summarized and the corresponding test statistics are described. The test procedures considered are all based upon asymptotic theory and, where possible, reference is made to Monte Carlo evidence on small sample behaviour. Section 6.4 contains some concluding remarks.
Chapters 1 and 3 were devoted to fairly general discussions of test procedures. This chapter is concerned with the application of these methods to the problem of evaluating the adequacy of regression models. The number of possible misspecifications that could be made when formulating a regression model is very large, but most of those usually considered fall into one of the following categories:
(i) omitted variables (OV)
(ii) incorrect functional form
(iii) autocorrelation
(iv) heteroscedasticity
(v) lack of regression parameter constancy
(vi) non-normality of disturbances
(vii) invalid assumptions about the exogeneity of one or more regressors.
Although much of this chapter is taken up with a detailed account of parametric tests designed for alternative hypotheses included in (i) to (vii), some results on pure significance checks for regression models are also considered. Indeed, one point made is that such general checks can often be interpreted as classical tests for some specific specification error. Appropriate tests are discussed for each of the problems considered, and where possible a unifying theme (often based upon the LM principle) is provided, along with a summary of relevant Monte Carlo evidence.
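As an example of such a general check, the following sketch (our own, not reproduced from the text) carries out a RESET-type variable-addition test against incorrect functional form by adding powers of the fitted values to the regression and testing their joint significance.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 150
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 0.5 * x + rng.normal(size=n)   # generated under the maintained linear model

def ols_resid(M, v):
    """Return OLS residuals from regressing v on the columns of M."""
    return v - M @ np.linalg.lstsq(M, v, rcond=None)[0]

# Fit the maintained model and form powers of its fitted values.
e = ols_resid(X, y)
yhat = y - e
X_aug = np.column_stack([X, yhat ** 2, yhat ** 3])

# F-test of the two added terms; rejection signals functional-form misspecification.
e_aug = ols_resid(X_aug, y)
rss_r, rss_u = e @ e, e_aug @ e_aug
q, df_u = 2, n - X_aug.shape[1]
F = ((rss_r - rss_u) / q) / (rss_u / df_u)
print(f"RESET-type F = {F:.3f}, p-value = {stats.f.sf(F, q, df_u):.3f}")

Although motivated as a pure significance check, the added-variable form makes clear that the procedure can be read as a classical test against a specific augmented alternative.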
Chapter 1 contained an examination of the relationships between the likelihood ratio (LR), Wald (W) and Lagrange multiplier (LM) tests in the context of fairly general statistical models. There is, however, a special case of econometric interest which merits attention because it leads to the systematic numerical inequality
W ≥ LR ≥ LM (1.1)
among the sample values of the test statistics.
This remarkable inequality is satisfied when testing linear restrictions on the parameters of a classical regression model with normally distributed errors, and its implications clearly deserve careful consideration. For example, if W, LR and LM are all compared to a common asymptotically valid critical value, then there is the possibility of conflict among the outcomes of the three asymptotically equivalent tests.
This chapter proceeds as follows. In Section 2.2, it is shown that the three classical procedures lead to the same test statistic if the covariance matrix of the disturbances is known. While such knowledge is rarely available, the analysis of this simple case provides a useful stepping stone towards more realistic models and also highlights the effects of having to estimate the parameters of the error covariance matrix. These effects are examined in Section 2.3 and the inequality (1.1) is established, with the case of independent and homoscedastic disturbances being singled out for special comment. The implications of the inequality are discussed in Section 2.4.
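A quick numerical illustration (a sketch of our own; the simulated data are arbitrary) uses the familiar residual-sum-of-squares forms of the statistics for testing exclusion restrictions in the normal linear model, W = n(RSS_r − RSS_u)/RSS_u, LR = n log(RSS_r/RSS_u) and LM = n(RSS_r − RSS_u)/RSS_r, so that the inequality follows from x ≥ log(1 + x) ≥ x/(1 + x) for x ≥ 0.

import numpy as np

rng = np.random.default_rng(2)
n = 120
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
y = X @ np.array([1.0, 0.8, 0.2, 0.0]) + rng.normal(size=n)

def rss(M, v):
    """Residual sum of squares from an OLS regression of v on M."""
    resid = v - M @ np.linalg.lstsq(M, v, rcond=None)[0]
    return resid @ resid

rss_u = rss(X, y)           # unrestricted model
rss_r = rss(X[:, :2], y)    # restricted model: last two coefficients set to zero

W = n * (rss_r - rss_u) / rss_u
LR = n * np.log(rss_r / rss_u)
LM = n * (rss_r - rss_u) / rss_r
print(f"W = {W:.3f} >= LR = {LR:.3f} >= LM = {LM:.3f}")
assert W >= LR >= LM        # holds in every sample, not merely asymptotically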
Hausman (1983) has suggested that the simultaneous equation model is perhaps the most remarkable development in econometrics. In this chapter we shall be concerned with the problem of testing the specification of such models, which can be written in the standard matrix form
YB + ZΓ = U (1.1)
where Y is the n by m matrix of endogenous variables, Z is the n by k matrix of predetermined variables, U is the n by m matrix of stochastic disturbances, B is the m by m matrix of structural coefficients of endogenous variables and Γ is the k by m matrix of structural coefficients of predetermined variables.
The special case in which B is a diagonal matrix and (1.1) represents a system of seemingly unrelated regression equations (SURE) will not be given separate consideration. The corresponding simplifications of tests derived for the general case are straightforward. The tests discussed below do not, however, include a check of the assumption that the disturbances of the system are contemporaneously uncorrelated. This assumption may be of interest in the context of SURE models, and Breusch and Pagan (1980, p. 247) derive an appropriate LM statistic.
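For reference (a sketch of the standard result, stated in the notation used here rather than quoted from Breusch and Pagan), their statistic is based on the contemporaneous correlations r_ij between the residuals of equations i and j:

LM = n Σ_{i=2}^{m} Σ_{j=1}^{i−1} r_ij²,

which is asymptotically distributed as χ² with m(m − 1)/2 degrees of freedom under the null hypothesis that the disturbance covariance matrix is diagonal.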
In a duopoly model where firms have private information about an uncertain linear demand, it is shown that, if the goods are substitutes, to share (not to share) information is a dominant strategy for each firm in Bertrand (Cournot) competition. If the goods are complements the result is reversed. Furthermore, the following welfare results are obtained:
(i) With substitutes, in Cournot equilibrium the market outcome is never optimal with respect to information sharing, but it may be optimal in Bertrand competition if the products are good substitutes. With complements, the market outcome is always optimal.
(ii) Bertrand competition is more efficient than Cournot competition.
(iii) The private value of information to the firms is always positive, but the social value of information is positive in Cournot and negative in Bertrand competition.
Journal of Economic Literature Classification Numbers: 022, 026, 611.
Consider a symmetric differentiated duopoly model in which firms have private market data about the uncertain demand. We analyze two types of duopoly information equilibrium, Cournot and Bertrand, which emerge, respectively, from quantity and price competition, and show that the incentives for information sharing and its welfare consequences depend crucially on the type of competition, the nature of the goods (substitutes or complements), and the degree of product differentiation.
The demand structure is linear and symmetric, and allows the goods to be substitutes, independent or complements.
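A standard parameterization consistent with this description (the notation is ours and is meant only to fix ideas) writes the inverse demands as

p_i = α − β q_i − γ q_j,  i, j = 1, 2,  i ≠ j,  with α > 0, β > 0 and β² > γ²,

so that the goods are substitutes, independent or complements according as γ is positive, zero or negative, and γ/β indexes the degree of product differentiation.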
Cournot's genius must give new mental activity to everyone who passes through his hands.
– Alfred Marshall, Preface to the First Edition of Principles of Economics
The analysis of strategic choice by noncooperative agents has come a long way since Augustin Cournot first developed a model of such behavior in his 1838 work, Researches into the Mathematical Principles of the Theory of Wealth. The current volume is a celebration of the publication of Cournot's model of multiagent behavior and an examination of its relevance and importance to economic theory and analysis 150 years after its first appearance. The volume encompasses both the old and the new: it includes contributions by Cournot, Bertrand, and Nash as well as recent papers that focus on the properties and uses of Cournot's model of competition among the few. These papers reflect a revival of interest in Cournot's model due largely to increased emphasis by economists on capturing elements of imperfect competition and strategic behavior (for a sample of other recent articles, see the Extended Bibliography at the end of this chapter). This expansion of interest is not limited to microeconomics; recent work in macroeconomics has also started to feature imperfect competition as an integral part of the analysis. The reason for this renaissance is clear: Cournot developed the basic model of noncooperative behavior by agents, and it is to variations on this model that economists turn when imperfect competition is analyzed.
Von Neumann and Morgenstern have developed a very fruitful theory of two-person zero-sum games in their book Theory of Games and Economic Behavior. This book also contains a theory of n-person games of a type which we would call cooperative. This theory is based on an analysis of the interrelationships of the various coalitions which can be formed by the players of the game.
Our theory, in contradistinction, is based on the absence of coalitions in that it is assumed that each participant acts independently, without collaboration or communication with any of the others.
The notion of an equilibrium point is the basic ingredient in our theory. This notion yields a generalization of the concept of the solution of a two-person zero-sum game. It turns out that the set of equilibrium points of a two-person zero-sum game is simply the set of all pairs of opposing “good strategies.”
In the immediately following sections we shall define equilibrium points and prove that a finite non-cooperative game always has at least one equilibrium point. We shall also introduce the notions of solvability and strong solvability of a non-cooperative game and prove a theorem on the geometrical structure of the set of equilibrium points of a solvable game.
As an example of the application of our theory we include a solution of a simplified three-person poker game.
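For reference (stated here in modern notation rather than in the paper's own), an n-tuple of strategies s = (s_1, …, s_n) with payoff functions u_i is an equilibrium point if no player can gain by a unilateral deviation:

u_i(s_1, …, s_n) ≥ u_i(s_1, …, s_{i−1}, t_i, s_{i+1}, …, s_n)  for every player i and every strategy t_i available to i.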
Fifty years ago, in an address to the Cournot Memorial session of the Econometric Society, A. J. Nichol observed that if ever there was an apt illustration of Carnegie's dictum that “It does not pay to pioneer,” then Cournot's life and work would be it. His work was essentially ignored (especially by his countrymen) for many years. What survives in most economists' minds today is Cournot's model of duopoly. And this, too, if one consults the treatment in most current microeconomics texts, seems to linger on as an image of the past, a traditional topic for inclusion in a chapter on imperfect competition, sandwiched somewhere between monopoly and the bibliography, or neatly tucked away as an example of an application of game theory. So why dust off this musty topic now?
A peek at the Extended Bibliography and the papers in this volume should make the reason for reconsidering Cournot clear: There has been a veritable explosion of Cournot-based models of strategic behavior over the last two decades, and the end is not in sight. In recognition of this, the present volume is a celebration of the publication of Augustin Cournot's model of noncooperative behavior and an examination of its relevance and importance to economic theory and analysis 150 years after its first appearance. The introduction examines the Cournot model and its relationship to many of the classical and recent analyses of market behavior.