Econometrics was born in the 1920s, conceived as the embodiment of a noble dream. Today, with the hindsight of more than half a century, this dream gives way to a scenario:
"Economics deals with complex interactive phenomena. It is impossible to study quantitative relationships between important variables (say, taxation vs. savings) without reference to the context. Nor is it possible to perform experiments or make direct observations that would isolate such relationships or at least diminish the noise level under which their effects can be observed. We possess, however, innumerable time series engendered by the very economic forces we wish to uncover. By constructing models for time series we may hope to gain indirect access to the desired quantitative economic relationships because these are intrinsic in the models and therefore should be recoverable from the structure and parameters of the models. Economic truths are immutable, at least in the short run, and thus there is no reason, in principle, why accurate models cannot be deduced from the available data, in spite of disturbances, errors, irrationalities, expectations, and other random influences which contaminate economic time series."
This scenario forces the conclusion that the study of economics is a system-theoretic endeavor. Economics as a science must be built on methods that are effective in the exploration and explanation of interactive phenomena. The dream of econometrics has evolved into a technical problem in system theory.
But, unfortunately, the actual evolution of econometrics took a rather different route. System theory was not even in sight in the 1920s and 1930s, and econometrics soon came to be dominated by statistics. The aspiration of Haavelmo (1944) to give a solid foundation to econometrics by dogmatic application of probability theory has not been fulfilled (in the writer’s opinion), no doubt because probability theory has nothing to say about the underlying system-theoretic problems.
This chapter concerns a line of research that begins with William Novshek's remarkable Ph.D. dissertation, and then extends his result on the existence of Cournot partial equilibrium with entry to the case of general economic equilibrium. I plan to write about the purpose of this research and the recent discoveries that have been made. In addition, I will remark upon the important work that lies ahead. I will give only a broad outline, leaving out proofs and details. I will try to provide some historical perspective, but will make no attempt to touch as many bases as might be expected in a survey.
First, let me explain why I feel that the problems under consideration are central to the theory of value. Put briefly, the lesson of the analysis of perfect competition that has been carried out over the past 25 to 30 years is this: It is possible to formulate a rigorous model of economic equilibrium for economies in which: (a) markets exist for all commodities, and (b) every agent in sight takes prices as given. Furthermore, under some rather general conditions, each economy has at least one equilibrium state, and there is a close relationship between the set of Pareto optima for economies and the set of equilibria. To me, this means that a logical next step to take in order to deepen our understanding of the workings of perfect competition is to address the following two questions. Under conditions of laissez faire, and assuming that agents are intelligent strategists: (a) What markets will exist, and (b) When will agents behave as if prices are beyond their control? The research on which I will report represents an attempt to shed light on the second question.
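A familiar partial-equilibrium calculation, stated here only as a benchmark and not as part of the results surveyed (the notation is mine, not the chapter's), indicates when price taking is approximately rational. If firm i chooses output q_i facing inverse demand p(Q), with Q = \sum_j q_j, and cost c_i, its Cournot first-order condition is
\[
p(Q) + q_i\,p'(Q) = c_i'(q_i),
\]
so that the relative markup satisfies
\[
\frac{p(Q) - c_i'(q_i)}{p(Q)} = \frac{q_i}{Q}\cdot\frac{1}{\varepsilon(Q)}, \qquad \varepsilon(Q) = -\,\frac{p(Q)}{Q\,p'(Q)},
\]
which vanishes as the firm's market share q_i/Q becomes small. The research described here asks when, and in what sense, equilibrium behavior approaches this price-taking limit.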
In repeated games, crucial points in the modeling deal with information: To what extent do the players know exactly what game they are playing, including the others' utilities, and to what extent are they informed of all pure strategy choices after every stage of the game?
When such full information is not assumed, one is led to the so-called games with incomplete information:
There are several possible games (i.e., states of nature, or types of the several players), one of which is chosen in the beginning according to some known probability distribution. Players may have some private partial information about this choice, for instance, their own type.
After every stage of the game, every player gets some signal, which may depend on the state of nature, on the actions of all players at that stage, and on the identity of the player receiving it.
I will deal mainly with the two-player zero-sum case. The zero-sum case is needed, among other things, to establish the bounds that appear in the general case, such as the minimum amount an individually rational player would accept or, more generally, the characteristic function.
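To fix ideas, the simplest version of the setup described above, with incomplete information on one side, can be written as follows (the notation is illustrative and not taken from the chapter). A state k is drawn from a finite set K according to a commonly known prior p in \Delta(K); player 1 is told k, player 2 is not; at each stage the players choose actions whose stage payoff to player 1 is given by the matrix G^k, and both players then observe the actions played. For this benchmark, Aumann and Maschler's classical result gives the value of the infinitely repeated game as
\[
v_\infty(p) = \operatorname{cav} u(p), \qquad u(p) = \text{value of the one-shot game with payoff matrix } \sum_{k\in K} p_k\,G^k,
\]
where cav denotes the smallest concave function majorizing u, the concavification reflecting the informed player's optimal partial use of his private information.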
It is well known that the modern versions of Walrasian economics (Debreu (1959); Arrow and Hahn (1971)) leave unexplained a key ingredient of the theory, namely the hypothesis that prices are quoted and taken as given by economic agents. In this exposition we shall attempt, via the extensive analysis of two models, to give an account of the efforts of the last decade to develop the classical work of Cournot (1838) into a full-fledged general equilibrium theory that provides an endogenous explanation of price taking (we will say much less about price quoting). Specific references will be given as we go along. For a gathering of relevant articles see the issue of the Journal of Economic Theory (1980) on noncooperative approaches to the theory of perfect competition.
The starting point of the research is the (informal) hypothesis that economic agents interact noncooperatively through given institutions. Because these institutions are essential, the level of institutional parsimony attained by Walrasian theory cannot be expected here. Since all-encompassing models are consequently bound to be cumbersome, the research has proceeded by focusing on particular, prototypical ones. This we shall do also. We will review two models. The first (Section 2), in the line initiated by Shubik (1973) and Shapley and Shubik (1977), is a model of exchange where all agents are treated symmetrically, that is, have, in principle, the same strategic position. The second (Section 3), in the line initiated by Gabszewicz and Vial (1972), Hart (1974a), and Novshek and Sonnenschein (1978), is a model of firms that face a sector of passively adjusting consumers but interact strategically among themselves.
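To make the first model concrete, one common specification (one among several in the papers cited; the notation is my own) is the Shapley-Shubik trading-post game. For each commodity \ell there is a trading post at which agent i offers a quantity q_{i\ell} of the good and bids an amount b_{i\ell} of the money commodity; the price emerges from the bids and offers themselves,
\[
p_\ell = \frac{\sum_i b_{i\ell}}{\sum_i q_{i\ell}},
\]
and agent i receives b_{i\ell}/p_\ell units of good \ell in exchange for the p_\ell q_{i\ell} units of money implicitly surrendered. Prices are thus determined inside the model by the agents' strategies, which is precisely the sense in which the institution does endogenously what the Walrasian auctioneer does by assumption.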
In this chapter we shall review two approaches to building macroeconomic models and their potential use in medium-term planning for a socialist economy. The first approach concerns a system of models built by Czerwiński and associates at the Academy of Economics, Poznań, based on a Leontief-type dynamic multisectoral model extended by the use of demographic and econometric models of consumers' expenditures. It is aimed at simulating the planning procedures for building a 5-year plan. The second approach deals with the macroeconometric W models built by Welfe and associates, exemplified by the medium-size W-3 model of the Polish economy, which optionally includes a static input-output equation system. It is aimed not only at explaining the real flows in the medium term but also at simulating alternative economic policies and revealing the imbalances of different sectors of the national economy.
A system of simulation models for planning
At the Institute of Economic Cybernetics of the Academy of Economics in Poznań, an attempt has been made to simulate the medium-term planning procedure at the central level of the economy by means of a series of interrelated models. The basic idea of the procedure is as follows.
A plan is understood to be a set of consistent paths of growth of selected variables describing the state of the economy, compatible with some accepted targets. Variables are divided into three groups: target, autonomous, and outcome variables. Autonomous paths of growth of the population and of its social and professional components are assumed. Growth of incomes in the distinguished groups of the population (wages, salaries, pensions) within the planning horizon is postulated, and hence the path of growth of the total income of the population, one of the target variables, is derived. This path is split into the path of growth of expenditures on selected groups of commodities and the path of savings. These paths are converted into the paths of growth of individual (consumption) demand for the output of the productive sectors.
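The static core of this conversion step can be sketched briefly (the notation is illustrative and not taken from the original models). If A is the matrix of input coefficients and f_t the vector of final demand in year t obtained from the postulated expenditure paths, the required gross outputs of the productive sectors are
\[
x_t = (I - A)^{-1} f_t ,
\]
the dynamic multisectoral model adding to f_t the investment and other components needed to sustain the planned growth of productive capacities.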
The purpose of this chapter is to study the macroeconomic policy tradeoff between output and price stability confronting individual countries in an international economy linked by trade flows and a managed exchange-rate system. A tradeoff between output and price stability arises because of the tendency for wage and price decisions to be staggered over time. This prevents general price and wage levels in each country from adjusting quickly to changes in nominal variables - such as the money supply or the exchange rate - and creates an effect of these variables on real output and trade flows. Within each country, however, these real effects are tempered by expectations of future aggregate demand policy, both at home and abroad, and by exchange-rate policies generally. In this chapter we use the rational expectations approach to describe these expectations effects.
The policy implications of staggered wage setting and rational expectations have been studied in a closed-economy context by Taylor (1980a) and in a small open-economy context by Dornbusch (1980, Ch. 9). This chapter represents a multicountry extension of these earlier studies. Its main aim is the development of an applied econometric framework for evaluating aggregate demand policies and exchange-rate rules under rational expectations.
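The closed-economy starting point can be recalled briefly; the following two-period contract structure is in the spirit of Taylor (1980a), with notation of my own choosing rather than that of this chapter. A contract wage x_t, in force for periods t and t+1, is set with reference to the overlapping contracts and to expected demand conditions,
\[
x_t = \tfrac{1}{2}\,x_{t-1} + \tfrac{1}{2}\,\hat{x}_{t+1} + \tfrac{\gamma}{2}\left(y_t + \hat{y}_{t+1}\right) + \varepsilon_t ,
\qquad
w_t = \tfrac{1}{2}\,(x_t + x_{t-1}),
\]
where hats denote expectations conditional on information available when the contract is signed, y measures excess demand, and w_t is the average wage. Because half of the contracts are predetermined at any date, wages and prices adjust only gradually to nominal disturbances, while expectations of future policy enter directly into current wage decisions; the multicountry framework of this chapter extends this structure across economies linked by trade and exchange rates.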
Within a multicountry framework, it is possible in principle to determine an optimal set of policy rules for the world economy, given a social welfare function that depends on the macroeconomic goals of price and output stability in each country. The specific form of the optimal policy rules will of course be conditioned on parameters that need to be estimated empirically. Under certain conditions, the optimal rules will entail an exchange-rate regime in which each country makes its own decision regarding monetary accommodation while at the same time maintaining external stability. That is, each country chooses a point on its domestic tradeoff between output and price stability according to its own preferences and economic structure and independently of the preferences and economic structures abroad. But under other conditions, some coordination of aggregate demand policies is needed, because it is then difficult to isolate each country from the preferences and economic structures of others.
In many econometric inference problems there are a number of alternative statistical procedures available, all having the same asymptotic properties. Because the exact distributions are unknown, choice among the alternatives often is made on the basis of computational convenience. Recent work in theoretical statistics has suggested that second-order asymptotic approximations can lead to a more satisfactory basis for choice.
In this chapter I shall discuss some implications of this statistical theory for hypothesis testing in econometric models. My comments are based on current research in progress with my co-workers Chris Cavanagh, Larry Jones, Dennis Sheehan, and Darrell Turkington. This work, in turn, borrows much from earlier studies by the statisticians Chibisov, Efron, and Pfanzagl and the econometricians Durbin, Phillips, and Sargan. Although my comments will concentrate on the problem of hypothesis testing, there are, of course, parallel theories for point and interval estimation.
The basic idea underlying second-order asymptotic comparisons of tests is simple. Tests that are asymptotically equivalent often differ in finite samples. Although the exact sampling distributions may be difficult to derive, the first few terms of an Edgeworth-type series expansion for the distribution functions usually are available. The tests can then be compared using the Edgeworth approximations. Of course, it is not very interesting to compare tests unless they have the same significance level. Therefore, we first approximate the distributions of the test statistics under the null hypothesis and, based on that approximation, modify the tests so that they have the same probability of a type I error. Then we use another Edgeworth series, derived under the alternative hypothesis, to approximate the power functions of the modified tests.
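Concretely, for a test statistic T_n that is asymptotically standard normal, the expansions in question take the generic form
\[
\Pr\{T_n \le x\} = \Phi(x) + \frac{p_1(x)}{\sqrt{n}}\,\phi(x) + \frac{p_2(x)}{n}\,\phi(x) + o(n^{-1}),
\]
where \Phi and \phi are the standard normal distribution function and density and p_1, p_2 are polynomials whose coefficients depend on the model and the test (this is the generic form, not a formula taken from the chapter). Size correction amounts to adjusting critical values using this expansion evaluated under the null hypothesis; power comparisons then use the corresponding expansion derived under the alternative hypothesis.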
It is generally accepted that the modern field of industrial organization began with the work of Edward Mason and others at Harvard in the 1930s. Lacking faith in the ability of available price theory to explain important aspects of industrial behavior, Mason called for detailed case studies of a wide variety of industries. It was hoped that relatively simple generalizations useful for antitrust policy, among other applications, would emerge from a sufficient number of careful studies. Perhaps because such generalizations were not actually uncovered very rapidly by case analysis, or perhaps because of easier access to data and computers, the case study approach was generally abandoned by the early 1960s. Most students of industrial organization followed Joe Bain (1951, 1956) and turned instead to cross-section studies, electing “to treat much of the rich detail as random noise, and to evaluate hypotheses by statistical tests of an interfirm or interindustry nature.” The need to describe each firm or industry in the sample by a small number of more or less readily available measures effectively limited consideration to relatively simple hypotheses not involving “the rich detail” so important to students of particular industries. Thus, the standard regression equation in this literature specified some measure of profitability as a linear function of a concentration ratio and, usually, other similar variables. Bain's (1959, 1968) text, which dominated the U.S. market during the 1960s, similarly focused on simply-stated qualitative generalizations and contained almost no formal theory.
Since the seminal contributions of Clower (1965) and Leijonhufvud (1968), there has been a considerable renewal of interest in non-Walrasian economics as a way to provide rigorous microfoundations for macroeconomics. The basic idea behind all the models in this area is that prices may not clear the markets at all times, and thus that adjustments can, at least partially, be carried out through quantities. Such a theme is evidently at the heart of Keynesian economics, as Clower and Leijonhufvud pointed out. Further progress in this domain has largely been made along two lines.
The first is the construction of general microeconomic models abandoning the assumption of competitive equilibrium on all markets. A first category of these models assumes some degree of price rigidity and studies the associated quantity adjustments: Glustoff (1968); Dreze (1975); Benassy (1975a, 1975b, 1977); Younès (1975); Grandmont-Laroque (1976); Malinvaud-Younès (1978); Bohm-Levine (1979); and Heller-Starr (1979). A second category addresses the problem of noncompetitive price formation: Negishi (1961, 1972); Benassy (1976); and Hahn (1978). As we shall see in Section 1.5, these two types of models can actually be synthesized.
The second line of research consists of constructing specific aggregated models to study macroeconomic themes such as unemployment or inflation: Solow-Stiglitz (1968); Younès (1970); Barro-Grossman (1971, 1974, 1976); Grossman (1971); Benassy (1973, 1974, 1978a, 1978b); Malinvaud (1977); Negishi (1978, 1979); Hildenbrand-Hildenbrand (1978); Dixit (1978); and Muellbauer-Portes (1978).
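A common building block of the fixed-price models cited above is the short-side rule, which can be stated in one line (the notation is mine): at a price \bar{p} that does not clear the market, realized transactions are
\[
x = \min\{D(\bar{p}),\, S(\bar{p})\},
\]
so that the short side of the market is served and the long side is rationed; it is through such quantity signals, rather than through price movements, that the adjustment described above is carried out.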
The notion that competitive markets facilitate the efficient production and allocation of resources in a decentralized manner, that is, without a complete exchange of information among economic agents, is an old one, going back at least to the “socialist controversy” and perhaps to Adam Smith. Phrasing this “conventional wisdom” in this way emphasizes the premise that economic agents come to markets with diverse information that is not publicly available, or is available only at substantial cost. The mention of information implies the prior existence of uncertainty about something, whether that uncertainty be probabilistic or not. I suppose it was to be expected that a rigorous and systematic examination of the conventional wisdom had to await the elaboration of a systematic and rigorous theory of competitive equilibrium under certainty, a process that culminated in the work of Arrow and Debreu.
Although the Arrow-Debreu model was originally put forward for the case of certainty, an ingenious device enabled the theory to be reinterpreted to cover the case of a market in which all relevant future information was to be publicly available, and one could extend to this case, in a natural way, the theorems on the existence and optimality of competitive equilibrium in the case of certainty. Subsequent research, however, has shown that in many cases the presence of private information can prevent a competitive equilibrium from being efficient (given the structure of information), and can even prevent the existence of an equilibrium. This chapter summarizes recent results on two topics in this area: (1) rational expectations equilibrium and the information revealed by prices, and (2) principal-agent and other profit-sharing relationships.
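For the first topic, the object of study can be stated informally as follows (a standard formulation, with notation of my own choosing, not a definition taken from the chapter). Each agent i observes a private signal s_i about the underlying state s; a rational expectations equilibrium is a price function p(\cdot) such that, when every agent chooses a demand that is optimal given the prices p(s) and the information conveyed jointly by his own signal and by the observed price, markets clear in every state:
\[
\sum_i x_i\bigl(p(s),\ \mathcal{I}_i(s)\bigr) = \sum_i \omega_i \quad \text{for every state } s, \qquad \mathcal{I}_i(s) = \{\,s_i,\ p(s)\,\},
\]
where \omega_i denotes agent i's endowment. The equilibrium is fully revealing when p(\cdot) is invertible in s, in which case prices alone convey all of the agents' pooled information; the results summarized in this chapter concern how much information equilibrium prices in fact reveal and when such an equilibrium exists at all.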
Many econometric models are based on sets of simultaneous structural equations. Although there are methods for estimating the parameters of the entire system, procedures for single equations are relevant, because often only one or a small number of equations is of interest and because such procedures are much easier to carry out than full-system methods. Recently, the properties of these single-equation methods have been investigated extensively. The purpose of this chapter is to review some of these studies. Because space is limited, we shall focus our attention on the two-stage least-squares (TSLS) estimator and the limited-information maximum-likelihood (LIML) estimator, which is also known as the least-variance-ratio estimator. Much of the work reported here has involved my associates Takamitsu Sawa, Naoto Kunitomo, and Kimio Morimune. The emphasis is on comparison of the TSLS and LIML estimators based on finite-sample distributions. We shall also comment on the higher-order efficiency of the LIML estimator and on some improvements to it.
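For reference, the two estimators can be characterized in the textbook way for a single structural equation y = Y\beta + Z_1\gamma + u, where Y collects the included endogenous regressors, Z_1 the included exogenous variables, and Z the full instrument set containing Z_1 (the notation is my own). Writing X = [\,Y \;\; Z_1\,], P_A = A(A'A)^{-1}A', and M_A = I - P_A,
\[
\hat{\delta}_{\mathrm{TSLS}} = (X' P_Z X)^{-1} X' P_Z\, y ,
\]
while the LIML estimator of \beta minimizes the variance ratio
\[
\lambda(\beta) = \frac{(y - Y\beta)'\, M_{Z_1}\, (y - Y\beta)}{(y - Y\beta)'\, M_{Z}\, (y - Y\beta)} ,
\]
which is the sense in which it is the least-variance-ratio estimator; \gamma is then recovered by regressing y - Y\hat{\beta} on Z_1. The finite-sample comparisons discussed in this chapter are between these two rules.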
In the late 1960s and throughout the 1970s an important development occurred for empirical economists: the United States government and other agencies adopted the tool of experimentation to investigate important social and economic questions. To date, experiments have been conducted in income maintenance programs, time-of-day electricity pricing, housing allowance subsidies, medical reimbursement, and numerous other areas. These experiments have involved major expenditures of economic resources. For instance, the cost of the four negative income tax (NIT) experiments has been in excess of $100 million, and the time-of-day (TOD) price experiments have cost about $25 million. To some extent these experiments have attempted to replicate the methods developed over the past 60 years and applied with outstanding success in many biological and physical sciences. R. A. Fisher's influential work on agricultural experiments, along with the work of many other statisticians, has firmly established the usefulness of experimentation as a tool of scientific inference.
Although I do not intend to judge the overall usefulness of the many economics experiments to date, I think that a fair statement is that they have enjoyed mixed success. Much knowledge has been gained from these experiments, and many low-income individuals have benefited in the course of them. On the other hand, they have not settled the questions that they were designed to answer as definitely as have experiments in the other sciences. In this chapter I shall attempt to investigate the reasons for this lack of definite results. One problem that has arisen is that many of these experiments have tried to answer too many questions. That is, the experimental design has contained too many elements for the given size of the budget and for the precision with which econometric models can be estimated.
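A rough calculation makes the budget point concrete (the numbers and notation are purely illustrative and not drawn from the experiments themselves). With a budget B and a cost c per participant, the total sample is n = B/c; if the design crosses K treatment cells (guarantee levels, tax rates, enrollment durations, and so on), each cell receives roughly n/K participants, and the standard error of a cell mean is on the order of
\[
\operatorname{se} \approx \sigma\,\sqrt{K/n} ,
\]
so that doubling the number of design elements degrades precision by about as much as halving the budget. This is the sense in which an experiment can try to answer too many questions at once.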