The topic of simultaneous stochastic equations will be familiar to readers who have some acquaintance with empirical economics, for it is in econometrics that this methodology has been worked out most fully. On the other hand, one can conceive of situations where variables are jointly determined as the solution to simultaneous equations in other contexts. In population dynamics, for example, the number of predators (lynxes, say) may depend upon the number of prey (snowshoe hares), and likewise the number of hares will depend upon the number of lynxes; and although such a dependence will unquestionably have dynamic elements, it is nevertheless essentially simultaneous. In view of the historical development of simultaneous stochastic equation theory, however, it is perhaps inevitable that an economic flavor creeps into the discussion. For this reason, it will be as well to commence with a very brief and simple outline of the essential problems involved in the simultaneous context. The reader who can understand these examples should have no difficulty with the remainder of the chapter. Indeed, it will be found that the methodology of instrumental variables, as it has appeared in previous chapters, offers a relatively painless approach to the topic for the reader whose statistical numeracy outruns his economic literacy. On the other hand, the econometrician may like to skip directly from this point to Section 4.2, perhaps pausing to look at the chapter outline at the end of the present section.
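As an illustrative sketch (my own, not the book's display), the lynx-hare dependence can be written as a pair of simultaneous stochastic equations, with $y_{1t}$ the number of predators, $y_{2t}$ the number of prey, and $x_{1t}$, $x_{2t}$ exogenous influences:
\[
\begin{aligned}
y_{1t} &= \beta_{12}\, y_{2t} + \gamma_1 x_{1t} + u_{1t},\\
y_{2t} &= \beta_{21}\, y_{1t} + \gamma_2 x_{2t} + u_{2t}.
\end{aligned}
\]
Because $y_{1t}$ and $y_{2t}$ are jointly determined, each endogenous variable appearing on a right-hand side is correlated with the disturbance of that equation, so ordinary least squares applied to either equation alone is inconsistent.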
In presenting the basic theory of instrumental variables estimation, we assumed that the error structure was spherical; that is, the disturbance terms $u_t$ have a common distribution with variance $\sigma^2$ (homoscedasticity) and are also serially uncorrelated in the sense that $E(u_t u_{t-r}) = 0$ for $r \neq 0$. Expository convenience aside, certain models do indeed fall into such a framework, and we discussed the errors-in-variables structure as an example. However, the sphericality assumption is less satisfactory in other contexts. Indeed, in some instances, the very necessity for an IV approach arises out of a nonspherical error structure. In the present chapter we shall explore some of the implications of nonsphericality in the disturbances, utilizing for the purpose models taken from time series analysis that involve serial correlation and a class of models exhibiting heteroscedasticity.
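To fix ideas (a schematic summary in my own notation rather than the book's display), write $u = (u_1, \dots, u_T)'$ for the disturbance vector. The spherical assumption is
\[
E(uu') = \sigma^2 I_T ,
\]
whereas the nonspherical case replaces the identity matrix with a more general matrix $\Omega$:
\[
E(uu') = \sigma^2 \Omega, \qquad \Omega \neq I_T .
\]
A leading serially correlated example is the stationary AR(1) scheme $u_t = \rho u_{t-1} + \varepsilon_t$ with $|\rho| < 1$, for which $\Omega$ has typical element $\Omega_{ts} = \rho^{|t-s|}$ (with $\sigma^2 = \sigma_\varepsilon^2/(1-\rho^2)$ the common variance); heteroscedasticity corresponds instead to a diagonal $\Omega$ with unequal diagonal entries.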
We commence with a general discussion of different definitions of the IV estimator where the error distribution is nonspherical; specifically, the covariance of the disturbance vector is a nondiagonal matrix of constants. We have already touched on this context in Section 1.2, where the estimator (1.26) associated with the minimand (1.25a) was suggested as appropriate for the nonspherical case. We shall, in the present chapter, further explore the nature and properties of this estimator. Initially, however, we shall find it profitable to take a rather different route to its derivation, for by doing so a second type of estimator is suggested. We are able to generate two generic kinds of estimator, the one interpretable as an ordinary least-squares analog, the other as an Aitken analog. The efficiency comparison of these two approaches is explored.
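The two generic estimators can be sketched as follows (a schematic rendering; the precise forms of (1.25a) and (1.26) are set out in Chapter 1). Suppose $y = X\beta + u$ with $E(uu') = \sigma^2\Omega$ and an instrument matrix $Z$ of the same column dimension as $X$. The ordinary least-squares analog ignores $\Omega$ in the weighting:
\[
\hat{\beta} = (Z'X)^{-1} Z'y ,
\]
while the Aitken analog weights by $\Omega^{-1}$ in the manner of generalized least squares:
\[
\tilde{\beta} = (Z'\Omega^{-1}X)^{-1} Z'\Omega^{-1} y .
\]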
The burgeoning of interest in nonlinear equations and models that has occurred in the last decade or so has been largely contemporaneous with the enhancement of computing power and the availability of convenient and effective algorithms for numerical optimization. Thus whereas students in the fifties and early sixties were preoccupied with linear models or their direct generalizations, the seventies saw the establishment of a better understanding of the estimation theory for nonlinear models. In particular, it was realized that instrumental variables estimation could, by the definition of an appropriate minimand, be cast as a minimization problem, and the resulting estimators regarded as fairly natural generalizations of the linear theory, with respect to both limited- and full-information systems. At the same time it became clear that there were limits to this process of generalization - that certain efficiency properties, for instance, did not carry over to the nonlinear context.
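One common form of such a minimand (a schematic example in my notation, not necessarily the book's own display) is a quadratic form in the instruments. For a possibly nonlinear model $y_t = f(x_t, \theta) + u_t$ with instrument matrix $Z$,
\[
\hat{\theta} = \arg\min_{\theta}\; \big(y - f(X,\theta)\big)'\, Z (Z'Z)^{-1} Z'\, \big(y - f(X,\theta)\big),
\]
which reproduces the usual linear IV estimator when $f(X,\theta) = X\beta$ and $Z$ has as many columns as there are parameters, since the first-order conditions then reduce to $Z'(y - X\beta) = 0$.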
In setting out to describe these developments, the first task is to establish some kind of taxonomy of the types of models encountered. One may distinguish between models that are nonlinear only in their parameters, or only in their variables, and models that are nonlinear both in their parameters and in their variables. The relevant models are set out, with examples, in Section 5.2. In this section we use the relatively simple context of linear-in-parameters models to establish certain generic kinds of instrument.
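Illustrative instances of the three classes (my own examples, chosen for concreteness rather than taken from Section 5.2) are
\[
y_t = \beta_1 x_t + \beta_2 x_t^2 + u_t
\]
(nonlinear in the variables only, the equation remaining linear in $\beta_1, \beta_2$),
\[
y_t = \beta x_{1t} + \beta^2 x_{2t} + u_t
\]
(nonlinear in the parameter only), and
\[
y_t = \beta_1 x_t^{\beta_2} + u_t
\]
(nonlinear in both).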
The method of instrumental variables (IV) has traditionally been viewed as a response to a common problem in regression contexts, namely where one or more of the regressors on the right-hand side of the proposed equation are correlated with the equation disturbance. If this happens, the method of ordinary least squares yields inconsistent estimators. The instrumental variables methods were developed to overcome this problem. It could legitimately be objected that the focus on consistency alone as a criterion of statistical effectiveness is misplaced. Thus it is often the case that estimators that are consistent possess inferior mean-square error properties to those that are not. Remarkably enough, however, the IV methodology can in many circumstances provide estimators that have superior efficiency properties all round. Indeed, it will be one of the themes of the later chapters of this book that the method of maximum likelihood may, in certain contexts of importance, itself be regarded as an instrumental variables estimator, so that IV estimators are asymptotically fully efficient. In the present chapter, however, we shall lower our sights a little and consider the motivation for instrumental variables as arising from requirements of statistical consistency.
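The consistency argument can be sketched in a single-regressor case (an illustrative derivation in my notation). For $y_t = \beta x_t + u_t$ with $\operatorname{plim} T^{-1}\sum_t x_t u_t = \sigma_{xu} \neq 0$,
\[
\operatorname{plim} \hat{\beta}_{OLS} = \beta + \frac{\sigma_{xu}}{\operatorname{plim} T^{-1}\sum_t x_t^2} \neq \beta ,
\]
whereas an instrument $z_t$ satisfying $\operatorname{plim} T^{-1}\sum_t z_t u_t = 0$ and $\operatorname{plim} T^{-1}\sum_t z_t x_t \neq 0$ yields
\[
\operatorname{plim} \hat{\beta}_{IV} = \operatorname{plim} \frac{\sum_t z_t y_t}{\sum_t z_t x_t} = \beta + \frac{\operatorname{plim} T^{-1}\sum_t z_t u_t}{\operatorname{plim} T^{-1}\sum_t z_t x_t} = \beta .
\]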
One of the major theoretical issues that underlies, implicitly or explicitly, quite a few recurrent controversies in macroeconomics is whether a competitive monetary economy has built-in mechanisms that are strong enough to remove excess demands and supplies on all markets, through an automatic adjustment of the price system.
Debate on this issue was most intense after the publication of Keynes's General Theory in 1936. Keynes denied, in complete contradiction to the then-prevailing doctrine, that price and wage flexibility would guarantee market clearing. On the contrary, he claimed that a fully competitive economy could well get "trapped" into a severe disequilibrium (unemployment) situation.
Pigou (1943) began the counterattack, which was subsequently developed over the years by Patinkin (1965), Friedman (1956, 1969), and Johnson (1967), among a host of others. The essential argument was that Keynes overlooked an important class of regulating mechanisms, namely, the real-balance, or wealth, effects, which are associated with a general movement of nominal prices and wages and/or with a variation of nominal interest rates. A broad agreement was reached in the 1950s, known as the "neoclassical synthesis": If such wealth effects were properly integrated in the analysis, full price flexibility - by which is meant that the price system reacts infinitely fast to a market imbalance at every moment, e.g., through a Walrasian tâtonnement process - was bound to remove all excess demands and supplies, both in the short run and in the long run. Keynes was theoretically mistaken, and the unemployment he talked about was entirely due to his assumption that nominal wages were rigid downward.
I come now to an examination of the dynamics of the system, the way the variables move out of equilibrium. The main assumptions used to secure stability are those of No Favorable Surprise as discussed in Chapter 4. These are the assumptions that ensure that new opportunities neither arise nor are perceived to do so. However, there are other properties of the motion of the system which it seems reasonable to assume. Since the proof of stability is not the only end of dynamic analysis, and since the class of adjustment processes consistent with No Favorable Surprise needs to be studied, I discuss these properties as well, in the hope that the discussion will prove useful for further work.
To put it another way, given No Favorable Surprise, the class of models for which the stability result holds is quite general. On the other hand, that very generality means that we do not gain a great deal of information from that result about the workings of the model. To the extent that additional assumptions seem reasonably calculated to restrict the behavior of the model in directions that real economies may be supposed to take, it is useful to discuss such assumptions even though this book will not itself go beyond the proof of stability under No Favorable Surprise.
There is another reason for proceeding in this way. Were I to stop with a discussion of No Favorable Surprise, there would remain some question as to whether models with sensible dynamic assumptions could in fact be fitted into the framework used. By discussing such questions as individual price adjustment, orderly markets, and the problems of non-delivery within the context of the model, I show that this is not an issue.
In this chapter, I return to the Hahn Process models discussed in Chapter 2 and treat them more precisely than was done there. This enables the introduction of the notation used later in the book. More important, it facilitates understanding of some issues which arise again in the context of more complex and satisfactory models. For the most part, these issues were discussed in the previous chapter and nontechnical readers may proceed directly to Chapter 4 with little loss of continuity.
Two models are discussed in the present chapter. First, I treat the case of pure exchange without the introduction of money. As explained in the previous chapter, this model embodies the basic feature of the Hahn Process. It is very easy to see what is going on in a context that lacks the increasingly complex apparatus of later versions.
The second model treated below adds the complications of firms and of money. However, actual production and consumption out of equilibrium are not permitted. The analysis of this model permits an understanding of the role of firms and introduces a number of the problems which later appear. Production and consumption out of equilibrium will not be separately treated. As explained in the preceding chapter, they are easiest to introduce in the context of a relatively rich disequilibrium model where agents understand what is happening and care about the timing of their actions. Hence, disequilibrium production and consumption are introduced only in the full model of Part II.
This chapter will examine, with the help of a simple microeconomic model, two propositions that play a significant role in neoclassical monetary theory.
The first proposition is that “money does not matter” - or, more precisely, that while the mere presence of money as a medium of exchange and as an asset is important for the smooth functioning of the economy, the quantity of money itself is unimportant. This is the quantity theory tradition, which claims that a change in the money stock will change all nominal values in the same proportion, but will have no effect on “real” variables. This old tradition still plays an important role in modern thinking. We wish to clarify the exact meaning of this theory and its domain of validity.
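The proportionality claim can be stated schematically (my notation, not the book's). If $(p^*, x^*)$ is an equilibrium price vector and real allocation for money stock $M$, the quantity theory asserts that for any $\lambda > 0$ the economy with money stock $\lambda M$ has equilibrium $(\lambda p^*, x^*)$:
\[
M \mapsto \lambda M \quad\Longrightarrow\quad p^* \mapsto \lambda p^*, \qquad x^* \text{ unchanged},
\]
so that real balances $M/p^*$, and with them all “real” variables, are left undisturbed.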
The second issue will be the belief, which is shared by many theorists, that a short-run Walrasian equilibrium in which money has positive exchange value usually exists. We shall investigate this question by looking at a simple model involving only outside money. “Money” is then printed money, and can be regarded as a part of private net wealth. In such a context, neoclassical theorists assume that the traders' price expectations are “unit elastic,” so that expected prices vary proportionally with current prices. The essential short-run regulating mechanism is then the real balance effect. When money prices of goods are low, the purchasing power of the agents' initial money balances is large. This fact should generate, according to this viewpoint, an excess demand on the goods market at sufficiently low prices.
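In schematic terms (again my own notation), let $\bar{m}$ denote an agent's initial nominal balance and $p$ the money price of goods, so that the real balance is $\bar{m}/p$. The neoclassical claim is that demand for goods rises with this real balance, so that aggregate excess demand $z(p)$ satisfies
\[
z(p) > 0 \quad \text{for } p \text{ sufficiently small},
\]
since $\bar{m}/p \to \infty$ as $p \to 0$. Whether this real balance effect suffices to guarantee the existence of a short-run Walrasian equilibrium in which money has positive exchange value is the question investigated in this chapter.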