This chapter develops two empirical applications of quantile regression techniques. The first application examines how the returns to schooling have changed from 1979 to 1987. The second application examines the union relative wage effect in 1987. In both cases our interest is in providing a more detailed description of the conditional distribution of wages.
The returns to schooling application is based on forming schooling-experience cells, calculating various sample quantiles within each cell, and then using a minimum-distance estimator to impose a parametric form on the conditional quantile functions. There is a censoring problem because usual weekly earnings in the Current Population Survey are topcoded at $999. Our framework makes it very easy to apply Powell's (1984, 1986) approach to the censoring problem - we only use the cells for which the sample quantile is below the censoring point. The censoring correction has a substantial effect on the estimates in some cases. The returns to schooling tend to increase as we pass from low to high quantiles, both in 1979 and 1987. The returns are substantially higher in 1987 than 1979, and the change is fairly uniform across the quantiles.
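The cell-based procedure can be sketched in a few lines: compute the sample quantile within each schooling-experience cell, drop any cell whose quantile reaches the censoring point (the Powell-style correction), and fit a parametric quantile function to the surviving cell quantiles by weighted least squares. This is a minimal illustrative sketch, not the authors' code; the function names, the diagonal weighting scheme, and the synthetic inputs are assumptions for exposition.

```python
import numpy as np

def cell_quantiles(wages, cells, tau, cens=999.0):
    """Compute the tau-th sample quantile within each cell, dropping
    cells whose quantile hits the topcoding point (Powell-style)."""
    out = {}
    for cell, idx in cells.items():
        q = np.quantile(wages[idx], tau)
        if q < cens:              # keep only uncensored cell quantiles
            out[cell] = q
    return out

def min_distance(qs, X, w):
    """Fit a linear conditional quantile function to the cell quantiles
    by weighted least squares (a simple minimum-distance estimator)."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ qs)
```

With real data the weights would reflect cell sizes and the estimated quantile variances; here they are left to the caller.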
The second application does not lend itself to forming cells since there are close to 40 covariates. A linear programming algorithm is, however, effective. We find, for the more experienced workers, that the union wage effect declines sharply as we go from low to high quantiles. There is a similar pattern for the industry wage effects in several of the durable manufacturing industries. We also generalize the functional form of the quantile regressions by using a Box-Cox transformation. Our algorithm combines linear programming with a one-dimensional search, by appealing to the equivariance of quantiles under monotone transformations.
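The linear-programming formulation can be sketched directly: the tau-th quantile regression minimizes the asymmetric absolute-loss (check) function, which becomes a linear program once the residuals are split into positive and negative parts. The sketch below uses SciPy's general-purpose `linprog` rather than the authors' specialized algorithm, and omits the Box-Cox one-dimensional search over the transformation parameter; the function name is illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def quantreg_lp(y, X, tau):
    """Solve the tau-th quantile regression as a linear program:
    min tau*1'u + (1-tau)*1'v  s.t.  X(b+ - b-) + u - v = y, all vars >= 0,
    where b = b+ - b- recovers the (sign-unrestricted) coefficients."""
    n, k = X.shape
    c = np.concatenate([np.zeros(2 * k),
                        tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * k + 2 * n))
    return res.x[:k] - res.x[k:2 * k]
```

For the Box-Cox extension, the equivariance of quantiles under monotone transformations means one can transform y for each candidate transformation parameter, solve this LP, and search over the parameter in one dimension.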
This chapter describes a setup within which dynamic stochastic competitive models can quickly and inexpensively be formulated and analyzed. This kind of model is used in macroeconomics and finance to build theories about how variables like consumption, investment, asset prices, and interest rates covary over time, and how their evolution is to be interpreted in terms of parameters governing preferences, technology, and information flows. The idea here is to rig a class of economic environments with descriptions of preferences, technologies, and information structures that occur in the form of a set of matrices. By naming a particular list of matrices, we completely specify an economic environment. Given these matrices, we supply a set of easy-to-use computer programs that compute and characterize the competitive equilibrium prices and allocation. The class of models is specified so that an equilibrium can be represented as a state space system in which the state vector evolves according to a first-order linear vector stochastic difference equation. This feature emerges because the models assume quadratic preferences and linear technologies and information structures. The linearity of the equilibria enables us to characterize them in terms of standard objects of time series econometrics. Thus, given a representation of our equilibrium in state space form, it is straightforward to deduce both impulse response functions with respect to the innovations impinging on agents' information sets and vector autoregressive or Wold representations for any subset of variables linearly related to the state vector. It is also possible to analyze the effects of aggregation over time and various sources of measurement error.
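Such a state space equilibrium can be characterized numerically with very little code. Assuming a law of motion x_{t+1} = A x_t + C w_{t+1} with observables y_t = G x_t (the matrix names here are illustrative, not the authors' notation), the impulse response of y to a unit innovation is obtained by iterating the transition matrix:

```python
import numpy as np

def impulse_response(A, C, G, horizons):
    """Impulse responses of observables y_t = G x_t for the linear
    state space system x_{t+1} = A x_t + C w_{t+1}: the state response
    to a unit shock is C, A C, A^2 C, ..., mapped through G."""
    x = C.copy()
    irfs = [G @ x]
    for _ in range(horizons - 1):
        x = A @ x
        irfs.append(G @ x)
    return np.stack(irfs)
```

For a scalar system with transition coefficient 0.9 the responses decay geometrically, which is the pattern a Wold representation of such a model would display.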
Since macroeconomists first began the systematic study of aggregate data, they have grappled with the fact that most economic time series exhibit substantial seasonal variation. In general, macroeconomists abstract from this seasonal variation, both in their models of cyclical behavior and in their empirical testing of these models. This standard practice is a useful simplification if two key conditions hold. The first is that there are no interactions between seasonal cycles and business cycles: they result from different exogenous factors and different economic propagation mechanisms. The second is that there are no important welfare issues attached to seasonal fluctuations per se: optimal government policy toward seasonals is simply to leave them alone.
The purpose of this chapter is twofold. It first summarizes recent work demonstrating that seasonal cycles and business cycles are intimately related, displaying similar stylized facts and being driven by similar economic propagation mechanisms. The essay then discusses the possible welfare implications of seasonal cycles, suggesting there is no reasonable presumption they are uninteresting from a welfare or policy perspective. Taken together, these results imply the need for a significant reorientation in economists' treatment of seasonal fluctuations. Rather than a component of the data to be adjusted away and treated as noise, seasonal fluctuations represent a key topic of economic analysis. They contain significant information about the nature of business cycles, and they require analysis in their own right because they may induce significant welfare losses.
A STATISTICAL TEST FOR THE STATIONARITY AND THE STABILITY OF EQUILIBRIUM
Introduction
One of the most important developments in econometrics in the 1980s is the body of work conveniently summarized under the heading of the unit root. As shown by Dickey and Fuller (1979) and Phillips (1987), the asymptotic distribution of the least-squares estimator of an autoregressive parameter, say p, when the true p is unity differs from its distribution when |p| is less than unity. The point is important in applied econometrics because the finite-sample distribution when |p| is less than but near unity resembles the asymptotic distribution for p = 1 more closely than the asymptotic distribution for |p| < 1. One implication is that the power of a least-squares test of p = 1 against |p| < 1 is bound to be low when the sample size is not very large. Nevertheless, economic and statistical problems frequently compel us to discriminate between p = 1 and |p| < 1.
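The behavior of the least-squares estimator near the unit root is easy to see by simulation. The sketch below (illustrative names; a no-intercept AR(1) for simplicity) draws the Monte Carlo distribution of the estimator; under p = 1 it is concentrated below unity and skewed to the left, which is one source of the low power discussed above.

```python
import numpy as np

def ls_rho(y):
    """Least-squares estimate of p in y_t = p * y_{t-1} + e_t (no intercept)."""
    return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

def mc_unit_root(T=100, reps=500, rho=1.0, seed=0):
    """Monte Carlo distribution of the least-squares estimator of p,
    simulating an AR(1) with i.i.d. standard normal innovations."""
    rng = np.random.default_rng(seed)
    est = np.empty(reps)
    for r in range(reps):
        e = rng.standard_normal(T)
        y = np.empty(T)
        y[0] = e[0]
        for t in range(1, T):
            y[t] = rho * y[t - 1] + e[t]
        est[r] = ls_rho(y)
    return est
```

Running the same experiment with p slightly below one produces a histogram that is hard to distinguish from the p = 1 case at this sample size.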
However, judging from the articles published or still unpublished at the time of this writing, the latest research efforts run somewhat counter to the previous ones: (1) Bayesian inference contains no such anomaly as is found in the sampling approach (see Zellner, 1971, p. 187), but a number of problems arise concerning the prior. Sims (1988) points out that economic theories do not necessarily justify a sharp point prior placed on p = 1, and Wago and Tsurumi (1991) refer to the inference problems caused by such a prior. Phillips (1991) criticizes the flat prior as a representation of ignorance. (2) The wisdom of taking p = 1 as the null hypothesis has been questioned, and stationarity as the null hypothesis has been investigated in Park (1990), Fukushige and Hatanaka (1989), Ogaki and Park (1989), Fisher and Park (1990), and Bierens (1990). Schotman and van Dijk (1990, 1991) treat stationarity and the unit root symmetrically and derive a posterior odds ratio. (3) As regards Nelson and Plosser (1982), which revealed the importance of the unit root in economic data, Schmidt and Phillips (1992), Choi (1990), and Haldrup (1990) find drawbacks in the method used. Each proposes a revised method within the framework of the sampling approach, and Choi (1990) in particular argues that his revision reverses the conclusion of Nelson and Plosser. From the standpoint of Bayesian inference, DeJong and Whiteman (1991) also reverse the conclusion of Nelson and Plosser (1982). It seems that the Box-Jenkins modeling of trends should be taken with a grain of salt.
In order to understand and formulate economic theories, we tend to classify the types of movements which characterize economic time series as trend, cyclical, seasonal, and irregular. The idea that each component has separate and different causal forces is implicit in many discussions of the decomposition. Among the four components, two were considered to be of prime interest to economists. Whereas theories of economic growth suggest models which explain the secular or trend component of economic aggregates, the bulk of macroeconomics focuses on models explaining the stylized facts of the recurring cyclical component. The other two components, the seasonal and the irregular, were mostly viewed as a nuisance and of no major interest to economists, for the simple reason that there have been almost no theoretical developments on economic models of seasonality. Consequently, with little interest in seasonality, and given that it was common until recently to separate growth models from business cycle models, the large majority of empirical macroeconomics has adopted a strategy of seasonally adjusting and detrending each series separately prior to any inference about the business cycle.
Lately economic theorists studying stochastic growth theory have suggested models integrating the growth and cyclical components of economic time series, viewing expansions and contractions simply as the acceleration and the slowing down of the overall economic growth process. Time series econometricians, on the other hand, focused their attention on the econometric estimation and testing of parametric models with trending processes. Nowadays empirical macroeconomists tend to be more careful about trends and pay more attention to issues such as common trends and the interaction of cyclical and secular fluctuations.
Much econometric data are collected as time series. It is rarely felt reasonable to assume that an observed time series realizes a sequence of independent and identically distributed (iid) random variables. As a result, a huge variety of parametric, semi-parametric, and non-parametric models has been proposed to describe aspects of the behavior of time series. These models have found several uses. They have been used in forecasting. They have been used to measure the dependence between economic variables. They have been used to test hypotheses propounded by economic theory. In a more indirect way, they have been used to describe latent, unobserved, variates, which may be of economic interest in themselves, or which, as “disturbances,” have properties which are relevant to the development of robust and efficient rules of statistical inference.
Relaxing only the “identity of distribution” part of the iid assumption affords considerable extra generality. Apparently smooth, persistent changes in level or periodic fluctuations have, for many years, been modeled by polynomial or trigonometric functions of time, which are fitted by linear regression techniques. More recently, it has been found possible to estimate regression functions of time that are non-parametric and regression coefficients that are non-parametric functions of time. Semi-parametric regression models, which for example involve a parametric component and an additive non-parametric function of time, have also been of recent interest. Not only the mean, but the variance and other features, have been modeled as parametric or non-parametric functions of time.
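Fitting polynomial or trigonometric functions of time by linear regression, as described above, amounts to ordinary least squares on constructed regressors. A minimal sketch (the function name and its defaults are illustrative assumptions):

```python
import numpy as np

def fit_time_trend(y, degree=2, periods=()):
    """Fit a deterministic trend by OLS: a polynomial in t plus optional
    sine/cosine terms at the given periods. Returns the fitted trend."""
    t = np.arange(len(y), dtype=float)
    cols = [t ** d for d in range(degree + 1)]
    for p in periods:
        cols.append(np.sin(2 * np.pi * t / p))
        cols.append(np.cos(2 * np.pi * t / p))
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta
```

The non-parametric and semi-parametric extensions mentioned in the text replace or augment these fixed basis functions with smoothers, but the fitting principle is the same.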
Much of the mathematics discussed thus far has probably seemed both familiar and useful. Topology is quite different. Most economists pass their lives quite innocent of its existence. And those who happen to make contact with this branch of mathematics will, likely as not, conclude that it is useless: a web of closely interwoven, almost circular definitions that don't seem to “do” anything. In fact topology is crucial to the development of a rigorous theory of competition. Without it we could not move past the definitions, notation, and examples of the preceding chapters to the theorems and proofs of the chapters which follow. For now you have to accept that on faith.
How then are we to come to grips with this subject? You need a better understanding of topology than a simple summary of the “main facts” would provide. On the other hand, attempting to compress a course in topology into the confines of a single chapter is clearly impossible, and probably worthless as well. What I have chosen to do instead is to “tell a story” about topology, to paint a mental picture of what it tries to accomplish and how. Don't try to absorb all of the details at once because that is scarcely possible and totally unnecessary. Try instead to grasp the main ideas, the broad outline of what topology is about. As we progress through the book and you see how topology is applied the subject will come to seem more and more natural.
We have learned that Walrasian equilibrium exists in a wide variety of circumstances. But establishing existence only partially validates the competitive hypothesis. The Edgeworth box notwithstanding, markets with two consumers are unlikely to be competitive. How then do we recognize settings in which the Walrasian model is appropriate, where price taking is the right thing to do? This is the question which this final chapter tries to address.
We cover a lot of ground. The chapter begins with a diversion, a proof of the Second Fundamental Theorem of welfare economics. Apart from its intrinsic interest, this theorem provides some useful background for material presented later. The second section addresses our major question in terms of core equivalence and core convergence. The third section looks at the issue of competition from another point of view, examining Walrasian equilibrium when the number of commodities is not finite. As I argue in the final section, allowing for a double infinity of consumers and commodities — the large square economy — provides the right setting for addressing the question: What is competition?
The Second Fundamental Theorem
All Walrasian equilibria are efficient. This familiar claim does not offer much solace to a starving person. The Second Fundamental Theorem of welfare economics is meant to put a kinder face on competition, suggesting that — with suitable taxes and transfers — the market can be induced to support any Pareto optimal allocation whatsoever. Although rather naive as a statement of social policy, proving the Second Fundamental Theorem is worthwhile nonetheless: not for its social content, but as a useful technical fact about the Walrasian model.
Mathematics is a language. This sentiment, expressed by the physicist Willard Gibbs, seems rather apt, especially when it comes to learning mathematics. Gaining fluency in mathematics and mathematical economics has much in common with learning a foreign language. Worrying too much about vocabulary lists and good grammar is a good way to kill your interest in the subject. Taking the plunge, trying to speak the language despite the errors you make early on, is not only more effective but also more enjoyable. This book is aimed at the economist willing to accept such a strategy of total immersion. This does not mean that I avoid the careful statement of assumptions, the crafting of rigorous proofs, and the like. Learning to do those things is an important part of any course in mathematical economics. What makes the approach adopted here distinctive is less a matter of substance than of style. Emphasizing economic intuition, I concentrate first and foremost on helping you develop a good ear for what the language of mathematical general equilibrium theory is trying to say. Learning the vocabulary and the grammar is easy once you can follow the conversation. What this means for the moment is that you should not worry too much if I seem to move rather quickly. You can always come back later to focus on the details.
The main goal of this chapter is to learn how to translate the competitive model of a pure exchange economy into the formal language of sets, functions, vector spaces, and linear functionals. This sounds more intimidating than it is.
Why bother with existence? Doubts on this score are too commonplace among economists to be dismissed out of hand. So let us face the issue squarely. What makes existence proofs unappealing? A typical reaction might be:
What is the point of reading the proof, much less trying to understand it? The fact that equilibrium exists is really quite obvious from an intuitive, economic point of view. I suppose it is good that someone has worked out the math, but I would rather be spared the gory details. Let's get on to the more interesting parts of economics!
The problem with this reaction is that it fails to realize what proving existence is all about. If validation of standard operating procedure were the major contribution of existence proofs, then most of us would probably be more than happy to learn that our models pass the test and to leave the details to specialists. But in fact the conclusion reported in an existence proof is of secondary importance. What really matters is the understanding gained in proving the result.
The fact that insight is what we are after is why we are going to prove existence in several different ways, spread over two different chapters. It also explains why we will not strive for the most general results possible. The settings I have chosen for exploring existence are complex enough to bring out the important issues, but not so complex that they obscure what is going on.
The preceding chapter approached the question of existence of Walrasian equilibrium by searching for prices which clear markets. This chapter explores an alternative approach, no less intuitive, which relies not on the summation of best responses but rather on their Cartesian product. Although the method is most directly associated with the work of Nash on noncooperative games, its roots reach back to Cournot in the early part of the nineteenth century.
The first section presents some basic concepts and existence proofs for equilibria of a noncooperative game and its close relative, an abstract economy. The second section applies these notions to proving existence for the traditional Arrow-Debreu economy. The remaining three sections pursue applications with a more exotic flavor involving equilibrium in the presence of externalities, nonconvexities, and nonordered preferences.
Noncooperative game theory
Nash equilibrium
We begin with a very selective presentation of some features of noncooperative game theory which are relevant to our present purpose.
Game theory has its own particular vocabulary which is rather different from that used to describe a Walrasian economy. The participants in a (noncooperative) game are called players, the choices they make strategies, and the benefits they derive from playing the game payoffs. While the terminology is quite different, by a judicious choice of notation we can highlight the similarities between the description of a game and a Walrasian economy.
Let I = {1,…, n} denote the set of players in the game. Each player i ∈ I selects a strategy si from a fixed strategy set Si ⊂ L, where L is a finite-dimensional topological vector space.
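These primitives can be illustrated with a finite two-player game, where a pure-strategy Nash equilibrium is a pair of strategies that are mutual best responses. The sketch below is an illustration of the equilibrium concept only, not part of the chapter's formal development; the payoff matrices A (row player) and B (column player) and the function name are assumptions.

```python
import numpy as np

def pure_nash(A, B):
    """Enumerate pure-strategy Nash equilibria of a two-player game:
    (i, j) is an equilibrium if i is a best response to j under A
    and j is a best response to i under B."""
    eq = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
                eq.append((i, j))
    return eq
```

In a prisoner's dilemma, for instance, this enumeration picks out mutual defection as the unique pure equilibrium.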