Surveys of the participants in organized securities markets indicate that traders hold widely different beliefs about the future course of economic activity. Expectations not only differ, but they evidently respond significantly to unexpected movements in such variables as real economic growth and the weekly changes in the money stock [see Cornell (1983) for a review of some of this literature]. Furthermore, French and Roll (1984) have found that the variance of stock prices is greater over periods when the stock market is open than when it is closed. Together, these observations suggest that disparate beliefs and the sharing of information through the trading process may be important ingredients in modeling asset-price determination. The purpose of this paper is to explore the implications of disparate expectations for the time-series properties of asset prices in the context of a simple model with competitive traders facing serially correlated shocks.
While the models examined are partial equilibrium in nature, this exploration is motivated in part by the apparent inconsistency of representative agent, dynamic equilibrium models with the behavior of asset prices. The variances and autocorrelation functions of asset returns seem to be inconsistent with the implications of both linear expectations models [see, e.g., Shiller (1979, 1981), Singleton (1980, 1985), Scott (1985)] and the nonlinear models studied by Hansen and Singleton (1982), Ferson (1983), Dunn and Singleton (1985), and Eichenbaum and Hansen (1985), among others.
A predominant focus of macroeconomic research in the last ten years has been on the origins of the business cycle. In particular, it has been popular to view the business cycle as arising from surprise movements in aggregate demand and to argue that these impulses are transmitted to real activity through movements in the price level. In order to generate empirically relevant fluctuations, however, such models must incorporate mechanisms to propagate price surprises over time. That is, to replicate economic fluctuations, it is necessary to transform serially uncorrelated price surprises into serially correlated macroeconomic time series. Unfortunately, despite the large amount of effort devoted in recent years to this type of equilibrium business cycle modeling, relatively little attention has been focused on isolating the empirically important propagation mechanisms.
More recently, we have pursued a line of research that we call “real business cycle theory,” in which disturbances are propagated over time as a result both of economic agents' desire to smooth commodity profiles and of capitalistic production with rich intertemporal substitution opportunities. To date, however, these models incorporate only real supply-side or technological disturbances, abstracting from real demand-side influences (such as government spending) or nominal shocks. Nevertheless, the results on propagation mechanisms appear to be relevant for more fully developing the monetary theories of business fluctuations discussed above.
Of course, interest in propagation mechanisms is not new. Indeed, it was the major focus of many of the interwar business cycle theorists.
Arguments in favor of economic policy intervention are usually based on the existence of externalities. This mode of argument, long a tradition in microeconomics, is relatively new to macroeconomics. In the 1960s, for example, when Milton Friedman outlined in Capitalism and Freedom the pros and cons of government policy in many areas of economics, he centered his discussion around the existence of externalities in every area except macroeconomics.
Ever since the start of research on the microfoundations of macroeconomics in the early 1970s, many studies have attempted to correct this omission by casting proposals for macroeconomic policy in an externality framework. The vast majority of these studies have been concerned with externalities that relate to whether the natural or average rate of employment is inefficient. Few have been concerned with whether the observed fluctuations in employment around the natural rate are inefficient. In his 1972 book Inflation Policy and Unemployment Theory, Edmund Phelps summarized over a dozen externalities, all suggesting that the natural rate of unemployment is inefficient and higher than the optimal level of unemployment. Phelps mentioned externalities due to imperfect competition, information spillovers about conditions in the labor market from employed to unemployed workers, overpricing of labor due to lemon problems, failure to incorporate the value of self-respect from a good job, external effects of on-the-job training and experience, and income taxes that discriminate in favor of leisure.
A rapidly growing line of research has recently begun to appear on the rigorous use of microeconomic and aggregation-theoretic foundations in the construction of monetary aggregates. Much of the attention derives directly or indirectly from Barnett's (1980a) challenging paper, where he voiced objections to simple-sum aggregation procedures and derived the theoretical linkage between monetary theory and index number theory. He applied economic aggregation and index number theory to construct monetary aggregates based upon Diewert's (1976) class of “superlative” quantity index numbers. The new aggregates are Divisia quantity indexes, which are elements of the superlative class.
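For reference, and independently of any particular study summarized here, Barnett's Divisia aggregate is usually computed in discrete time as a Törnqvist–Theil index over the component assets, with user costs playing the role of prices (the notation below is generic rather than taken from the text):

$$
\ln M_t - \ln M_{t-1} = \sum_{i=1}^{n} \tfrac{1}{2}\bigl(s_{i,t} + s_{i,t-1}\bigr)\bigl(\ln m_{i,t} - \ln m_{i,t-1}\bigr),
\qquad
s_{i,t} = \frac{\pi_{i,t}\, m_{i,t}}{\sum_{j=1}^{n} \pi_{j,t}\, m_{j,t}},
\qquad
\pi_{i,t} = \frac{R_t - r_{i,t}}{1 + R_t},
$$

where $m_{i,t}$ is the real stock of component asset $i$, $r_{i,t}$ its own rate of return, $R_t$ the benchmark rate, $\pi_{i,t}$ the real user cost of the asset's monetary services, and $s_{i,t}$ its expenditure share. A simple-sum aggregate instead weights every component equally, which is appropriate only if the components are perfect substitutes.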
A number of recent works have provided a sharp quantitative assessment of the relative merits of summation versus Divisia monetary quantity indexes. Barnett, Offenbacher, and Spindt (1984), for example, compared the empirical performance of Divisia and simple-sum monetary aggregates in terms of various policy criteria such as causality, information content of an aggregate, and stability of money demand equations. Their main finding is the better performance of Divisia aggregates, especially at high levels of aggregation. Similarly, Serletis and Robb (1986) estimate the degree of substitutability between the services of money and checkable savings and time deposits (over institution types) in a quasi-homothetic translog utility framework. They investigate both summation and Divisia aggregation of the assets, and provide evidence further supporting the superiority of the Divisia aggregates.
A totally unresolved problem, however, is the method by which monetary assets are selected to be included in the monetary aggregate.
The Miller–Modigliani theorem asserts that, in a setting of perfect capital markets, economic decisions do not depend on financial structure. An implication is that the addition of financial intermediaries to this type of environment has no consequence for real activity.
A number of recent papers [e.g., Bernanke (1983), Blinder and Stiglitz (1983), Boyd and Prescott (1983), Townsend (1983), and Williamson (1985)] have questioned the relevance of this proposition, even as an approximation, for macroeconomic analysis. Instead, they revive the view of Gurley and Shaw (1956), Patinkin (1961), Brainard and Tobin (1963), and others that the quality and quantity of services provided by intermediaries are important determinants of aggregate economic performance. The basic premise is that, in the absence of intermediary institutions, informational problems cause financial markets to be incomplete. By specializing in gathering information about loan projects, and by permitting pooling and risk-sharing among depositors, financial intermediaries help reduce market imperfections and improve the allocation of resources. Thus, changes in the level of financial intermediation due to monetary policy, legal restrictions, or other factors may have significant real effects on the economy. For example, Bernanke (1983) argued that the severity of the Great Depression was due in part to the loss in intermediary services suffered when the banking system collapsed in 1930–33.
The objective of this paper is to provide an additional step toward understanding the role of financial intermediaries (hereafter, simply “banks”) in aggregate economic activity.
Abstract: This paper studies household asset demands by allowing certain assets to contribute directly to utility. It estimates the parameters of an aggregate utility function that includes both consumption and liquidity services. These liquidity services depend on the level of various asset stocks. We apply these estimates to investigate the long- and short-run interest elasticities of demand for money, time deposits, and Treasury bills. We also examine the impact of open market operations on interest rates, and present new estimates of the welfare cost of inflation.
This paper studies households' demand for different assets by allowing certain assets to contribute directly to household utility. We permit the utility function to capture the “liquidity” services of money, certain time deposits, and even some government securities. Our approach yields estimates of the utility function parameters which can be used to study the effects of a variety of changes in asset returns. We investigate how asset holdings and consumption react to both temporary and permanent changes in returns, and study the effects of government financial policy.
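As an illustration of the general approach (the notation and functional form here are hypothetical and are not those estimated in the paper), one can picture the household choosing consumption and asset stocks to maximize lifetime utility that values liquidity services directly:

$$
\max\; E_0 \sum_{t=0}^{\infty} \beta^{t}\, u\!\bigl(c_t,\ \ell(m_t, d_t, b_t)\bigr)
\quad \text{subject to} \quad
c_t + m_t + d_t + b_t = y_t + (1+r^{m})\,m_{t-1} + (1+r^{d})\,d_{t-1} + (1+r^{b})\,b_{t-1},
$$

where $m_t$, $d_t$, and $b_t$ are real holdings of money, time deposits, and Treasury bills, $\ell(\cdot)$ aggregates their liquidity services, and the $r$'s are the corresponding real returns. The first-order conditions of such a problem tie asset demands to relative rates of return, which is what allows the estimated utility parameters to be used for the interest-elasticity and welfare-cost calculations described above.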
Our approach provides an integrated system of asset demands of the form that Tobin and Brainard (1968) advocate for studying the effects of government interventions in financial markets. It provides a tractable alternative to the atheoretical equations that are commonly used to study the demand for money and other assets. Those equations, which cannot be interpreted as the rational response of any economic agent to changes in the economic environment, are unlikely to remain stable when the supply of various nonmonetary assets changes.
The circular flow of purchasing power is a staple of introductory economics. Yet it scarcely appears in recent theoretical work. Examples of explicit modeling of the circular flow are Lucas (1980) and our own earlier paper (1985b). In these models, stochastic expenditure patterns and limited investment and borrowing opportunities result in a distribution of holdings of fiat money. This analysis focuses on the determinants of prices as well as the distribution of money holdings. In the absence of analyses of richer menus of financial assets, it seems more appropriate to think of these models as reflecting the finance constraint of Kohn (1981) rather than the money constraint of Clower (1967).
In the model, we distinguish two groups of agents – workers and capitalists. There is a Walrasian labor market and a sequential search retail market where prices are set by capitalists. There is a circular flow of money, with workers holding money while waiting for stochastic purchasing opportunities and capitalists simply transferring money between markets (mail float).
Our assumptions are strong; they enable us to solve the model explicitly in the steady state. The model economy has a unique steady-state uniform price equilibrium in which the greater the efficiency of the search process in the retail market, the higher the levels of nominal and real wages. Whether the nominal price increases or decreases with search speed depends on the other parameters.
Abstract: This paper is concerned with the optimal inflation rate in an overlapping-generations economy in which (i) aggregate output is constrained by a standard neoclassical production function with diminishing marginal products for both capital and labor, and (ii) the transaction-facilitating services of money are represented by means of a money-in-the-utility-function specification. With monetary injections provided by lump-sum transfers, the famous Chicago Rule prescription for monetary growth is necessary for Pareto optimality; but a competitive equilibrium may fail to be Pareto optimal with that rule in force because of capital overaccumulation. The latter possibility does not exist, however, if the economy includes an asset that is productive and nonreproducible – that is, if the economy is one with land. As this conclusion is independent of the monetary aspects of the model, it is argued that the possibility of capital overaccumulation should not be regarded as a matter of theoretical concern, even in the absence of government debt, intergenerational altruism, and social security systems or other “social contrivances.”
Introduction
Most of the existing analyses of the optimal inflation rate that have been carried out in models with finite-lived individuals have reached conclusions that seem to contradict the famous Chicago Rule for optimal monetary growth. An exception is provided by McCallum (1983, p. 38), which suggests that analysis of overlapping-generations models is supportive of the Chicago Rule provided these models take account of the transaction-facilitating (i.e., medium-of-exchange) services of money.
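For readers who want the prescription in symbols, the Chicago Rule is commonly stated as follows (this is the familiar textbook formulation, not the particular model analyzed in the chapter): money growth, and hence steady-state inflation, should be set so that the nominal interest rate, the opportunity cost of holding money, is driven to zero:

$$
i = r + \pi = 0 \qquad\Longleftrightarrow\qquad \pi = -\,r,
$$

where $r$ is the real rate of return and $\pi$ the inflation rate. With monetary injections made by lump-sum transfers and a constant money growth rate $\mu$, a steady state with constant real balances has $\pi = \mu$, so the rule calls for the money stock to contract at the real rate of interest, $\mu = -r$.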
Abstract: Knowledge of the extent to which monies of different countries can substitute for each other is important for the design and implementation of monetary policy. However, existing empirical analyses of money demand in open economies rest on official estimates of money holdings that imply an infinite elasticity of substitution among different monetary assets. Existing analyses of Divisia monetary aggregates do not impose such an assumption, but do not allow for foreign exchange considerations. This paper combines both approaches into a unified explanation of domestic money holdings. The empirical analysis suggests that U.S. Divisia money holdings are influenced by foreign exchange considerations. Foreign monetary policies, through their effects on both foreign interest rates and exchange rates, might influence domestic monetary policy directly via currency substitution.
Introduction
The purpose of this paper is to determine whether domestic money holdings are influenced by foreign exchange considerations, an influence generally known as currency substitution. Intuitively, one would expect such considerations to influence holdings of domestic money, given the increased integration of international markets and the interdependency of asset holdings. As a result, changes in either foreign interest rates or exchange rates should induce changes in the optimal portfolio mix with a corresponding impact on domestic money holdings.
Knowledge of whether currency substitution exists is important for the design and implementation of monetary policy for several reasons. First, the intended effect of an open-market operation will not materialize if offsetting portfolio changes take place through currency substitution.
In Part III of this volume, Barnett's paper (Chapter 6) brings together all of the currently available aggregation and index number theory relevant to monetary aggregation under perfect certainty. That theory includes demand side aggregation theory – which deals with aggregation over monetary assets when the aggregator functions are weakly separable input blocks in production functions or in utility functions – and supply side aggregation theory, which deals with aggregation over monetary assets when the aggregator functions are weakly separable output blocks in the transformation functions of multiproduct financial intermediaries. The theory also deals with aggregation over firms and consumers, technical change, and value added in financial intermediation. However, when uncertainty exists, the theory presented in Barnett's paper assumes risk neutrality. In that case all economic agents can be viewed as solving decision problems in a perfect-certainty form, with all random variables replaced by their expectations. In other words, a form of certainty equivalence is assumed.
Poterba and Rotemberg's paper
The risk neutrality assumption produces tremendous simplification and provides access to all of the existing literature on aggregation and index number theory. Nevertheless, if firms and consumers are very risk-averse, a more general approach to modeling uncertainty could be preferable. The Poterba and Rotemberg paper (Chapter 10) seeks to tackle that difficult problem through the use of the expected utility approach to modeling decisions under uncertainty.
In this paper, I discuss two policy questions which I regard as unresolved: Should interest be paid on money, and should currency provision be in the hands of the government? An affirmative answer to the first question stands as one of the few widely accepted general results of monetary theory. My discussion is intended to cast doubt on it. There is no widely accepted answer to the second question; some have asserted that currency provision is a public good, while others have asserted that currency provision should be left to the market. My discussion of currency provision will not provide a resolution. Instead, I will discuss a way of formulating the question that seems to offer some hope for resolving it.
Payment of interest on money
The casual statement of the case for paying interest on money is familiar. Real balances are produced at zero social cost. In an equilibrium in which the real yield on other assets exceeds that on money – or, more generally, in which the marginal rate of substitution between future consumption and present consumption exceeds the real return on money – individuals face a positive alternative cost of holding money. Given the zero social cost, this positive alternative cost implies that too little money is being held. Payment of interest on money removes the positive alternative cost. I focus on one aspect of this casual statement: How does an equilibrium arise with a real return on money less than the relevant intertemporal marginal rate of substitution?
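One conventional way to put the casual statement into symbols (the notation is introduced here purely for illustration) is to write the household's margin for real balances as

$$
\frac{u_m}{u_c} = \frac{R - R_m}{1 + R},
$$

where $u_m/u_c$ is the marginal benefit of real balances measured in current consumption, $R$ is the nominal return on other assets, and $R_m$ is the nominal interest paid on money; the right-hand side is the private alternative cost of holding money. Because the social cost of supplying real balances is essentially zero, efficiency requires driving that private cost to zero, either by paying $R_m = R$ or by deflating until $R = 0$. The question raised in the text is how an equilibrium with $R_m < R$, and hence a positive wedge, arises in the first place.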
The class of general equilibrium models of Arrow (1964), Debreu (1959), McKenzie (1959), and others is an excellent starting point for the study of actual economies. On the positive side, this class of models can be used to address the standard macroeconomic concerns of inflation, growth, and unemployment, and also more general phenomena such as the objects and institutions of trade, the absence of insurance arrangements of some kinds, or the dispersion of consumption in a population. On the normative side, this class of models can be used to study stabilization policy and optimal monetary arrangements.
Contrary views are often expressed in professional conversations and in the literature. Indeed, the Arrow-Debreu model is often said to be operational only under such unrealistic assumptions as full information, complete markets, and no diversity. Here, however, an alternative view is argued. The Arrow-Debreu model can accommodate not only diversity in preferences and endowments but also private information, indivisibilities, spatial separation, limited communication, and limited commitment. That is, standard results on the existence of Pareto optimal allocations and on the existence and optimality of competitive equilibrium allocations can be shown to apply to a large class of environments with these elements. Further, stylized but suggestive models with these elements can be constructed and made operational so that Pareto optimal and/or competitive equilibrium allocations can be characterized.
The development of the microfoundation of macroeconomics has been slow going; very interesting, but slow going. I want to begin with an elaboration of why this subject is hard (for me). I will then discuss two lines of research that explore what happens when economies are modeled without a Walrasian auctioneer – search theory (Section 1) and the theory of bank runs (Section 2). Of course, this is only a piece of the literature that identifies itself as the microfoundations of macroeconomics. I will concentrate on the flavor of results, rather than on the technical problems of constructing tractable macro-oriented models within the micro rules of model construction.
Microanalysis
The fundamental theorem of welfare economics makes possible straightforward use of the competitive general equilibrium model for the analysis of distortions in the economy. Because the economy would be Pareto optimal otherwise, it is easy to see the welfare implications of altering one of the assumptions of the Arrow-Debreu model. Observing pollution, one changes the assumption that production decisions do not affect individual utilities directly. One can then analyze the utility implications of pollution (compared with a different production technology or a different production decision), calculate socially optimal production decisions (for different criteria and different accompanying income redistribution tools), and design price and tax policies to improve or optimize resource allocation. A parallel literature analyzes congestion.
Auctions are one of the oldest surviving classes of economic institutions. The first historical record of an auction is usually attributed to Herodotus, who reported a custom in Babylonia in which men bid for women to wed. Other observers have reported auctions throughout the ancient world – in Babylonia, Greece, the Roman Empire, China, and Japan.
As impressive as the historical longevity of auctions is the remarkable range of situations in which they are currently used. There are auctions for livestock, a commodity for which many close substitutes are available. There are also auctions for rare and unusual items like large diamonds, works of art, and other collectibles. Durables (e.g., used machinery), perishables (e.g., fresh fish), financial assets (e.g., U.S. Treasury bills), and supply and construction contracts are all commonly bought or sold at auction. The auction sales of unique items have suggested to some that auctions are a good vehicle for monopolists. But it is not only those in a strong market position who use auctions. There are also auction sales of the land, equipment, and supplies of bankrupt firms and farms. These show that auctions are used by sellers who are desperate for cash and willing to sell even at prices far below replacement cost.
Controlled experiments conducted by economists under laboratory conditions have a relatively short history. Only in the last ten years has laboratory experimentation in economics completed the transition from being a seldom-encountered curiosity to a well-established part of the economic literature. (The Journal of Economic Literature has this year initiated a separate bibliographic category for “Experimental Economic Methods.”) How this came to pass makes for an interesting episode in the history and sociology of science. In this chapter, however, we discuss the different uses to which laboratory experimentation is being put in economics.
I think that, loosely speaking, many of the experiments that have been conducted to date fall on an imaginary continuum somewhere between experiments associated with testing and modifying formal economic theories (which I shall call “Speaking to Theorists”), and those associated with having a direct input into the policy-making process (“Whispering in the Ears of Princes”). Somewhere in between lie experiments designed to collect data on interesting phenomena and important institutions, in the hope of detecting unanticipated regularities (“Searching for Facts”). Most experimental investigations contain elements from more than one of these categories.
In the following sections I briefly describe examples of each of these activities, and how they interact with and contribute to other parts of economic research. I like to think of economists who do experiments as being involved in three kinds of (overlapping) dialogues, which the examples are designed to illustrate. Material discussed under the heading “Speaking to Theorists” illustrates the kind of dialogue that can exist between experimenters and theorists, whereas the material in “Searching for Facts” illustrates the kind of dialogue that experimenters can engage in with one another. The section entitled “Whispering in the Ears of Princes” deals, of course, with the kind of dialogue that experimenters can have with policy makers.
The purpose of this survey is to review the development of the sequential strategic approach to the bargaining problem, and to explain why I believe that this theory may provide a foundation for further developments in other central areas of economic theory.
John Nash started his 1950 paper by defining the bargaining situation:
A two-person bargaining situation involves two individuals who have the opportunity to collaborate for mutual benefit in more than one way. ... The two individuals are highly rational, ... each can accurately compare his desire for various things, ... they are equal in bargaining skill.
Given a bargaining situation, we look for a theoretical prediction of what agreement, if any, will be reached by the two parties.
I began with this clarification because of the existing confusion in some of the literature between the above problem and the following (nonexclusive) ethical questions: “What is a just agreement?” “What is a reasonable outcome for an arbitrator’s decision?” and “What agreement is optimal for society as a whole?” These questions differ from the current one mainly in that they allow derivation of an answer from a social welfare optimization. An a priori assumption that a solution to the bargaining problem satisfies collective rationality properties seems inappropriate.
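For reference only (it is not the approach this survey advocates), Nash's (1950) axiomatic answer to the prediction problem can be written compactly: given the set $S$ of feasible utility pairs and the disagreement point $d = (d_1, d_2)$, the Nash solution selects

$$
(u_1^{*}, u_2^{*}) = \arg\max_{(u_1, u_2) \in S,\; u_i \ge d_i} \;(u_1 - d_1)(u_2 - d_2).
$$

The sequential strategic approach surveyed here instead seeks the prediction from an explicit extensive-form model of the bargaining process itself.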
In the last five years, economists have begun to apply the theory of games of incomplete information in extensive form to problems of industrial competition. As a result, we are beginning to get a theoretical handle on some aspects of the rich variety of behavior that marks real strategic interactions but that has previously resisted analysis. For example, the only theoretically consistent analyses of predatory pricing available five years ago indicated that such behavior was pointless and should be presumed to be rare; now we have several distinct models pointing in the opposite direction. These not only formalize and justify arguments for predation that had previously been put forward by business people, lawyers, and students of industrial practice; they also provide subtle new insights that call into question both prevailing public policy and legal standards and various suggestions for their reform. In a similar fashion, we now have models offering strategic, information-based explanations for such phenomena as price wars, the use of apparently uninformative advertising, limit pricing, patterns of implicit cooperation and collusion, the breakdown of bargaining and delays of agreement, the use of warranties and service contracts, the form of pricing chosen by oligopolists, the nature of contracts between suppliers and customers, and the adoption of various institutions for exchange: almost all of this was unavailable five years ago.