Abstract: Over the last decade, increasing use has been made of individual household data to analyse the gains and losses from tax reform. Much attention has been paid to the econometric estimation of models of household responses to taxes. But these models yield valid estimates of the welfare consequences of tax changes only when the implied preference orderings are well behaved. This chapter discusses the nature of such conditions in detail. Where there are nonlinearities in the budget constraint, two sets of primal and dual conditions must be satisfied. The analysis of these conditions yields suggestions for the specification of behavioural models and the use of individual-specific information in the observed data.
Introduction
No subject could be more appropriate or topical for the first World Congress of the Society to be held in the United States than the empirical analysis of tax reform. In May 1985 the president sent his proposals for tax reform to Congress in order “to change our present tax system into a model of fairness, simplicity, efficiency, and compassion, to remove the obstacles to growth and unlock the door to a future of unparalleled innovation and achievement” (U.S. 1985). If enacted, these proposals would make a significant difference to the living standards of many families. The average reduction in taxes as a proportion of income is estimated at 0.6 percent. But only 58.1 percent of families would experience a reduction in taxes (U.S. 1985, Chart 13). It is clear that there are substantial numbers of gainers and losers.
In a frequently quoted remark, Joseph Schumpeter characterized the Walrasian system of general equilibrium as the “Magna Carta” of economics. What is less well remembered is that Schumpeter also believed that the Walrasian system could never be more than a broad organizational framework for thinking through the implications of interdependence between markets and economic actors. In Schumpeter's view, operationalizing Walras in the sense of providing a usable tool for policymakers and planners to evaluate the implications of different courses of action was a utopian pipedream.
Despite Schumpeter's cautions, operationalizing Walras has nonetheless been a preoccupation of economists for several decades. The debates in the 1930s on the feasibility of centralized calculation of a Pareto-optimal allocation of resources in a Socialist economy involving von Mises, Hayek, Robbins, and Lange (and begun earlier by Barone) were implicitly debates on the operational content of the Walrasian system. The subsequent development by Leontief and others of input-output analysis was a conscious attempt to take Walras onto an empirical and ultimately policy-relevant plane. The linear and nonlinear programming planning models in the 1950s and 1960s were viewed at the time very much as an improvement on input-output techniques through the introduction of optimization into Leontief's work. And today, with the use of applied general equilibrium models for policy evaluation, the same idea of operationalizing Walras is driving a new generation of economists forward.
Abstract In this chapter I discuss some empirical evidence that bears on the validity of life-cycle models of consumer behavior. I make no attempt to provide a survey but rather focus on a number of specific issues that seem to me to be important or that seem to have been unreasonably neglected in the current literature. The chapter has three sections. The first looks at the stylized facts. In particular, I look at the nonparametric evidence with emphasis on both consumption and labor supply and the interaction between them. I present some aggregate time series data from the United States; these suggest that simple representative agent models of the life cycle are unlikely to be very helpful, at least without substantial modification. It is particularly hard to come up with one explanation that is consistent both with these data and with the wealth of evidence on consumption and labor supply from microeconomic information. However, I argue that the main problem here is not so much the theory as the aggregation; except under extremely implausible assumptions, including the supposition that consumers are immortal, life-cycle theory does not predict the sort of aggregate relationships that are implied by representative agent models. In particular, it makes little sense to look for a simple relationship between the real rate of interest and the rate of growth of aggregate consumption. Section 2 is concerned with the estimation of parametric models on aggregate time series data. I review briefly the “excess-sensitivity” issue as well as some of the econometric problems associated with the nonstationarity of the income and consumption time series. My main point, however, is to argue that there are interactions between the time series representation of income and the life-cycle model that have not been adequately recognized in the literature.
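A concrete way to see the “simple relationship” the chapter argues against: under a representative-agent model with isoelastic utility (the notation below is mine, not the chapter's), the consumption Euler equation log-linearizes to a tight link between aggregate consumption growth and the real interest rate:

```latex
% Representative-agent Euler equation (my notation, as a sketch):
u'(c_t) \;=\; \beta\,(1 + r_{t+1})\,\mathrm{E}_t\!\left[\,u'(c_{t+1})\,\right]
% With u(c) = c^{1-\rho}/(1-\rho) and subjective discount rate \delta,
% log-linearizing gives approximately
\Delta \ln c_{t+1} \;\approx\; \frac{1}{\rho}\,\bigl(r_{t+1} - \delta\bigr)
```

The chapter's point is that once aggregation over finite-lived, heterogeneous consumers is taken seriously, life-cycle theory does not deliver any such aggregate relationship.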
Abstract This chapter is devoted to the analysis of solutions of linear rational-expectations models. Successively introducing various types of expectations (perfect, naive, adaptive, and rational) in the Muth model, the final reduced forms and the linear stationary solutions are compared. The main solution techniques for rational-expectations models are reviewed on the Cagan model. The “non-uniqueness problem” is also discussed. The reduced form of a very general linear model is given and the linear stationary solutions are parametrically described. The parameters have a simple interpretation and allow for statistical applications. Finally, a generalization to multivariate rational-expectations models is given. If no invertibility conditions are imposed on the structural coefficient matrices, the solution techniques used in the univariate case become insufficient. A method is suggested to obtain the general solution of a multivariate model and to characterize the dimension of the solutions space.
Introduction
The problem of modeling the mechanism by which economic agents form their expectations is fundamental in macroeconomic theory. In many models, it is essential to include expectations of future variables. However, such expectations are often unobservable. Therefore, assumptions on their formation are needed to complete the specification.
The rational-expectations hypothesis was introduced in a seminal paper by Muth (1961). Several years later the assumption was incorporated into many macromodels (e.g., Sargent and Wallace 1975, 1976; Lucas 1976; Taylor 1979). In these models, expectations are optimal predictions given all the available information. Rational expectations are thus based on an information set that may be chosen by the model builder.
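Concretely, the hypothesis identifies the agents' subjective expectation with the mathematical conditional expectation given the chosen information set (notation mine, as a sketch):

```latex
x_t^{e} \;=\; \mathrm{E}\!\left[\,x_t \mid \Omega_{t-1}\,\right]
```

where $x_t^{e}$ is the expectation of $x_t$ formed at date $t-1$ and $\Omega_{t-1}$ is the information set available then; different choices of $\Omega_{t-1}$ underlie the different solution classes compared in the chapter.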
Abstract: This chapter surveys recent empirical work on tests for liquidity constraints. The focus of the survey is on tests based on the Euler equation. The main conclusions are the following. (1) The available evidence indicates that for a significant fraction of households in the population, consumption is affected in the way predicted by credit rationing and differential borrowing and lending rates. (2) However, the available evidence does not give answers to such important questions as the response of consumption to permanent and temporary income changes and the validity of the Ricardian equivalence theorem. (3) Future research should examine the cause, not the existence, of liquidity constraints.
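The Euler-equation logic behind these tests can be sketched as follows (assumptions and notation mine, not the chapter's): when a borrowing constraint may bind, the standard intertemporal first-order condition acquires a nonnegative multiplier:

```latex
u'(c_t) \;=\; \beta\,(1 + r_{t+1})\,\mathrm{E}_t\!\left[\,u'(c_{t+1})\,\right] \;+\; \lambda_t,
\qquad \lambda_t \ge 0
```

Here $\lambda_t > 0$ exactly when the household would like to borrow more at the going rate; the tests surveyed ask whether the unconstrained version ($\lambda_t = 0$ for all $t$) is rejected by the data.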
Introduction
The issue of liquidity constraints comes up in several areas of economics. The main ingredient in modern theories of business cycles is the consumer who executes intertemporal optimization through trading in perfectly competitive asset markets. Traditionally, the life cycle-permanent income hypothesis has been the label for such consumer behavior. Some authors have argued that the observed comovements of consumption and income (or the lack thereof) can best be explained by examining the role of liquidity constraints as the additional constraint in the consumers’ decision problem. The notion that consumers are unable to borrow as they desire is also used to argue against the Ricardian doctrine of the equivalence of taxes and deficits. In the literature on implicit labor contracts, the assumption is often made that workers are unable to borrow against future earnings. Liquidity constraints have even been used in some instances as an excuse to focus on static single-period analyses.
Four methodological prescriptions for empirical research
I do not wish to address “methodology” with a capital M in the sense of the meaning of life, of probability, and the definitions of truth, understanding, and progress. If we do not share roughly common notions and concepts from the outset, nothing I can write in the space of this essay will help resolve that state. Instead, given that the primary objective of econometric modelling is to provide an “explanation” for actual economic behaviour (which can include descriptive and predictive aims as well), I seek to communicate the broad framework of my approach and some of the principles and practical procedures it entails.
Over a period of about 20 years of analysing empirical phenomena, I have noted down four golden prescriptions for those who seek to study data:
I. THINK BRILLIANTLY. This helps greatly, especially if you can think of the right answer at the start of a study! Then econometrics is only needed to confirm your brilliance, calibrate the model, and demonstrate that it indeed passes all the tests.
II. BE INFINITELY CREATIVE. Be assured that this is an almost perfect substitute for brilliant thinking, enabling one to invent truly excellent models en route and so achieve essentially the same end state.
In this chapter we study an economy in which there are two technologies for making payments. The first is currency; the second, bank drafts drawn on interest-bearing demand deposits. The interest-bearing asset does not dominate the noninterest-bearing currency because there is a fixed recordkeeping cost incurred whenever a bank draft is used as the means of payment. The steady-state equilibrium is characterized. It is found that the value of the good or (more precisely) package of goods purchased at a given location determines which means of payment is used. Bank drafts are used for large purchases and currency for small purchases.
In the environment studied, the highly centralized Arrow–Debreu competitive equilibrium is impractical, because the number of date-, event-, and location-contingent commodities is so large that the resources required for information collection and processing would be prohibitive. In this sense we follow Brunner and Meltzer (1971) and take the chief role of money to be economizing on costly information collection and processing.
The approach is close in spirit to that of Townsend (1980), who views the payment system as a communication system. It differs in that no effort is made to find the best arrangement. The arrangement studied, however, is sufficiently explicit that one can calibrate the model and then examine the costs and benefits associated with modifying the scheme – say, by imposing reserve requirements or interest-rate ceilings. Upper bounds for the gains that can be realized from alternative systems can be computed.
The contents of this volume comprise the proceedings of a conference held at the IC2 Institute at the University of Texas at Austin on May 23–24, 1985. The conference title was “New Approaches to Monetary Economics,” and it was organized to bring together presentations of some of the particularly innovative new research recently under way in the field of monetary economics. Much of this research develops and applies recently initiated approaches to modeling financial intermediation, aggregate fluctuations, monetary aggregation, and transactions-motivated monetary equilibrium. We believe that this conference included pathbreaking research and revealed some fundamental trends in the direction in which monetary economics research is beginning to move.
The conference that produced this proceedings volume is the second in a new conference series, called International Symposia in Economic Theory and Econometrics. The symposia in the series are sponsored by the IC2 Institute at the University of Texas at Austin and are cosponsored by the RGK Foundation. This second conference was also cosponsored by the Federal Reserve Bank of Dallas and by the Department of Economics and the Center for Statistical Sciences at the University of Texas at Austin. The first conference in the series was co-organized by William Barnett and Ronald Gallant, who also co-edited the proceedings volume. That volume appeared as volume 30 (October/November 1985) of the Journal of Econometrics.
The objective of this paper is to examine the impact of money on production for the individual financial firm, and to determine whether the monetary goods used can be aggregated on the supply side. Monetary goods are liquid financial assets and liabilities. They include cash as well as demand and time deposits. The financial firm is a profit-maximizing intermediary between borrowers and lenders. The technology of the financial firm includes quantities of monetary goods, other financial goods, and physical goods such as labor and materials. Financial firms are able to set interest rates on monetary goods, and so are not necessarily price takers in such markets.
A test procedure is developed to determine whether monetary goods are separable from nonmonetary goods in production. The test is general. It imposes no functional form restriction on money, and no restriction on which goods can be contained in money. Although the application is to firm data, the test can be applied at the aggregate level. Linear homogeneity of the money index is not required, although it may be imposed by data restrictions.
The financial firm operates to maximize variable profit – revenue less variable cost. The resulting profit function depends on the prices of nonmonetary goods and the quantities of monetary goods. If a monetary index exists at the level of the firm, then marginal rates of substitution or transformation between nonmonetary goods do not depend on quantities of monetary goods.
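The separability being tested can be sketched in primal form (notation mine, not the paper's): monetary goods $m$ are weakly separable from nonmonetary goods $x$ in the technology $F$ when

```latex
F(m, x) \;=\; G\!\left(M(m),\, x\right)
\quad\Longleftrightarrow\quad
\frac{\partial}{\partial x_k}\!\left(
  \frac{\partial F/\partial m_i}{\partial F/\partial m_j}
\right) = 0
\quad \text{for all } i, j, k
```

that is, marginal rates of substitution among the monetary goods are independent of nonmonetary quantities, and the subaggregator $M(\cdot)$ can then serve as the monetary index.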
There is increasing recognition that Lucas's (1976) critique of econometric policy evaluation, at least under its usual interpretation, is logically flawed. The point has been forcefully made recently by Sargent (1984) and by Cooley, LeRoy, and Raymon (1984) (henceforth referred to as CLR), as well as in my own paper (1982). The problem is that if the parameters of the policy “rule” are subject to change, as they must be if it makes sense to evaluate changes in them, then the public must recognize this fact and have a probability distribution over the parameters of the rule. But then these parameters are themselves policy variables, taking on a time series of values drawn from some probability law. Predicting how the economy will behave if we set the parameters of the rule at some value and keep them there is logically equivalent to predicting the behavior of the economy conditional on a certain path of a policy variable. Yet this is just the kind of exercise that Lucas claimed to be meaningless.
It is also evident that the methods of policy evaluation that Lucas criticized are still in wide use nine years after the appearance of his paper. During discussions of monetary and fiscal policy, statistical models prepared by the Congressional Budget Office, the Federal Reserve Board, numerous other agencies, and by private entities are used to prepare predictions of the likely future path of the economy, conditional on various possible paths for policy variables.
In recent decades there has been a resurgence of interest in index numbers resulting from discoveries that the properties of index numbers can be directly related to the properties of the underlying aggregator functions that they represent. The underlying functions – production functions, utility functions, etc. – are the building blocks of economic theory, and the study of relationships between these functions and index number formulas has been referred to by Samuelson and Swamy (1974) as the economic theory of index numbers.
Introduction
The use of economic index number theory was introduced into monetary theory by Barnett (1980a, 1981a). His merger of economic index number theory with monetary theory was based upon the use of Diewert's approach to producing “superlative” approximations to the exact aggregates from consumer demand theory. As a result, Barnett's approach produces a Diewert-superlative measure of the monetary service flow perceived to be received by consumers from their monetary asset portfolio. However, aggregation and index number theory are highly developed in production theory as well as in consumer demand theory. Substantial literatures exist on aggregation over factor inputs demanded by firms, aggregation over multiple product outputs produced by firms, and aggregation over individual firms and consumers. In addition, substantial literatures exist on exact measurement of value added by firms and of technical change by firms. All of these literatures are potentially relevant to closing a cleared money market in an exact aggregation-theoretic monetary aggregate.
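For reference, the Diewert-superlative index that Barnett employs on the demand side is, to my understanding, the Törnqvist–Theil discrete-time Divisia index (notation mine):

```latex
\ln\frac{M_t}{M_{t-1}} \;=\; \sum_{i} \tfrac{1}{2}\bigl(s_{it} + s_{i,t-1}\bigr)\,
\ln\frac{m_{it}}{m_{i,t-1}}
```

where $m_{it}$ is the holding of monetary asset $i$ and $s_{it}$ is its expenditure share evaluated at user costs; “superlative” means the index is exact for an aggregator function that is flexible (a second-order approximation to an arbitrary aggregator).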
A general equilibrium model of an economy is presented where people hold money rather than bonds in order to economize on transaction costs. It is not optimal for individuals to instantaneously adjust their money holdings when new information arrives. This (endogenous) delayed response to new information generates a response to a new monetary policy that is quite different from that of standard flexible-price models of monetary equilibrium. Though all goods markets instantaneously clear, the transaction cost causes delayed responses in nominal variables to a change in monetary policy. This in turn causes real variables to respond to the new monetary policy.
Earlier work by Grossman and Weiss (1983), Grossman (1982), and Rotemberg (1984) has considered models of the above type where individuals hold money for an exogenously fixed amount of time – their “payment period.” As in the model to be developed here, these models assume that goods can be bought only with cash. However, unlike what we will assume here, individuals can exchange bonds for cash only on the exogenously fixed “paydates” which occur at the beginning and end of their payment periods. Thus an individual's money holding period is exogenously given and insensitive to the nominal interest rate. In such models, when there is an unanticipated increase in the money supply, people can be induced to hold the new money only by a large fall in the real rate of interest.
Abstract: We characterize the role of a central bank as a mechanism designer for risk-sharing across banks that are subject to privately observed “liquidity shocks.” The optimal mechanism involves borrowing/lending from a “discount window.” The optimal discount rate and the induced distortions in holdings of liquid assets suggest a rationale for subsidized lending and reserve requirements on the observable part of liquid asset holdings.
Introduction
Several recent papers have examined the micro-theoretic foundations for a theory of financial intermediation. The role of intermediaries as agents who provide delegated monitoring services has been developed in Leland and Pyle (1977) and Diamond (1984). More recently, Bryant (1980) and Diamond and Dybvig (1983) have considered issues pertaining to the optimal form of intermediary (deposit) contracts. They examine intertemporal models in which depositors are subject to privately observed preference shocks and the returns to investments depend on their time to maturity (liquidity). Within this framework, Bryant, Diamond-Dybvig and Jacklin (1986) have demonstrated the superiority of deposit contracts over Walrasian (mutual fund) trading mechanisms in providing agents with insurance for risks connected with preference shocks.
The work on banking contracts has also served to focus attention on problems of coordination across agents who have private information on (risky) investments undertaken by the depository intermediaries or mutual funds. The pioneering study of Bryant (1980) considered the instabilities (panics) and imperfect risk-sharing that would arise if bank depositors (with fixed commitment contracts) make earlier withdrawals based on information about asset returns.
This chapter considers the determination of interest rates and prices in a simple intertemporal general equilibrium framework. The resulting theory is used to examine the short- and long-run consequences of a policy of credit expansion through inside money creation.
The intertemporal equilibrium framework used here combines the demographic structure of an overlapping-generations model with a cash-in-advance constraint. The cash-in-advance constraint allows valued fiat money to coexist in equilibrium with debt, yielding a positive nominal interest rate. The specific structure considered here allows a traditional account of the “liquidity effect” of open market operations upon the nominal interest rate in the short run to be rigorously connected with a long-run equilibrium analysis. The absence of such a theoretical connection has led to a certain amount of confusion in the literature about whether or not the liquidity effect of credit expansion, generally agreed to exist in the short run, can be maintained in the long run. It is shown that for the model economy considered here, open market operations can keep both the nominal and the real rate of interest low forever; and, whereas a lower interest rate is achieved in the short run only at the cost of a rise in the price level, in the long run different interest rates may be equally compatible with price-level stability.
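In its simplest form, the cash-in-advance constraint at the center of this framework can be written (notation mine, as a sketch) as

```latex
p_t\, c_t \;\le\; m_t
```

purchases of consumption goods at date $t$ cannot exceed the money balances $m_t$ carried into the period. Because bonds cannot be used directly for purchases, agents hold money despite the interest it forgoes, which is what allows fiat money and interest-bearing debt to coexist at a positive nominal rate.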