As mentioned in Chapter 1, there just isn't enough room in a standard engineering curriculum to include much material of a historical nature. In any case, goes one common argument, looking backwards is a waste of time, particularly since constraints on circuit design in the IC era are considerably different from what they once were. Therefore, continues this reasoning, solutions that worked well in some past age are irrelevant now. That argument might be valid, but perhaps authors shouldn't prejudge the issue. This final chapter therefore closes the book by presenting a tiny (and nonuniform) sampling of circuits that represent important milestones in RF (actually, radio) design. This selection is somewhat skewed, of course, because it reflects the author's biases; conspicuously absent, for example, is the remarkable story of radar, mainly because it has been so well documented elsewhere. Circuits developed specifically for television are not included either. Rather, the focus is more on early consumer radio circuits, since the literature on this subject is scattered and scarce. There is thus a heavy representation by circuits of Armstrong, because he laid the foundations of modern communications circuitry.
ARMSTRONG
Armstrong was the first to publish a coherent explanation of vacuum tube operation. In “Some Recent Developments in the Audion Receiver,” published in 1915, he offers a plot of the V–I characteristics of the triode, something that the quantitatively impaired de Forest had never bothered to present.
The wireless telegraph is not difficult to understand. The ordinary telegraph is like a very long cat. You pull the tail in New York, and it meows in Los Angeles. The wireless is the same, only without the cat.
–Albert Einstein, 1938
A BRIEF, INCOMPLETE HISTORY OF WIRELESS SYSTEMS
Einstein's claim notwithstanding, modern wireless is difficult to understand, with or without a cat. For proof, one need only consider how the cell phone of Figure 2.1 differs from a crystal radio.
Modern wireless systems are the result of advances in information and control theory, signal processing, electromagnetic field theory and developments in circuit design – just to name a few of the relevant disciplines. Each of these topics deserves treatment in a separate textbook or three, and we will necessarily have to commit many errors of omission. We can aspire only to avoid serious errors of commission.
As always, we look to history to impose a semblance of order on these ideas.
THE CENOZOIC ERA
The transition from spark telegraphy to carrier radiotelephony took place over a period of about a decade. By the end of the First World War, spark's days were essentially over, and the few remaining spark stations would be decommissioned (and, in fact, their use outlawed) by the early 1920s. The superiority of carrier-based wireless ensured the dominance that continues to this day.
We've seen that RF circuits generally have many passive components. Successful design therefore depends critically on a detailed understanding of their characteristics. Since mainstream integrated circuit (IC) processes have evolved largely to satisfy the demands of digital electronics, the RF IC designer has been left with a limited palette of passive devices. For example, inductors larger than about 10 nH consume significant die area and have relatively poor Q (typically below 10) and low self-resonant frequency. Capacitors with high Q and low temperature coefficient are available, but tolerances are relatively loose (e.g., order of 20% or worse). Additionally, the most area-efficient capacitors also tend to have high loss and poor voltage coefficients. Resistors with low self-capacitance and temperature coefficient are hard to come by, and one must also occasionally contend with high voltage coefficients, loose tolerances, and a limited range of values.
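To make these numbers concrete, the short sketch below computes the quality factor and self-resonant frequency of a representative on-chip spiral inductor. The series resistance and parasitic capacitance values are assumptions chosen only to illustrate the orders of magnitude quoted above, not data from any particular process.

```python
import math

# Representative on-chip spiral inductor; R_series and C_par are assumed values
L = 10e-9        # inductance: 10 nH
R_series = 15.0  # effective series resistance (ohms), assumed
C_par = 250e-15  # parasitic capacitance to substrate (farads), assumed
f = 2.4e9        # operating frequency: 2.4 GHz

omega = 2 * math.pi * f
Q = omega * L / R_series                          # series-loss quality factor
f_sr = 1 / (2 * math.pi * math.sqrt(L * C_par))   # self-resonant frequency

print(f"Q at {f/1e9:.1f} GHz: {Q:.1f}")                 # roughly 10
print(f"Self-resonant frequency: {f_sr/1e9:.2f} GHz")   # roughly 3.2 GHz
```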
In this chapter, we examine IC resistors, capacitors, and inductors (including bondwires, since they are often the best inductors available). Also, given the ubiquity of interconnect, we study its properties in detail since its parasitics at high frequencies can be quite important.
INTERCONNECT AT RADIO FREQUENCIES: SKIN EFFECT
At low frequencies, the properties of interconnect we care about most are resistivity, current-handling ability, and perhaps capacitance. As frequency increases, we find that inductance might become important. Furthermore, we invariably discover that the resistance increases owing to a phenomenon known as the skin effect.
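A rough feel for the skin effect comes from the standard skin-depth formula, delta = sqrt(rho / (pi * f * mu)). The sketch below evaluates it for a copper-like conductor at a few gigahertz frequencies; the resistivity figure is a typical room-temperature value, used here purely for illustration.

```python
import math

MU_0 = 4 * math.pi * 1e-7   # permeability of free space (H/m)

def skin_depth(rho, f, mu_r=1.0):
    """Skin depth: delta = sqrt(rho / (pi * f * mu))."""
    return math.sqrt(rho / (math.pi * f * mu_r * MU_0))

# Copper resistivity, approximately 1.7e-8 ohm-m at room temperature
for f in (1e9, 2.4e9, 5e9):
    print(f"{f/1e9:.1f} GHz: skin depth ~ {skin_depth(1.7e-8, f)*1e6:.2f} um")
```

At 1 GHz the skin depth is only about two micrometres, comparable to the thickness of typical on-chip metal, which is why interconnect resistance rises noticeably at radio frequencies.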
The design of amplifiers at high frequencies involves more detailed considerations than at lower frequencies. One simply has to work harder to obtain the requisite performance when approaching the inherent limitations of the devices themselves. Additionally, the effect of ever-present parasitic capacitances and inductances can impose serious constraints on achievable performance.
At lower frequencies, the method of open-circuit time constants is a powerful intuitive aid in the design of high-bandwidth amplifiers. Unfortunately, by focusing on minimizing various RC products, it leads one to a relatively narrow set of options to improve bandwidth. For example, we may choose to distribute the gain among several stages or alter bias points, all in an effort to reduce effective resistances. These actions usually involve an increase in power or complexity, and at extremes of bandwidth such increases may be costly. In other cases, open-circuit time constants may erroneously predict that the desired goals are simply unattainable.
A hint that other options exist can be found by revisiting the assumptions underlying the method of open-circuit time constants. As we've seen, the method provides a good estimate for bandwidth only if the system is dominated by one pole. In other cases, it yields increasingly pessimistic estimates as the number of poles increases or their damping ratio increases. Additionally, open-circuit time constants can grossly underestimate bandwidth if there are zeros in the passband.
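The pessimism for multi-pole systems is easy to quantify in a toy example (not taken from the text): for two coincident real poles, the open-circuit time-constant sum predicts a bandwidth of half the pole frequency, whereas the true -3 dB frequency is about 0.64 times the pole frequency.

```python
import math

# Hypothetical amplifier with two identical real poles at omega_0 and no zeros.
omega_0 = 2 * math.pi * 1e9   # each pole at 1 GHz

# Open-circuit time-constant estimate: omega_h ~ 1 / sum(tau_i)
taus = [1 / omega_0, 1 / omega_0]
omega_oct = 1 / sum(taus)

# Exact -3 dB frequency: |H| falls to 1/sqrt(2) when 1 + (w/w0)^2 = sqrt(2)
omega_exact = omega_0 * math.sqrt(math.sqrt(2) - 1)

print(f"OCT estimate : {omega_oct / (2 * math.pi * 1e9):.2f} GHz")
print(f"Exact -3 dB  : {omega_exact / (2 * math.pi * 1e9):.2f} GHz")
# The OCT estimate (0.50 GHz) is pessimistic relative to the exact value (~0.64 GHz).
```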
The field of radio frequency (RF) circuit design is currently enjoying a renaissance, driven in particular by the recent, and largely unanticipated, explosive growth in wireless telecommunications. Because this resurgence of interest in RF caught industry and academia by surprise, there has been a mad scramble to educate a new generation of RF engineers. However, in trying to synthesize the two traditions of “conventional” RF and lower-frequency IC design, one encounters a problem: “Traditional” RF engineers and analog IC designers often find communication with each other difficult because of their diverse backgrounds and the differences in the media in which they realize their circuits. Radio-frequency IC design, particularly in CMOS, is a different activity altogether from discrete RF design. This book is intended as both a link to the past and a pointer to the future.
The contents of this book derive from a set of notes used to teach a one-term advanced graduate course on RF IC design at Stanford University. The course was a follow-up to a low-frequency analog IC design class, and this book therefore assumes that the reader is intimately familiar with that subject, described in standard texts such as Analysis and Design of Analog Integrated Circuits by P. R. Gray and R. G. Meyer (Wiley, 1993). Some review material is provided, so that the practicing engineer with a few neurons surviving from undergraduate education will be able to dive in without too much disorientation.
There are two important regimes of operating frequency, distinguished by whether one may treat circuit elements as “lumped” or distributed. The fuzzy boundary between these two regimes concerns the ratio of the physical dimensions of the circuit to the shortest wavelength of interest. At high enough frequencies, the size of the circuit elements becomes comparable to the wavelengths, and one cannot employ with impunity intuition derived from lumped-circuit theory. Wires must then be treated as the transmission lines that they truly are, Kirchhoff's “laws” no longer hold generally, and identification of R, L, and C ceases to be obvious (or even possible).
Thanks to the small dimensions involved in ICs, though, it turns out that we can largely ignore transmission-line effects well into the gigahertz range, at least on-chip. So, in this text, we will focus primarily on lumped-parameter descriptions of our circuits. For the sake of completeness, however, we should be a little less cavalier (we'll still be cavalier, just less so) and spend some time talking about how and where one draws a boundary between lumped and distributed domains. In order to do this properly, we need to revisit (briefly) Maxwell's equations.
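As a rough numerical illustration of where that boundary lies, the sketch below compares the on-chip guided wavelength with the common rule of thumb that lumped treatment is safe when dimensions stay well below about a tenth of a wavelength; the effective dielectric constant is an assumed, oxide-like value.

```python
import math

C0 = 3e8  # speed of light in vacuum (m/s)

def wavelength_on_chip(f, eps_eff=4.0):
    """Guided wavelength, assuming an effective dielectric constant of ~4 (oxide)."""
    return C0 / (f * math.sqrt(eps_eff))

f = 10e9                   # 10 GHz
lam = wavelength_on_chip(f)
lumped_limit = lam / 10    # rule of thumb: lumped if dimensions << lambda/10

print(f"Wavelength at {f/1e9:.0f} GHz: {lam*1e3:.1f} mm")
print(f"Lumped-treatment rule of thumb (~lambda/10): {lumped_limit*1e3:.2f} mm")
# A chip edge of several mm is already marginal at 10 GHz, but most on-chip wires
# are far shorter, so lumped models remain reasonable well into the GHz range.
```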
MAXWELL AND KIRCHHOFF
Many students (and many practicing engineers, unfortunately) forget that Kirchhoff's voltage and current “laws” are approximations that hold only in the lumped regime (which we have yet to define).
The sensitivity of communications systems is limited by noise. The broadest definition of noise as “everything except the desired signal” is most emphatically not what we will use here, however, because it does not separate, say, artificial noise sources (e.g., 60-Hz power-line hum) from more fundamental (and therefore irreducible) sources of noise that we discuss in this chapter.
That these fundamental noise sources exist was widely appreciated only after the invention of the vacuum tube amplifier, when engineers finally had access to enough gain to make these noise sources noticeable. It became obvious that simply cascading more amplifiers eventually produces no further improvement in sensitivity because a mysterious noise exists that is amplified along with the signal. In audio systems, this noise is recognizable as a continuous hiss while, in video, the noise manifests itself as the characteristic “snow” of analog TV systems.
The noise sources remained mysterious until H. Nyquist, J. B. Johnson and W. Schottky published a series of papers that explained where the noise comes from and how much of it to expect. We now turn to an examination of the noise sources they identified.
THERMAL NOISE
Johnson was the first to report careful measurements of noise in resistors, and his colleague Nyquist explained them as a consequence of Brownian motion: thermally agitated charge carriers in a conductor constitute a randomly varying current that gives rise to a random voltage (via Ohm's law).
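Nyquist's result leads to the familiar expression for the open-circuit thermal noise voltage of a resistor, v_rms = sqrt(4kTRB). The sketch below evaluates it for a 50-ohm resistor; the choice of resistance, bandwidth, and temperature is illustrative only.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant (J/K)

def thermal_noise_vrms(R, bandwidth, T=290.0):
    """RMS thermal (Johnson) noise voltage: sqrt(4 k T R * bandwidth)."""
    return math.sqrt(4 * K_B * T * R * bandwidth)

# 50-ohm resistor over a 1-MHz bandwidth at T = 290 K
v_n = thermal_noise_vrms(50.0, 1e6)
print(f"Open-circuit noise voltage: {v_n*1e6:.2f} uV rms")   # ~0.90 uV
```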
This chapter focuses attention on those aspects of transistor behavior that are of immediate relevance to the RF circuit designer. Separation of first-order from higher-order phenomena is emphasized, so there are many instances when crude approximations are presented in the interest of developing insight. As a consequence, this review is intended as a supplement to – rather than a replacement for – traditional rigorous treatments of the subject. In particular, we must acknowledge that today's deep-submicron MOSFET is so complex a device that simple equations cannot possibly provide anything other than first-order (maybe even zeroth-order) approximations to the truth. The philosophy underlying this chapter is to convey a simple story that will enable first-pass designs, which are then verified by simulators using much more sophisticated models. Qualitative insights developed with the aid of the zeroth-order models enable the designer to react appropriately to bad news from the simulator. We design with a simpler set of models than those used for verification.
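As a concrete (and deliberately crude) example of such a zeroth-order hand model, the sketch below evaluates the long-channel square-law expression for drain current in saturation. The parameter values are illustrative assumptions rather than data for any real process, and the model ignores subthreshold conduction, velocity saturation, and every other short-channel effect that a simulator's models would capture.

```python
def drain_current_squarelaw(vgs, vt=0.5, k_prime=200e-6, w_over_l=20.0):
    """Zeroth-order long-channel saturation model: ID = (k'/2)(W/L)(VGS - VT)^2.
    All parameter values here are assumptions chosen for illustration only."""
    vov = vgs - vt
    if vov <= 0:
        return 0.0                       # subthreshold conduction ignored entirely
    return 0.5 * k_prime * w_over_l * vov ** 2

# First-pass hand estimate; a real design would be re-verified with full device models.
print(f"ID at VGS = 0.8 V: {drain_current_squarelaw(0.8)*1e3:.2f} mA")
```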
With that declaration out of the way, we now turn to some history before launching into a series of derivations.
A LITTLE HISTORY
Attempts to create field-effect transistors (FETs) actually predate the development of bipolar devices by over twenty years. In fact, the first patent application for a FET-like transistor was filed in 1926 by Julius Lilienfeld, but he never constructed a working device. Before co-inventing the bipolar transistor, William Shockley also tried to modulate the conductivity of a semiconductor to create a field-effect transistor.
Since publication of the first edition of this book in 1998, RF CMOS has made a rapid transition to commercialization. Back then, the only notable examples of RF CMOS circuits were academic and industrial prototypes. No companies were then shipping RF products using this technology, and conference panel sessions openly questioned the suitability of CMOS for such applications – often concluding in the negative. Few universities offered an RF integrated circuit design class of any kind, and only one taught a course solely dedicated to CMOS RF circuit design. Hampering development was the lack of device models that properly accounted for noise and impedance at gigahertz frequencies. Measurements and models so conflicted with one another that controversies raged about whether deep submicron CMOS suffered from fundamental scaling problems that would forever prevent the attainment of good noise figures.
Today, the situation is quite different, with many companies now manufacturing RF circuits using CMOS technology and with universities around the world teaching at least something about CMOS as an RF technology. Noise figures below 1 dB at gigahertz frequencies have been demonstrated in practical circuits, and excellent RF device models are now available. That pace of growth has created a demand for an updated edition of this textbook.
By
Fanny S. Demers, The Johns Hopkins University; Associate Professor at Carleton University, Ottawa, Canada,
Michael Demers, PhD from The Johns Hopkins University; Associate Professor at Carleton University, Ottawa, Canada,
Sumru Altug, Professor of Economics at Koç University in Istanbul, Turkey.
Investment decisions occupy a central role among the determinants of growth. As empirical studies such as Levine and Renelt (1992) have revealed, fixed investment as a share of gross domestic product (GDP) is the most robust explanatory variable of a country's growth. DeLong and Summers (1991) also provides evidence emphasizing the correlation of investment in equipment and machinery with growth. Investment is also the most variable component of GDP, and therefore an understanding of its determinants may shed light on the source of cyclical fluctuations. Policy-makers are typically concerned about the ultimate impact of alternative policy measures on investment and its variability. Several theories of investment have emerged since the 1960s in an attempt to explain the determinants of investment. The most notable of these have been the neoclassical model of investment, the cost-of-adjustment-Q-theory model, the time-to-build model, the irreversibility model under uncertainty and the fixed-cost (S, s) model of lumpy investment.
Beginning with the neoclassical model developed by Jorgenson and his collaborators (see for example, Hall and Jorgenson 1967; Jorgenson 1963), investment theory distinguishes between the actual capital stock and the desired or optimal capital stock, where the latter is determined by factors such as output and input prices, technology and interest rates. In the neoclassical model of investment, an exogenous partial adjustment mechanism is postulated to yield a gradual adjustment of the actual capital stock to its desired level as is observed in the data. An alternative way of obtaining a determinate rate of investment is to assume the existence of convex costs of adjustment, as has been proposed by Eisner and Strotz (1963), Gould (1967), Lucas (1976) and Treadway (1968). Abel (1979) and Hayashi (1982) have shown that the standard adjustment cost model leads to a Tobin's Q-theory of investment under perfect competition and a constant returns to scale production technology. A complementary explanation assumes that it takes time to build productive capacity. (See Altug 1989, 1993; Kydland and Prescott 1982.)
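A stylised numerical sketch of the partial-adjustment idea (with an assumed adjustment speed and target capital stock, chosen only for illustration) makes the mechanism explicit:

```python
# Partial-adjustment mechanism (stylised; the adjustment speed lam and target
# capital stock K_star are assumed values, used only for illustration):
#   K_{t+1} - K_t = lam * (K_star - K_t),  with 0 < lam < 1
lam, K_star, K = 0.25, 100.0, 60.0
for t in range(1, 9):
    investment = lam * (K_star - K)   # net investment this period
    K += investment
    print(f"t={t}: net investment = {investment:6.2f},  K = {K:7.2f}")
# The capital stock closes a constant fraction of the remaining gap each period,
# delivering the gradual adjustment toward the desired level observed in the data.
```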
A number of authors have emphasised irreversibility and uncertainty as important factors underlying the gradual adjustment of the capital stock. The notion of irreversibility, which can be traced back to Marschak (1949) and subsequently to Arrow (1968), was initially applied in the context of environmental economics where economic decisions, such as the destruction of a rain forest to build a highway, often entail actions that cannot be ‘undone.’
This chapter discusses work which applies the methods of stochastic dynamic programming (SDP) to the explanation of consumption and saving behaviour. The emphasis is on the intertemporal consumption and saving choices of individual decision-makers, which I will normally label as ‘households’. There are at least two reasons why it is important to try to explain such choices: first, it is intrinsically interesting; and, second, it is useful as a means of understanding, and potentially forecasting, movements in aggregate consumption, and thus contributing to understanding and/or forecasts of aggregate economic fluctuations. The latter motivation needs no further justification, given the priority which policy-makers attach to trying to prevent fluctuations in economic activity. The former motivation – intrinsic interest – is less often stressed by economists, but it is hard to see why: it is surely worthwhile for humankind to improve its understanding of human behaviour, and economists, along with other social scientists, have much to contribute here.
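To fix ideas, the sketch below sets up a bare-bones version of the kind of stochastic dynamic programming problem discussed in this chapter: a household with CRRA utility, a riskless return and i.i.d. labour income, solved by value-function iteration on a cash-on-hand grid. Every parameter value and the income process are illustrative assumptions rather than calibrated estimates.

```python
import numpy as np

# Minimal household consumption-saving problem:
#   V(x) = max_{0 <= a' <= x} u(x - a') + beta * E[V(R a' + y')],  y' i.i.d.
beta, R, gamma = 0.95, 1.03, 2.0                   # discount factor, gross return, CRRA
y_vals, y_prob = np.array([0.7, 1.3]), np.array([0.5, 0.5])   # two-state i.i.d. income
x_grid = np.linspace(0.05, 10.0, 200)              # cash-on-hand grid

u = lambda c: c ** (1 - gamma) / (1 - gamma)       # CRRA utility

V = u(x_grid)                                      # initial guess: consume everything
policy = np.zeros_like(V)
for _ in range(500):                               # Bellman iteration (beta^500 ~ 0)
    # expected continuation value for each candidate savings level a' = x_grid[j]
    # (np.interp clips beyond the grid, which is adequate for this illustration)
    EV = sum(p * np.interp(R * x_grid + y, x_grid, V) for y, p in zip(y_vals, y_prob))
    V_new = np.empty_like(V)
    for i, x in enumerate(x_grid):
        c = x - x_grid[: i + 1]                    # consumption for each feasible a'
        vals = u(np.maximum(c, 1e-9)) + beta * EV[: i + 1]
        j = int(np.argmax(vals))
        V_new[i], policy[i] = vals[j], c[j]
    V = V_new

k = int(np.searchsorted(x_grid, 2.0))
print(f"At cash on hand {x_grid[k]:.2f}, optimal consumption is about {policy[k]:.2f}")
```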
The application of SDP to household consumption behaviour is very recent, with the first published papers appearing only at the end of the 1980s. Young though it is, this research programme has already changed significantly the way in which economists now analyse consumption choice, and has overturned a number of previously widely held views about consumption behaviour. Any research programme which achieves such outcomes so quickly would normally be judged a success, and in many respects this is an appropriate judgement here. The judgement needs to be qualified, however, on at least two counts. First, some of the ideas which the SDP research programme has overturned, although previously widely believed by mainstream economists, were never subscribed to by those working outside the mainstream. Non-mainstream economists might argue that the SDP programme has simply allowed the mainstream to catch up with their own thinking. Second, there is room for doubt that SDP methods really capture at all well the ways in which humans actually make decisions.
These issues are considered in the rest of this chapter. Section 2 reviews the development of economists’ thinking about consumption behaviour since the time of Keynes, and places the SDP programme in this longer term context. Section 3 looks in more detail at some of the most prominent contributions to the SDP research programme. Section 4 considers criticisms of the SDP programme, and looks at other possible approaches to modelling consumption behaviour.
This chapter studies the two main financial building blocks of simulation models: the consumption-based asset pricing model and the definition of asset classes. The aim is to discuss what different modelling choices imply for asset prices. For instance, what is the effect of different utility functions, investment technologies, monetary policies and leverage on risk premia and yield curves?
The emphasis is on surveying existing models and discussing the main mechanisms behind the results. I therefore choose to work with stylised facts and simple analytical pricing expressions. There are no simulations or advanced econometrics in this chapter. The following two examples should give the flavour. First, I use simple pricing expressions and scatter plots to show that the consumption-based asset pricing model cannot explain the cross-sectional variation of Sharpe ratios. Second, I discuss how the slope of the real yield curve is driven by the autocorrelation of consumption by studying explicit log-linear pricing formulas of just two assets: a one-period bond and a one-period forward contract.
The plan of the chapter is as follows. Section 2 deals with the consumption-based asset pricing model. It studies whether the model is compatible with historical consumption and asset data. From a modelling perspective, the implicit question is: if my model could create a realistic consumption process and has defined realistic assets (for instance, levered equity), would it then predict reasonable asset returns? Section 3 deals with how assets are defined in simulation models and what that implies for pricing. I discuss yield curves (real and nominal), claims on consumption (one-period and multi-period), options and levered equity. Section 4 summarises the main findings. Technical details are found in a number of appendices (p. 444).
PROBLEMS WITH THE CONSUMPTION-BASED ASSET PRICING MODEL
This part of the chapter takes a hard look at the consumption-based asset pricing model since it is one of the building blocks in general equilibrium models. The approach is to derive simple analytical pricing expressions and to study stylised facts – with the aim of conveying the intuition for the results.
The first sections below look at earlier findings on the equity premium puzzle and the risk-free rate puzzle and study whether they are stable across different samples (see the surveys of Bossaert 2002; Campbell 2001; Cochrane 2001; and Smith and Wickens 2002).
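A back-of-the-envelope version of the equity premium calculation conveys why the model struggles. Under power utility and log-normality, the premium is roughly risk aversion times the covariance of returns with consumption growth; with stylised annual moments (assumed here for illustration) the implied premium and Sharpe ratio fall far short of their historical counterparts.

```python
# Back-of-the-envelope equity premium under power utility; the moments below are
# stylised annual figures, assumed here purely for illustration.
gamma = 5.0                # relative risk aversion (assumed)
sigma_dc = 0.02            # std. dev. of log consumption growth (~2% per year)
corr_rc = 0.3              # correlation of equity returns with consumption growth
sigma_re = 0.16            # std. dev. of equity returns (~16% per year)

# With log-normality, roughly: E[r_e - r_f] ~ gamma * cov(r_e, delta c)
premium = gamma * corr_rc * sigma_re * sigma_dc
sharpe = premium / sigma_re

print(f"Implied equity premium: {premium*100:.2f}% per year")   # ~0.48%
print(f"Implied Sharpe ratio  : {sharpe:.3f}")                  # ~0.03, vs ~0.4 in the data
```

With risk aversion of 5, the implied premium is well under one per cent per year, against a historical US equity premium of roughly six per cent; matching the data requires implausibly high risk aversion, which is the equity premium puzzle.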
By
Cristina Arellano, PhD student in economics at Duke University,
Enrique G. Mendoza, Professor of International Economics and Finance at the University
The severe financial and economic crisis that hit Mexico after the devaluation of the peso in December 1994, and the unprecedented ‘Tequila effect’ by which Mexico's financial woes ‘infected’ emerging markets world-wide were a harbinger of a period of intense turbulence in international capital markets. Seven years later, in December 2001, a major crisis broke out in Argentina with an explosive combination of sovereign default, massive currency devaluation and collapse of economic activity. In the seven years separating the Mexican and Argentine crises, similar crises engulfed nearly all of the so-called ‘emerging markets,’ including Hong Kong, Korea, Indonesia, Malaysia, Thailand, Russia, Chile, Colombia, Ecuador, Brazil and Turkey. Interestingly, devaluation itself proved not to be a prerequisite for these crises, as the experiences of Argentina in 1995 and Hong Kong in 1997 showed. ‘Contagion effects’ similar to the ‘Tequila effect’ were also typical, as crises spread quickly to countries with no apparent economic linkages to countries in crisis. A favourite example is the correction in US equity prices in the autumn of 1998 triggered by the Russian default. The systemic nature of this correction forced the US Federal Reserve to lower interest rates and coordinate the orderly collapse of hedge fund Long Term Capital Management.
Emerging markets crises are characterised by a set of striking empirical regularities that Calvo (1998) labelled the ‘Sudden Stop’ phenomenon. These empirical regularities include: (a) a sudden loss of access to international capital markets reflected in a collapse of capital inflows, (b) a large reversal of the current account deficit, (c) collapses of domestic production and aggregate demand, and (d) sharp corrections in asset prices and in the prices of non-traded goods relative to traded goods. Figures 7.1–7.3 illustrate some of these stylised facts for Argentina, Korea, Mexico, Russia and Turkey. Figure 7.1 shows recent time series data for each country's current account as a share of GDP. Sudden Stops are displayed in these plots as sudden, large swings of the current account that in most cases exceeded five percentage points of GDP. Figure 7.2 shows data on consumption growth as an indicator of real economic activity. These plots show that Sudden Stops are associated with a collapse in the real sector of the economy.
In recent years, dynamic stochastic general equilibrium (DSGE) models of monetary economies have focused on the role of nominal rigidities in affecting the economy's adjustment to monetary policy and non-policy disturbances. While these rigidities appear important for understanding the impact nominal shocks have on such real variables as output and employment, models with only nominal rigidities have been unable to match the responses to monetary disturbances that have been estimated in the data. Typically, empirical studies have concluded that monetary shocks generate large and persistent real responses that display a hump shape. After a positive money shock, for example, output rises over several quarters and then declines. Christiano, Eichenbaum and Evans (1999) document this effect and provide an extensive discussion of the empirical evidence on the effects of monetary shocks. Sims (1992) finds large, hump-shaped responses of real output to monetary shocks in several OECD countries. Inflation also displays a hump-shaped response, although inflation is usually found to respond more slowly than output to monetary shocks.
The ‘stylised facts’ emphasised by Christiano, Eichenbaum and Evans, by Sims, and by others are illustrated in figure 9.1, which shows estimated impulse responses of output and inflation following a shock to the growth rate of money. These responses were obtained from a three-variable VAR (output, inflation, and money growth) estimated using US quarterly data for 1965–2001. Output is real GDP, inflation is measured by the Consumer Price Index, and M2 is the aggregate used to measure money. The real persistence and inflation inertia seen in figure 9.1 have been hard for models based on nominal rigidities to match. As Dotsey and King (2001) have expressed it, ‘modern optimizing sticky price models have displayed a chronic inability to generate large and persistent real responses to monetary shock’.
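For readers who want to reproduce this type of exercise, the sketch below shows the mechanics of estimating a three-variable VAR and computing impulse responses with the statsmodels library (an assumed tool choice; the chapter does not specify software). The data are simulated placeholders rather than the 1965–2001 US series, so the resulting responses will not display the hump shapes of figure 9.1.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Placeholder stationary series standing in for output growth, inflation and
# money growth; replace with the actual quarterly data to reproduce the exercise.
rng = np.random.default_rng(0)
data = pd.DataFrame(
    rng.normal(scale=0.01, size=(200, 3)),
    columns=["output_growth", "inflation", "money_growth"],
)

model = VAR(data)
results = model.fit(2)        # a fixed lag length of 2, purely for illustration
irf = results.irf(20)         # impulse responses over 20 quarters
# irf.plot(impulse="money_growth") would display the responses of output growth
# and inflation to a money-growth shock, analogous to figure 9.1.
print(results.summary())
```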
In order to capture at least some of the real persistence seen in empirical studies, models based on nominal rigidity generally must assume a high degree of price stickiness. For example, it is common to assume that individual prices remain fixed on average for as much as nine months. Micro data on individual prices, however, suggests that prices typically change more frequently than this. Consequently, a number of researchers have recently argued that simply adding nominal rigidities to an otherwise standard DSGE model is not sufficient to match the persistence observed in the data.
By
Matthew B. Canzoneri, Professor of Economics at Georgetown University since 1985,
Robert E. Cumby, Professor of Economics in the School of Foreign Service of Georgetown University,
Behzad T. Diba, Professor of Economics at Georgetown University
A New Neo-classical Synthesis (NNS) is merging three traditions that have dominated macroeconomic modelling for the last thirty years. In the 1970s, Sargent and Wallace (1975) and others added rational expectations to the IS-LM models that were then being used to evaluate monetary policy; somewhat later, Calvo (1983) and Taylor (1980) introduced richer dynamic specifications for the nominal rigidities that were assumed in some of those models. In the 1980s, Kydland and Prescott (1982) and others introduced the Real Business Cycle (RBC) model, which sought to explain business cycle regularities in a framework with maximising agents, perfect competition, and complete wage/price flexibility.
The NNS reintroduces nominal rigidities and the demand determination of employment and output. Monopolistically competitive wage- and price-setters replace the RBC model's perfectly competitive wage- and price-takers; monopoly markups provide the rationale for suppliers to expand in response to an increase in demand; and the Dixit and Stiglitz (1977) framework – when combined with complete sharing of consumption risks – allows the high degree of aggregation that has been a hallmark of macroeconomic modelling.
In this chapter, we present a simple model that can be used to illustrate elements of the NNS and recent developments in the macroeconomic stabilisation literature. We do not attempt to survey this rapidly growing literature. Instead, we focus on a set of papers that are key to a question that is currently being hotly debated: is price stability a good strategy for macroeconomic stabilisation? If so, some of the generally accepted tradeoffs in modern central banking would seem to evaporate. For example, inflation (or price-level) targeting need not be seen as a choice that excludes Keynesian stabilisation, and it would be unnecessary to give price stability such primacy in the statutes of the new central bank in Europe.
In section 2, we present our model and discuss some fundamental characteristics of the NNS. Our model is simpler than that which appears in much of the literature because we have replaced the dynamic Calvo and Taylor specifications of nominal rigidity with the assumption that some wages and/or prices are set one period in advance. This allows us to derive closed form equilibrium solutions for a class of utility functions and assumptions about the distribution of macroeconomic shocks.
By
Sumru Altug, Professor of Economics at Koç University in Istanbul, Turkey,
Jagjit S. Chadha, Professor of Economics at the University of St Andrews,
Charles Nolan, University of Strathclyde and Birkbeck College, University of London
The aim of this volume is simple: to demonstrate how quantitative general equilibrium theory can be fruitfully applied to a variety of specific macroeconomic and monetary issues. There is, by now, no shortage of high-quality advanced macroeconomic and monetary economics texts available – indeed two of the contributors to the present volume (Stephen Turnovsky and Carl Walsh) have recently written first-rate graduate texts in just these areas. However, there is rarely space in a textbook to develop models much past their basic setup, and there is similarly little scope for a detailed discussion of a model's policy implications. This volume, then, aims to bridge some of that gap.
To that end, we asked leading researchers in various areas to explain what they were up to, and where they thought the literature was headed. The result, we think, bears testimony to the richness of aggregate economic modelling that has grown out of the real business cycle (RBC) approach to growth and business cycle fluctuations. We treat this book as both a mark of the tremendous progress in this field and a staging post on the way to further progress.
We would like to thank colleagues who have taken the trouble to read parts of this book and provided useful comments: Anthony Garratt, Sean Holly, Campbell Leith, Paul Levine, David Miles, Ed Nelson, Sheilagh Ogilvie, Argia Sbordone, Frank Smets, Alan Sutherland, Peter Tinsley, Marcelo Veracierto, Simon Wren-Lewis, Mike Wickens. Ashwin Rattan and Chris Harrison at Cambridge University Press have provided constant support. Finally, we would like to thank Anne Mason and Gill Smith without whose efficiency this book would not have been so expertly completed.
By
Jagjit S. Chadha, Professor of Economics at the University of St Andrews,
Charles Nolan, University of Strathclyde and Birkbeck College, University of London.
In this chapter we consider the interaction of monetary policy with aggregative fiscal policy. By ‘aggregative’ we mean that our focus is primarily on the effects of debts and deficits in the presence of lump-sum taxation. We shall, in particular, be concerned with the ways monetary and fiscal policies may need to be coordinated to ensure ‘good’ macroeconomic outcomes. To that end, we shall be largely occupied with two issues: (a) the fundamental linkages between the government's budget constraint and the setting of interest rates and (b) the stabilisation issues thrown up by systematic fiscal and monetary policy over the economic cycle.
More specifically, we study how monetary policy may be influenced by doubts over the wider fiscal solvency of the public sector. In an important contribution Sargent and Wallace (1981) argued that the money stock and taxes were substitutes in the backing of government debt. This discussion brings to the fore the fact that monetary and fiscal policies are linked via a budget constraint. However, many countries have recently delegated control of monetary policy to an independent monetary authority, partly in response to the kind of concerns raised by Sargent and Wallace. There now seems to be some concern that monetary and fiscal policy may actually not be well coordinated under such an institutional structure. The issue seems less to do with solvency, and more to do with aggregate demand management over the economic cycle: if monetary policy is too ‘rigid’, then fiscal policy may need to compensate by being more ‘flexible’. So the second issue we discuss is how monetary and fiscal policies might be set jointly in order to smooth the economic cycle.
In the next section we set out the contents of the chapter in some more detail.
Key themes
Following Sargent and Wallace (1975, 1981), macroeconomists generally argued that there were two key requirements for monetary policy to retain control over nominal magnitudes. First, monetary policy ought to be characterised by control over the money stock as opposed to an interest rate peg. However, since fiscal policy may hamper the effective control of the money supply by requiring excessive seigniorage revenue – the tax revenue generated from money creation – this is not, in general, a sufficient condition.
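The linkage Sargent and Wallace emphasise runs through the consolidated government budget constraint, in which interest-bearing debt must be backed either by primary surpluses or by seigniorage. The stylised sketch below iterates that constraint forward with assumed, purely illustrative numbers and abstracts from output growth and inflation.

```python
# Stylised consolidated government budget constraint, in shares of GDP
# (all numbers below are illustrative assumptions, not data; output growth
# and inflation are abstracted from):
#   B_t = (1 + i) * B_{t-1} + primary_deficit - seigniorage
i = 0.04                 # nominal interest rate on government debt
B = 0.60                 # initial debt-to-GDP ratio
primary_deficit = 0.02   # primary deficit each period
seigniorage = 0.01       # revenue from money creation each period

for t in range(1, 6):
    B = (1 + i) * B + primary_deficit - seigniorage
    print(f"period {t}: debt/GDP = {B:.3f}")
# With seigniorage and the primary deficit held fixed, debt keeps growing; the
# Sargent-Wallace point is that either taxes or money creation must eventually adjust.
```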