Most circuit analysis proceeds with the assumptions of linearity and time invariance. Violations of those assumptions, if considered at all, are usually treated as undesirable. However, the high performance of modern communications equipment actually depends critically on the presence of at least one element that fails to satisfy linear time invariance: the mixer. We will see shortly that mixers are still ideally linear but depend fundamentally on a purposeful violation of time invariance. As noted in Chapter 1, the superheterodyne receiver uses a mixer to perform an important frequency translation of signals. Armstrong's invention has been the dominant architecture for 70 years because this frequency translation solves many problems in one fell swoop (see Figure 13.1).
In this architecture, the mixer translates an incoming RF signal to a lower frequency, known as the intermediate frequency (IF). Although Armstrong originally sought this frequency lowering simply to make it easier to obtain the requisite gain, other significant advantages accrue as well. As one example, tuning is now accomplished by varying the frequency of a local oscillator, rather than by varying the center frequency of a multipole bandpass filter. Thus, instead of adjusting several LC networks in tandem to tune to a desired signal, one simply varies a single LC combination to change the frequency of a local oscillator (LO). The intermediate frequency stages can then use fixed bandpass filters.
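The translation itself is just a consequence of multiplication in the time domain. As a quick illustration (with hypothetical frequencies, not values from the text), a multiplying mixer produces sum and difference components:

```latex
% Product-to-sum identity underlying downconversion:
\cos(\omega_{\mathrm{RF}} t)\,\cos(\omega_{\mathrm{LO}} t)
  = \tfrac{1}{2}\cos\bigl[(\omega_{\mathrm{RF}} - \omega_{\mathrm{LO}})\,t\bigr]
  + \tfrac{1}{2}\cos\bigl[(\omega_{\mathrm{RF}} + \omega_{\mathrm{LO}})\,t\bigr].
```

For instance, a 900 MHz RF input mixed with an 830 MHz LO lands the difference term at a 70 MHz IF, while the 1730 MHz sum term is rejected by the fixed IF bandpass filter; retuning the LO moves a different RF channel to the same IF.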
Integrated circuit engineers have the luxury of taking for granted that the incremental cost of a transistor is essentially zero, and this has led to the high-device-count circuits that are common today. Of course, this situation is a relatively recent development; during most of the history of electronics, the economics of circuit design were the inverse of what they are today. It really wasn't all that long ago when an engineer was forced by the relatively high cost of active devices to try to get blood (or at least rectification) from a stone. And it is indeed remarkable just how much performance radio pioneers were able to squeeze out of just a handful of components. For example, we'll see how American radio genius Edwin Armstrong devised circuits in the early 1920s that trade the logarithm of gain for bandwidth, contrary to the conventional wisdom that gain and bandwidth should trade off more or less directly. And we'll see that at the same time Armstrong was developing those circuits, self-taught Soviet radio engineer Oleg Losev was experimenting with blue LEDs and constructing completely solid-state radios that functioned up to 5 MHz, a quarter century before the transistor was invented.
These fascinating stories are rarely told because they tend to fall into the cracks between history and engineering curricula. Somebody ought to tell these stories, though, since in so doing, many commonly asked questions (“why don't they do it this way?”) are answered.
A solid understanding of feedback is critical to good circuit design, yet many practicing engineers have at best a tenuous grasp of the subject. This chapter is an overview of the foundations of classical control theory – that is, the study of feedback in single-input, single-output, time-invariant, linear continuous-time systems. We'll see how to apply this knowledge to the design of oscillators, highly linear broadband amplifiers, and phase-locked loops, among other examples. We'll also see how to extend our design intuition to include many nonlinear systems of practical interest.
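Two textbook identities capture why negative feedback yields the highly linear broadband amplifiers mentioned above (standard results, stated here only as a reminder, not this book's derivation):

```latex
A_{\mathrm{CL}} = \frac{A}{1 + Af},
\qquad
\frac{dA_{\mathrm{CL}}}{A_{\mathrm{CL}}} = \frac{1}{1 + Af}\,\frac{dA}{A}.
```

A loop transmission Af of 99, for example, reduces the fractional variation of the closed-loop gain to 1% of that of the forward gain A.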
As usual, we'll start with a little history to put this subject in its proper context.
A BRIEF HISTORY OF MODERN FEEDBACK
Although application of feedback concepts is very ancient (Og annoy tiger, tiger eat Og), mathematical treatments of the subject are a recent development. Maxwell himself offered the first detailed stability analyses, in a paper on the stability of the rings of Saturn (for which he won his first mathematical prize), and a later one on the stability of speed-controlled steam engines.
The first conscious application of feedback principles in electronics was apparently by rocket pioneer Robert Goddard in 1912, in a vacuum tube oscillator that employed positive feedback. As far as is known, however, his patent application was his only writing on the subject (he was sort of preoccupied with that rocketry thing, after all), and his contemporaries were largely ignorant of his work in this field.
As mentioned in Chapter 1, there just isn't enough room in a standard engineering curriculum to include much material of a historical nature. In any case, goes one common argument, looking backwards is a waste of time, particularly since constraints on circuit design in the IC era are considerably different from what they once were. Therefore, continues this reasoning, solutions that worked well in some past age are irrelevant now. That argument might be valid, but perhaps authors shouldn't prejudge the issue. This final chapter therefore closes the book by presenting a tiny (and nonuniform) sampling of circuits that represent important milestones in RF (actually, radio) design. This selection is somewhat skewed, of course, because it reflects the author's biases; conspicuously absent, for example, is the remarkable story of radar, mainly because it has been so well documented elsewhere. Circuits developed specifically for television are not included either. Rather, the focus is more on early consumer radio circuits, since the literature on this subject is scattered and scarce. There is thus a heavy representation by circuits of Armstrong, because he laid the foundations of modern communications circuitry.
ARMSTRONG
Armstrong was the first to publish a coherent explanation of vacuum tube operation. In “Some Recent Developments in the Audion Receiver,” published in 1915, he offers a plot of the V–I characteristics of the triode, something that the quantitatively impaired de Forest had never bothered to present.
The wireless telegraph is not difficult to understand. The ordinary telegraph is like a very long cat. You pull the tail in New York, and it meows in Los Angeles. The wireless is the same, only without the cat.
–Albert Einstein, 1938
A BRIEF, INCOMPLETE HISTORY OF WIRELESS SYSTEMS
Einstein's claim notwithstanding, modern wireless is difficult to understand, with or without a cat. For proof, one need only consider how the cell phone of Figure 2.1 differs from a crystal radio.
Modern wireless systems are the result of advances in information and control theory, signal processing, and electromagnetic field theory, as well as developments in circuit design – just to name a few of the relevant disciplines. Each of these topics deserves treatment in a separate textbook or three, and we will necessarily have to commit many errors of omission. We can aspire only to avoid serious errors of commission.
As always, we look to history to impose a semblance of order on these ideas.
THE CENOZOIC ERA
The transition from spark telegraphy to carrier radiotelephony took place over a period of about a decade. By the end of the First World War, spark's days were essentially over, and the few remaining spark stations would be decommissioned (and, in fact, their use outlawed) by the early 1920s. The superiority of carrier-based wireless ensured the dominance that continues to this day.
We've seen that RF circuits generally have many passive components. Successful design therefore depends critically on a detailed understanding of their characteristics. Since mainstream integrated circuit (IC) processes have evolved largely to satisfy the demands of digital electronics, the RF IC designer has been left with a limited palette of passive devices. For example, inductors larger than about 10 nH consume significant die area and have relatively poor Q (typically below 10) and low self-resonant frequency. Capacitors with high Q and low temperature coefficient are available, but tolerances are relatively loose (e.g., on the order of 20% or worse). Additionally, the most area-efficient capacitors also tend to have high loss and poor voltage coefficients. Resistors with low self-capacitance and temperature coefficient are hard to come by, and one must also occasionally contend with high voltage coefficients, loose tolerances, and a limited range of values.
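To put rough numbers on these limitations, the sketch below (with assumed element values chosen only for illustration, not figures from the text) estimates the quality factor and self-resonant frequency of a spiral inductor from its series resistance and parasitic capacitance:

```python
import math

def inductor_q(l_henry, r_series_ohm, f_hz):
    """Quality factor of an inductor modeled as an inductance in series with a resistance."""
    return 2 * math.pi * f_hz * l_henry / r_series_ohm

def self_resonant_freq(l_henry, c_parasitic_farad):
    """Frequency at which the inductance resonates with its own parasitic capacitance."""
    return 1.0 / (2 * math.pi * math.sqrt(l_henry * c_parasitic_farad))

# Assumed (illustrative) values for a 10 nH on-chip spiral:
# ~10 ohms of series resistance and ~200 fF of capacitance to the substrate.
L, R_S, C_P = 10e-9, 10.0, 200e-15
print(inductor_q(L, R_S, 1e9))       # Q at 1 GHz: ~6.3 for these assumptions
print(self_resonant_freq(L, C_P))    # self-resonance near 3.6 GHz
```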
In this chapter, we examine IC resistors, capacitors, and inductors (including bondwires, since they are often the best inductors available). Also, given the ubiquity of interconnect, we study its properties in detail since its parasitics at high frequencies can be quite important.
INTERCONNECT AT RADIO FREQUENCIES: SKIN EFFECT
At low frequencies, the properties of interconnect we care about most are resistivity, current-handling ability, and perhaps capacitance. As frequency increases, we find that inductance might become important. Furthermore, we invariably discover that the resistance increases owing to a phenomenon known as the skin effect.
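As a rough illustration of why resistance rises with frequency, the standard skin-depth formula (a textbook expression; the material constants below are assumed, approximate room-temperature values) shows how thin the current-carrying layer becomes at gigahertz frequencies:

```python
import math

MU_0 = 4e-7 * math.pi  # permeability of free space, H/m

def skin_depth(resistivity_ohm_m, f_hz, mu_r=1.0):
    """Skin depth: depth at which current density falls to 1/e of its surface value."""
    return math.sqrt(resistivity_ohm_m / (math.pi * f_hz * MU_0 * mu_r))

# Approximate room-temperature resistivities (ohm-m).
RHO_CU = 1.7e-8
RHO_AL = 2.7e-8

for f in (1e9, 10e9):
    print(f / 1e9, "GHz:",
          round(skin_depth(RHO_CU, f) * 1e6, 2), "um (Cu),",
          round(skin_depth(RHO_AL, f) * 1e6, 2), "um (Al)")
# At 1 GHz the skin depth in copper is only about 2 um, comparable to the
# thickness of on-chip metal, so the AC resistance exceeds the DC value.
```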
The design of amplifiers at high frequencies involves more detailed considerations than at lower frequencies. One simply has to work harder to obtain the requisite performance when approaching the inherent limitations of the devices themselves. Additionally, the effect of ever-present parasitic capacitances and inductances can impose serious constraints on achievable performance.
At lower frequencies, the method of open-circuit time constants is a powerful intuitive aid in the design of high-bandwidth amplifiers. Unfortunately, by focusing on minimizing various RC products, it leads one to a relatively narrow set of options to improve bandwidth. For example, we may choose to distribute the gain among several stages or alter bias points, all in an effort to reduce effective resistances. These actions usually involve an increase in power or complexity, and at extremes of bandwidth such increases may be costly. In other cases, open-circuit time constants may erroneously predict that the desired goals are simply unattainable.
A hint that other options exist can be found by revisiting the assumptions underlying the method of open-circuit time constants. As we've seen, the method provides a good estimate for bandwidth only if the system is dominated by one pole. In other cases, it yields increasingly pessimistic estimates as the number of poles increases or their damping ratio increases. Additionally, open-circuit time constants can grossly underestimate bandwidth if there are zeros in the passband.
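A small numerical check (a self-contained sketch, not a calculation from the text) illustrates how conservative the open-circuit time-constant estimate becomes as identical poles are added: the estimate is 1/(n·tau), while the true −3 dB bandwidth of n coincident real poles at 1/tau is sqrt(2^(1/n) − 1)/tau.

```python
import math

def oct_estimate(n, tau=1.0):
    """Open-circuit time-constant bandwidth estimate (rad/s) for n identical poles at 1/tau."""
    return 1.0 / (n * tau)

def true_bandwidth(n, tau=1.0):
    """Exact -3 dB bandwidth (rad/s) of n coincident real poles at 1/tau."""
    return math.sqrt(2 ** (1.0 / n) - 1) / tau

for n in (1, 2, 3, 5):
    est, true = oct_estimate(n), true_bandwidth(n)
    print(n, "poles: estimate is low by", round((true / est - 1) * 100), "%")
# 1 pole: 0 %, 2 poles: ~29 %, 3 poles: ~53 %, 5 poles: ~93 % --
# the estimate grows steadily more pessimistic as poles accumulate.
```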
The field of radio frequency (RF) circuit design is currently enjoying a renaissance, driven in particular by the recent, and largely unanticipated, explosive growth in wireless telecommunications. Because this resurgence of interest in RF caught industry and academia by surprise, there has been a mad scramble to educate a new generation of RF engineers. However, in trying to synthesize the two traditions of “conventional” RF and lower-frequency IC design, one encounters a problem: “Traditional” RF engineers and analog IC designers often find communication with each other difficult because of their diverse backgrounds and the differences in the media in which they realize their circuits. Radio-frequency IC design, particularly in CMOS, is a different activity altogether from discrete RF design. This book is intended as both a link to the past and a pointer to the future.
The contents of this book derive from a set of notes used to teach a one-term advanced graduate course on RF IC design at Stanford University. The course was a follow-up to a low-frequency analog IC design class, and this book therefore assumes that the reader is intimately familiar with that subject, described in standard texts such as Analysis and Design of Analog Integrated Circuits by P. R. Gray and R. G. Meyer (Wiley, 1993). Some review material is provided, so that the practicing engineer with a few neurons surviving from undergraduate education will be able to dive in without too much disorientation.
There are two important regimes of operating frequency, distinguished by whether one may treat circuit elements as “lumped” or distributed. The fuzzy boundary between these two regimes is set by the ratio of the circuit's physical dimensions to the shortest wavelength of interest. At high enough frequencies, the size of the circuit elements becomes comparable to the wavelengths, and one cannot employ with impunity intuition derived from lumped-circuit theory. Wires must then be treated as the transmission lines that they truly are, Kirchhoff's “laws” no longer hold generally, and identification of R, L, and C ceases to be obvious (or even possible).
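To make the boundary concrete, the following sketch applies the common lambda/10 rule of thumb (a conventional criterion, not one prescribed by the text, with an assumed effective dielectric constant of about 4 for on-chip oxide) to see when dimensions start to matter:

```python
import math

C0 = 3e8  # speed of light in vacuum, m/s

def wavelength_mm(f_hz, eps_eff=4.0):
    """Wavelength in a medium with effective dielectric constant eps_eff (assumed ~4 for SiO2)."""
    return C0 / (f_hz * math.sqrt(eps_eff)) * 1e3

def lumped_ok(dimension_mm, f_hz, eps_eff=4.0, fraction=0.1):
    """Common rule of thumb: treat as lumped if dimensions are under ~lambda/10."""
    return dimension_mm < fraction * wavelength_mm(f_hz, eps_eff)

chip_edge_mm = 5.0  # assumed worst case: a line running the full length of a large die
for f in (1e9, 10e9, 60e9):
    print(f / 1e9, "GHz: lambda =", round(wavelength_mm(f), 1), "mm,",
          "lumped OK for a 5 mm line:", lumped_ok(chip_edge_mm, f))
# At 1 GHz lambda is ~150 mm, so even a 5 mm on-chip line is safely lumped;
# by 60 GHz lambda shrinks to ~2.5 mm and distributed effects dominate.
```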
Thanks to the small dimensions involved in ICs, though, it turns out that we can largely ignore transmission-line effects well into the gigahertz range, at least on-chip. So, in this text, we will focus primarily on lumped-parameter descriptions of our circuits. For the sake of completeness, however, we should be a little less cavalier (we'll still be cavalier, just less so) and spend some time talking about how and where one draws a boundary between lumped and distributed domains. In order to do this properly, we need to revisit (briefly) Maxwell's equations.
MAXWELL AND KIRCHHOFF
Many students (and many practicing engineers, unfortunately) forget that Kirchhoff's voltage and current “laws” are approximations that hold only in the lumped regime (which we have yet to define).
The sensitivity of communications systems is limited by noise. The broadest definition of noise as “everything except the desired signal” is most emphatically not what we will use here, however, because it does not separate, say, artificial noise sources (e.g., 60-Hz power-line hum) from more fundamental (and therefore irreducible) sources of noise that we discuss in this chapter.
That these fundamental noise sources exist was widely appreciated only after the invention of the vacuum tube amplifier, when engineers finally had access to enough gain to make these noise sources noticeable. It became obvious that simply cascading more amplifiers eventually produces no further improvement in sensitivity because a mysterious noise exists that is amplified along with the signal. In audio systems, this noise is recognizable as a continuous hiss, while in video it manifests itself as the characteristic “snow” of analog TV systems.
The noise sources remained mysterious until H. Nyquist, J. B. Johnson and W. Schottky published a series of papers that explained where the noise comes from and how much of it to expect. We now turn to an examination of the noise sources they identified.
THERMAL NOISE
Johnson was the first to report careful measurements of noise in resistors, and his colleague Nyquist explained them as a consequence of Brownian motion: thermally agitated charge carriers in a conductor constitute a randomly varying current that gives rise to a random voltage (via Ohm's law).
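Nyquist's result is usually quoted in the following form (a standard textbook expression; the 1 kΩ example is purely illustrative):

```latex
\overline{v_n^{\,2}} = 4kTR\,\Delta f
\qquad\Longrightarrow\qquad
\sqrt{\overline{v_n^{\,2}}/\Delta f} = \sqrt{4kTR}
\approx 4.0~\mathrm{nV}/\sqrt{\mathrm{Hz}}
\quad\text{for } R = 1~\mathrm{k\Omega},\ T = 290~\mathrm{K}.
```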
This chapter focuses attention on those aspects of transistor behavior that are of immediate relevance to the RF circuit designer. Separation of first-order from higher-order phenomena is emphasized, so there are many instances when crude approximations are presented in the interest of developing insight. As a consequence, this review is intended as a supplement to – rather than a replacement for – traditional rigorous treatments of the subject. In particular, we must acknowledge that today's deep-submicron MOSFET is so complex a device that simple equations cannot possibly provide anything other than first-order (maybe even zeroth-order) approximations to the truth. The philosophy underlying this chapter is to convey a simple story that will enable first-pass designs, which are then verified by simulators using much more sophisticated models. Qualitative insights developed with the aid of the zeroth-order models enable the designer to react appropriately to bad news from the simulator. We design with a simpler set of models than those used for verification.
With that declaration out of the way, we now turn to some history before launching into a series of derivations.
A LITTLE HISTORY
Attempts to create field-effect transistors (FETs) actually predate the development of bipolar devices by over twenty years. In fact, the first patent application for a FET-like transistor was filed in 1926 by Julius Lilienfeld, but he never constructed a working device. Before co-inventing the bipolar transistor, William Shockley also tried to modulate the conductivity of a semiconductor to create a field-effect transistor.
Since publication of the first edition of this book in 1998, RF CMOS has made a rapid transition to commercialization. Back then, the only notable examples of RF CMOS circuits were academic and industrial prototypes. No companies were then shipping RF products using this technology, and conference panel sessions openly questioned the suitability of CMOS for such applications – often concluding in the negative. Few universities offered an RF integrated circuit design class of any kind, and only one taught a course solely dedicated to CMOS RF circuit design. Hampering development was the lack of device models that properly accounted for noise and impedance at gigahertz frequencies. Measurements and models so conflicted with one another that controversies raged about whether deep submicron CMOS suffered from fundamental scaling problems that would forever prevent the attainment of good noise figures.
Today, the situation is quite different, with many companies now manufacturing RF circuits using CMOS technology and with universities around the world teaching at least something about CMOS as an RF technology. Noise figures below 1 dB at gigahertz frequencies have been demonstrated in practical circuits, and excellent RF device models are now available. That pace of growth has created a demand for an updated edition of this textbook.
By
Fanny S. Demers, who received her PhD from The Johns Hopkins University and is Associate Professor at Carleton University, Ottawa, Canada;
Michael Demers, who received his PhD from The Johns Hopkins University and is Associate Professor at Carleton University, Ottawa, Canada;
and Sumru Altug, Professor of Economics at Koç University in Istanbul, Turkey.
Investment decisions occupy a central role among the determinants of growth. As empirical studies such as Levine and Renelt (1992) have revealed, fixed investment as a share of gross domestic product (GDP) is the most robust explanatory variable of a country's growth. DeLong and Summers (1991) also provide evidence emphasizing the correlation of investment in equipment and machinery with growth. Investment is also the most variable component of GDP, and therefore an understanding of its determinants may shed light on the source of cyclical fluctuations. Policy-makers are typically concerned about the ultimate impact of alternative policy measures on investment and its variability. Several theories of investment have emerged since the 1960s in an attempt to explain the determinants of investment. The most notable of these have been the neoclassical model of investment, the cost-of-adjustment (Q-theory) model, the time-to-build model, the irreversibility model under uncertainty and the fixed-cost (S, s) model of lumpy investment.
Beginning with the neoclassical model developed by Jorgenson and his collaborators (see for example, Hall and Jorgenson 1967; Jorgenson 1963), investment theory distinguishes between the actual capital stock and the desired or optimal capital stock, where the latter is determined by factors such as output and input prices, technology and interest rates. In the neoclassical model of investment, an exogenous partial adjustment mechanism is postulated to yield a gradual adjustment of the actual capital stock to its desired level as is observed in the data. An alternative way of obtaining a determinate rate of investment is to assume the existence of convex costs of adjustment, as has been proposed by Eisner and Strotz (1963), Gould (1967), Lucas (1976) and Treadway (1968). Abel (1979) and Hayashi (1982) have shown that the standard adjustment cost model leads to a Tobin's Q-theory of investment under perfect competition and a constant returns to scale production technology. A complementary explanation assumes that it takes time to build productive capacity. (See Altug 1989, 1993; Kydland and Prescott 1982.)
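Two of the mechanisms just described can be written compactly (standard textbook forms, stated only to fix ideas, not equations reproduced from this chapter): the partial-adjustment rule of the neoclassical model and the investment rule implied by quadratic adjustment costs, which underlies the Q theory.

```latex
% Partial adjustment of the capital stock toward its desired level K_t^*:
K_t - K_{t-1} = \lambda \,\bigl(K_t^* - K_{t-1}\bigr), \qquad 0 < \lambda < 1 .

% With quadratic adjustment costs \tfrac{b}{2}\,(I/K)^2 K, the optimal
% investment rate responds to marginal (Tobin's) q:
\frac{I_t}{K_t} = \frac{q_t - 1}{b} .
```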
A number of authors have emphasised irreversibility and uncertainty as important factors underlying the gradual adjustment of the capital stock. The notion of irreversibility, which can be traced back to Marschak (1949) and subsequently to Arrow (1968), was initially applied in the context of environmental economics where economic decisions, such as the destruction of a rain forest to build a highway, often entail actions that cannot be ‘undone.’
This chapter discusses work which applies the methods of stochastic dynamic programming (SDP) to the explanation of consumption and saving behaviour. The emphasis is on the intertemporal consumption and saving choices of individual decision-makers, which I will normally label as ‘households’. There are at least two reasons why it is important to try to explain such choices: first, it is intrinsically interesting; and, second, it is useful as a means of understanding, and potentially forecasting, movements in aggregate consumption, and thus contributing to understanding and/or forecasts of aggregate economic fluctuations. The latter motivation needs no further justification, given the priority which policy-makers attach to trying to prevent fluctuations in economic activity. The former motivation – intrinsic interest – is less often stressed by economists, but it is hard to see why: it is surely worthwhile for humankind to improve its understanding of human behaviour, and economists, along with other social scientists, have much to contribute here.
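As a concrete, if highly stylised, illustration of what applying SDP to the household problem involves (a minimal sketch with assumed parameter values, not a model taken from this chapter), the following code solves a small consumption–saving problem by value function iteration: the household chooses consumption to maximise expected discounted CRRA utility, facing a fixed gross return R and an i.i.d. two-state income shock.

```python
import numpy as np

# Assumed parameters, for illustration only.
beta, gamma, R = 0.95, 2.0, 1.03          # discount factor, risk aversion, gross return
incomes, probs = np.array([0.7, 1.3]), np.array([0.5, 0.5])  # i.i.d. income states

grid = np.linspace(0.1, 20.0, 200)        # grid over cash-on-hand
def utility(c):
    return c ** (1 - gamma) / (1 - gamma)

V = np.zeros_like(grid)
for _ in range(400):                       # value function iteration
    # Expected continuation value of saving s: E[V(R*s + y')] (clamped at grid edges).
    EV = lambda s: sum(p * np.interp(R * s + y, grid, V) for y, p in zip(incomes, probs))
    V_new = np.empty_like(grid)
    policy = np.empty_like(grid)
    for i, x in enumerate(grid):           # x is cash-on-hand; choose consumption c <= x
        c = np.linspace(1e-3, x, 80)
        values = utility(c) + beta * EV(x - c)
        j = np.argmax(values)
        V_new[i], policy[i] = values[j], c[j]
    if np.max(np.abs(V_new - V)) < 1e-5:
        break
    V = V_new

# The resulting consumption rule is increasing in cash-on-hand; with CRRA utility
# and income risk it is also concave, a standard property of such models.
print(policy[::50])
```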
The application of SDP to household consumption behaviour is very recent, with the first published papers appearing only at the end of the 1980s. Young though it is, this research programme has already changed significantly the way in which economists now analyse consumption choice, and has overturned a number of previously widely held views about consumption behaviour. Any research programme which achieves such outcomes so quickly would normally be judged a success, and in many respects this is an appropriate judgement here. The judgement needs to be qualified, however, on at least two counts. First, some of the ideas which the SDP research programme has overturned, although previously widely believed by mainstream economists, were never subscribed to by those working outside the mainstream. Non-mainstream economists might argue that the SDP programme has simply allowed the mainstream to catch up with their own thinking. Second, there is room for doubt that SDP methods really capture at all well the ways in which humans actually make decisions.
These issues are considered in the rest of this chapter. Section 2 reviews the development of economists’ thinking about consumption behaviour since the time of Keynes, and places the SDP programme in this longer term context. Section 3 looks in more detail at some of the most prominent contributions to the SDP research programme. Section 4 considers criticisms of the SDP programme, and looks at other possible approaches to modelling consumption behaviour.
This chapter studies the two main financial building blocks of simulation models: the consumption-based asset pricing model and the definition of asset classes. The aim is to discuss what different modelling choices imply for asset prices. For instance, what is the effect of different utility functions, investment technologies, monetary policies and leverage on risk premia and yield curves?
The emphasis is on surveying existing models and discussing the main mechanisms behind the results. I therefore choose to work with stylised facts and simple analytical pricing expressions. There are no simulations or advanced econometrics in this chapter. The following two examples should give the flavour. First, I use simple pricing expressions and scatter plots to show that the consumption-based asset pricing model cannot explain the cross-sectional variation of Sharpe ratios. Second, I discuss how the slope of the real yield curve is driven by the autocorrelation of consumption, using explicit log-linear pricing formulas for just two assets: a one-period bond and a one-period forward contract.
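The machinery behind such pricing expressions is the familiar Euler equation with a CRRA stochastic discount factor, together with the bound it places on Sharpe ratios (standard textbook forms, written here only to fix notation; they are not quoted from the chapter):

```latex
% Euler equation and CRRA stochastic discount factor:
E_t\!\left[ M_{t+1} R_{t+1} \right] = 1,
\qquad
M_{t+1} = \beta \left( \frac{C_{t+1}}{C_t} \right)^{-\gamma} .

% Implied bound on any asset's Sharpe ratio:
\frac{\lvert E(R^e) \rvert}{\sigma(R^e)}
\;\le\; \frac{\sigma(M)}{E(M)}
\;\approx\; \gamma \, \sigma(\Delta \ln C) .
```

The approximation in the last step is the usual one for lognormal consumption growth; it makes precise why low consumption volatility caps the Sharpe ratios the model can deliver.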
The plan of the chapter is as follows. Section 2 deals with the consumption-based asset pricing model. It studies whether the model is compatible with historical consumption and asset data. From a modelling perspective, the implicit question is: if my model generates a realistic consumption process and defines realistic assets (for instance, levered equity), will it then predict reasonable asset returns? Section 3 deals with how assets are defined in simulation models and what that implies for pricing. I discuss yield curves (real and nominal), claims on consumption (one-period and multi-period), options and levered equity. Section 4 summarises the main findings. Technical details are found in a number of appendices (p. 444).
PROBLEMS WITH THE CONSUMPTION-BASED ASSET PRICING MODEL
This part of the chapter takes a hard look at the consumption-based asset pricing model since it is one of the building blocks in general equilibrium models. The approach is to derive simple analytical pricing expressions and to study stylised facts – with the aim of conveying the intuition for the results.
The first sections below look at earlier findings on the equity premium puzzle and the risk-free rate puzzle and study whether they are stable across different samples (see the surveys of Bossaerts 2002; Campbell 2001; Cochrane 2001; and Smith and Wickens 2002).
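A back-of-the-envelope version of the equity premium puzzle (using illustrative numbers of the kind typically cited in this literature, not figures taken from the chapter) follows directly from the Sharpe-ratio bound above:

```python
# Sharpe-ratio bound with CRRA utility:  Sharpe ratio <= gamma * std(consumption growth)
gamma = 2.0            # a "reasonable" coefficient of relative risk aversion (assumed)
sigma_dc = 0.02        # illustrative annual std of aggregate consumption growth (~2%)

model_max_sharpe = gamma * sigma_dc
print("Model-implied maximum Sharpe ratio:", model_max_sharpe)   # 0.04

# Illustrative historical US equity numbers: ~6% excess return, ~17% volatility.
historical_sharpe = 0.06 / 0.17
print("Historical Sharpe ratio:", round(historical_sharpe, 2))   # ~0.35

# Matching the data within this model requires risk aversion on the order of
print("Required gamma:", round(historical_sharpe / sigma_dc))    # ~18
```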