It was characteristic of the great modern philosophers to attempt, each in his own way, to rebuild philosophy from the ground up. Kant embraced this goal more fully than any other classical modern philosopher. And his work did in fact change philosophy permanently, though not always as he intended. He wanted to show that philosophers and natural scientists were not able, and would never be able, to give final answers to questions about the nature of the physical world and of the human mind or soul, and about the existence and attributes of a supreme being. While he did not accomplish precisely that, his work changed philosophy's conception of what can be known, and how it can be known. Kant also wanted to set forth new and permanent doctrines in metaphysics and morals. Though his exact teachings have not gained general acceptance, they continue to inspire new positions in philosophical discussion today.
Kant stands at the center of modern philosophy. His criticism of previous work in metaphysics and the theory of knowledge, propounded in the Critique of Pure Reason and summarized in the Prolegomena, provided a comprehensive response to early modern philosophy and a starting point for subsequent work. He rejected previous philosophical explanations of philosophical cognition itself. His primary target was the rationalist use of reason or “pure intellect” – advanced by Descartes and Leibniz – as a basis for making claims about God and the essences of mind and matter.
The subject of CMOS RF integrated circuit design resides at the convergence of two very different engineering traditions. The design of microwave circuits and systems has its origins in an era when devices and interconnect were usually too large to allow a lumped description. Furthermore, the lack of suitably detailed models and compatible computational tools forced engineers to treat systems as two-port “black boxes” with frequency-domain graphical methods. The IC design community, on the other hand, has relied on the development of detailed device models for use with simulation tools that allow both frequency- and time-domain analysis. As a consequence, engineers who work with traditional RF design techniques and those schooled in conventional IC design often find it difficult to converse. Clearly, a synthesis of these two traditions is required.
Analog IC designers accustomed to working with lower-frequency circuits tend to have, at best, only a passing familiarity with two staples of traditional RF design: Smith charts and S-parameters (“scattering” parameters). Although Smith charts today are less relevant as a computational aid than they once were, RF instrumentation continues to present data in Smith-chart form. Furthermore, these data are often S-parameter characterizations of two-ports, so it is important, even in the “modern” era, to know something about Smith charts and S-parameters. This chapter thus provides a brief derivation of the Smith chart, along with an explanation of why S-parameters won out over other parameter sets (e.g., impedance or admittance) to describe microwave two-ports.
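To make the mapping concrete, here is a minimal sketch (the reference impedance and sample loads are merely illustrative assumptions) of the bilinear transformation that underlies the Smith chart: the reflection coefficient Γ = (Z − Z0)/(Z + Z0), which sends every passive impedance into the unit disk.

```python
# A minimal sketch of the bilinear map that generates the Smith chart.
# Z0 and the sample impedances below are assumed for illustration.

Z0 = 50.0  # reference impedance in ohms (the usual RF convention)

def gamma(Z, Z0=Z0):
    """Reflection coefficient of load Z against reference Z0."""
    return (Z - Z0) / (Z + Z0)

# A matched load reflects nothing; a short reflects everything, inverted.
for Z in (50 + 0j, 100 + 0j, 25 + 25j, 0 + 0j):
    g = gamma(Z)
    print(f"Z = {str(Z):>10}:  Gamma = {g:.3f},  |Gamma| = {abs(g):.3f}")
```

Because |Γ| never exceeds unity for a passive load, reflection-based quantities stay bounded and can be measured with matched resistive terminations rather than the shorts and opens that impedance or admittance parameters require, which is a large part of why S-parameters prevailed at microwave frequencies.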
Phase-locked loops (PLLs) have become ubiquitous in modern communications systems because of their remarkable versatility. As one important example, a PLL may be used to generate an output signal whose frequency is a programmable, rational multiple of a fixed input frequency. The output of such frequency synthesizers may be used as the local oscillator signal in superheterodyne transceivers. Phase-locked loops may also be used to perform frequency modulation and demodulation, as well as to regenerate the carrier from an input signal in which the carrier has been suppressed. Their versatility extends to purely digital systems as well, where PLLs are indispensable in skew compensation, clock recovery, and the generation of clock signals.
To understand in detail how PLLs may perform such a vast array of functions, we will need to develop linearized models of these feedback systems. But first, of course, we begin with a little history to put this subject in its proper context.
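As a small preview of those linearized models, the sketch below (all gain values are assumptions for illustration) evaluates the standard phase-domain description of a PLL: a phase detector of gain Kpd in volts per radian, a loop filter F(s), and a VCO that integrates its control voltage, contributing Kvco/s.

```python
# A sketch (assumed values, not from the text) of the linearized
# phase-domain PLL model. The closed-loop transfer from input phase to
# output phase is
#     H(s) = L(s) / (1 + L(s)),  with loop transmission
#     L(s) = Kpd * F(s) * Kvco / s.

import numpy as np

Kpd = 0.5                # phase-detector gain, V/rad (assumed)
Kvco = 2 * np.pi * 1e6   # VCO gain, rad/s per volt (assumed)

f = np.logspace(2, 7, 6)   # offset frequencies, Hz
s = 2j * np.pi * f

F = 1.0                    # first-order loop: no loop filter
L = Kpd * F * Kvco / s     # loop transmission
H = L / (1 + L)            # input phase -> output phase

for fi, Hi in zip(f, H):
    print(f"f = {fi:10.0f} Hz   |H| = {abs(Hi):.3f}")
# The loop tracks slow phase variations (|H| -> 1 at low frequencies)
# and rejects fast ones, exactly as the linearized model predicts.
```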
A SHORT HISTORY OF PLLs
The earliest description of what is now known as a PLL was provided by H. de Bellescize in 1932. This early work offered an alternative architecture for receiving and demodulating AM signals, using the degenerate case of a superheterodyne receiver in which the intermediate frequency is zero. With this choice, there is no image to reject, and all processing downstream of the frequency conversion takes place in the audio range.
Given the effort expended in avoiding instability in most feedback systems, it would seem trivial to construct oscillators. However, simply generating some periodic output is not sufficient for modern high-performance RF receivers and transmitters. Issues of spectral purity and amplitude stability must be addressed.
In this chapter, we consider several aspects of oscillator design. First, we show why purely linear oscillators are a practical impossibility. We then present a linearization technique that uses describing functions to develop insight into how nonlinearities affect oscillator performance, with a particular emphasis on predicting the amplitude of oscillation.
A survey of resonator technologies is included, and we also revisit PLLs, this time in the context of frequency synthesizers. We conclude this chapter with a survey of oscillator architectures. The important issue of phase noise is considered in detail in Chapter 18.
THE PROBLEM WITH PURELY LINEAR OSCILLATORS
In negative feedback systems, we aim for large positive phase margins to avoid instability. To make an oscillator, then, it might seem that all we have to do is shoot for zero or negative phase margins. Let's examine this notion more carefully, using the root locus for positive feedback sketched in Figure 17.1.
This locus recurs frequently in oscillator design because it applies to a two-pole bandpass resonator with feedback. As seen in the locus, the closed-loop poles lie exactly on the imaginary axis for some particular value of loop transmission magnitude.
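A short sketch makes that crossing explicit (the normalized values are assumptions). For a bandpass loop transmission H(s) = (ω0/Q)s / (s² + (ω0/Q)s + ω0²) in a positive-feedback loop of gain k, the characteristic equation 1 − kH(s) = 0 reduces to s² + (1 − k)(ω0/Q)s + ω0² = 0, so the poles sit exactly on the imaginary axis when k = 1.

```python
# A sketch (normalized, assumed values) of the locus described above:
# closed-loop poles of a two-pole bandpass resonator with positive
# feedback of gain k, from  s^2 + (1 - k)*(w0/Q)*s + w0^2 = 0.

import numpy as np

w0, Q = 1.0, 10.0   # normalized resonant frequency and quality factor

for k in (0.5, 1.0, 1.5):
    poles = np.roots([1.0, (1.0 - k) * w0 / Q, w0**2])
    print(f"k = {k}:  poles = {poles}")
# k < 1: left-half-plane poles (decaying ringing); k = 1: poles at
# +/- j*w0 (sustained oscillation); k > 1: right-half-plane poles
# (growing amplitude, ultimately limited by some nonlinearity).
```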
The first stage of a receiver is typically a low-noise amplifier (LNA), whose main function is to provide enough gain to overcome the noise of subsequent stages (such as a mixer). Aside from providing this gain while adding as little noise as possible, an LNA should accommodate large signals without distortion, and frequently must also present a specific impedance, such as 50 Ω, to the input source. This last consideration is particularly important if a passive filter precedes the LNA, since the transfer characteristics of many filters are quite sensitive to the quality of the termination.
In principle, one can obtain the minimum noise figure from a given device by using the optimum source impedance defined by the four noise parameters: Gc, Bc, Rn, and Gu. This classical approach has important shortcomings, however, as described in the previous chapter. For example, the source impedance that minimizes the noise figure generally differs, perhaps considerably, from that which maximizes the power gain. Hence, it is possible for poor gain and a bad input match to accompany a good noise figure. Additionally, power consumption is an important consideration in many applications, but classical noise optimization simply ignores power consumption altogether. Finally, such an approach presumes that one is given a device with fixed characteristics, and thus offers no explicit guidance on how best to exercise the IC designer's freedom to tailor device geometries.
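For concreteness, the classical approach can be exercised numerically (the parameter values below are assumptions, not measured data). In terms of the four noise parameters, the noise factor for a source admittance Ys = Gs + jBs is F = Fmin + (Rn/Gs)[(Gs − Gopt)² + (Bs − Bopt)²], with Gopt = √(Gc² + Gu/Rn), Bopt = −Bc, and Fmin = 1 + 2Rn(Gopt + Gc).

```python
# A sketch of the classical four-noise-parameter optimization (all
# numerical values assumed for illustration).

import math

Gc, Bc = 1e-3, -2e-3   # correlation admittance, siemens (assumed)
Rn, Gu = 50.0, 4e-4    # noise resistance (ohms), conductance (assumed)

Gopt = math.sqrt(Gc**2 + Gu / Rn)
Bopt = -Bc
Fmin = 1 + 2 * Rn * (Gopt + Gc)

def F(Gs, Bs):
    """Noise factor for source admittance Ys = Gs + j*Bs."""
    return Fmin + (Rn / Gs) * ((Gs - Gopt)**2 + (Bs - Bopt)**2)

print(f"Yopt = {Gopt:.4e} {Bopt:+.4e}j S")
print(f"NFmin = {10 * math.log10(Fmin):.2f} dB")
# A 50-ohm source (Gs = 20 mS, Bs = 0) generally differs from Yopt,
# so the noise figure it yields is worse than NFmin:
print(f"NF at 50 ohms = {10 * math.log10(F(0.02, 0.0)):.2f} dB")
```

Note that nothing in this calculation says anything about gain, input match, or power consumption, which is precisely the shortcoming described above.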
We asserted in the previous chapter that tuned oscillators produce outputs with higher spectral purity than relaxation oscillators. One straightforward reason is simply that a high-Q resonator attenuates spectral components removed from the center frequency. As a consequence, distortion is suppressed, and the waveform of a well-designed tuned oscillator is typically sinusoidal to an excellent approximation.
In addition to suppressing distortion products, a resonator also attenuates spectral components contributed by sources such as the thermal noise associated with finite resonator Q, or by the active element(s) present in all oscillators. Because amplitude fluctuations are usually greatly attenuated as a result of the amplitude stabilization mechanisms present in every practical oscillator, phase noise generally dominates – at least at frequencies not far removed from the carrier. Thus, even though it is possible to design oscillators in which amplitude noise is significant, we focus primarily on phase noise here. We show later that a simple modification of the theory allows the accommodation of amplitude noise as well, permitting the accurate computation of output spectrum at frequencies well removed from the carrier.
Aside from aesthetics, we care about phase noise chiefly because of the problem of reciprocal mixing. If a superheterodyne receiver's local oscillator were completely noise-free, then two closely spaced RF signals would simply translate downward in frequency together. However, the LO spectrum is not an impulse, so, to be realistic, we must evaluate the consequences of an impure LO spectrum.
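A back-of-the-envelope sketch shows how such an evaluation goes (all numbers are assumed for illustration): a strong unwanted signal at an offset Δf mixes with the LO's phase-noise skirt and deposits noise directly in the desired channel.

```python
# A sketch of reciprocal mixing (all values assumed). A blocker at
# offset df from the desired channel mixes with the LO phase-noise
# skirt, raising the in-channel noise to roughly
#     N = P_blocker + L(df) + 10*log10(B)   (dBm),
# where L(df) is the LO phase noise in dBc/Hz at that offset and B is
# the channel bandwidth.

import math

P_blocker = -30.0    # blocker power at the mixer input, dBm (assumed)
L_df = -120.0        # LO phase noise at the blocker offset, dBc/Hz (assumed)
B = 200e3            # channel bandwidth, Hz (assumed)

N = P_blocker + L_df + 10 * math.log10(B)
print(f"reciprocal-mixing noise in channel: {N:.1f} dBm")
# Here N is about -97 dBm; a desired signal near that level would be
# masked even if the receiver's own noise floor were far lower.
```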
Because of its high performance, the superheterodyne is the only basic architecture presently in use for both receivers and transmitters. One should not infer from this, however, that all receivers and transmitters are topologically identical, for there are many variations on the basic theme. For example, we will see that it may be desirable to use more than one intermediate frequency to aid the rejection of certain signals, raising the questions of how many IFs there should be and what frequencies they should have. Answering those questions is known as frequency planning, and converging on an acceptable frequency plan generally involves a substantial amount of iteration.
An important constraint is that on-chip energy storage elements generally consume significant die area. Furthermore, they tend not to scale gracefully (if at all) as technology improves. Hence, the “ideal” integrated architecture should require the minimum number of energy storage elements, and there are continuing efforts even to eliminate the need for high-quality filters through architectural means. Complete success has been elusive, though, and one must accept that the desired performance frequently may be achieved only if external filters are used. It is not too much of an exaggeration to assert that architectures are essentially determined by available filter technology.
Once a basic architecture and its associated frequency plan have been chosen, other key considerations include how best to distribute the huge power gain (typically 120–140 dB for receivers) among the various stages.
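That distribution interacts strongly with noise. Friis's formula, F = F1 + (F2 − 1)/G1 + (F3 − 1)/(G1G2) + ···, quantifies the benefit of placing gain early in the chain; the sketch below compares two assumed, purely illustrative lineups with the same total gain.

```python
# A sketch of why gain distribution matters (stage values assumed).
# Friis's formula: the noise of each stage is divided by the total
# available power gain preceding it (linear quantities, not dB).

import math

def db(x):
    return 10 * math.log10(x)

def undb(x):
    return 10 ** (x / 10)

def cascade_nf(stages):
    """stages: iterable of (gain_dB, nf_dB) pairs, input side first."""
    F_total, G_running = 1.0, 1.0
    for g_db, nf_db in stages:
        F_total += (undb(nf_db) - 1.0) / G_running
        G_running *= undb(g_db)
    return db(F_total)

lineup_a = [(20.0, 2.0), (10.0, 8.0), (30.0, 10.0)]  # (gain dB, NF dB)
lineup_b = [(10.0, 2.0), (10.0, 8.0), (40.0, 10.0)]  # same 60 dB total

print(f"front-loaded gain: cascade NF = {cascade_nf(lineup_a):.2f} dB")
print(f"back-loaded gain:  cascade NF = {cascade_nf(lineup_b):.2f} dB")
# Putting more gain up front suppresses the noise of later stages, one
# of the key tradeoffs in distributing 120-140 dB of receiver gain.
```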
The previous chapter on amplifier design generally ignored the issue of generating suitable bias voltages or currents. This neglect was by conscious design in order to minimize clutter in the circuit diagrams. In this chapter we finally take up the study of this important topic, focusing on a variety of ways to generate voltages and currents that are relatively independent of supply voltage and temperature. Because CMOS offers relatively limited options for realizing bias circuits, we'll see that some of the most useful biasing idioms are actually those based on bipolar circuits. A parasitic bipolar device exists in every CMOS technology and may be used, for example, in a bandgap voltage reference. Even though the characteristics of parasitic transistors are far from ideal, the performance of bias circuits made with such devices is frequently vastly superior to that of “pure” CMOS bias circuits.
In what follows, it is worthwhile to keep in mind that any voltage we produce must depend on some collection of parameters that ultimately have the dimensions of a voltage (such as kT/q). Similarly, any current we produce must depend on parameters that ultimately have the dimensions of a current (such as V/R). Although these statements may seem obvious, even trivial, we'll see that they are extremely useful guides for the design of stable references.
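The bandgap reference mentioned above illustrates this principle nicely, since it builds a stable voltage entirely from quantities with the dimensions of voltage. A sketch of the first-order arithmetic follows (the device values are assumptions): a base-emitter voltage, which falls with temperature (CTAT), is summed with a scaled copy of the PTAT difference ΔVBE = (kT/q)ln N between two junctions of unequal current density.

```python
# A first-order bandgap sketch (device values assumed). Choosing the
# scale factor K so the PTAT slope cancels the CTAT slope yields
#     Vref = VBE + K * (kT/q) * ln(N),
# a voltage near the silicon bandgap, to first order independent of T.

import math

k_over_q = 8.617e-5   # Boltzmann constant over electron charge, V/K
T = 300.0             # temperature, K
N = 8                 # emitter-area (current-density) ratio (assumed)

VBE = 0.65            # base-emitter voltage at 300 K, V (assumed)
dVBE_dT = -2.0e-3     # approximate CTAT slope, V/K (assumed)

# Pick K so the PTAT slope cancels the CTAT slope:
K = -dVBE_dT / (k_over_q * math.log(N))
Vref = VBE + K * k_over_q * T * math.log(N)
print(f"K = {K:.2f},  Vref = {Vref:.3f} V")  # close to the familiar ~1.25 V
```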
In this chapter, we study the problem of delivering RF power efficiently to a load. As we'll discover very quickly, scaled-up versions of the small-signal amplifiers we've studied so far are fundamentally incapable of high efficiency, and other approaches must be considered. As usual, tradeoffs are involved, this time among linearity, power gain, output power, and efficiency.
In a continuing quest for increased channel capacity, more and more communications systems employ amplitude and phase modulation together. This trend brings with it an increased demand for much higher linearity (possibly in both amplitude and phase domains). The variety of power amplifier topologies reflects the inability of any single circuit to satisfy all requirements.
GENERAL CONSIDERATIONS
Contrary to what one's intuition might suggest, the maximum power transfer theorem is largely useless in the design of power amplifiers. One minor reason is that it isn't entirely clear how to define impedances in a large-signal, nonlinear system. A more important reason is that even if we were able to solve that little problem and subsequently arrange for a conjugate match, the efficiency would be only 50% because equal amounts of power are then dissipated in the source and load. In many cases, this value is unacceptably low. As an extreme (but realistic) example, consider the problem of delivering 50 kW into an antenna if the amplifier is only 50% efficient. The circuit dissipation would be 50 kW as well, presenting a rather challenging thermal management problem.
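What replaces the maximum power transfer theorem is simple load-line reasoning: choose the load resistance from the required output power and the available voltage swing, then transform the actual load to that value. A sketch with assumed numbers:

```python
# A sketch of the load-line reasoning used in power amplifier design
# (supply voltage and power target assumed). For a sinusoidal swing of
# peak amplitude Vpk across a load RL, the delivered power is
#     P = Vpk**2 / (2 * RL),
# so RL follows from the power target and the available swing, not
# from a conjugate match.

def load_for_power(P_watts, Vpk):
    """Load resistance that absorbs P_watts at peak swing Vpk."""
    return Vpk**2 / (2.0 * P_watts)

# e.g., 1 W from a 3.3-V supply (idealized full-swing case, Vpk ~ VDD):
print(f"RL = {load_for_power(1.0, 3.3):.2f} ohms")  # about 5.4 ohms
# A 50-ohm antenna must then be transformed down to this value with a
# matching network; the conjugate-match theorem never enters.
```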
One characteristic of RF circuits is the relatively large ratio of passive to active components. In stark contrast with digital VLSI circuits (or even with other analog circuits, such as op-amps), many of those passive components may be inductors or even transformers. This chapter hopes to convey some underlying intuition that is useful in the design of RLC networks. As we build up that intuition, we'll begin to understand the many good reasons for the preponderance of RLC networks in RF circuits. Among the most compelling of these are that they can be used to match or otherwise modify impedances (important for efficient power transfer, for example), cancel transistor parasitics to provide high gain at high frequencies, and filter out unwanted signals.
To understand how RLC networks may confer these and other benefits, let's revisit some simple second-order examples from undergraduate introductory network theory. By looking at how these networks behave from a couple of different viewpoints, we'll build up intuition that will prove useful in understanding networks of much higher order.
PARALLEL RLC TANK
Let's just jump right into the study of a parallel RLC circuit. As you probably know, this circuit exhibits resonant behavior; we'll see what this implies momentarily. This circuit is also often called a tank circuit (or simply tank).
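As a quick numerical warm-up (the element values are assumptions), the tank's admittance is Y = 1/R + jωC + 1/(jωL); at ω0 = 1/√(LC) the reactive terms cancel, the impedance magnitude peaks at R, and the quality factor is Q = R√(C/L).

```python
# A numerical sketch of the tank's resonant behavior (element values
# assumed for illustration).

import math

R, L, C = 1000.0, 10e-9, 1e-12   # ohms, henries, farads (assumed)

w0 = 1.0 / math.sqrt(L * C)
Q = R * math.sqrt(C / L)
print(f"f0 = {w0 / (2 * math.pi) / 1e9:.3f} GHz,  Q = {Q:.1f}")

def Z(w):
    """Impedance of the parallel RLC tank at radian frequency w."""
    Y = 1.0 / R + 1j * w * C + 1.0 / (1j * w * L)
    return 1.0 / Y

for mult in (0.5, 1.0, 2.0):
    print(f"|Z({mult:.1f}*w0)| = {abs(Z(mult * w0)):8.1f} ohms")
# Away from resonance the tank impedance collapses; this frequency
# selectivity is exactly what RF design exploits.
```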
Most circuit analysis proceeds with the assumptions of linearity and time invariance. Violations of those assumptions, if considered at all, are usually treated as undesirable. However, the high performance of modern communications equipment actually depends critically on the presence of at least one element that fails to satisfy linear time invariance: the mixer. We will see shortly that mixers are still ideally linear but depend fundamentally on a purposeful violation of time invariance. As noted in Chapter 1, the superheterodyne receiver uses a mixer to perform an important frequency translation of signals. Armstrong's invention has been the dominant architecture for 70 years because this frequency translation solves many problems in one fell swoop (see Figure 13.1).
In this architecture, the mixer translates an incoming RF signal to a lower frequency, known as the intermediate frequency (IF). Although Armstrong originally sought this frequency lowering simply to make it easier to obtain the requisite gain, other significant advantages accrue as well. As one example, tuning is now accomplished by varying the frequency of a local oscillator, rather than by varying the center frequency of a multipole bandpass filter. Thus, instead of adjusting several LC networks in tandem to tune to a desired signal, one simply varies a single LC combination to change the frequency of a local oscillator (LO). The intermediate frequency stages can then use fixed bandpass filters.
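The frequency translation itself is nothing more than the trigonometry of multiplication, as the sketch below demonstrates numerically (the tone frequencies and sample rate are assumptions): cos(ωRF t)·cos(ωLO t) = ½cos((ωRF − ωLO)t) + ½cos((ωRF + ωLO)t).

```python
# A sketch of the frequency translation at the heart of mixing (tone
# frequencies assumed). An ideal multiplying mixer is linear in the RF
# input but time-varying through the LO.

import numpy as np

f_rf, f_lo = 100e6, 90e6   # RF and LO frequencies, Hz (assumed)
fs = 1e9                   # sample rate, Hz
t = np.arange(4096) / fs

if_out = np.cos(2 * np.pi * f_rf * t) * np.cos(2 * np.pi * f_lo * t)

spectrum = np.abs(np.fft.rfft(if_out * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peaks = freqs[spectrum > 0.25 * spectrum.max()]
print(f"spectral energy near: {sorted(set(np.round(peaks / 1e6)))} MHz")
# Expect components near 10 MHz (the difference, i.e., the IF) and
# 190 MHz (the sum term, removed by the IF bandpass filter).
```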
Integrated circuit engineers have the luxury of taking for granted that the incremental cost of a transistor is essentially zero, and this has led to the high-device-count circuits that are common today. Of course, this situation is a relatively recent development; during most of the history of electronics, the economics of circuit design were the inverse of what they are today. It really wasn't all that long ago that an engineer was forced by the relatively high cost of active devices to try to get blood (or at least rectification) from a stone. And it is indeed remarkable just how much performance radio pioneers were able to squeeze out of just a handful of components. For example, we'll see how American radio genius Edwin Armstrong devised circuits in the early 1920s that trade the logarithm of gain for bandwidth, contrary to the conventional wisdom that gain and bandwidth should trade off more or less directly. And we'll see that, at the same time Armstrong was developing those circuits, self-taught Soviet radio engineer Oleg Losev was experimenting with blue LEDs and constructing completely solid-state radios that functioned up to 5 MHz, a quarter century before the transistor was invented.
These fascinating stories are rarely told because they tend to fall into the cracks between history and engineering curricula. Somebody ought to tell these stories, though, since in so doing, many commonly asked questions (“why don't they do it this way?”) are answered.
A solid understanding of feedback is critical to good circuit design, yet many practicing engineers have at best a tenuous grasp of the subject. This chapter is an overview of the foundations of classical control theory – that is, the study of feedback in single-input, single-output, time-invariant, linear continuous-time systems. We'll see how to apply this knowledge to the design of oscillators, highly linear broadband amplifiers, and phase-locked loops, among other examples. We'll also see how to extend our design intuition to include many nonlinear systems of practical interest.
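To fix ideas, here is a tiny numerical sketch (with assumed values) of the central payoff of negative feedback, desensitization: the closed-loop gain A/(1 + Af) approaches 1/f whenever the loop gain Af is large, almost independently of A.

```python
# A sketch of gain desensitization by negative feedback (values
# assumed). For forward gain A and feedback factor f, the closed-loop
# gain is A / (1 + A*f), which tends to 1/f for large loop gain A*f.

f = 0.01   # feedback factor; ideal closed-loop gain of 1/f = 100

for A in (1e4, 2e4, 0.5e4):   # large swings in the forward gain
    G = A / (1 + A * f)
    print(f"A = {A:8.0f}  ->  closed-loop gain = {G:.2f}")
# A 4:1 spread in A moves the closed-loop gain by only about 1.5%,
# which is why feedback makes highly linear broadband amplifiers
# possible in the first place.
```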
As usual, we'll start with a little history to put this subject in its proper context.
A BRIEF HISTORY OF MODERN FEEDBACK
Although application of feedback concepts is very ancient (Og annoy tiger, tiger eat Og), mathematical treatments of the subject are a recent development. Maxwell himself offered the first detailed stability analyses, in a paper on the stability of the rings of Saturn (for which he won his first mathematical prize), and a later one on the stability of speed-controlled steam engines.
The first conscious application of feedback principles in electronics was apparently by rocket pioneer Robert Goddard in 1912, in a vacuum tube oscillator that employed positive feedback. As far as is known, however, his patent application was his only writing on the subject (he was sort of preoccupied with that rocketry thing, after all), and his contemporaries were largely ignorant of his work in this field.