The first stage of a receiver is typically a low-noise amplifier (LNA), whose main function is to provide enough gain to overcome the noise of subsequent stages (typically a mixer). Aside from providing this gain while adding as little noise as possible, an LNA should accommodate large signals without distortion and frequently must also present a specific impedance, such as 50Ω, to the input source. This last consideration is particularly important if a filter precedes the LNA, since the transfer characteristics of many filters (both passive and active) are quite sensitive to the quality of the termination.
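To see concretely why front-end gain suppresses the noise of later stages, recall Friis's formula for a cascade: each stage's excess noise factor is divided by the total gain that precedes it. The short sketch below evaluates that standard relation; the stage noise figures and gain are assumed values chosen only for illustration, not taken from the text.

    from math import log10

    # Friis's formula for cascaded stages (noise factors F and gains G are linear, not dB).
    def cascade_noise_factor(stages):
        """stages: list of (noise_factor, gain) tuples, input side first."""
        total_f, gain_so_far = 1.0, 1.0
        for f, g in stages:
            total_f += (f - 1.0) / gain_so_far
            gain_so_far *= g
        return total_f

    db = lambda x: 10 * log10(x)
    undb = lambda x: 10 ** (x / 10)

    # Hypothetical example: a 2-dB-NF LNA with 15 dB of gain, followed by a 10-dB-NF mixer.
    lna = (undb(2.0), undb(15.0))
    mixer = (undb(10.0), 1.0)
    print("cascade NF = %.2f dB" % db(cascade_noise_factor([lna, mixer])))
    # With the LNA's 15 dB of gain the cascade NF is only about 2.7 dB; without that gain,
    # the mixer's 10-dB noise figure would appear almost directly at the input.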
We will see that one can obtain the minimum noise figure (NF) from a given device by using a particular magic source impedance whose value depends on the characteristics of the device. Unfortunately this source impedance generally differs, perhaps considerably, from that which maximizes power gain. Hence it is possible for poor gain and a bad input match to accompany a good noise figure. One aim of this chapter is to place this trade-off on a quantitative basis to assure a satisfactory design without painful iteration.
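The gain–noise trade-off is usually quantified with the standard two-port noise parameters: the noise factor F exceeds its minimum value Fmin by an amount that grows with the distance between the actual source admittance Ys and the optimum Yopt, scaled by the noise resistance Rn. The sketch below simply evaluates F = Fmin + (Rn/Gs)·|Ys − Yopt|², with Gs = Re(Ys); the device parameter values are assumed, purely for illustration.

    # F = Fmin + (Rn / Gs) * |Ys - Yopt|^2, with Gs = Re(Ys)  (standard two-port noise model)
    def noise_factor(Fmin, Rn, Yopt, Ys):
        return Fmin + (Rn / Ys.real) * abs(Ys - Yopt) ** 2

    # Assumed device noise parameters (illustrative only):
    Fmin, Rn, Yopt = 1.3, 20.0, complex(0.015, -0.006)   # Fmin ~1.1 dB, Rn in ohms, Yopt in siemens
    Ys_50ohm = complex(1 / 50.0, 0.0)                    # a 50-ohm source: Ys = 0.02 S
    print(noise_factor(Fmin, Rn, Yopt, Ys_50ohm))        # exceeds Fmin, because Ys != Yopt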
We will focus mainly on a single narrowband LNA architecture that is capable of delivering a near-minimum noise figure along with an excellent impedance match and reasonable power gain. The narrowband nature of the amplifier is not necessarily a liability, since many applications require filtering anyway. The LNA we'll study thus exhibits a good balance of desirable characteristics.
Phase-locked loops (PLLs) have become ubiquitous in modern communications systems because of their remarkable versatility. As one important example, a PLL may be used to generate an output signal whose frequency is a programmable and rational multiple of a fixed input frequency. Such frequency synthesizers are often used to provide the local oscillator signal in superheterodyne transceivers. PLLs may also be used to perform frequency modulation and demodulation as well as to regenerate the carrier from an input signal in which the carrier has been suppressed. Their versatility also extends to purely digital systems, where PLLs are indispensable in skew compensation, clock recovery, and the generation of clock signals.
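As a deliberately simplified illustration of the "programmable, rational multiple" idea: in an integer-N synthesizer the reference is divided by R and the oscillator output by N, and the loop forces the two divided frequencies to be equal, so that fout = (N/R)·fref. The numbers below are assumptions chosen only to show the arithmetic.

    # Integer-N synthesizer: the loop forces fref/R == fout/N, hence fout = (N/R) * fref.
    def synth_output(fref_hz, N, R):
        return fref_hz * N / R

    # Assumed example: 10-MHz reference, R = 10 (a 1-MHz comparison frequency), N programmable.
    for N in (900, 901, 902):
        print(N, synth_output(10e6, N, 10) / 1e6, "MHz")   # 900, 901, 902 MHz in 1-MHz steps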
To understand in detail how PLLs may perform such a vast array of functions, we will need to develop linearized models of these feedback systems. But first, of course, we begin with a little history to put this subject in its proper context.
A SHORT HISTORY OF PLLs
The earliest description of what is now known as a PLL was provided by de Bellescize in 1932. This early work offered an alternative architecture for receiving and demodulating AM signals, using the degenerate case of a superheterodyne receiver in which the intermediate frequency is zero. With this choice, there is no image to reject, and all processing downstream of the frequency conversion takes place in the audio range.
Electrical engineering education focuses so much on the study of linear, time-invariant (LTI) systems that it's easy to conclude that there's no other kind. Violations of the LTI assumption are usually treated as undesirable, if acknowledged at all. Small signal analysis, for example, exists precisely to avoid the complexities that nonlinearities inevitably bring with them. However, the high performance of modern communications equipment actually depends critically on the presence of at least one element that fails to satisfy linear time invariance: the mixer. The superheterodyne receiver uses a mixer to perform an important frequency translation of signals. This invention of Armstrong has been the dominant architecture for 75 years because frequency translation solves many problems simultaneously.
In the architecture shown in Figure 10.1, the mixer heterodynes an incoming RF signal to a lower frequency, known as the intermediate frequency (IF). Although Armstrong originally sought this frequency lowering simply to make it easier to obtain the requisite gain, other significant advantages accrue as well. As one example, tuning is now accomplished by varying the frequency of a local oscillator rather than by varying the center frequency of a multipole bandpass filter. Thus, instead of adjusting several LC networks in tandem to tune to a desired signal, one simply varies a single LC combination to change the frequency of a local oscillator (LO). The intermediate frequency stages can then use fixed bandpass filters. Selectivity is therefore determined by these fixed-frequency IF filters, which are much easier to realize than variable-frequency filters.
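The frequency translation itself follows from nothing more than the product of two sinusoids: cos(ωRF·t)·cos(ωLO·t) = ½cos((ωRF − ωLO)t) + ½cos((ωRF + ωLO)t), so an ideal multiplying mixer produces outputs at the difference (IF) and sum frequencies. The short numerical check below uses assumed RF and LO frequencies purely for illustration.

    import numpy as np

    f_rf, f_lo = 910e6, 840e6          # assumed RF and LO frequencies -> 70-MHz difference (IF)
    fs, n = 8e9, 4096                  # sample rate and record length for the check
    t = np.arange(n) / fs
    product = np.cos(2 * np.pi * f_rf * t) * np.cos(2 * np.pi * f_lo * t)

    spectrum = np.abs(np.fft.rfft(product * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, 1 / fs)
    peaks = freqs[np.argsort(spectrum)[-2:]]
    print(sorted(peaks / 1e6))         # two strong components: near 70 MHz and 1750 MHz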
Oscilloscopes and spectrum analyzers are ubiquitous pieces of test equipment in any RF laboratory. The reason, of course, is that it is useful to study signals in both time and frequency domains, despite the fact that both presentations theoretically provide equivalent information.
Most electrical engineers are familiar with basic operational principles of lower frequency oscilloscopes. However, an incomplete understanding of how probes behave (particularly with respect to grounding technique) is still remarkably widespread. The consequences of this ignorance only become worse as the frequency increases and so, after a brief review of a conventional low-frequency scope, our primary focus will be the additional considerations one must accommodate when using scopes at gigahertz frequencies. Also, because the sampling oscilloscopes commonly used at high frequencies have subtle ways of encouraging “pilot error,” we'll spend some time studying how they work and how to avoid being fooled by them. High-speed sampling circuits are interesting in their own right, so these types of scopes give us a nice excuse to spend a little bit of time examining how samplers function.
Another amazing instrument is the modern spectrum analyzer (with cost approximately proportional to the square of amazement), which is capable of making measurements over a wide dynamic range (e.g., 80–100 dB SFDR) and over a large frequency span (e.g., near DC to 20 GHz in a single instrument).
The modern microwave diode owes its existence to the demands of military radar during the Second World War. Vacuum tubes of the time were simply unable to operate in the multi-GHz radar frequency bands. Fortunately, the seeds of a breakthrough had been planted in the mid-1930s by Bell Labs scientist George C. Southworth during his early work with cylindrical waveguides. Proving the old adage that “necessity is the mother of invention,” an inspired bit of thinking by his colleague, Russell Ohl, led him to try out crystal detectors (then nearly obsolete) as power sensors, hoping that the low capacitance associated with point contacts would permit operation at the high frequencies he was using (within an octave of 1 GHz). Promising results of tests on silicon confirmed that such diodes do indeed succeed where vacuum tubes fail. A crash development program by the MIT Radiation Laboratory and others successfully delivered reliable point-contact silicon diodes capable of operating at frequencies in excess of 30 GHz by the end of the war.
In this chapter, we examine diodes of this type. However, we also broaden the term “diode” to include many other two-terminal semiconductor elements that find use in microwave circuits. So, in addition to ordinary junction and Schottky diodes, we'll consider varactors (parametric diode), tunnel diodes (including backward diodes), PIN diodes, noise diodes, snap-off (step recovery) diodes, Gunn diodes, MIM diodes, and IMPATT diodes.
In this chapter we examine the properties of passive components commonly used in RF work. Because parasitic effects can easily dominate behavior at gigahertz frequencies, our focus is on the development of simple analytical models for parasitic inductance and capacitance of various discrete components.
INTERCONNECT AT RADIO FREQUENCIES: SKIN EFFECT
At low frequencies, the properties of interconnect we care about most are resistivity, current-handling ability, and perhaps capacitance. As frequency increases, we find that inductance might become important. Furthermore, we invariably discover that the resistance increases owing to the skin effect alluded to in Chapter 5.
Skin effect is usually described as the tendency of current to flow primarily on the surface (skin) of a conductor as frequency increases. Because the inner regions of the conductor are thus less effective at carrying current than at low frequencies, the useful cross-sectional area of a conductor is reduced, thereby producing a corresponding increase in resistance.
From that perfunctory and somewhat mysterious description, there is a risk of leaving the impression that all “skin” of a conductor will carry RF current equally well. To develop a deeper understanding of the phenomenon, we need to appreciate explicitly the role of the magnetic field in producing the skin effect. To do so qualitatively, let's consider a solid cylindrical conductor carrying a time-varying current, as depicted in Figure 6.1.
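Quantitatively, the current density decays roughly exponentially with depth into the conductor, with a characteristic skin depth δ = sqrt(2ρ/(ωμ)); at a depth of one δ the density has fallen to 1/e of its surface value. When δ is much smaller than the conductor radius, the wire behaves approximately as if only an annulus of thickness δ carries the current. The sketch below works the numbers for copper at an assumed frequency and wire size.

    from math import pi, sqrt

    def skin_depth(rho, f_hz, mu_r=1.0):
        """Skin depth in meters: delta = sqrt(2*rho / (omega*mu))."""
        mu = mu_r * 4e-7 * pi                  # permeability, H/m
        return sqrt(2.0 * rho / (2.0 * pi * f_hz * mu))

    rho_cu = 1.7e-8                            # copper resistivity, ohm-m (room temperature)
    delta = skin_depth(rho_cu, 1e9)            # ~2.1 um at 1 GHz
    print(delta)

    # Approximate AC resistance of a round wire of diameter d and length L, valid when delta << d/2:
    d, L = 1e-3, 0.01                          # assumed 1-mm-diameter wire, 1 cm long
    R_ac = rho_cu * L / (pi * d * delta)       # current confined to an annulus of thickness delta
    print(R_ac)                                # ~26 milliohms, versus ~0.2 milliohms at DC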
The subject of filter design is so vast that we have to abandon all hope of doing justice to it in any subset of a textbook. Indeed, even though we have chosen to distribute this material over two chapters, the limited aim here is to focus on important qualitative ideas and practical information about filters – rather than attempting a comprehensive review of all possible filter types or supplying complete mathematical details of their underlying theory. For those interested in the rigor that we will tragically neglect, we will be sure to provide pointers to the relevant literature. And for those who would rather ignore the modest amount of rigor that we do provide, the reader is invited to skip directly to the end of this chapter for the tables that summarize the design of several common filter types in “cookbook” form.
Although our planar focus would normally imply a discussion limited to microstrip implementations, many such filters derive directly from lower-frequency lumped prototypes. Because so many key concepts may be understood by studying those prototypes, we will follow a roughly historical path and begin with a discussion of lumped filter design. It is definitely the case that certain fundamental insights are universal, and it is these that we will endeavor to emphasize in this chapter, despite differences in implementation details between lumped and distributed realizations. We consider only passive filters here, partly to limit the length of the chapter to something manageable.
In propositional or predicate logic, formulas are either true or false in any model. Propositional logic and predicate logic do not allow for any further possibilities. From many points of view, however, this is inadequate. In natural language, for example, we often distinguish between various ‘modes’ of truth, such as necessarily true, known to be true, believed to be true and true in the future. For example, we would say that, although the sentence
George W. Bush is president of the United States of America.
is currently true, it will not be true at some point in the future. Equally, the sentence
There are nine planets in the solar system.
while true, and maybe true for ever in the future, is not necessarily true, in the sense that it could have been a different number. However, the sentence
The cube root of 27 is 3.
as well as being true is also necessarily true and true in the future. It does not enjoy all modes of truth, however. It may not be known to be true by some people (children, for example); it may not be believed by others (if they are mistaken).
In computer science, it is often useful to reason about modes of truth. In Chapter 3, we studied the logic CTL in which we could distinguish not only between truth at different points in the future, but also between different futures.
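One standard way to make such modes of truth precise is a possible-worlds reading: ‘necessarily φ’ holds at a world if φ holds at every world accessible from it, and ‘possibly φ’ holds if φ holds at some accessible world. The toy model below is an assumed example, not one from the text, intended only to show the shape of that evaluation.

    # A toy Kripke-style model: worlds, an accessibility relation, and a valuation (all assumed).
    worlds = {"w1", "w2", "w3"}
    access = {"w1": {"w2", "w3"}, "w2": {"w2"}, "w3": {"w1"}}      # which worlds each world can "see"
    holds = {"p": {"w1", "w2"}, "q": {"w1", "w2", "w3"}}           # where each atomic sentence is true

    def necessarily(atom, world):
        """'Box atom' at world: atom is true in every accessible world."""
        return all(w in holds[atom] for w in access[world])

    def possibly(atom, world):
        """'Diamond atom' at world: atom is true in some accessible world."""
        return any(w in holds[atom] for w in access[world])

    print(necessarily("q", "w1"))   # True:  q holds in w2 and w3
    print(necessarily("p", "w1"))   # False: p fails in w3
    print(possibly("p", "w1"))      # True:  p holds in w2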
Formal methods have finally come of age! Specification languages, theorem provers, and model checkers are beginning to be used routinely in industry. Mathematical logic is basic to all of these techniques. Until now textbooks on logic for computer scientists have not kept pace with the development of tools for hardware and software specification and verification. For example, in spite of the success of model checking in verifying sequential circuit designs and communication protocols, until now I did not know of a single text, suitable for undergraduate and beginning graduate students, that attempts to explain how this technique works. As a result, this material is rarely taught to computer scientists and electrical engineers who will need to use it as part of their jobs in the near future. Instead, engineers avoid using formal methods in situations where the methods would be of genuine benefit or complain that the concepts and notation used by the tools are complicated and unnatural. This is unfortunate since the underlying mathematics is generally quite simple, certainly no more difficult than the concepts from mathematical analysis that every calculus student is expected to learn.
Logic in Computer Science by Huth and Ryan is an exceptional book. I was amazed when I looked through it for the first time. In addition to propositional and predicate logic, it has a particularly thorough treatment of temporal logic and model checking.
The methods of the previous chapter are suitable for verifying systems of communicating processes, where control is the main issue, but there are no complex data. We relied on the fact that those (abstracted) systems are finite-state. These assumptions are not valid for sequential programs running on a single processor, the topic of this chapter. In those cases, the programs may manipulate non-trivial data and – once we admit variables of type integer, list, or tree – we are in the domain of machines with infinite state space.
In terms of the classification of verification methods given at the beginning of the last chapter, the methods of this chapter are
Proof-based. We do not exhaustively check every state that the system can get into, as one does with model checking; this would be impossible, given that program variables can have infinitely many interacting values. Instead, we construct a proof that the system satisfies the property at hand, using a proof calculus (a small sketch of the kind of property we have in mind follows this list). This is analogous to the situation in Chapter 2, where using a suitable proof calculus avoided the problem of having to check infinitely many models of a set of predicate logic formulas in order to establish the validity of a sequent.
Semi-automatic. Although many of the steps involved in proving that a program satisfies its specification are mechanical, there are some steps that involve some intelligence and that cannot be carried out algorithmically by a computer. As we will see, there are often good heuristics to help the programmer complete these tasks. This contrasts with the situation of the last chapter, which was fully automatic.
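For sequential programs, the properties in question typically take the form of a precondition/postcondition pair: if the input satisfies the precondition, then after the program terminates its result satisfies the postcondition. The fragment below is only an informal sketch of that idea, using a toy program and run-time assertions; the chapter's point is that such claims can be proved with a proof calculus rather than merely checked at run time.

    # Informal sketch: a specification as a precondition/postcondition pair (assumed toy example).
    # Precondition:  n is an integer and n >= 0
    # Postcondition: the result equals n * (n + 1) // 2, i.e. the sum 0 + 1 + ... + n
    def sum_to(n):
        assert isinstance(n, int) and n >= 0      # precondition, checked at run time here
        total, i = 0, 0
        while i < n:                              # a loop invariant would state: total == 0 + 1 + ... + i
            i += 1
            total += i
        assert total == n * (n + 1) // 2          # postcondition
        return total

    print(sum_to(10))   # 55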
There is a great advantage in being able to verify the correctness of computer systems, whether they are hardware, software, or a combination. This is most obvious in the case of safety-critical systems, but it also applies to systems that are commercially critical (such as mass-produced chips) or mission critical. Formal verification methods have quite recently become usable by industry, and there is a growing demand for professionals able to apply them. In this chapter and the next one, we examine two applications of logics to the question of verifying the correctness of computer systems, or programs.
Formal verification techniques can be thought of as comprising three parts:
a framework for modelling systems, typically a description language of some sort;
a specification language for describing the properties to be verified;
a verification method to establish whether the description of a system satisfies the specification.
Approaches to verification can be classified according to the following criteria:
Proof-based vs. model-based. In a proof-based approach, the system description is a set of formulas Γ (in a suitable logic) and the specification is another formula ϕ. The verification method consists of trying to find a proof that Γ ⊢ ϕ. This typically requires guidance and expertise from the user.
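As a minimal illustration of the Γ ⊢ ϕ formulation (an assumed toy instance, not an example from the text): take Γ = {p, p → q} and ϕ = q; a one-step proof applies implication elimination. For propositional formulas one can also confirm the corresponding semantic claim, that every valuation satisfying Γ satisfies ϕ, by brute force over the truth values:

    from itertools import product

    # Gamma = {p, p -> q},  phi = q.
    # Check that every valuation making all of Gamma true also makes phi true.
    gamma = [lambda p, q: p, lambda p, q: (not p) or q]   # the formulas p and p -> q
    phi = lambda p, q: q

    entails = all(phi(p, q)
                  for p, q in product([False, True], repeat=2)
                  if all(f(p, q) for f in gamma))
    print(entails)   # True: the only satisfying valuation (p = q = True) also satisfies q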