The period dealt with in this book spans between 1,500 and 1,800 years. There are, potentially, two major ways of dividing up this long span of time in order to identify its main phases and to pinpoint the changes which took place. The first is the purely archaeological method, which provides a relative sequence only, while the second, using scientific methods such as radiocarbon dating, gives us absolute dates. In some rare cases the absolute dates obtained by scientific methods can be cross-checked against dates derived from historical or quasi-historical inscriptions, but in this early period that is unusual. The archaeological or relative sequence begins with the Uruk period, named after the site at which its distinctive plain pottery was first identified, and we will look first at the other archaeologically recognisable characteristics of this and the succeeding periods. These can then be used to establish the relative dating of sites.
The Uruk period was a long one which may have lasted more than a thousand years, and it is conventionally divided into three phases. The earliest is poorly understood, while for the middle phase the best evidence comes from outside south Mesopotamia and is to be found at sites like Sheikh Hassan on the middle Euphrates, Tell Brak in eastern Syria and Hacinebi Tepe in Turkey (see Schwartz 2001; Stein 2001). It is only in the late Uruk phase that we have good evidence from the south.
This book has described individually each of the ‘building blocks’ of what is rather loosely called Sumerian civilisation. This civilisation was, more realistically, the result of a fertile amalgam of peoples of different linguistic and geographic origins amongst whom those who spoke Sumerian seem to have been in the majority. The first two chapters looked at the basic parameters, the physical environment, and the historical and chronological framework of the period. Then the way in which the inhabitants had used the environment was explored and the settlement patterns which characterised each phase of the 1,800 years covered by this study were summarised. An attempt to interpret what the observed changes in these patterns may have meant in terms of political and historical developments followed. The internal characteristics of the more complex settlements, their layout and the major public buildings which dominated them were then described and an attempt was made to relate these changes, too, to the wider issues of changes in the way of life and in political thought or ideology. This information was then contrasted with newly available evidence from Upper Mesopotamia where developments followed a rather different trajectory.
The life of private individuals was explored, firstly by looking at housing in both rural and urban settings and then by looking at graves. Paradoxically, the grave goods tell us more about the living than do their houses, for it was only in death that the accessories of everyday life escaped the endless recycling which was a crucial feature of the Sumerian economy.
Some of the most exciting and important archaeological sites to have been explored in the past twenty years are located in the area called, for convenience, Upper Mesopotamia. It lies between the Euphrates and the Tigris north of the Hit–Samarra line and south of the Taurus mountains. Today it is partly in Syria and partly in Iraq. The area used to be seen as a poor, provincial relation of the civilised southern plain, but thanks to a mass of new evidence the picture has changed dramatically for the period in which we are interested and archaeologists are now rethinking many of their old assumptions. In the course of the later fourth and the third millennium the north was the home of a number of distinctive urban cultures whose influence on the south is beginning to be hotly debated (Map 8).
North Mesopotamia lies in what was described in chapter 2 as our second ecological zone, which itself divides into two sub-sections. As we saw, the terrain and the natural resources are very different from those in the south. There are great rolling plains; in the north of the region agriculture is extensive rather than intensive, while in the south of the area it is a highly risky business because of the low and unpredictable rainfall. Rain-fed agriculture is nonetheless possible over most of the area, and irrigation can be used to enhance yields.
Although, as we have seen in the previous chapter, the temples and ziggurats were the dominant architectural features of Sumerian towns and cities, there is a good deal of evidence for other important buildings, some genuinely monumental, some merely larger and more imposing than their neighbours. It is often difficult to identify the purpose of these buildings and as a convenience, rather than for any more objective reason, they are sometimes labelled as palaces. This nomenclature is misleading. Many of the buildings in question seem to have been multi-purpose; some of them may have housed officials of either temple or state; some may have been the homes of wealthy extended families; some may have been industrial units; many seem to have shared several of these functions.
More significantly, the use of the word ‘palace’ implies a whole political system which may well not have existed until the later part of the third millennium. We must be cautious about assuming the presence of a king earlier than this. Evidence for a dichotomy of ‘church’ and ‘state’ in the organisation of the Sumerian city before ED III is slight. The ruler and his wife had religious duties which were seen as essential to the well-being of the people, and sometimes the king actually held priestly office as well. At a later stage, of course, he became a god himself, so that the secular and religious halves of the state always seem to have been inextricably intertwined.
As we saw in chapter 5 the private houses tell us relatively little about the everyday life of the people who lived in them. They have few distinctive features and, generally speaking, are poor in small finds. This is not because the people who lived in them were poor and unsophisticated, but because objects tended to be reused until they fell apart and furniture and fittings were not widely used. The plans of the houses, which usually focus inwards on a central hall or court, suggest the self-contained nature of these households and their importance as the basic unit in society. The plans sometimes suggest the segregation of certain members of the household, as they are subdivided into a number of discrete units, which do not seem necessarily to be distinguished by differences of function. The position of the house at the centre of family life is emphasised by the presence, from late ED I onwards, of graves below the floors, representing a return to earlier customs. The graves contain the bones of people of both sexes and all ages, in contrast to earlier times when it was normally only children who were buried in this way. The number of graves is not usually large enough to have contained all the postulated inhabitants of a house, so some members of the family seem to have been singled out for this treatment, while the rest were presumably buried in cemeteries outside the settlements.
The artefacts found in the graves and cemeteries described in the last chapter help us to reconstruct the daily lives of the people with whom they were buried. They also pose a number of questions. How were they made and by whom? Where did the raw materials originate and how did those materials reach Mesopotamia? Some objects are so skilfully finished that they must be the work of highly trained craftsmen; some are mass-produced and suggest the existence of early ‘factories’, a hypothesis the texts support; some, as one might expect, seem to be the products of cottage industries. We seem to be looking at a variety of production methods which become more sophisticated as the period progresses.
It has been suggested by a number of scholars that the rise of complex societies also saw the advent of specialised production and that the two processes are intimately connected. We can distinguish two types of specialised production: the first is primarily to meet the demands of an expanding population by mass production of mainly utilitarian goods. This mass production has an obvious impact on the goods produced: they become standardised and there is no incentive to introduce change, though when change comes it comes very rapidly (Wattenmaker 1998:5–16). The second type is developed to provide the exotic and intricately made artefacts demanded by the elite members of an increasingly stratified society to enhance and consolidate their standing, and is frequently tightly controlled by the elite group.
The design of microwave circuits and systems has its origins in an era when devices and interconnect were usually too large to allow a lumped description. Furthermore, the lack of suitably detailed models and compatible computational tools forced engineers to treat systems as two-port “black boxes” and to work with graphical methods. The most powerful of these graphical aids, the Smith chart, dates from the 1930s, an age when slide rules dominated. Although Smith charts today are perhaps less relevant as a computational aid than they once were, RF instrumentation, for example, continues to present data in Smith-chart form. It also remains true that visualizing certain operations in terms of the Smith chart can inform design intuition in rich ways that modern computational aids may unfortunately bypass. This chapter thus provides a brief history and derivation of the Smith chart, along with an explanation of why a particular set of variables (S-parameters) won out over other parameter sets (e.g., impedance or admittance) to describe microwave two-ports.
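The mapping the Smith chart visualizes can be stated compactly. The following sketch (not taken from the text; the function name and the 50-ohm default are illustrative assumptions) computes the reflection coefficient Γ = (Z − Z0)/(Z + Z0), the bilinear transformation that carries a load impedance onto the chart's unit disc:

```python
# Sketch (not from the text): the bilinear map underlying the Smith chart.
# The chart plots the reflection coefficient Gamma = (Z - Z0)/(Z + Z0)
# for a load impedance Z referenced to a system impedance Z0 (often 50 ohms).

def reflection_coefficient(z_load: complex, z0: float = 50.0) -> complex:
    """Map a load impedance to the Gamma plane (the Smith chart's domain)."""
    return (z_load - z0) / (z_load + z0)

# A matched load sits at the chart's center; a short circuit at Gamma = -1.
print(reflection_coefficient(50 + 0j))   # 0j (center of chart)
print(reflection_coefficient(0j))        # (-1+0j) (short circuit)
print(abs(reflection_coefficient(100 + 0j)))  # ~0.333 for a 100-ohm load
```

Because the map is bilinear, circles of constant resistance and constant reactance in the Z plane become circles in the Γ plane, which is what gives the chart its familiar appearance.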
THE SMITH CHART
Introductory presentations of the Smith chart are frequently devoid of any historical context, leaving the student with the impression that it sprang forth spontaneously and fully formed. This impression, in turn, makes many students feel mentally deficient if they are unable to appreciate instantly the subtle beauty, logic, and power that the chart must “obviously” possess. The real story, though, is that the Smith chart is the result of cumulative incremental refinements spanning about a decade.
Many histories of microwave technology begin with James Clerk Maxwell and his equations, and for excellent reasons. In 1873, Maxwell published A Treatise on Electricity and Magnetism, the culmination of his decade-long effort to unify the two phenomena. By arbitrarily adding an extra term (the “displacement current”) to the set of equations that described all previously known electromagnetic behavior, he went beyond the known and predicted the existence of electromagnetic waves that travel at the speed of light. In turn, this prediction inevitably led to the insight that light itself must be an electromagnetic phenomenon. Electrical engineering students, perhaps benumbed by divergence, gradient, and curl, often fail to appreciate just how revolutionary this insight was. Maxwell did not introduce the displacement current to resolve any outstanding conundrums. In particular, he was not motivated by a need to fix a conspicuously incomplete continuity equation for current (contrary to the standard story presented in many textbooks). Instead he was apparently inspired more by an aesthetic sense that nature simply should provide for the existence of electromagnetic waves. In any event, the word genius, though much overused today, certainly applies to Maxwell, particularly given that it shares origins with genie. What he accomplished was magical and arguably ranks as the most important intellectual achievement of the 19th century.
Maxwell – genius and genie – died in 1879, much too young at age 48. That year, Hermann von Helmholtz sponsored a prize for the first experimental confirmation of Maxwell's predictions.
Although the focus of this book is the implementation of discrete planar RF circuits, we consider here a number of important components that are fundamentally 3-D in nature: connectors, cables, and wave guides. We'll see that the useful frequency range of these components is bounded in part by the onset of moding, which (in turn) is a function of their physical dimensions. In addition, we'll examine the attenuation characteristics of these various ways to get RF energy from one place to another.
CONNECTORS
MODING AND ATTENUATION
For the flattest response over the largest possible bandwidth, an RF connector should exhibit a constant impedance throughout its length. This requirement is satisfied by maintaining constant dimensions throughout and by filling the intervening volume uniformly with a homogeneous dielectric. As straightforward and obvious as this requirement may seem, we will shortly see that there is at least one extremely popular connector that fails to meet it.
The best and most commonly used RF connectors are coaxial in structure. One important attribute of coaxial geometries is their self-shielding nature; radiation losses are therefore not an issue. One must always take care, however, to maintain transverse electromagnetic (TEM) propagation in which, you might recall from undergraduate electromagnetics courses, neither E nor H has a component in the direction of propagation. At sufficiently high frequencies, non-TEM propagation can occur, and the energy stored or propagated in higher-order modes can cause dramatic impedance changes.
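The onset of non-TEM moding scales inversely with the connector's cross-sectional dimensions. As a rough illustration (the formulas below are standard textbook approximations, not values given in the text), the sketch computes the characteristic impedance of a coaxial line and the approximate cutoff frequency of its first higher-order mode (TE11), above which the line can support non-TEM propagation:

```python
# Sketch (standard textbook approximations, assumed here): coax line
# impedance and the approximate TE11 cutoff that bounds TEM-only operation.
import math

C0 = 299_792_458.0        # speed of light in vacuum, m/s
ETA0 = 376.730313668      # impedance of free space, ohms

def coax_z0(a: float, b: float, er: float = 1.0) -> float:
    """Characteristic impedance of coax with inner radius a, outer radius b."""
    return ETA0 / (2 * math.pi * math.sqrt(er)) * math.log(b / a)

def coax_te11_cutoff(a: float, b: float, er: float = 1.0) -> float:
    """Approximate TE11 cutoff, f_c ~ c / (pi * (a + b) * sqrt(er)).
    Above this frequency a non-TEM mode can propagate."""
    return C0 / (math.pi * (a + b) * math.sqrt(er))

# Example: an air-filled line with b/a ~ 2.3, which gives roughly 50 ohms.
a, b = 1.0e-3, 2.3e-3
print(f"Z0 ~ {coax_z0(a, b):.1f} ohms")
print(f"TE11 cutoff ~ {coax_te11_cutoff(a, b) / 1e9:.1f} GHz")
```

Note the trade-off this exposes: shrinking the connector raises the moding-free bandwidth, which is why precision high-frequency connectors are progressively smaller.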
In this chapter, we consider the problems of efficiently and linearly delivering RF power to a load. Simple, scaled-up versions of small-signal amplifiers are fundamentally incapable of high efficiency, so we have to consider other approaches. As usual, there are trade-offs – here, between spectral purity (distortion) and efficiency. In a continuing quest for increased channel capacity, more and more communications systems employ amplitude and phase modulation together. This trend brings with it an increased demand for much higher linearity (possibly in both amplitude and phase domains). At the same time, the trend toward portability has brought with it increased demands for efficiency. The variety of power amplifier topologies reflects the inability of any single circuit to satisfy all requirements.
SMALL- VERSUS LARGE-SIGNAL OPERATING REGIMES
Recall that an important compromise is made in analyzing circuits containing nonlinear devices (such as transistors). In exchange for the ability to represent, say, an inherently exponential device with a linear network, we must accept that the model is valid only for “small” signals. It is instructive to review what is meant by “small” and to define quantitatively a boundary between “small” and “large.”
In what follows, we will decompose signals into their DC and signal components. To keep track of which is which, we will use the following notational convention: DC variables are in upper case (with upper-case subscripts); small-signal components are in lower case with lower-case subscripts.
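To make the small/large boundary concrete, consider an exponential device (this worked example and its numbers are my own illustration, not the chapter's): the small-signal model predicts an incremental current proportional to v/VT, while the device actually delivers exp(v/VT) − 1. The relative error between the two is one reasonable quantitative measure of "smallness":

```python
# Sketch (assumed exponential-device example, not from the text): how the
# small-signal (linear) model diverges from the exact exponential law as
# the signal amplitude grows relative to the thermal voltage V_T ~ 26 mV.
import math

VT = 0.026  # thermal voltage at room temperature, volts

def exact_ratio(v: float) -> float:
    """Exact normalized incremental current: exp(v/VT) - 1."""
    return math.exp(v / VT) - 1.0

def linear_ratio(v: float) -> float:
    """Small-signal prediction: (gm/IC) * v = v/VT."""
    return v / VT

for v_mv in (1.0, 5.0, 10.0, 26.0):
    v = v_mv / 1000.0
    err = (exact_ratio(v) - linear_ratio(v)) / exact_ratio(v)
    print(f"v = {v_mv:5.1f} mV: linearization error ~ {100 * err:.0f}%")
```

The error is a couple of percent at 1 mV but grows to tens of percent as the amplitude approaches VT, so a common (assumed) rule of thumb is to call a signal "small" only when it is well below the thermal voltage.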
The title of this chapter should raise a question or two: Precisely what is the definition of RF? Of microwave? We use these terms in the preceding chapter, but purposely without offering a quantitative definition. Some texts use absolute frequency as a discriminator (e.g., “microwave is anything above 1 GHz”). However, the meaning of those words has changed over time, suggesting that distinctions based on absolute frequency lack fundamental weight. Indeed, in terms of engineering practice and design intuition, it is far more valuable to base a classification on a comparison of the physical dimensions of a circuit element with the wavelengths of signals propagating through it.
When the circuit's physical dimensions are very small compared to the wavelengths of interest, we have the realm of ordinary circuit theory, as we will shortly understand. We will call this the quasistatic, lumped, or low-frequency realm, regardless of the actual frequency value. The size inequality simplifies Maxwell's equations considerably, allowing one to invoke the familiar concepts of inductances, capacitances, and Kirchhoff's “laws” of current and voltage.
If, on the other hand, the physical dimensions are very large compared to the wavelengths of interest, then we say that the system operates in the classical optical regime – whether or not the signals of interest correspond to visible light. Devices used to manipulate the energy are now structures such as mirrors, polarizers, lenses, and diffraction gratings. Just as in the quasistatic realm, the size inequality enables considerable simplifications in Maxwell's equations.
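The classification above depends only on the ratio of element size to wavelength. A minimal sketch follows; the λ/10 and 10λ thresholds are common rules of thumb assumed for illustration, not values stated in the text:

```python
# Sketch (the lambda/10 and 10*lambda thresholds are common rules of thumb,
# assumed here, not given in the text): classify an element by comparing
# its physical size with the free-space wavelength at the frequency of use.

C0 = 299_792_458.0  # free-space propagation speed, m/s

def regime(size_m: float, freq_hz: float) -> str:
    wavelength = C0 / freq_hz
    if size_m < wavelength / 10:      # much smaller than lambda: circuit theory
        return "lumped (quasistatic)"
    elif size_m > 10 * wavelength:    # much larger than lambda: ray optics
        return "optical"
    else:                             # comparable to lambda: distributed
        return "distributed (microwave)"

# A 5-mm trace is comfortably lumped at 100 MHz but distributed at 10 GHz.
print(regime(5e-3, 100e6))
print(regime(5e-3, 10e9))
```

The same physical element can therefore inhabit different regimes at different frequencies, which is precisely why absolute-frequency definitions of "RF" and "microwave" lack fundamental weight.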
Given the effort expended in avoiding instability in most feedback systems, it would seem trivial to construct oscillators. Murphy, however, is not so kind; the situation is a lot like bringing an umbrella in order to make it rain. An old joke among RF engineers is that every amplifier oscillates, and every oscillator amplifies.
In this chapter, we consider several aspects of oscillator design. First, we show why purely linear oscillators are a practical impossibility. We then present a linearization technique, based on describing functions, that greatly simplifies analysis and helps to develop insight into how nonlinearities affect oscillator performance. With describing functions, it is straightforward to predict both the frequency and amplitude of oscillation.
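As a taste of the method, here is a hedged sketch using the classic relay (hard limiter) nonlinearity; this particular example and its numbers are my own illustration, not the chapter's derivation. A describing function replaces the nonlinearity with an amplitude-dependent gain N(A); for an ideal limiter switching between ±Vmax, N(A) = 4Vmax/(πA), and the predicted oscillation amplitude is the value of A at which the loop gain N(A)·G falls to exactly unity:

```python
# Sketch (assumed textbook relay-nonlinearity example): predicting
# oscillation amplitude from a describing function N(A) = 4*Vmax/(pi*A),
# where G is the resonator's gain at its center frequency.
import math

def relay_describing_function(amplitude: float, vmax: float) -> float:
    """Effective gain of a +/-vmax relay driven by a sinusoid of amplitude A."""
    return 4.0 * vmax / (math.pi * amplitude)

def predicted_amplitude(vmax: float, resonator_gain: float) -> float:
    """Amplitude at which the loop gain N(A)*G equals exactly 1."""
    return 4.0 * vmax * resonator_gain / math.pi

A = predicted_amplitude(vmax=1.0, resonator_gain=2.0)
# Consistency check: at the predicted amplitude the loop gain is ~1.
print(relay_describing_function(A, vmax=1.0) * 2.0)
```

The self-regulating behavior is visible in the form of N(A): if the amplitude grows, the effective gain drops below unity and the oscillation shrinks back, and vice versa.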
A survey of resonator technologies is included, and we also revisit PLLs, this time in the context of frequency synthesizers. We conclude this chapter with a survey of oscillator architectures. The important issue of phase noise is considered in detail in Chapter 17.
THE PROBLEM WITH PURELY LINEAR OSCILLATORS
In negative feedback systems, we aim for large positive phase margins to avoid instability. To make an oscillator, then, it might seem that all we have to do is shoot for zero or negative phase margins. We may examine this notion more carefully with the root locus for positive feedback sketched in Figure 15.1. This locus recurs frequently in oscillator design because it applies to a two-pole bandpass resonator with feedback.
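The behavior of that locus can be checked numerically. In the sketch below (an assumed standard two-pole bandpass loop, chosen to illustrate the locus rather than reproduce the figure), positive feedback with gain k around H(s) = (ω0/Q)s / (s² + (ω0/Q)s + ω0²) gives the characteristic equation s² + (ω0/Q)(1 − k)s + ω0² = 0, so the damping term vanishes, and the poles land exactly on the jω axis, when k = 1:

```python
# Sketch (assumed two-pole bandpass loop with positive feedback): the
# closed-loop poles of s^2 + (w0/Q)(1 - k)s + w0^2 = 0 migrate from the
# left half-plane (k < 1) through the jw axis (k = 1) into the right
# half-plane (k > 1), where oscillation builds up.
import cmath

def closed_loop_poles(k: float, w0: float = 1.0, q: float = 10.0):
    b = (w0 / q) * (1.0 - k)      # effective damping; zero at k = 1
    disc = cmath.sqrt(b * b - 4.0 * w0 * w0)
    return ((-b + disc) / 2.0, (-b - disc) / 2.0)

for k in (0.5, 1.0, 1.5):
    p1, _ = closed_loop_poles(k)
    side = "LHP" if p1.real < 0 else ("jw axis" if p1.real == 0 else "RHP")
    print(f"k = {k}: pole real part = {p1.real:+.4f} ({side})")
```

This is why a purely linear oscillator is impractical: only the measure-zero condition k = 1 keeps the poles on the axis, and any drift pushes the circuit toward either decay or unbounded growth until some nonlinearity intervenes.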
The design of amplifiers for signal frequencies in the microwave bands involves more detailed considerations than at lower frequencies. One simply has to work harder to obtain the requisite performance when approaching the inherent limitations of the devices themselves. Additionally, the effect of ever-present parasitic capacitances and inductances can impose serious constraints on achievable performance. Indeed, parasitics are so prominent at RF that an important engineering philosophy is to treat parasitics as circuit elements to be exploited, rather than fought.
Having evolved during an era when modeling and simulation capabilities were primitive, traditional microwave amplifier design largely ignores the underlying details of device behavior. Instead, S-parameter sets describe the transistor's macroscopic behavior over frequency. This approach can yield vast simplifications, but at a cost. Because it effectively insulates the engineer from the device physics, it is difficult to extrapolate beyond the given data set. Furthermore, real transistors are nonlinear, so the S-parameter characterizations are strictly relevant only for the bias conditions used in their generation.
Because simulation and modeling tools have advanced considerably since that time, we will consider the design of both broadband and narrowband amplifiers from a device-level point of view, rather than with the more traditional Smith-chart–based approach. Thus, we will not spend time examining stability and gain circles, for example. Readers interested in the classical approach are directed to any of a number of representative texts that cover the topic in detail.
With the growing sophistication in semiconductor device fabrication has come a rapid expansion in the number and types of transistors suitable for use at microwave frequencies. At one time, the RF engineer's choices were a bipolar or possibly a junction field-effect transistor. The palette of options has since grown to a dizzying collection of MOSFETs, VMOS, UMOS, LDMOS, MESFETs, pseudomorphic and metamorphic HEMTs (MODFETs), and HBTs, all offered in an ever-expanding variety of materials systems. We'll attempt to provide a description of these types of devices, starting with a deciphering of their abbreviations. Then we'll focus on a small subset of these devices in an expanded discussion of modeling.
The bipolar transistor was discovered – not invented – in December of 1947 while the Bell Labs duo of John Bardeen and Walter Brattain was attempting to build a MOS field-effect transistor at the behest of their boss, William Shockley. Their repeated failures led them to suspect that the problem lay with the surface, where the neat periodicity of the bulk terminates abruptly, leaving unsatisfied bonds to latch onto contaminants. To verify this “surface state” hypothesis, they undertook a detailed study of semiconductor surface phenomena. One of their experiments, designed to modulate the postulated surface states, itself happened to exhibit power gain. It wasn't the MOSFET they had been trying to build; it was a germanium point-contact bipolar transistor. Its behavior was never quantitatively understood, and repeatability of characteristics was only a fantasy.
We asserted in Chapter 15 that tuned oscillators produce outputs with higher spectral purity than relaxation oscillators. One straightforward reason is simply that a high-Q resonator attenuates spectral components removed from the center frequency. As a consequence, distortion is suppressed, and the waveform of a well-designed tuned oscillator is typically sinusoidal to an excellent approximation.
In addition to suppressing distortion products, a resonator also attenuates spectral components contributed by sources such as the thermal noise associated with finite resonator Q, or by the active element(s) present in all oscillators. Because amplitude fluctuations are usually greatly attenuated as a result of the amplitude stabilization mechanisms present in every practical oscillator, phase noise generally dominates – at least at frequencies not far removed from the carrier. Thus, even though it is possible to design oscillators in which amplitude noise is significant, we focus primarily on phase noise here. We show later that a simple modification of the theory allows for accommodation of amplitude noise as well, permitting the accurate computation of output spectrum at frequencies well removed from the carrier.
Aside from aesthetics, the reason we care about phase noise is to minimize the problem of reciprocal mixing. If a superheterodyne receiver's local oscillator is completely noise-free, then two closely spaced RF signals will simply translate downward in frequency together. However, the local oscillator spectrum is not an impulse and so, to be realistic, we must evaluate the consequences of an impure LO spectrum.