Ariel Lipson, Imperial College of Science, Technology and Medicine, London; Stephen G. Lipson, Technion - Israel Institute of Technology, Haifa; Henry Lipson, University of Manchester Institute of Science and Technology
This chapter will discuss the electromagnetic wave as a most important example of the general treatment of wave propagation presented in Chapter 2. We shall start at the point where the elementary features of classical electricity and magnetism have been summarized in the form of Maxwell's equations, and the reader's familiarity with the steps leading to this formulation will be assumed (see, for example, Grant and Phillips (1990), Jackson (1999), Franklin (2005)). It is well known that Maxwell's formulation included for the first time the displacement current ∂D/∂t, the time derivative of the fictitious displacement field D = ε₀E + P, which is a combination of the applied electric field E and the electric polarization density P. This field will turn out to be of prime importance when we come to extend the treatment in this chapter to wave propagation in anisotropic media in Chapter 6.
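As a reminder (standard SI-unit forms, not quoted from the text), the displacement field and the Ampère–Maxwell law in which the displacement current appears are:

```latex
\mathbf{D} = \epsilon_0 \mathbf{E} + \mathbf{P}, \qquad
\nabla \times \mathbf{H} = \mathbf{J} + \frac{\partial \mathbf{D}}{\partial t}.
```

It is the ∂D/∂t term that, together with Faraday's law, permits self-sustaining wave solutions even in the absence of sources.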
In this chapter we shall learn:
about the properties of electromagnetic waves in isotropic linear media;
about simple-harmonic waves with planar wavefronts;
about radiation of electromagnetic waves;
the way in which these waves behave when they meet the boundaries between media: the Fresnel coefficients for reflection and transmission;
about optical tunnelling and frustrated total internal reflection;
about electromagnetic waves in conducting media;
some consequences of the time-reversal symmetry of Maxwell's equations;
about electromagnetic momentum, radiation pressure and optical tweezers;
about angular momentum of waves that have spiral wavefronts, instead of the usual plane wavefronts.
The helix antenna discussed in the previous chapter used a new type of element to model surfaces. The theory underlying this is described in this chapter. The basic theory is quite complex, and general implementations are especially challenging. However, by choosing a suitable problem, it proves possible to undertake a limited implementation of a three-dimensional scattering problem, using a basis function defined on a triangular patch known as the RWG element. This is named after Rao, Wilton and Glisson, who introduced the element in their classic 1982 paper [1]. It represented a new type of element, the vector or edge-based element, and a closely related class of element was also under development for finite element applications at that time, although it would be some years before the connection was fully appreciated. (This will be pursued in more detail in the later coverage of the FEM.) The RWG element underlies the surface treatment of modern codes such as FEKO (although not NEC), and some examples of using existing codes (in particular FEKO) to compute scattering from more general surfaces will further illustrate this.
We will also see that not only can perfectly (or highly) conducting structures be efficiently modelled using surface currents, but also homogeneous dielectric and/or magnetic regions, using fictitious equivalent currents. (We will even briefly describe how inhomogeneous bodies can be modelled using volumetric currents, but note at the outset that this is not one of the strong points of the MoM.)
We use optics overwhelmingly in our everyday life: in art and sciences, in modern communications and medical technology, to name just a few fields. This is because 90% of the information we receive is visual. The main purpose of this book is to communicate our enthusiasm for optics, as a subject both practical and aesthetic, and standing on a solid theoretical basis.
We were very pleased to be invited by the publishers to update Optical Physics for a fourth edition. The first edition appeared in 1969, a decade after the construction of the first lasers, which created a renaissance in optics that is still continuing. That edition was strongly influenced by the work of Henry Lipson (1910–1991), based on the analogy between X-ray crystallography and optical Fraunhofer diffraction in the Fourier transform relationship realized by Max von Laue in the 1930s. The text was illustrated with many photographs taken with the optical diffractometers that Henry and his colleagues built as ‘analogue computers’ for solving crystallographic problems. Henry wrote much of the first and second editions, and was involved in planning the third edition, but did not live to see its publication. In the later editions, we have continued the tradition of illustrating the principles of physical optics with photographs taken in the laboratory, both by ourselves and by our students, and hope that readers will be encouraged to carry out and further develop these experiments themselves.
Six years after the first edition was prepared, it was clear that a revised edition was in order. Continued advances in computational electromagnetics, new capabilities in commercial codes, the continual increase in computational resources, challenging new problems and a new generation of research students and engineers required new material.
Since the first edition appeared, several trends can be noted in the field. Firstly, in terms of commercial companies, there has been a significant shake-out in the market. Whilst we do not pretend to offer encyclopedic coverage of the large number of commercial codes available, the three codes whose application is discussed in this book, viz. CST, FEKO and HFSS, have further established themselves as amongst the market leaders in their regions of application during this period. These codes have evolved continuously, and this evolution is reflected in places in this revised edition. Secondly, whilst no fundamentally new techniques have been introduced (either in the field in general or in commercial codes in particular), a large number of additional features, improvements and enhancements have continued to extend the utility of these packages. Thirdly, after more than two decades of continual increase in CPU clock speeds in personal computers (which now dominate computational engineering), the last few years have seen clock speeds not only stagnate but in some cases actually decrease. Moore's law has nonetheless continued to hold, now in terms of multi-core and multi-processor systems; exploiting parallelism has become essential to benefit from new hardware.
Throughout this book, checking convergence numerically has been continually emphasized. However, we have not discussed the more theoretical issue of whether the underlying numerical formulations are indeed convergent, in the sense that the approximate numerical solution f_N of the continuous operator equation Lf = g has the property f_N → f as N → ∞. The aim of this appendix is to give a brief summary of the current status of this topic – which readers may be surprised to learn is far from a closed subject.
With the FDTD, the Lax equivalence theorem (discussed in Chapter 2) provides us with confidence that refining the FDTD mesh will indeed result in a convergent solution. With the FEM, work in applied mechanics has provided a rich set of convergence results – although we should note that convergence for high-frequency electromagnetics problems is often stated in terms of the energy norm, as discussed in Chapter 12. This is a slightly weaker statement of convergence, since the energy norm does not satisfy all the properties of a norm. Also, these proofs are usually given in terms of interpolation error; as has been noted, dispersion (or pollution) error is a different problem, specific to the differential-equation-based solvers, but it can usually be controlled by adequate meshing.
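The flavour of such a mesh-refinement convergence check can be sketched with a simple one-dimensional model problem (an illustrative fragment, not drawn from the FDTD or FEM formulations themselves): a second-order central-difference discretization of -u'' = f should show the max-norm error falling roughly fourfold each time the mesh spacing is halved.

```python
import numpy as np

def solve_model_problem(n):
    """Solve -u'' = pi^2 sin(pi x) on (0, 1) with u(0) = u(1) = 0,
    using n interior points and second-order central differences.
    The exact solution is u(x) = sin(pi x); return the max-norm error."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    # Tridiagonal finite-difference operator approximating -d^2/dx^2
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    u = np.linalg.solve(A, np.pi**2 * np.sin(np.pi * x))
    return float(np.max(np.abs(u - np.sin(np.pi * x))))

errors = [solve_model_problem(n) for n in (10, 20, 40, 80)]
rates = [errors[i] / errors[i + 1] for i in range(len(errors) - 1)]
print(errors)
print(rates)  # each refinement should reduce the error roughly fourfold
```

Observing the error ratio approach 4 (the scheme's order, 2, squared base) is the numerical analogue of the h-convergence statements above; dispersion error, by contrast, would show up as a phase drift that grows with electrical size rather than a pointwise interpolation error.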
Optics is the study of wave propagation and its quantum implications, the latter now being generally called ‘photonics’. Traditionally, optics has centred around visible light waves, but the concepts that have developed over the years have been found increasingly useful when applied to many other types of wave, both inside and outside the electromagnetic spectrum. This chapter will first introduce the general concepts of classical wave propagation, and describe how waves are treated mathematically.
However, since there are many examples of wave propagation that are difficult to analyze exactly, several concepts have evolved that allow wave propagation problems to be solved at a more intuitive level. The latter half of the chapter will be devoted to describing these methods, due to Huygens and Fermat, and will be illustrated by examples of their application to wave propagation in scenarios where analytical solutions are very hard to come by. One example, the propagation of light waves passing near a massive body, called ‘gravitational lensing’, is shown in Fig. 2.1; the figure shows two images of distant sources distorted by such gravitational lenses, taken by the Hubble Space Telescope, compared with experimental laboratory simulations. Although analytical methods do exist for these situations, Huygens' construction makes their solution much easier (§2.8).
A wave is essentially a temporary disturbance in a medium in stable equilibrium. Following the disturbance, the medium returns to equilibrium, and the energy of the disturbance propagates away through the medium in a dynamic manner.
There are two sorts of textbooks. On the one hand, there are works of reference to which students can turn for the clarification of some obscure point or for the intimate details of some important experiment. On the other hand, there are explanatory books which deal mainly with principles and which help in the understanding of the first type.
We have tried to produce a textbook of the second sort. It deals essentially with the principles of optics, but wherever possible we have emphasized the relevance of these principles to other branches of physics – hence the rather unusual title. We have omitted descriptions of many of the classical experiments in optics – such as Foucault's determination of the velocity of light – because they are now dealt with excellently in most school textbooks. In addition, we have tried not to duplicate approaches, and since we think that the graphical approach to Fraunhofer interference and diffraction problems is entirely covered by the complex-wave approach, we have not introduced the former.
For these reasons, it will be seen that the book will not serve as an introductory textbook, but we hope that it will be useful to university students at all levels. The earlier chapters are reasonably elementary, and it is hoped that by the time those chapters which involve a knowledge of vector calculus and complex-number theory are reached, the student will have acquired the necessary mathematics.
This book is designed to serve as an introduction to computational electromagnetics for radio-frequency applications. It assumes the reader has completed typical undergraduate courses in electromagnetic field theory, and has some basic knowledge of antenna design and microwave systems.
Readers in a hurry who already know which of the techniques discussed they would like to learn more about may go directly to the relevant chapters, although it would nonetheless be useful first to read the introductory chapter. For those in a hurry who still need to find out which method (or methods) to use, this chapter is essential reading.
For readers who intend to work through most of the book, it would be best to follow the sequence presented, although the chapters on the Sommerfeld formulation and practical applications thereof could be omitted without interrupting the sequence of presentation. A more detailed outline of the book may be found in Section 1.12; this will also help readers rapidly locate the parts of the book of interest to them.
At the end of each chapter, a list of references linked to the chapter topic is presented, for further reading and study.
Bessel functions come into wave optics because many optical elements – lenses, apertures, mirrors – are circular. We have met Bessel functions in several places (§8.3.4, §8.7, §12.2, §12.6.4 for example), although since most students are not very familiar with them (and probably becoming less so with the ubiquity of computers) we have restricted our use of them as far as possible. The one unavoidable meeting is the Fraunhofer diffraction pattern of a circular aperture, the Airy pattern, which is the diffraction-limited point spread function of an aberration-free optical system (§12.2). Another topic that involves the use of Bessel functions is the Fourier analysis of phase functions, in which the function being transformed contains the phase in an exponent. We met such a situation when we studied the acousto-optic effect, where a sinusoidal pressure wave affects directly the phase of the optical transmission function.
In this appendix we simply intend to acquaint the reader with the results that are necessary for elementary wave optics. The proofs can be found in the treatise by Watson (1958) and other places.
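As one such result, the Airy pattern I(u) = [2J₁(u)/u]², with u = ka sin θ for an aperture of radius a, is easy to evaluate numerically. The sketch below (using SciPy's Bessel function j1 — a library assumption, not anything from the text) locates the first dark ring at u ≈ 3.83, i.e. sin θ ≈ 0.61λ/a.

```python
import numpy as np
from scipy.special import j1  # Bessel function of the first kind, order 1

def airy_intensity(u):
    """Normalized Airy pattern I(u) = [2 J1(u) / u]^2."""
    u = np.atleast_1d(np.asarray(u, dtype=float))
    out = np.ones_like(u)          # the limit as u -> 0 is 1
    nz = u != 0.0
    out[nz] = (2.0 * j1(u[nz]) / u[nz]) ** 2
    return out

u = np.linspace(0.0, 10.0, 1001)
intensity = airy_intensity(u)
# The first dark ring is the minimum on [0, 5); since the mask selects a
# prefix of the sorted grid, the argmin index maps directly back into u.
first_zero = u[np.argmin(intensity[u < 5.0])]
print(first_zero)  # near the first zero of J1 at u = 3.8317
```

The value u ≈ 3.8317 is what gives the familiar diffraction-limited resolution criterion quoted in §12.2.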
With the theoretical background now established, one is in a position to start using commercial and public-domain MoM programs intelligently. In this chapter, we will discuss primarily the application of the commercial code FEKO for antenna modelling, but will also discuss the use of the public-domain code NEC-2 in this regard. Few commercial programs other than FEKO (and some proprietary NEC-2 extensions) provide good support for modelling thin-wire antennas, the topic of this chapter; such antennas are still very widely used. For commercial programs, material is usually available to help novice users get started with the codes. Hence we will not describe the basic concepts of entering the geometry of the problem, including the source, and specifying parameters such as operating frequency and radiation patterns, since these vary from program to program, indeed quite often from release to release, and are usually well documented by the suppliers. In the case of NEC-2, however, some comments are in order.
NEC-2 is a “card driven” program, dating back to the days of “decks” of punched cards. A NEC model is described by a geometry file, usually with a .nec extension. An example is given in Fig. 5.1. If using NEC in this form, one must obtain a copy of the user manual [1].
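For orientation, a minimal NEC-2 input deck for a centre-fed half-wave dipole might look like the following (an illustrative sketch, not the deck of Fig. 5.1; consult the user manual [1] for the exact card formats):

```
CM Centre-fed half-wave dipole near 300 MHz (illustrative example)
CE
GW 1 9 0. 0. -0.25 0. 0. 0.25 0.001
GE 0
EX 0 1 5 0 1. 0.
FR 0 1 0 0 300. 0.
EN
```

Here the GW card defines a 0.5 m wire (tag 1, nine segments, 1 mm radius), GE ends the geometry input, EX places a 1 V voltage source on segment 5 (the centre segment), FR requests a single analysis frequency of 300 MHz, and EN terminates the run.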
The coherence of a wave describes the accuracy with which it can be represented by a pure sine wave. So far we have discussed optical effects in terms of coherent waves whose wave-vector k and frequency ω are known exactly; in this chapter we intend to investigate the way in which uncertainties and small fluctuations in k and ω can affect the observations in optical experiments. Waves that appear to be pure sine waves only if they are observed in a limited space or for a limited period of time are called partially coherent waves, and we shall see in this chapter how we can measure deviations of such waves from their pure counterparts, and what these measurements tell us about the source of the waves.
The classical measure of coherence was formulated by Zernike in 1938 but had its roots in much earlier work by Fizeau and Michelson in the late nineteenth century. Both of these scientists realized that the contrast of interference fringes between waves travelling by two different paths from a source to an observer would be affected by the size, shape and spectrum of the source. Fizeau suggested, and Michelson carried out, experiments which showed that the angular diameter of a star could indeed be measured by observing the degradation of the contrast of interference fringes seen when using the star as a source of light (§11.8.1).
Michael Faraday (1791–1867) made foundational contributions in the fields of physics and chemistry, notably in relation to electricity. One of the greatest scientists of his day, Faraday held the position of Fullerian Professor of Chemistry at the Royal Institution of Great Britain for over thirty years. Not long after his death, his friend Henry Bence Jones attempted 'to join together his words, and to form them into a picture of his life which may be almost looked upon as an autobiography'. Jones' compilation of Faraday's manuscripts, letters, notebooks, and other writings resulted in this Life and Letters (1870) which remains an important resource for learning more about one of the most influential scientific experimentalists of the nineteenth century. Volume 1 (1791–1830) covers Faraday's earliest years as an errand boy and bookbinder's apprentice, his arrival at the Royal Institution as an assistant and his early publications on electricity.
Basics of Holography is a general introduction to the subject written by a leading worker in the field. It begins with the theory of holographic imaging, the characteristics of the reconstructed image, and the various types of holograms. Practical aspects of holography are then described, including light sources, the characteristics of recording media and recording materials, as well as methods for producing different types of holograms and computer-generated holograms. Finally, important applications of holography are discussed, such as high-resolution imaging, holographic optical elements, information processing, and holographic interferometry. The book includes comprehensive reference sections and appendices summarizing some useful mathematical results. Numerical problems with their solutions are provided at the ends of chapters. This is an invaluable resource for advanced undergraduate and graduate students as well as researchers in science and engineering who would like to learn more about holography and its applications in science and industry.
In this book, first published in 1968, King and his co-authors develop a theory of the behaviour of arrays of rod-shaped antennas such as are used to achieve directive transmission and reception of radio waves for communication between points on the earth, between the earth and a space vehicle, or in radio astronomy. They use quantitative analysis of arrays of practical types over a wide range of lengths and a wide frequency band, which makes possible the design of new arrays with desired characteristics. After an introductory chapter reviewing the foundations and conventions of antenna theory, subsequent chapters apply the authors' own theories to isolated antennas, two coupled antennas, the N-element circular array, the N-element curtain array of identical elements, arrays containing elements of different lengths, and finally planar and three-dimensional arrays. The final chapter is concerned with problems of measurement and the correlation of theory with experiment.
There has been an increase in interest worldwide in fusion research over the last decade and a half due to the recognition that a large number of new, environmentally attractive, sustainable energy sources will be needed to meet ever increasing demand for electrical energy. Based on a series of course notes from graduate courses in plasma physics and fusion energy at MIT, the text begins with an overview of world energy needs, current methods of energy generation, and the potential role that fusion may play in the future. It covers energy issues such as the production of fusion power, power balance, the design of a simple fusion reactor and the basic plasma physics issues faced by the developers of fusion power. This book is suitable for graduate students and researchers working in applied physics and nuclear engineering. A large number of problems accumulated over two decades of teaching are included to aid understanding.
This book is devoted to the physics of electronic fluctuations (noise) in solids and covers almost all important examples of this phenomenon. It is comprehensive, intelligible and well illustrated. Emphasis is given to the main concepts, supported by many fundamental experiments which have become classics, to physical mechanisms of fluctuations, and to conclusions on the nature and magnitude of noise. The book also includes a comprehensive and complete review of flicker (1/f) noise in the literature. It will be useful to graduate students and researchers in physics and electronic engineering, and especially those carrying out research in the fields of noise phenomena and highly sensitive electronic devices, for example radiation detectors, electronic devices for low-noise amplifiers and quantum magnetometers (SQUIDS).
This book is a comprehensive text on the theory of the magnetic recording process. It gives the reader a fundamental, in-depth understanding of all the essential features of the writing and retrieval of information for both high density disk recording and tape recording. The material is timely because magnetic recording technology is currently undergoing rapid advancement in systems capacity and data rate. The competing technologies of longitudinal and perpendicular recording are given parallel treatments throughout this book. A simultaneous treatment of time and frequency response is given to facilitate assessment of signal processing schemes. In addition to covering basic issues, the author discusses key systems questions of non-linearities and overwrite. The emerging technology of magnetoresistive heads is analysed separately, and three chapters are devoted to various aspects of medium noise. This unique book will be valuable as a course text for both senior undergraduates and graduate students. It will also be of value to research and development scientists in the magnetic recording industry. The book includes a large number of homework problems.
Electromagnetic Scintillation describes the phase and amplitude fluctuations imposed on signals that travel through the atmosphere. The volumes that make up Electromagnetic Scintillation will provide a modern reference and comprehensive tutorial, treating both optical and microwave propagation and integrating measurements and predictions at each step of the development. This first volume deals with phase and angle-of-arrival measurement errors, accurately described by geometrical optics. It will be followed by a further volume examining weak scattering. In this book, measured properties of tropospheric and ionospheric irregularities are reviewed first. Electromagnetic fluctuations induced by these irregularities are then estimated for a wide range of applications. The book will be of interest to those working in the resolution of astronomical interferometers and large single-aperture telescopes, as well as synthetic aperture radars and laser pointing/tracking systems. It is also directly relevant to those working in laser metrology, GPS location accuracy, and terrestrial and satellite communications.
There is a wide variety of optical instruments where the human eye forms an integral part of the overall system. This book provides a detailed description of the visual ergonomics of such instruments. The book begins with a section on image formation and basic optical components. The various optical instruments that can be adequately described using geometrical optics are then discussed, followed by a section on diffraction and interference, and the instruments based on these effects. There are separate sections devoted to ophthalmic instruments and aberration theory, with a final section covering visual ergonomics in depth. Containing many problems and solutions, this book will be of great use to undergraduate and graduate students of optometry, optical design, optical engineering, and visual science, and to professionals working in these and related fields.