As mentioned several times already, the particle character of light is best illustrated by the photoelectric effect. This effect can be exploited in the detection of single photons by photocounting. The analysis of such counting data allows us, as will be discussed in detail in this chapter, to gain a deeper insight into the properties of electromagnetic fields. We can recognize the “fine structure” of the radiation field – in the form of fluctuation processes – which was hidden from us when using previous techniques relying only on the eye or a photographic plate, i.e. techniques limited to time averaged intensity measurements.
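The kind of counting data referred to above can be previewed with a toy simulation. The following is a minimal sketch, assuming light of constant (coherent) intensity, for which the photocounts registered in a fixed time window are known to be Poisson distributed (variance equal to the mean); the count rate and window length are arbitrary illustrative values, not taken from any experiment.

```python
import random

def simulate_photocounts(mean_rate, t_window, n_windows, seed=1):
    """Simulate photocounting for light of constant intensity.

    For an ideal constant-intensity field the counts in successive
    windows follow Poisson statistics: variance equals mean.
    """
    rng = random.Random(seed)
    counts = []
    for _ in range(n_windows):
        # Count exponential inter-arrival times that fit in the window;
        # the resulting count is a Poisson variate.
        t, n = 0.0, 0
        while True:
            t += rng.expovariate(mean_rate)
            if t > t_window:
                break
            n += 1
        counts.append(n)
    return counts

# 10^6 counts/s observed in 5 microsecond windows: mean count of 5.
counts = simulate_photocounts(mean_rate=1e6, t_window=5e-6, n_windows=20000)
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
print(mean, var)  # both close to 5 for Poissonian light
```

Deviations of the measured variance from the Poissonian value are exactly the kind of "fine structure" of the field that photocounting reveals and that time-averaged intensity measurements hide.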
The credit for developing the basic technique for intensity fluctuation measurements goes to the British scientists R. Hanbury Brown and R. Q. Twiss, who became the fathers of a new optical discipline which investigates statistical laws valid for photocounting under various physical situations. When we talk of studies of “photon statistics” it is these investigations that we are referring to.
Interestingly enough, it was a practical need, namely the improvement in experimental possibilities of measuring the (apparent) diameters of fixed stars, that gave rise to the pioneering work by Hanbury Brown and Twiss. Because the topic is physically exciting, we will go into more detail.
It is well known that the angular diameters of fixed stars – observed from Earth – appear to be so small that the available telescopes are not able to resolve the stars spatially.
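A quick numerical sketch makes this concrete. The values below (a 5 m mirror, green light, a Sun-like star at a distance of 10 parsec) are illustrative assumptions; the Rayleigh criterion θ ≈ 1.22 λ/D for a circular aperture is standard.

```python
import math

def rayleigh_limit_arcsec(wavelength_m, aperture_m):
    """Angular resolution (Rayleigh criterion) of a circular aperture."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600.0  # radians -> arcseconds

# A large optical telescope: 5 m mirror, green light (550 nm).
limit = rayleigh_limit_arcsec(550e-9, 5.0)
print(f"diffraction limit: {limit:.4f} arcsec")

# A Sun-like star at 10 parsec: diameter 1.4e9 m, distance 3.09e17 m.
star_arcsec = math.degrees(1.4e9 / 3.09e17) * 3600.0
print(f"stellar angular diameter: {star_arcsec:.5f} arcsec")
```

The star's disc is tens of times smaller than even the ideal diffraction limit, and atmospheric seeing (of order 1 arcsec) degrades real telescopes far below that ideal, so direct imaging of the stellar disc is hopeless.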
All these 50 years of conscious pondering have not brought me nearer to the answer to the question "What are light quanta?" Nowadays every rascal believes he knows it; however, he is mistaken.
Albert Einstein (1951 in a letter to M. Besso)
The rapid technological development initiated by the invention of the laser, on the one hand, and the perfection attained in the fabrication of photodetectors, on the other hand, gave birth to a new physical discipline known as quantum optics. A variety of exciting experiments suggested by ingenious quantum theorists were performed that showed specific quantum features of light. What we can learn from those experiments about the miraculous constituents of light, the photons, is a central question in this book. Remarkably, the famous paradox of Einstein, Podolsky and Rosen became a subject of actual experiments too. Here photon pairs produced in entangled states are the actors.
The book gives an account of important achievements in quantum optics. My primary goal was to contribute to a physical understanding of the observed phenomena that often defy the intuition we acquired from our experience with classical physics. So, unlike conventional textbooks, the book contains much more explaining text than formulas. (Elements of the mathematical description can be found in the Appendix.) The translation gave me a welcome opportunity to update the book. In particular, chapters on the Franson experiment and on quantum teleportation have been included.
After reviewing the main characteristics of the classical description of light, let us discuss those aspects of the quantization of the electromagnetic field that are relevant to the analysis of the phenomena we are interested in. This is a reasonable place to make clear the fundamental difference between the classical and the quantum mechanical descriptions of nature; we will come across this difference many times when discussing experiments, and it will often give us a headache. We have to deal with the physical meaning of what is called uncertainty.
The starting point of the classical description is the conviction that natural processes have a "factual" character. This means that physical variables such as the position or momentum of a particle have, in each single case, a well-defined (in general, time-dependent) value. However, it will not always be possible to measure all the appropriate variables (for instance, the instantaneous electric field strength of a radiation field); furthermore, under normal circumstances we are able to measure only with finite precision. Hence the basic credo of classical physics should be given in the following form: we are justified in imagining a world with variables possessing well-defined values which are not known precisely (or not known at all). In doing this we are not forming any conclusions that contradict our everyday experiences.
This is the fundamental concept of classical statistics: we are satisfied with the use of probability distributions for the variables we are interested in, not for fundamental but for purely practical reasons.
Most probably all people, even though they belong to different cultures, would agree on the extraordinary role that light – the gift of the Sun-god – plays in nature and in their own existence. Optical impressions mediated by light enable us to form our views of the surrounding world and to adapt to it. The warming power of the sun's rays is a phenomenon experienced in ancient times and still appreciated today. We now know that the sun's radiation is the energy source for the life cycles on Earth. Indeed, it is photosynthesis in plants, a complicated chemical reaction mediated by chlorophyll, that forms the basis for organic life. In photosynthesis carbon dioxide and water are transformed into carbohydrates and oxygen with the help of light. Our main energy resources, coal, oil and gas, are basically nothing other than stored solar energy.
Finally, we should not forget how strongly seeing things influences our concepts of and the ways in which we pursue science. We can only speculate whether the current state of science could have been achieved without sight, without our ability to comprehend complicated equations, or to recognize structures at one glance and illustrate them graphically, and record them in written form.
The most amazing properties, some of which are completely alien to our common experiences with solid bodies, can be ascribed to light: it is weightless; it is able to traverse enormous distances of space with incredible speed (Descartes thought that light spreads out instantaneously); without being visible itself, it creates, in our minds, via our eyes, a world of colors and forms, thus “reflecting” the outside world.
While the geometers derive their theorems from secure and unchallengeable principles, here the principles prove true through the deductions one draws from them.
Christian Huygens (Traité de la Lumière)
Christian Huygens (1629–1695) is rightfully considered to be the founder of the wave theory of light. The fundamental principle enabling us to understand the propagation of light bears his name. It has found its way into textbooks together with the descriptions of reflection and refraction which are based on it.
However, when we make the effort to read Huygens' Treatise of Light (Huygens, 1690) we find to our surprise that his wave concept differs considerably from ours. When we speak of a wave we mean a motion periodic in space and time: at each position the displacement (think of a water wave, for instance) undergoes a harmonic oscillation with a certain frequency ν, and an instantaneous picture of the whole wave shows a continuous sequence of hills and valleys. However, this periodicity property, which seems to us to be the characteristic of a wave, is completely absent in Huygens' wave concept. His waves have neither a frequency nor a wavelength! Huygens' concept of wave generation is that of a (point-like) source which is, at the same time, the wave center inducing, through "collisions" that "do not succeed one another at regular intervals," a "tremor" of the ether particles.
Interference phenomena are certainly among the most exciting phenomena in the whole of physics. In the following we will concentrate mainly on interference of weak fields; i.e. the beams contain, on average, only a few photons.
The principle of classical interference is as follows: a light beam is split by an optical element, for example by a semitransparent mirror or a screen with several very small apertures, into two or more partial beams. These beams will take different paths and are then reunited and form interference patterns. The first step, the splitting of the beam into partial beams, plays a decisive role; light beams coming from different sources (or from different spatial areas of the same source) do not interfere with each other!
We start our discussion of interference with an analysis of the action of a beamsplitter. To form a realistic idea of this device, let us imagine a semitransparent mirror. (Our considerations apply equally well to a screen with two apertures. We could also generalize to cases of unbalanced mirrors, with reflectivity different from 1/2, or screens with apertures of different size.)
The classical wave picture can describe interference phenomena without any great effort: the incoming beam is split into the reflected and the transmitted partial wave, and each of these waves contains half of the energy. The process of splitting becomes conceptually difficult only when we think of the beam as consisting of spatially localized energy packets, or photons.
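The classical splitting described above can be sketched numerically. The following toy calculation uses one common phase convention for a lossless beamsplitter (the reflected amplitude picks up a factor i, i.e. a 90° phase shift); the input amplitude is an arbitrary illustrative value.

```python
import math

def beamsplitter(e1, e2, r=1 / math.sqrt(2)):
    """Lossless beamsplitter acting on two input amplitudes.

    Convention assumed here: reflection carries a 90-degree phase
    (factor 1j), which makes the transformation unitary (energy
    conserving) for any reflectivity r.
    """
    t = math.sqrt(1 - r * r)
    out1 = 1j * r * e1 + t * e2   # reflected e1 plus transmitted e2
    out2 = t * e1 + 1j * r * e2   # transmitted e1 plus reflected e2
    return out1, out2

e_in = 2.0 + 0j                    # one incoming beam; second port empty
out1, out2 = beamsplitter(e_in, 0j)

# Each partial wave carries half of the input energy |E|^2 = 4:
print(abs(out1) ** 2, abs(out2) ** 2)  # 2.0 and 2.0
```

For unbalanced mirrors one simply passes a different `r`; the unitarity of the transformation guarantees that the two output energies still sum to the input energy.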
Throughout his life, Albert Einstein was never reconciled to quantum theory being an essentially indeterministic description of natural processes, even though he himself contributed fundamental ideas to its development. “God does not play dice” was his inner conviction. In his opinion, quantum theory was only makeshift. His doubts about the completeness of the quantum mechanical description were expressed concisely in a paper published jointly with Podolsky and Rosen (Einstein, Podolsky and Rosen, 1935). This paper analyzes a sophisticated Gedanken experiment, now famous as the Einstein–Podolsky–Rosen paradox, which has excited theoreticians ever since.
The Gedanken experiment was recently realized in a laboratory. The analyzed objects are photon pairs – and this is what has motivated us to dedicate a chapter to this problem which has bearing upon the foundations of quantum mechanics. The photon pairs are formed by two photons generated in sequence (in a so-called cascade transition, as shown in Fig. 11.1). Due to the validity of the angular momentum conservation law (discussed in Section 6.9) for the elementary emission process, the two photons exhibit specifically quantum mechanical correlations, which are incompatible with the classical reality concept, as will be discussed in detail below.
How do the correlations appear in detail? Let us assume the initial state of the atom to be a state with angular momentum (spin) J = 0, the intermediate state to have angular momentum J = 1, and the final state to have again J = 0.
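For such a J = 0 → J = 1 → J = 0 cascade, the polarization state of the photon pair (for photons emitted in opposite directions along a common axis) is the entangled superposition, written here in a linear polarization basis |x⟩, |y⟩:

```latex
|\psi\rangle \;=\; \frac{1}{\sqrt{2}}\,
  \bigl(\, |x\rangle_1 |x\rangle_2 \;+\; |y\rangle_1 |y\rangle_2 \,\bigr)
```

This state is invariant under a common rotation of both polarization bases, which is why two observers who orient their analyzers identically obtain perfectly correlated results whatever common orientation they choose.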
How can we construct a picture of the photon from the wealth of observation material available to us? The photon appears to have a split personality: it is neither a wave nor a particle but something else which, depending on the experimental situation, exhibits a wave- or a particle-like behavior. In other words, in the photon (as in material particles such as the electron) the particle–wave dualism becomes manifest. Whereas classically the wave and the particle pictures are separate, quantum mechanics accomplishes a formal synthesis through a unified mathematical treatment.
Let us look first at the wave aspect familiar from classical electrodynamics, which seems to be the most natural description. It makes all the different interference phenomena understandable, such as the "interference of the photon with itself" on the one hand and the appearance of spatial and temporal intensity correlations in a thermal radiation field on the other (which are obviously brought about by superposition of elementary waves emitted independently from different atoms). It might come as a surprise (at least to those with quantum mechanical preconceptions) that the classical theory remains valid down to arbitrarily small intensities: the visibility of the interference pattern does not deteriorate even at very low intensities – the zero-point fluctuations of the electromagnetic field advocated by quantum mechanics have no disturbing effect. This holds not only for conventional interference experiments but also for interference between independently generated light beams (in the form of laser light).
The essence of the Einstein–Podolsky–Rosen experiment analyzed in the preceding chapter is our ability to provide two observers with unpolarized light beams, consisting of sequences of photons, which are coupled in a miraculous way. When both observers choose the same measurement apparatus – a polarizing prism with two detectors in the two output ports, whereby the orientation of the prism is set arbitrarily but identically for both observers – their measurement results are identical. The measurement result, characterized, say, by “0” and “1”, is a genuine random sequence – the quantum mechanical randomness rules unrestricted – from which we can form a sequence of random numbers using the binary number system. The experimental setup thus allows us to deliver simultaneously to the two observers an identical series of random numbers. This would be, by itself, not very exciting. Mathematical algorithms can be used to generate random numbers, for example the digit sequence of the number π, which can be calculated up to an arbitrary length. Even though we cannot be completely sure that such a sequence is absolutely random, such procedures are sufficient for all practical purposes. The essential point of the Einstein–Podolsky–Rosen experiment is that “eavesdroppers” cannot listen to the communication without being noticed by the observers. When eavesdroppers perform an observation on the photons sent, they inevitably destroy the subtle quantum mechanical correlations, and this damage is irreparable.
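The logic of the last point can be illustrated with a deliberately simplified simulation. This is a toy model, not a quantum calculation: it assumes perfect same-basis correlations for undisturbed pairs, and models an intercept-resend eavesdropper who measures in a basis rotated by 45°, which (by Malus's law, cos² 45° = 1/2) randomizes the second observer's outcome relative to the first's.

```python
import random

rng = random.Random(7)

def measure_pair(eavesdrop=False):
    """One entangled pair, both observers measuring in the same basis.

    Without eavesdropping the outcomes agree perfectly.  An
    intercept-resend attack in a 45-degree-rotated basis leaves the
    second outcome uncorrelated with the first.
    """
    bit = rng.randint(0, 1)          # the shared random outcome
    if not eavesdrop:
        return bit, bit
    # Eavesdropper measures at 45 deg and resends: the receiving
    # observer's result is then random relative to the sender's.
    return bit, rng.randint(0, 1)

pairs = [measure_pair() for _ in range(1000)]
errors_clean = sum(a != b for a, b in pairs)
print("no eavesdropper, disagreements:", errors_clean)      # 0

pairs = [measure_pair(eavesdrop=True) for _ in range(1000)]
errors_tapped = sum(a != b for a, b in pairs)
print("with eavesdropper, disagreements:", errors_tapped)   # roughly 500
```

By publicly comparing a small subset of their bits, the two observers can detect this disagreement rate and hence the presence of the eavesdropper, discarding the compromised key.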
The conclusion by Maxwell, based on theoretical considerations, that light is, by its character, an electromagnetic process, is surely a milestone in the history of optics. By formulating the equations bearing his name, Maxwell laid the foundations for the apparently precise description of all optical phenomena. The classical picture of light is characterized by the concept of the electromagnetic field. At each point of space, characterized by a vector r, and for each time instant t, we have to imagine vectors describing both the electric and the magnetic field. The time evolution of the field distribution is described by coupled linear partial differential equations: the Maxwell equations.
The electric field strength has a direct physical meaning: if an electrically charged body is placed into the field, it will experience a force given by the product of its charge Q and the electric field strength E. (To eliminate a possible distortion of the measured value by the field generated by the probe body itself, its charge should be chosen to be sufficiently small.) Analogously, the magnetic field strength H, more precisely the magnetic induction B = μH (where μ is the permeability), describes the mechanical force acting on a magnetic pole (which is thought of as isolated). Also, the field has an energy content, or, more correctly (because in a precise field theory we can think only about energy being distributed continuously in space), a spatial energy density.
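A one-line numerical illustration of the force law F = QE; the field strength chosen here is an arbitrary example value, and the probe charge is taken as small as possible — a single electron — in the spirit of the remark above about not distorting the field.

```python
# Force on a test charge in a uniform electric field: F = Q * E.
Q = 1.602e-19   # magnitude of the electron charge, in coulombs
E = 1.0e4       # field strength in V/m (an assumed example value)
F = Q * E
print(f"F = {F:.3e} N")  # 1.602e-15 N
```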
Whereas receiving radio waves is a macroscopic process and hence belongs to the area of classical electrodynamics – in a macroscopic antenna an electric voltage is induced whereby a large number of electrons follow the electric field strength of the incident wave, in a kind of collective motion – the detection of light, so far as the elementary process is concerned, takes place in microscopic objects such as atoms and molecules. As a consequence, the response of an optical detector is determined by the microstructure of matter. In particular, it is impossible – due to the enormously high frequency of light (in the region of 10^15 Hz) – to measure the electric field strength. What is in fact detectable is the energy transfer from the radiation field to the atomic receiver, and this allows us to draw conclusions about the (instantaneous) intensity of light.
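The scale mismatch can be put in numbers. The sketch below assumes green light and a response time of 10 ps for a very fast photodetector (an illustrative figure): even such a detector averages over thousands of optical cycles, so only the cycle-averaged intensity, not the field itself, is accessible.

```python
c = 2.998e8              # speed of light, m/s
wavelength = 550e-9      # green light
freq = c / wavelength    # optical frequency, about 5.5e14 Hz
period = 1.0 / freq      # one optical cycle, about 1.8e-15 s

detector_response = 1e-11   # assumed: a very fast photodiode, ~10 ps
cycles = detector_response / period
print(f"optical frequency: {freq:.2e} Hz")
print(f"optical cycles per detector response time: {cycles:.0f}")
```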
We might ask what we can say about the above-mentioned absorption process from an experimentalist's point of view. Among the basic experiences that provide an insight into the structure of the micro-cosmos is the resonance character of the interaction between light and an atomic system. The atomic system, when hit by light, behaves like a resonator with certain resonance frequencies; i.e. it becomes excited (takes up energy) only when the light frequency coincides with a value that is characteristic for the particular atom. Hence, an incident light wave with an initial broadband frequency spectrum that has passed through a gas exhibits in its spectrum dark zones, the so-called absorption lines.
In the preface I think it is better if I abandon the formality of the text and address you, the reader, directly.
As I hope you will have gathered from the title, this is a book that attempts to lay out the basis for the design of analog optical links. Let me give an example that should drive home this point. It is customary in books on lasers to start with an extensive presentation based on the rate equations (do not worry at this point if you do not know what these are). In this book we also discuss lasers, but the rate equations are relegated to an appendix. Why? Because in over 15 years of link design, I have never used the rate equations to design a link! So why all the emphasis on the rate equations in other texts? Probably because they are targeted more to, or at least written by, device designers. The view in this book is that you are a user of devices who is interested in applying them to the design of a link. Of course, to use a device most effectively, or even to know which device to choose for a particular link design, requires some knowledge of the device beyond its terminal behavior. To continue the laser example, it is important to know not only what the laser frequency response is, but also how it changes with bias.
In this chapter we develop the small-signal relationships between the RF and optical parameters for the most common electro-optic devices used in intensity modulation, direct detection links. There are numerous device parameters we could use for this task; we concentrate here – as we will throughout this book – on those parameters that can be measured and selected by the link designer – as opposed to those parameters that can only be measured and controlled by the device designer.
To provide the basis for comparing these and future devices, we develop a figure of merit for optical modulators and detectors: the RF-to-optical incremental modulation efficiency for modulation devices and its converse the optical-to-RF incremental detection efficiency for photodetection devices. These efficiencies are useful in link design because they provide a single parameter for evaluating device performance in a link that represents the combined effects of a device's optical and electrical parameters. Further, by using the same parameter for both direct and external modulation devices, we begin the process – which will carry on through much of the book – of using a single set of tools for evaluating both types of links.
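The way such efficiencies combine can be sketched numerically. The definitions used below are assumptions for illustration only, not the book's formulas: modulation efficiency is taken as (slope efficiency)² divided by the RF source resistance, detection efficiency as (responsivity)² times the RF load resistance, and the device parameter values are typical round numbers, not measured data.

```python
import math

def modulation_efficiency(slope_eff_w_per_a, r_source_ohm):
    """RF-to-optical incremental modulation efficiency (assumed form)."""
    return slope_eff_w_per_a ** 2 / r_source_ohm

def detection_efficiency(responsivity_a_per_w, r_load_ohm):
    """Optical-to-RF incremental detection efficiency (assumed form)."""
    return responsivity_a_per_w ** 2 * r_load_ohm

# Assumed example values for a direct modulation link:
s_l = 0.3    # laser slope efficiency, W/A
r_d = 0.8    # photodiode responsivity, A/W
mod = modulation_efficiency(s_l, 50.0)
det = detection_efficiency(r_d, 50.0)

# With these forms the product is the link's intrinsic RF power gain.
gain = mod * det
print(f"intrinsic gain: {10 * math.log10(gain):.1f} dB")
```

The point of the single figure of merit survives the hedging: one number per device, and the link-level result is just their product, regardless of whether the modulation device is a laser or an external modulator.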
The most common electro-optic devices in use for links today are the in-plane diode laser (both Fabry–Perot and DFB) for direct modulation, the Mach–Zehnder modulator for external modulation, and the photodiode for photodetection. Thus, on a first reading, one may want to focus on these devices.
The device slope efficiencies that we developed in Chapter 2, and that were cascaded to form links in Chapter 3, explicitly ignored any frequency dependence. In this chapter we remove that restriction. As we shall see, virtually all modulation and photodetection devices have an inherently broad bandwidth. Digital links require broad bandwidth, which is one of the reasons for the numerous applications of fiber optic links to digital systems. A few analog link applications also require the full device bandwidth. However, it is far more common for analog links to need only a portion of the devices' inherent bandwidth. Consequently most analog link designs include some form of RF pre- or post-filtering to reduce the bandwidth.
For completeness we address bandpass and broad bandwidth impedance matching for three electro-optic devices: PIN photodiode, diode laser and Mach–Zehnder modulator. We then combine the bandpass impedance matched cases to form both direct and external modulation links. However, the same analytical approach is used for both impedance matching methods and both modulation techniques. Therefore those readers desiring a less exhaustive treatment can obtain a complete introduction to the subject by studying only one of the impedance matching methods and one of the modulation techniques.
One may be tempted to ask: why bother with bandwidth reduction, since this adds components and complicates the design? There are at least two key reasons for implementing bandwidth reduction.