The interaction of light with nanoscale structures is at the core of nano-optics. As the structures become smaller and smaller, the laws of quantum mechanics become apparent. In this limit, the discrete nature of atomic states gives rise to resonant light-matter interactions. In atoms, molecules, and nanoparticles, such as semiconductor nanocrystals and other “quantum confined” systems, these resonances occur when the photon energy matches the energy difference of discrete internal (electronic) energy levels. Owing to the resonant character, light-matter interaction can often be approximated by treating these quantum emitters as effective two-level systems, i.e. by considering only those two (electronic) levels whose difference in energy is close to the interacting photon energy ħω0.
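As a simple illustration of this resonance condition (a worked numerical example added here, not part of the original text), a level spacing of 2.0 eV corresponds, via hc ≈ 1240 eV nm, to a resonant photon wavelength of
\[
\lambda_0 = \frac{2\pi c}{\omega_0} = \frac{hc}{E_2 - E_1} \approx \frac{1240\ \text{eV nm}}{2.0\ \text{eV}} \approx 620\ \text{nm},
\]
i.e. such an emitter absorbs and emits in the visible part of the spectrum.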
In this chapter we discuss quantum emitters that are used in optical experiments. We will discuss their use as single-photon sources and analyze their photon statistics. While the radiative properties of quantum emitters have been discussed in Chapter 8, this chapter focuses on the properties of the quantum emitters themselves. We adopt a rather practical perspective since more detailed accounts can be found elsewhere (see e.g. [1–4]).
Types of quantum emitters
The possibility of detecting single quantum emitters optically relies mostly on the fact that the red-shifted emission can be discriminated very efficiently from the excitation light [5, 6]. This paves the way for experiments in which the properties of these emitters are studied or in which they are used as discrete light sources.
At the heart of nano-optics are light-matter interactions on the nanometer scale. For example, optically excited single molecules are used to probe local environments and metal nanostructures are exploited for extreme light localization and enhanced sensing. Furthermore, various nanoscale structures are used in near-field optics as local light sources.
The scope of this chapter is to discuss the interactions of light with nanoscale systems. The light-matter interaction depends on many parameters, such as the atomic composition of the materials, their geometry and size, and the frequency and intensity of the radiation field. Nevertheless, there are many issues that can be discussed from a more or less general point of view.
To rigorously understand light-matter interactions we need to invoke quantum electrodynamics (QED). There are many textbooks that provide a good understanding of optical interactions with atoms or molecules, and we especially recommend the books in Refs. [1–3]. Since nanometer-scale structures are often too complex to be solved rigorously by QED, one often needs to stick to classical theory and invoke the results of QED in a phenomenological way.
The multipole expansion
In this section we consider an arbitrary material system that is small compared with the wavelength of light. We call this material system a particle. Although it is small compared with the wavelength, this particle consists of many atoms or molecules. On a macroscopic scale the charge density ρ and current density j can be treated as continuous functions of position.
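For reference, the lowest-order moments of such a charge distribution, which are the ingredients of the multipole expansion, take the standard form (a brief reminder added here, using conventional definitions rather than the notation of this section):
\[
q = \int_V \rho(\mathbf{r})\, d^3r, \qquad
\mathbf{p} = \int_V \mathbf{r}\,\rho(\mathbf{r})\, d^3r, \qquad
Q_{ij} = \int_V \bigl(3 r_i r_j - r^2 \delta_{ij}\bigr)\,\rho(\mathbf{r})\, d^3r,
\]
these being the total charge, the electric dipole moment, and the electric quadrupole moment of the particle. For a particle much smaller than the wavelength the dipole term usually dominates the optical response.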
An optical antenna is a mesoscopic structure that enhances the local light-matter interaction. Similarly to their radiowave analogs, optical antennas mediate the information and energy transfer between the free radiation field and a localized receiver or transmitter. The degree of localization and the magnitude of transduced energy indicate how good an antenna is. We thus define an optical antenna as a device designed to efficiently convert free-propagating optical radiation to localized energy, and vice versa [1]. In this sense, even a standard lens is an antenna, but since the degree of localization is limited by diffraction, the lens is a poor antenna. To characterize the quality and the properties of an antenna, radio engineers have introduced antenna parameters, such as gain and directivity. Optical antennas hold promise for controllably enhancing the performance and efficiency of optoelectronic devices, such as photodetectors, light emitters, and sensors.
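For orientation, the gain and directivity mentioned above can be defined as in standard antenna theory (a common convention, stated here for context rather than quoted from the text):
\[
D(\theta,\phi) = \frac{4\pi\, p(\theta,\phi)}{P_{\mathrm{rad}}}, \qquad
G(\theta,\phi) = \varepsilon_{\mathrm{rad}}\, D(\theta,\phi),
\]
where p(θ, φ) is the angular power density radiated into the direction (θ, φ), P_rad is the total radiated power, and ε_rad is the radiation efficiency, i.e. the fraction of the power delivered to the antenna that is actually radiated rather than dissipated.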
Although many of the properties and parameters of optical antennas are similar to those of their radiowave and microwave counterparts, there are important differences resulting from their small size and the plasmon resonances of metal nanostructures. In this chapter we introduce the basic principles of optical antennas, building on the background of both radiowave antenna engineering and plasmonics.
Significance of optical antennas
The length scale of free radiation is determined by the wavelength λ, which for visible light is on the order of 500 nm. However, the characteristic size of the elementary source generating this radiation, such as an atom or molecule, is significantly smaller, typically below a nanometer.
Light embraces the most fascinating spectrum of electromagnetic radiation. This is mainly due to the fact that the energy of light quanta (photons) lies within the energy range of electronic transitions in matter. This gives us the beauty of color and is the reason why our eyes adapted to sense the optical spectrum.
Light is also fascinating because it manifests itself in the forms of waves and particles. In no other range of the electromagnetic spectrum are we more confronted with the wave-particle duality than in the optical regime. While long wavelength radiation (radiofrequencies, microwaves) is well described by wave theory, short wavelength radiation (X-rays) exhibits mostly particle properties. The two worlds meet in the optical regime.
To describe optical radiation in nano-optics it is mostly sufficient to adopt the wave picture. This allows us to use classical field theory based on Maxwell's equations. Of course, in nano-optics the systems with which the light fields interact are small (single molecules, quantum dots), which necessitates a quantum description of the material properties. Thus, in most cases we can use the framework of semiclassical theory, which combines the classical picture of fields and the quantum picture of matter. However, occasionally, we have to go beyond the semiclassical description. For example, the photons emitted by a quantum system can obey non-classical photon statistics in the form of photon antibunching (no two photons arriving simultaneously).
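Photon antibunching can be made quantitative through the normalized second-order correlation function (a standard definition, added here for clarity and not part of the original passage):
\[
g^{(2)}(\tau) = \frac{\langle \hat{a}^{\dagger}(t)\,\hat{a}^{\dagger}(t+\tau)\,\hat{a}(t+\tau)\,\hat{a}(t)\rangle}{\langle \hat{a}^{\dagger}(t)\,\hat{a}(t)\rangle^{2}};
\]
classical light obeys g^(2)(0) ≥ 1, whereas an ideal single quantum emitter yields g^(2)(0) = 0, the hallmark of antibunched, non-classical light.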
A key problem in nano-optics is the determination of electromagnetic field distributions near nanoscale structures and the associated radiation properties. A solid theoretical understanding of field distributions holds promise for new, optimized designs of near-field optical devices, in particular by exploitation of field-enhancement effects and favorable detection schemes. Calculations of field distributions are also necessary for image-reconstruction purposes. Fields near nanoscale structures often have to be reconstructed from experimentally accessible far-field data. However, most commonly the inverse scattering problem cannot be solved in a unique way, and calculations of field distributions are needed in order to provide prior knowledge about source and scattering objects and to restrict the set of possible solutions.
Analytical solutions of Maxwell's equations provide a good theoretical understanding, but can be obtained for simple problems only. Other problems have to be strongly simplified. A purely numerical analysis allows us to handle complex problems by discretization of space and time, but computational requirements (CPU time and memory) limit the size of the problem, and the accuracy of the results is often unknown. The advantage of purely numerical methods, such as the finite-difference time-domain (FDTD) method and the finite-element (FE) method, is their ease of implementation. We do not review these purely numerical techniques since they are well documented in the literature. Instead we review two commonly used semi-analytical methods in nano-optics: the multiple-multipole method (MMP) and the volume-integral method.
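To illustrate what discretization of space and time means in practice, here is a minimal one-dimensional FDTD sketch (Python, written for this summary and not taken from the text; grid size, time step, and source parameters are arbitrary example values):

```python
import numpy as np

# 1D FDTD on a Yee grid in vacuum, in normalized units (c = 1, dz = 1, dt = 0.5)
nz, nt = 200, 400          # number of grid cells and time steps
dt = 0.5                   # Courant number S = c*dt/dz = 0.5, stable for S <= 1
ez = np.zeros(nz)          # electric field at integer grid points
hy = np.zeros(nz - 1)      # magnetic field at half-integer grid points

for n in range(nt):
    # leapfrog update: H advances from the spatial difference of E ...
    hy += dt * (ez[1:] - ez[:-1])
    # ... then E advances from the spatial difference of H (boundaries stay zero)
    ez[1:-1] += dt * (hy[1:] - hy[:-1])
    # soft Gaussian source injected at the center of the grid
    ez[nz // 2] += np.exp(-((n - 30) / 10.0) ** 2)

print("peak |Ez| after propagation:", np.abs(ez).max())
```

The leapfrog updates are the discretized curl equations; refining the grid and the time step improves accuracy at the cost of the computational requirements mentioned above.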
The interaction of metals with electromagnetic radiation is largely dictated by their free conduction electrons. According to the Drude model, the free electrons oscillate 180° out of phase relative to the driving electric field. As a consequence, most metals possess a negative dielectric constant at optical frequencies, which causes, for example, a very high reflectivity. Furthermore, at optical frequencies the metal's free-electron gas can sustain surface and volume charge-density oscillations, called plasmons, with distinct resonance frequencies. The existence of plasmons is characteristic of the interaction of metal nanostructures with light at optical frequencies. Similar behavior cannot be simply reproduced in other spectral ranges using the scale invariance of Maxwell's equations since the material parameters change considerably with frequency. Specifically, this means that model experiments with, for instance, microwaves and correspondingly larger metal structures cannot replace experiments with metal nanostructures at optical frequencies.
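A small numerical sketch (Python, added here for illustration; the plasma frequency and damping rate below are rough, gold-like order-of-magnitude values assumed for the example, not authoritative material data) shows how the Drude model yields a negative real permittivity at optical frequencies:

```python
import numpy as np

def drude_epsilon(omega, omega_p, gamma):
    """Drude dielectric function: eps(w) = 1 - wp^2 / (w^2 + i*gamma*w)."""
    return 1.0 - omega_p**2 / (omega**2 + 1j * gamma * omega)

# rough, illustrative parameters (rad/s) for a gold-like metal (assumed values)
omega_p = 1.37e16   # plasma frequency
gamma   = 1.0e14    # damping (collision) rate

# angular frequency of visible light at a 600 nm vacuum wavelength
omega = 2 * np.pi * 3e8 / 600e-9

eps = drude_epsilon(omega, omega_p, gamma)
print(f"eps(600 nm) ~ {eps.real:.1f} + {eps.imag:.2f}i  (negative real part -> high reflectivity)")
```

The negative real part of ε below the plasma frequency is what excludes the field from the bulk of the metal and makes plasmon resonances possible.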
The surface charge-density oscillations associated with surface plasmons at the interface between a metal and a dielectric can give rise to strongly enhanced optical near-fields, which are spatially confined near the metal surface. Similarly, if the electron gas is confined in three dimensions, as in the case of a small particle, the overall displacement of the electrons with respect to the positively charged lattice leads to a restoring force, which in turn gives rise to specific particle-plasmon resonances depending on the geometry of the particle. In particles of suitable (usually pointed) shape, localized charge accumulations that are accompanied by strongly enhanced optical fields can occur.
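For a small sphere, this particle-plasmon resonance can be made explicit in the quasi-static limit (a standard textbook result quoted here for context, not reproduced from this chapter). For a sphere of radius a in vacuum the polarizability reads
\[
\alpha(\omega) = 4\pi\varepsilon_0 a^{3}\,\frac{\varepsilon(\omega) - 1}{\varepsilon(\omega) + 2},
\]
so the resonance (Fröhlich) condition is Re ε(ω) ≈ −2, or more generally Re ε(ω) = −2ε_m for a particle embedded in a medium of permittivity ε_m. Near this condition the denominator becomes small, which is responsible for the strongly enhanced local fields mentioned above.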
In the history of science, the first applications of optical microscopes and telescopes to investigate nature mark the beginnings of new eras. Galileo Galilei used a telescope to see for the first time craters and mountains on a celestial body, the Moon, and also discovered the four largest satellites of Jupiter. With this he opened the field of optical astronomy. Robert Hooke and Antony van Leeuwenhoek used early optical microscopes to observe certain features of plant tissue that were called “cells,” and to observe microscopic organisms, such as bacteria and protozoans, thus marking the beginning of optical biology. The newly developed instrumentation enabled the observation of fascinating phenomena not directly accessible to human senses. Naturally, the question was raised of whether structures not detectable within the range of normal vision should be accepted as reality at all. Today, we have accepted that, in modern physics, scientific proofs are verified by indirect measurements, and the underlying laws have often been established on the basis of indirect observations. It seems that as modern science progresses it withholds more and more findings from our natural senses. In this context, the use of optical instrumentation excels among ways to study nature. This is due to the fact that, because of our ability to perceive electromagnetic waves at optical frequencies, our brain is used to the interpretation of phenomena associated with light, even if the structures that are observed are magnified a thousandfold. This intuitive understanding is among the most important features that make light and optical processes so attractive as a means to reveal physical laws and relationships.
As early as 1619 Johannes Kepler suggested that the mechanical effect of light might be responsible for the deflection of the tails of comets entering our Solar System. The classical Maxwell theory showed in 1873 that the radiation field carries with it momentum and that “light pressure” is exerted on illuminated objects. In 1905 Einstein introduced the concept of the photon and showed that energy transfer between light and matter occurs in discrete quanta. Momentum and energy conservation was found to be of great importance in microscopic events. Discrete momentum transfer between photons (X-rays) and other particles (electrons) was experimentally demonstrated by Compton in 1925 and the recoil momentum transferred from photons to atoms was observed by Frisch in 1933 [1]. Important studies on the action of photons on neutral atoms were carried out in the 1970s by Letokhov and other researchers in the USSR and by Ashkin's group at the Bell Laboratories in the USA. The latter group proposed bending and focusing of atomic beams and trapping of atoms in focused laser beams. Later work by Ashkin and coworkers led to the development of “optical tweezers.” These devices allow optical trapping and manipulation of macroscopic particles and living cells with typical sizes in the range of 0.1–10μm [2, 3]. Milliwatts of laser power produce piconewtons of force. Owing to the high field gradients of evanescent waves, stronger forces are to be expected in optical near-fields.
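The quoted orders of magnitude are easy to check with a back-of-the-envelope estimate (added here, not part of the original text): for a beam of power P that is completely absorbed, the radiation-pressure force is
\[
F = \frac{P}{c} = \frac{10^{-3}\ \mathrm{W}}{3\times 10^{8}\ \mathrm{m\,s^{-1}}} \approx 3\ \mathrm{pN},
\]
consistent with the statement that milliwatts of laser power produce piconewtons of force; the gradient forces exploited in optical tweezers are of a similar order.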
Localization refers to the precision with which the position of an object can be defined. Spatial resolution, on the other hand, is a measure of the ability to distinguish two separated point-like objects from a single object. The diffraction limit implies that optical resolution is ultimately limited by the wavelength of light. Before the advent of near-field optics it was believed that the diffraction limit imposes a hard boundary and that physical laws strictly prohibit resolution significantly better than λ/2. It was then found that this limit is not as strict as assumed and that access to evanescent modes of the spatial spectrum offers a direct route to overcome the diffraction limit. However, further critical analysis of the diffraction limit revealed that “super-resolution” can also be obtained by pure far-field imaging under certain constraints. In this chapter we analyze the diffraction limit and discuss the principles of different imaging modes with resolutions near or beyond the diffraction limit.
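For concreteness, the limit referred to here is usually quantified by the Abbe criterion (a standard relation, stated here for reference rather than quoted from the chapter):
\[
\Delta x \;\approx\; \frac{\lambda}{2\,\mathrm{NA}} \;=\; \frac{\lambda}{2\, n \sin\theta_{\max}},
\]
so that for visible light and a high-numerical-aperture objective (NA ≈ 1.4) conventional far-field imaging resolves features no smaller than roughly 200 nm, unless evanescent field components or the “super-resolution” strategies mentioned above are exploited.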
The point-spread function
The point-spread function is a measure of the resolving power of an optical system. The narrower the point-spread function the better the resolution will be. As the name implies, the point-spread function defines the spread of a point source. If we have a radiating point source then the image of that source will appear to have a finite size. This broadening is a direct consequence of spatial filtering. A point in space is characterized by a delta function that has an infinite spectrum of spatial frequencies kx and ky.
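As a concrete illustration (Python, written for this text; the wavelength and numerical aperture are arbitrary example values), the following sketch evaluates the paraxial point-spread function of an aberration-free circular aperture, the familiar Airy pattern, and locates its first dark ring:

```python
import numpy as np
from scipy.special import j1

# example imaging parameters (assumed values, not taken from the text)
wavelength = 500e-9      # vacuum wavelength (m)
NA = 0.9                 # numerical aperture of the objective

def airy_psf(rho):
    """Normalized paraxial PSF of a circular aperture: I(rho)/I(0) = [2 J1(v)/v]^2."""
    v = 2 * np.pi * NA * rho / wavelength      # normalized radial coordinate
    v = np.where(v == 0, 1e-12, v)             # avoid 0/0 on the optical axis
    return (2 * j1(v) / v) ** 2

rho = np.linspace(0.0, 2e-6, 4001)             # radial positions (m)
psf = airy_psf(rho)

# the first dark ring is the first local minimum of the PSF
idx = np.argmax(np.diff(psf) > 0)
print(f"first dark ring at ~{rho[idx]*1e9:.0f} nm; "
      f"theory 0.61*lambda/NA = {0.61*wavelength/NA*1e9:.0f} nm")
```

The radius of the first dark ring, about 0.61λ/NA, is the usual Rayleigh measure of how far the image of a point source is spread out.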
The problem of dipole radiation in or near planar layered media is of significance to many fields of study. It is encountered in antenna theory, single-molecule spectroscopy, cavity quantum electrodynamics, integrated optics, circuit design (microstrips), and surface-contamination control. The relevant theory was also applied to explain the strongly enhanced Raman effect of adsorbed molecules on noble metal surfaces, and in surface science and electrochemistry for the study of optical properties of molecular systems adsorbed on solid surfaces. Detailed literature on the latter topic is given in Ref. [1]. In the context of nano-optics, dipoles close to a planar interface have been considered by various authors to simulate tiny light sources and small scattering particles [2]. The acoustic analog is also applied to a number of problems such as seismic investigations and ultrasonic detection of defects in materials [3].
In his original paper [4], in 1909, Sommerfeld developed a theory for a radiating dipole oriented vertically above a planar and lossy ground. He found two different asymptotic solutions: space waves (spherical waves) and surface waves. The latter had already been investigated by Zenneck [5]. Sommerfeld concluded that surface waves account for long-distance radio-wave transmission because of their slower radial decay along the Earth's surface compared with that of space waves. Later, when space waves were found to reflect at the ionosphere, the contrary was confirmed. Nevertheless, Sommerfeld's theory formed the basis for all subsequent investigations.
In this appendix we state the asymptotic far-field Green functions for a planarly layered medium. It is assumed that the source point r0 = (x0, y0, z0) is in the upper half-space (z > 0). The field is evaluated at a point r = (x, y, z) in the far-zone, i.e. r ≫ λ. The optical properties of the upper half-space and the lower half-space are characterized by ε1, μ1 and εn, μn, respectively. The planarly layered medium in between the two half-spaces is characterized by the generalized Fresnel reflection and transmission coefficients. We choose a coordinate system with origin on the topmost surface of the layered medium with the z-axis perpendicular to the interfaces. In this case, z0 denotes the height of the point source relative to the topmost layer. In the upper half-space, the asymptotic dyadic Green function is defined as
where p is the dipole moment of a dipole located at r0 and G0 and Gref are the primary and reflected parts of the Green function. In the lower half-space we define
with Gtr being the transmitted part of the Green function. The asymptotic Green functions can be derived by using the far-field forms of the angular spectrum representation.
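For orientation, the convention assumed here (standard in treatments of dipole emission near layered media, though the normalization is not quoted from the appendix itself) relates the dipole field to these Green functions through
\[
\mathbf{E}(\mathbf{r}) = \omega^{2}\mu_{0}\mu_{1}\bigl[\mathbf{G}_{0}(\mathbf{r},\mathbf{r}_{0}) + \mathbf{G}_{\mathrm{ref}}(\mathbf{r},\mathbf{r}_{0})\bigr]\,\mathbf{p} \quad (z > 0), \qquad
\mathbf{E}(\mathbf{r}) = \omega^{2}\mu_{0}\mu_{1}\,\mathbf{G}_{\mathrm{tr}}(\mathbf{r},\mathbf{r}_{0})\,\mathbf{p} \quad (z < 0),
\]
with G0 carrying the direct (free-space) contribution and Gref and Gtr carrying, in addition, the generalized Fresnel reflection and transmission coefficients of the layered structure.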
The primary Green function in the far-zone is found to be
The reflected part of the Green function in the far-zone is
where the potentials are determined in terms of the generalized reflection coefficients of the layered structure as
The transmitted part of the Green function in the far-zone is
where δ denotes the overall thickness of the layered structure.
This book provides an elementary introduction to the subject of quantum optics, the study of the quantum mechanical nature of light and its interaction with matter. The presentation is almost entirely concerned with the quantized electromagnetic field. Topics covered include single-mode field quantization in a cavity, quantization of multimode fields, quantum phase, coherent states, quasi-probability distributions in phase space, atom-field interactions, the Jaynes-Cummings model, quantum coherence theory, beam splitters and interferometers, dissipative interactions, nonclassical field states such as squeezed and 'Schrödinger cat' states, tests of local realism with entangled photons from down-conversion, experimental realizations of cavity quantum electrodynamics, trapped ions, decoherence, and some applications to quantum information processing, particularly quantum cryptography. The book contains many homework problems and an extensive bibliography. This text is designed for upper-level undergraduates taking courses in quantum optics who have already taken a course in quantum mechanics, and for first- and second-year graduate students.
Covering a number of important subjects in quantum optics, this textbook is an excellent introduction for advanced undergraduate and beginning graduate students, familiarizing readers with the basic concepts and formalism as well as the most recent advances. The first part of the textbook covers the semi-classical approach where matter is quantized, but light is not. It describes significant phenomena in quantum optics, including the principles of lasers. The second part is devoted to the full quantum description of light and its interaction with matter, covering topics such as spontaneous emission, and classical and non-classical states of light. An overview of photon entanglement and applications to quantum information is also given. In the third part, non-linear optics and laser cooling of atoms are presented, where using both approaches allows for a comprehensive description. Each chapter describes basic concepts in detail, and more specific concepts and phenomena are presented in 'complements'.
Holographic and speckle interferometry are optical techniques which use lasers to make non-contacting, whole-field measurements at a sensitivity on the order of the wavelength of light on optically rough (i.e. non-mirrored) surfaces. They may be used to measure static or dynamic displacements, the shape of objects, and refractive-index variations of transparent media. As such, these techniques have been applied to the solution of a wide range of problems in strain and vibration analysis, non-destructive testing (NDT), component inspection and design analysis, and fluid flow visualisation. This book provides a self-contained, unified, theoretical analysis of the basic principles and associated opto-electronic techniques (for example Electronic Speckle Pattern Interferometry). In addition, a detailed discussion of experimental design and practical application to the solution of physical problems is presented. In this new edition, the authors have taken the opportunity to include a much more coherent description of more than twenty individual case studies that are representative of the main uses to which the techniques are put. The Bibliography has also been brought up to date.