As early as 1619 Johannes Kepler suggested that the mechanical effect of light might be responsible for the deflection of the tails of comets entering our Solar System. The classical Maxwell theory showed in 1873 that the radiation field carries with it momentum and that “light pressure” is exerted on illuminated objects. In 1905 Einstein introduced the concept of the photon and showed that energy transfer between light and matter occurs in discrete quanta. Momentum and energy conservation was found to be of great importance in microscopic events. Discrete momentum transfer between photons (X-rays) and other particles (electrons) was experimentally demonstrated by Compton in 1925 and the recoil momentum transferred from photons to atoms was observed by Frisch in 1933. Important studies on the action of photons on neutral atoms were made in the 1970s by Letokhov and other researchers in the former USSR and in the group of Ashkin at the Bell Laboratories, USA. The latter group proposed bending and focusing of atomic beams and trapping of atoms in focused laser beams. Later work by Ashkin and coworkers led to the development of “optical tweezers”. These devices allow optical trapping and manipulation of macroscopic particles and living cells with typical sizes in the range of 0.1–10 micrometers. Milliwatts of laser power produce piconewtons of force. Due to the high field gradients of evanescent waves, strong forces are to be expected in optical near-fields.
In near-field optical microscopy, a local probe has to be brought into close proximity to the sample surface. Typically, the probe–sample distance is required to be smaller than the size of the lateral field confinement and thus smaller than the spatial resolution to be achieved. As in other types of scanning probe techniques, an active feedback loop is required to maintain a constant distance during the scanning process. However, the successful implementation of a feedback loop requires a sufficiently short-ranged interaction between optical probe and sample. The dependence of this interaction on probe–sample distance should be monotonic in order to ensure a unique distance assignment. A typical block diagram of a feedback loop applied to scanning probe microscopy is shown in Fig. 7.1. A piezoelectric element P(ω) is used to transform an electric signal into a displacement, whilst the interaction measurement I(ω) takes care of the reverse transformation. The controller G(ω) is used to optimize the speed of the feedback loop and to ensure stability according to well-established design rules. Most commonly, a so-called PI controller is used, which is a combination of a proportional gain (P) and an integrator stage (I).
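The action of such a loop is easy to sketch in a few lines of code. The following is a minimal discrete-time simulation of a PI-regulated plant; the gains, the set-point, and the first-order "piezo" response are illustrative assumptions, not values from any actual instrument.

```python
# Minimal sketch of a PI feedback loop regulating a probe-sample
# distance signal. All parameters (gains, set-point, plant time
# constant) are illustrative assumptions.

def simulate_pi(kp=0.5, ki=20.0, setpoint=1.0, steps=2000, dt=1e-3, tau=0.01):
    z = 0.0          # plant output (e.g. the measured interaction signal)
    integral = 0.0   # integrator state: the 'I' stage
    for _ in range(steps):
        error = setpoint - z               # deviation from the set-point
        integral += error * dt             # integrator accumulates the error
        u = kp * error + ki * integral     # PI control law: P stage + I stage
        z += dt * (u - z) / tau            # toy first-order piezo response
    return z

if __name__ == "__main__":
    print(f"steady-state output: {simulate_pi():.3f} (set-point 1.0)")
```

The integrator guarantees zero steady-state error, while the proportional gain sets the response speed; the trade-off between the two is what the well-established design rules mentioned above address.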
Using the (near-field) optical signal itself as a distance-dependent feedback signal seems an attractive solution at first glance. However, it turns out that: (1) In the presence of a sample of unknown and inhomogeneous composition, unpredictable variations in the near-field distribution give rise to a non-monotonic distance dependence.
Having discussed the propagation and focusing of optical fields, we now start to browse through the most important experimental and technical configurations employed in high-resolution optical microscopy. Various topics discussed in the previous chapters will be revisited from an experimental perspective. We shall describe both far-field and near-field techniques. Far-field microscopy, scanning confocal optical microscopy in particular, is discussed because the size of the focal spot routinely reaches the diffraction limit. Many of the experimental concepts that are used in confocal microscopy have naturally been transferred to near-field optical microscopy. In a near-field optical microscope a nanoscale optical probe is raster-scanned across a surface much as in AFM or STM. There is a variety of possible experimental realizations of scanning near-field optical microscopy, whereas in AFM and STM a (more or less) unique set-up exists. The main difference between AFM/STM and near-field optical microscopy is that in the latter an optical near-field has to be created at the sample or at the probe apex before any interaction can be measured. Depending on how the near-field is measured, one distinguishes between different configurations. These are summarized in Table 5.1.
Far-field illumination and detection
Confocal microscopy
Confocal microscopy employs far-field illumination and far-field detection and has been discussed previously in Section 4.3. Despite the limited bandwidth of spatial frequencies imposed by far-field illumination and detection, confocal microscopy is successfully employed for high-position-accuracy measurements as discussed in Section 4.5 and for high-resolution imaging by exploiting nonlinear or saturation effects as discussed in Section 4.2.3.
Position accuracy refers to the precision with which an object can be localized in space. Spatial resolution, on the other hand, is a measure of the ability to distinguish two separated point-like objects from a single object. The diffraction limit implies that optical resolution is ultimately limited by the wavelength of light. Before the advent of near-field optics it was believed that the diffraction limit imposes a hard boundary and that physical laws strictly prohibit resolution significantly better than λ/2. It was found that this limit is not as strict as assumed and that various tricks allow us to access the evanescent modes of the spatial spectrum. In this chapter we analyze the diffraction limit and discuss the principles of different imaging modes with resolutions near or beyond the diffraction limit.
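The distinction can be made concrete with a back-of-the-envelope calculation. The numbers below (wavelength, numerical aperture, photon count) are illustrative assumptions; the scaling of the localization precision with the square root of the detected photon number is the standard single-emitter result.

```python
# Back-of-the-envelope comparison of resolution vs. position accuracy.
# Numbers (wavelength, NA, photon count) are illustrative assumptions.
import math

wavelength = 500e-9   # green light, 500 nm
NA = 1.4              # high-NA oil-immersion objective (assumed)

# Abbe diffraction limit: smallest resolvable separation of two points
abbe = wavelength / (2 * NA)

# Localizing a single emitter improves with photon number N: the centroid
# of the spot can be found to roughly sigma / sqrt(N), where sigma is the
# width of the point-spread function.
sigma = abbe / 2.355          # approximate Gaussian sigma from the FWHM
N = 10_000                    # detected photons (assumed)
localization = sigma / math.sqrt(N)

print(f"diffraction limit : {abbe*1e9:6.1f} nm")
print(f"position accuracy : {localization*1e9:6.2f} nm with {N} photons")
```

With these assumed numbers the two point-like objects must be almost 180 nm apart to be resolved, yet a single emitter can be localized to better than a nanometer.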
The point-spread function
The point-spread function is a measure of the resolving power of an optical system. The narrower the point-spread function the better the resolution will be. As the name implies, the point-spread function defines the spread of a point source. If we have a radiating point source then the image of that source will appear to have a finite size. This broadening is a direct consequence of spatial filtering. A point in space is characterized by a delta function that has an infinite spectrum of spatial frequencies kx, ky. On propagation from the source to the image, high-frequency components are filtered out.
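This filtering picture translates directly into a few lines of numerical code. The sketch below propagates a discrete delta function through an ideal low-pass pupil that transmits only spatial frequencies up to k_max = 2πNA/λ; the grid size, pixel size, wavelength, and NA are illustrative assumptions.

```python
# Sketch: image formation as spatial low-pass filtering of a point source.
# A delta function contains all spatial frequencies; the imaging system
# transmits only |k| <= k_max = 2*pi*NA/lambda, which broadens the point
# into a finite spot (the point-spread function).
import numpy as np

n, pixel = 512, 20e-9            # 512x512 grid, 20 nm pixels (assumed)
wavelength, NA = 500e-9, 1.4     # illustrative assumptions

src = np.zeros((n, n))
src[n // 2, n // 2] = 1.0        # point source: a discrete delta function

kx = 2 * np.pi * np.fft.fftfreq(n, d=pixel)
KX, KY = np.meshgrid(kx, kx)
k_max = 2 * np.pi * NA / wavelength
pupil = (KX**2 + KY**2) <= k_max**2      # transmitted spatial frequencies

field = np.fft.ifft2(np.fft.fft2(src) * pupil)
psf = np.abs(field)**2                   # intensity point-spread function

fwhm_px = np.sum(psf[n // 2, :] >= psf.max() / 2)
print(f"PSF width (FWHM): ~{fwhm_px * pixel * 1e9:.0f} nm")
```

The resulting spot is the system's point-spread function: the higher the frequency cutoff (larger NA, shorter wavelength), the narrower the spot.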
In the history of science, the first applications of optical microscopes and telescopes to investigate nature mark the beginning of new eras. Galileo Galilei used a telescope to see for the first time craters and mountains on a celestial body, the Moon, and also discovered the four largest satellites of Jupiter. With this he opened the field of astronomy. Robert Hooke and Antony van Leeuwenhoek used early optical microscopes to observe certain features of plant tissue that were called “cells”, and to observe microscopic organisms, such as bacteria and protozoans, thus marking the beginning of biology. The newly developed instrumentation enabled the observation of fascinating phenomena not directly accessible to human senses. Naturally, the question was raised whether structures not detectable within the range of normal vision should be accepted as reality at all. Today we accept that, in modern physics, scientific proofs rest on indirect measurements and that the underlying laws have often been established on the basis of indirect observations. It seems that as modern science progresses it withholds more and more findings from our natural senses. In this context, the use of optical instrumentation excels among the ways to study nature: because we are able to perceive electromagnetic waves at optical frequencies, our brains are accustomed to interpreting phenomena associated with light, even if the observed structures are magnified a thousandfold.
It is believed that the basic physical building blocks forming the world we live in may be categorized into particles of matter and carriers of force between matter. All known elementary constituents of matter and transmitters of force are quantized. For example, energy, momentum, and angular momentum take on discrete quantized values. The electron is an example of an elementary particle of matter, and the photon is an example of a transmitter of force. Neutrons, protons, and atoms are composite particles made up of elementary particles of matter and transmitters of force. These composite particles are also quantized. Because classical mechanics is unable to explain quantization, we must learn quantum mechanics in order to understand the microscopic properties of atoms – which, for example, make up solids such as crystalline semiconductors.
Historically, the laws of quantum mechanics have been established by experiment. The most important early experiments involved light. Long before it was realized that light waves are quantized into particles called photons, key experiments on the wave properties of light were performed. For example, it was established that the color of visible light is associated with different wavelengths of light. Table 2.1 shows the range of wavelengths corresponding to different colors.
The connection between optical and electrical phenomena was established by Maxwell in 1864. This extended the concept of light to include the complete electromagnetic spectrum. A great deal of effort was, and continues to be, spent gathering information on the behavior of light.
The interaction of light with nanometer-sized structures is at the core of nano-optics. It is obvious that as the particles become smaller and smaller the laws of quantum mechanics will become apparent in their interaction with light. In this limit, continuous scattering and absorption of light will be supplemented or replaced by resonant interactions if the photon energy matches the energy difference between discrete internal (electronic) energy levels. In atoms, molecules and nanoparticles, such as semiconductor nanocrystals and other “quantum-confined” systems, these resonances are found at optical frequencies. Owing to the resonant character, the light–matter interaction can often be approximated by treating the quantum system as an effective two-level system, i.e. by considering only those two (electronic) levels whose difference in energy is close to the interacting photon energy ħω0.
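The two-level approximation lends itself to a compact numerical illustration. The sketch below evolves such a system under resonant driving in the rotating-wave approximation; the Rabi frequency and time step are arbitrary assumed values.

```python
# Minimal sketch of the effective two-level approximation: a resonant
# field drives Rabi oscillations between ground |g> and excited |e>.
# Rotating-wave approximation, hbar = 1; parameter values are assumed.
import numpy as np
from scipy.linalg import expm

Omega = 2 * np.pi          # Rabi frequency (assumed, units of 1/time)
delta = 0.0                # detuning; zero means resonant driving

# Two-level Hamiltonian in the {|g>, |e>} basis
H = 0.5 * np.array([[-delta, Omega],
                    [ Omega, delta]], dtype=complex)

dt = 1e-3
U = expm(-1j * H * dt)     # single-step time-evolution operator

psi = np.array([1, 0], dtype=complex)   # start in the ground state |g>
pops = []
for _ in range(2000):                   # evolve over two Rabi periods
    psi = U @ psi
    pops.append(abs(psi[1])**2)         # excited-state population

print(f"max excited population: {max(pops):.3f}")  # ~1.0 on resonance
```

On resonance the population oscillates fully between the two levels at the Rabi frequency; a finite detuning δ reduces the oscillation amplitude to Ω²/(Ω² + δ²).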
In this chapter we consider single quantum systems that are fixed in space, either by deposition on a surface or by embedding in a solid matrix. The material to be covered should familiarize the reader with single-photon emitters and with concepts developed in the field of quantum optics. While various theoretical aspects related to the fields emitted by a quantum system have been discussed in Chapter 8, the current chapter focuses more on the nature of the quantum system itself. We adopt a rather practical perspective since more rigorous accounts can be found elsewhere (see e.g. [1–4]).
Light embraces the most fascinating spectrum of electromagnetic radiation. This is mainly due to the fact that the energy of light quanta (photons) lies in the energy range of electronic transitions in matter. This gives us the beauty of color and is the reason why our eyes adapted to sense the optical spectrum.
Light is also fascinating because it manifests itself in the forms of waves and particles. In no other range of the electromagnetic spectrum are we more confronted with the wave–particle duality than in the optical regime. While long wavelength radiation (radiofrequencies, microwaves) is well described by wave theory, short wavelength radiation (X-rays) exhibits mostly particle properties. The two worlds meet in the optical regime.
To describe optical radiation in nano-optics it is mostly sufficient to adopt the wave picture. This allows us to use classical field theory based on Maxwell's equations. Of course, in nano-optics the systems with which the light fields interact are small (single molecules, quantum dots), which necessitates a quantum description of the material properties. Thus, in most cases we can use the framework of semiclassical theory, which combines the classical picture of fields and the quantum picture of matter. However, occasionally we have to go beyond the semiclassical description. For example, the photons emitted by a quantum system can obey non-classical photon statistics in the form of photon antibunching (no two photons arriving simultaneously).
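As a rough illustration of what "no two photons arriving simultaneously" means for measured data, the sketch below builds a start–stop histogram of delays between successive detection events for a toy emitter that needs a re-excitation time after each photon. The emitter model and all numbers are assumptions for illustration; at low count rates this histogram approximates the second-order correlation g⁽²⁾(τ), and the empty bins near τ = 0 are the antibunching dip.

```python
# Sketch: start-stop estimate of g2(tau) from photon arrival times.
# Toy single-emitter model (assumed): after each detection the emitter
# must be re-excited, so consecutive arrivals are separated by a dead
# interval of roughly one excited-state lifetime.
import numpy as np

rng = np.random.default_rng(0)

lifetime, rate = 10e-9, 1e6                       # assumed values
gaps = lifetime + rng.exponential(1 / rate, size=100_000)
arrivals = np.cumsum(gaps)                        # synthetic time stamps

# Histogram of delays between successive photons ~ g2 at short times
delays = np.diff(arrivals)
hist, edges = np.histogram(delays, bins=50, range=(0, 100e-9))
print("counts near tau = 0:", hist[:5])           # empty bins: antibunching
```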
Engineers who design transistors, lasers and other semiconductor components want to understand and control the causes of resistance to current flow so that they may better optimize device performance. A detailed microscopic understanding of electron motion from one part of a semiconductor to another requires the explicit calculation of electron scattering probability. One would like to know how to predict electron scattering from one state to another. In this chapter we will see how to do this using powerful quantum-mechanical techniques.
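To anticipate the form such a calculation takes (the chapter's own derivation follows in due course): time-dependent perturbation theory gives the rate of scattering from an initial state |i⟩ to a final state |f⟩ under a weak perturbation H′ as Fermi's golden rule,

\[
\Gamma_{i \to f} = \frac{2\pi}{\hbar}\,\bigl|\langle f \lvert H' \rvert i \rangle\bigr|^{2}\,\delta(E_f - E_i).
\]

The delta function enforces energy conservation, and the squared matrix element contains the physics of the particular scattering mechanism.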
In addition to understanding electron motion in a semiconductor we also want to understand how to make devices that emit or absorb light. In Chapter 6 it was shown that a superposition of two harmonic oscillator eigenstates could give rise to dipole radiation and emission of a photon. The creation of a photon was only possible if a superposition state existed between a correct pair of eigenstates. This leads directly to the concept of rules determining pairs of eigenstates which can give rise to photon emission. Such selection rules are a useful tool to help us understand the emission and absorption of light by matter. However, the real challenge is to use what we know to make practical devices which operate using emission and absorption of photons. This usually requires imposing some control over atomic-scale physical processes which, of course, can only be understood using quantum mechanics.
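For the harmonic oscillator, the selection rule can be verified numerically in a few lines. Since the dipole operator is proportional to the position operator, x ∝ (a + a†), the matrix element ⟨m|x|n⟩ vanishes unless m = n ± 1; the basis truncation below is an assumption for illustration.

```python
# Sketch: dipole selection rules for the harmonic oscillator, checked
# numerically. The dipole operator is proportional to x ~ (a + a^dagger),
# so <m|x|n> vanishes unless m = n +/- 1.
import numpy as np

N = 6                                          # truncated basis |0>..|5>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # annihilation operator
x = a + a.T                                    # position operator, up to a constant

np.set_printoptions(precision=2, suppress=True)
print(np.abs(x))   # nonzero entries appear only on the first off-diagonals:
                   # photon emission/absorption connects n to n +/- 1
```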
The interaction of metals with electromagnetic radiation is largely dictated by the free conduction electrons in the metal. According to the simple Drude model, the free electrons oscillate 180° out of phase relative to the driving electric field. As a consequence, most metals possess a negative dielectric constant at optical frequencies, which causes, for example, a very high reflectivity. Furthermore, at optical frequencies the metal's free electron gas can sustain surface and volume charge-density oscillations, called plasmon polaritons or plasmons, with distinct resonance frequencies. The existence of plasmons is characteristic of the interaction of metal nanostructures with light. Similar behavior cannot simply be reproduced in other spectral ranges using the scale invariance of Maxwell's equations, since the material parameters change considerably with frequency. Specifically, this means that model experiments with, for instance, microwaves and correspondingly larger metal structures cannot replace experiments with metal nanostructures at optical frequencies. The surface charge-density oscillations associated with surface plasmons at the interface between a metal and a dielectric can give rise to strongly enhanced optical near-fields which are spatially confined near the metal surface. Similarly, if the electron gas is confined in three dimensions, as in the case of a small subwavelength-scale particle, the overall displacement of the electrons with respect to the positively charged lattice leads to a restoring force, which in turn gives rise to specific particle-plasmon resonances, depending on the geometry of the particle.
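A minimal numerical sketch of the Drude picture, with rough, gold-like parameter values assumed purely for illustration:

```python
# Drude dielectric function eps(w) = 1 - wp^2 / (w^2 + i*gamma*w):
# Re(eps) is negative at optical frequencies below the plasma frequency.
import numpy as np

wp = 1.37e16      # plasma frequency, rad/s (assumed, roughly gold-like)
gamma = 1.0e14    # collision rate, rad/s (assumed)

for wavelength in (400e-9, 600e-9, 800e-9):
    w = 2 * np.pi * 3e8 / wavelength
    eps = 1 - wp**2 / (w**2 + 1j * gamma * w)
    print(f"{wavelength*1e9:4.0f} nm: eps = {eps.real:7.1f} + {eps.imag:.2f}i")

# A small sphere in vacuum shows a particle-plasmon resonance near
# Re(eps) = -2 (the Froehlich condition for a subwavelength sphere).
```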
The hydrogen atom is the Rosetta stone of early twentieth-century atomic physics. The attempt to decipher its structure and properties led to the development of quantum mechanics and the unraveling of many of the mysteries of atomic, molecular, and solid state physics, and a good deal of chemistry and modern biology. Unlike the various one-dimensional model problems that we have been studying in the previous chapters, the hydrogen atom is a real physical system in three dimensions. It consists of an electron moving in a spherically symmetric potential well due to the Coulomb attraction of the positively charged nucleus. In three dimensions, the electron is not constrained to move linearly. It can execute orbital motions and thus has angular momentum. Not only is the total energy of the electron in the atom quantized, but its angular momentum also has interesting and unexpected quantized properties that cannot possibly be understood on the basis of classical mechanics and electrodynamics. They are, however, the natural and necessary consequences of the basic postulates of quantum mechanics, as will be shown in this chapter.
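For orientation (the results are derived in the course of the chapter), the quantized values in question are the bound-state energies and the angular-momentum eigenvalues:

\[
E_n = -\frac{13.6\ \mathrm{eV}}{n^2}, \qquad n = 1, 2, 3, \ldots,
\]
\[
|\mathbf{L}|^2 = \hbar^2 \ell(\ell+1), \qquad L_z = m_\ell \hbar, \qquad \ell = 0, 1, \ldots, n-1, \quad m_\ell = -\ell, \ldots, \ell.
\]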
According to classical mechanics and electrodynamics, it is not possible to have a stable structure consisting of a small positively charged nucleus at the center of an electrically neutral atom with an electron sitting in its vicinity. To keep the electron from being pulled into the positive charge, it would have to orbit the nucleus so that the centrifugal force counters the Coulomb attraction and maintains a constant orbit.
Quantum mechanics has evolved from a subject of study in pure physics to one with a vast range of applications in many diverse fields. Some of its most important applications are in modern solid state electronics and optics. As such, it is now a part of the required undergraduate curriculum of more and more electrical engineering, materials science, and applied physics schools. This book is based on the lecture notes that I have developed over the years teaching introductory quantum mechanics to students at the senior/first year graduate school level whose interest is primarily in applications in solid state electronics and modern optics.
There are many excellent introductory textbooks on quantum mechanics for students majoring in physics or chemistry that emphasize atomic and nuclear physics for the former and molecular and chemical physics for the latter. Often, the approach is to begin from a historical perspective, recounting some of the experimental observations that could not be explained on the basis of the principles of classical mechanics and electrodynamics, followed by descriptions of various early attempts at developing a set of new principles that could explain these ‘anomalies.’ It is a good way to show students the historical thinking that led to the discovery and formulation of the basic principles of quantum mechanics. This might have been a reasonable approach in the first half of the twentieth century, when it was an interesting story to be told and people still needed to be convinced of the theory's validity and utility.