It is often said that nanostructures have become the system of choice for studying transport over the past few years. What does this simple statement mean?
First, consider transport in large, macroscopic systems. Quite simply, for the past fourscore years, emphasis in studies of transport has been on the Boltzmann transport equation and its application to devices of one sort or another. The assumptions that are usually made for studies are the following: (i) scattering processes are local and occur at a single point in space; (ii) the scattering is instantaneous (local) in time; (iii) the scattering is very weak and the fields are low, such that these two quantities form separate perturbations on the equilibrium system; (iv) the time scale is such that only events that are slow compared to the mean free time between collisions are of interest. In short, one is dealing with structures in which the potentials vary slowly on both the spatial scale of the electron thermal wavelength (to be defined below) and the temporal scale of the scattering processes.
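As orientation for the spatial scale invoked here, the thermal de Broglie wavelength has the standard textbook form (quoted ahead of the text's own definition below)

\[
\lambda_T = \frac{h}{\sqrt{2\pi m^{*} k_{B} T}},
\]

which for a GaAs conduction electron (m* ≈ 0.067 m₀) is of order 15–20 nm at room temperature and grows as T^(−1/2) on cooling.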
In contrast to the above situation, it has become possible in the last decade or so to make structures (and devices) in which characteristic dimensions are actually smaller than the appropriate mean free paths of interest. In GaAs/GaAlAs semiconductor heterostructures, it is possible at low temperature to reach mobilities of 10⁶ cm²/Vs, which leads to a (mobility) mean free path on the order of 10 μm and an inelastic (or phase-breaking) mean free path even longer. (By “phase-breaking” we mean decay of the energy or phase of the “wave function” representing the carrier.)
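The mean free path figure is easy to check from the mobility. A minimal Python sketch for a 2D electron gas; the sheet density n_s is an assumed, typical value, not taken from the text:

```python
import math

# Physical constants (SI)
HBAR = 1.0546e-34   # J s
E = 1.602e-19       # C
M0 = 9.109e-31      # kg

m_eff = 0.067 * M0          # GaAs conduction-band effective mass
mu = 1.0e6 * 1e-4           # mobility: 1e6 cm^2/Vs -> m^2/Vs
n_s = 3.0e11 * 1e4          # assumed sheet density: 3e11 cm^-2 -> m^-2

k_F = math.sqrt(2.0 * math.pi * n_s)   # 2D Fermi wavevector
v_F = HBAR * k_F / m_eff               # Fermi velocity
tau = m_eff * mu / E                   # momentum relaxation time from mobility
l_mfp = v_F * tau                      # mobility mean free path

print(f"mean free path ~ {l_mfp * 1e6:.1f} um")   # ~9 um, i.e. of order 10 um
```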
The ultimate measure of acceptable image quality from a lens is customer or user satisfaction with the image that is produced. This measure can be the result of a quantitative evaluation, such as signal-to-noise ratio under certain conditions. In many cases, it may be a subjective evaluation as to the acceptable quality of the image as determined by the viewer. In any case, the goal for the lens designer needs to be expressed in some quantitative values that can be computed by the designer from the lens data. Only in this way can the designer know that the design task is completed.
The goal of lens design is to produce a system that will provide images of acceptable quality for a specified user. Image quality is frequently very subjective, based upon the opinion of a user as to whether the appearance of the image is pleasing or informative. In some applications the image quality can be determined in very objective ways, such as the level of contrast of certain fine details exceeding some specified threshold value. In either case, there are physical quantities describing the image structure that can be used to evaluate the probable degree of acceptability of an image produced by a lens design.
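As one concrete instance of such an objective criterion, the contrast of a fine periodic detail can be computed and tested against a threshold. A minimal Python sketch using the Michelson definition of contrast; the intensity values and the 20% threshold are arbitrary illustrations, not values from the text:

```python
def michelson_contrast(i_max: float, i_min: float) -> float:
    """Michelson contrast of a periodic detail from its peak and trough intensities."""
    return (i_max - i_min) / (i_max + i_min)

# Does an imaged fine detail exceed an (arbitrary) 20% contrast threshold?
c = michelson_contrast(i_max=0.9, i_min=0.5)
print(f"contrast = {c:.3f}, acceptable: {c > 0.20}")  # contrast = 0.286, acceptable: True
```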
“Image quality” is a somewhat elusive quantity. The quality of an image may be defined by its technical content or pictorial content. Quantifying the technical content or image structure is easier than attempting to quantify the pictorial content of an image.
In this book, the process of lens design has been dissected. The combination of art and science necessary to carry out a design successfully has been demonstrated. Each request for a design calls upon a combination of skills resident in the mind of a skilled designer. The components for success in optical design are acquired through a combination of study and practice. The same is true of any acquired skill.
By now it is evident that optical design requires access to up-to-date computer programs in order to be competitive. It should also be evident that the computer program alone does not produce the design. The algorithms resident in the program are a consequence of the history of ingenuity in the field. Each new design task requires a new path to be generated under the guidance of the designer using the computational tools. The successful designer does not just react to a specification provided by the customer, but is an active participant in developing the solution to the problem.
This book has been directed toward supplying a view into the process of design. The introduction to geometrical and physical optics, aberrations, and image evaluation defines the basis for optical design methods. The examples carried out on a number of types of lenses illustrate how the process of design is carried out using different computer design tools.
It is not essential that a designer understand all of the details of the process by which a design program carries out the optimization. Successful designers will, however, understand the principles and options that are available. Much time and effort has been expended by program developers to make the process as bulletproof and transparent to the user as possible. The past few years have seen an incredible improvement in the ability to control the modification of lenses by a program, and to explore new regions for solutions.
A basic comprehension of the important issues and procedures used is needed by any successful designer. This chapter provides enough insight to permit the designer to make the decisions necessary, but does not provide enough information to write design optimization programs. For detailed information the reader is referred to papers by Levenberg (1944), Wynne (1959), Rosen and Eldert (1954), Spencer (1963), and the summary by Kidger (1993). Discussions of newer techniques for optimization are found in papers by Kuper, Harris, and Hilbert (1994), Forbes and Jones (1993), and, of course, the various program manuals.
Optimization consists of adjusting the parameters of a lens to meet as closely as possible the requirements placed on the design. Current design programs have achieved a high degree of sophistication, and can rapidly search the design space for the closest approach to the design goals. The process of optimization requires the selection of a starting point and a set of variables.
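The numerical engine behind most such programs is a damped least squares method in the spirit of the Levenberg (1944) paper cited above: a merit function built from weighted defect residuals is minimized by repeatedly solving damped normal equations. A minimal Python sketch, with a toy residual vector standing in for real lens defects (all names and values here are illustrative, not from any particular design program):

```python
import numpy as np

def damped_least_squares(residuals, x0, damping=1e-3, iters=20, h=1e-6):
    """Minimize sum(f_i(x)^2) by iterating the damped (Levenberg) normal equations."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        f = residuals(x)
        # Finite-difference Jacobian of the residual vector
        J = np.empty((f.size, x.size))
        for j in range(x.size):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (residuals(xp) - f) / h
        # Damped normal equations: (J^T J + p I) dx = -J^T f
        A = J.T @ J + damping * np.eye(x.size)
        x = x + np.linalg.solve(A, -J.T @ f)
    return x

# Toy "lens": two variables, three weighted defects driven toward zero
resid = lambda x: np.array([x[0]**2 + x[1] - 1.0, x[0] - x[1], 0.5 * x[1]])
print(damped_least_squares(resid, [1.0, 1.0]))
```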
No design task is complete until the tolerances have been evaluated. The lens design alone, while interesting to the designer, is of value to the user only if the lens can be fabricated within some range of realistic tolerances. A full tolerancing of a lens may become a more difficult task than the original design of the lens. The specified tolerances must be sufficient to ensure that the image quality goals are likely to be met. The tolerances required for fabrication are the major drivers in determining the cost of actually building and assembling a lens.
Before proceeding to carry out tolerancing the designer must decide upon the allowable degradation in the image. Despite some claims to the contrary, no system will ever be built absolutely perfectly with no deviations from the specified parameters. Therefore the imagery produced by a real system will differ from that of a perfectly fabricated system. During the design stage the designer should have considered this problem and designed into the lens sufficient margin that some errors in the lens parameters can be allowed. The balance between design margin and allowable tolerance loss is frequently an important economic issue. It is also important to remember that some margin usually must be assigned to operational considerations, such as setting the focal position, or to environmental effects such as temperature changes.
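One common way to check whether the design margin covers the expected fabrication errors is a Monte Carlo tolerance analysis: perturb each parameter randomly within its tolerance, estimate the resulting image degradation, and examine the statistics over many simulated builds. A minimal Python sketch under a linear sensitivity model; the sensitivities and tolerance values are placeholders, not data from a real design:

```python
import random

# Placeholder sensitivities: merit-function degradation per unit perturbation of
# each toleranced parameter (e.g., a radius, a thickness, an element tilt).
sensitivities = [0.8, 0.3, 1.5]       # illustrative values only
tolerances    = [0.01, 0.02, 0.005]   # allowed +/- range for each parameter

def one_build() -> float:
    """Merit degradation of one randomly perturbed 'as built' lens
    (linear sensitivity model, purely illustrative)."""
    return sum(abs(s * random.uniform(-t, t))
               for s, t in zip(sensitivities, tolerances))

trials = sorted(one_build() for _ in range(10_000))
print("median degradation :", trials[len(trials) // 2])
print("95th percentile    :", trials[int(0.95 * len(trials))])
```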
The process of optical design is both an art and a science. There is no closed algorithm that creates a lens, nor is there any computer program that will create useful lens designs without general guidance from an optical designer. The mechanics of computation are available within a computer program, but the inspiration and guidance for a useful solution to a customer's problems come from the lens designer. A successful lens must be based upon technically sound principles. The most successful designs include a blend of techniques and technologies that best meet the goals of the customer. This final blending is guided by the judgment of the designer.
Let us start by looking at a lens design. Figure 1.1 shows the layout of a photographic type of lens, showing some of the ray paths through the lens. The object is located a long distance (100,000,000 mm) to the left. This is what the computer program considers equivalent to an infinite object distance. The bundles of rays from each object point enter the lens as parallel bundles of rays. Each ray bundle passes through the lens and is focused toward an image point. On the lens shown, the field covered is 21° half width, which defines the size of the object that will be imaged by the lens.
The diameter of the bundles of light rays entering the lens determines the brightness of the image, and is established by the aperture stop of the lens.
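In first-order terms (standard relations, not specific to the lens of Figure 1.1): an infinitely distant object point at field angle θ images at height h′ given by the first relation below, so the 21° half field fixes the image size, while the image illuminance E scales with the square of the aperture diameter D admitted by the stop:

\[
h' = f\tan\theta, \qquad E \propto \left(\frac{D}{f}\right)^{2} = \frac{1}{(f/\#)^{2}}.
\]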
The purpose of this book is to provide an introduction to the practice of lens design. As the title suggests, successful design will require the application of individual creativity as well as artful manipulation and thorough comprehension of the numeric tools available in lens design programs. The technology, user connection, and breadth of the commercial lens design programs have reached a very high level. The availability of inexpensive, high-speed personal computers and user-friendly operating systems has brought the computational tools within economic reach of any individual.
This book covers the basics of image formation, system layout, and image evaluation, and contains a number of examples of lens designs. There are several excellent books in existence that are principally compilations of the results of the design of several types of lenses. In this book, it is my intention to describe the process rather than the results. The explanations of the basics are provided here in a practical manner and at a level of detail sufficient to provide an understanding of the principles. The selected examples of designs include a narrative of the thinking and approach toward the decisions that need to be made by the designer when carrying out the work. The principles are, of course, independent of the software used. Each example shown does provide the opportunity to exploit different avenues of approach to the design.
Several different lens design programs have been used to provide the majority of the illustrations in this book.
Aberrations are deviations from the perfect geometrical imaging case. An understanding of the influence and correction of aberrations obviously requires that somewhat more detail be developed.
Ideal image formation requires that the relation between object and image follow paraxial rules, and that all rays from each object point pass through its paraxial conjugate image point, with all rays having the same optical path length from object to image. The quantitative measure of the aberration at any field location is a spread of ray intercepts on the image plane or an associated optical path difference error evaluated on the exit pupil. The paraxial point is usually taken as the image reference point, and the image errors for real rays are measured with respect to this point. Perfect imagery can also be defined as zero wavefront error, in which the exiting wavefront coincides with the exit pupil reference sphere. (Additional considerations relate to the distribution of rays in the pupil, as pupil aberrations may indicate a change of aberration with field position, even though the optical path errors are zero at some specified field location.)
The aberration at a given field point produces a distribution of rays about the image reference point, the symmetry of which is determined by the magnitude and combination of aberrations that are present in the lens. In the case of low-order aberrations, such as lateral displacement or a focus position error, the choice of a new reference position can negate the aberration.
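The re-referencing remark can be made concrete: choosing the centroid of the ray intercepts as the new reference point removes a pure lateral displacement from the computed spot size. A minimal Python sketch with fabricated intercept data:

```python
import numpy as np

# Transverse ray intercepts (x, y) on the image plane for one field point;
# illustrative numbers with a deliberate lateral offset of ~+0.05 in x.
pts = np.array([[0.051, 0.002], [0.049, -0.003], [0.053, 0.001], [0.047, 0.000]])

rms_about_origin = np.sqrt((pts**2).sum(axis=1).mean())
centered = pts - pts.mean(axis=0)                 # new reference point: the centroid
rms_about_centroid = np.sqrt((centered**2).sum(axis=1).mean())

print(rms_about_origin, rms_about_centroid)  # re-referencing negates the displacement
```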
We will now apply the concepts discussed in the previous chapters to the design of a number of lenses. These examples have been selected to provide a view of the techniques used in design at increasing levels of complexity. The majority of this chapter will deal with the detailed design of somewhat traditional, or basic, design problems, as they actually provide the basis for most of the optics done in this world. Some of the newer opportunities for design will be discussed in the latter portion of the chapter.
The goals set for the designs are somewhat arbitrary, but realistic, versions of those encountered in real life. Expanding beyond these goals will serve as a good learning experience for any individual who wishes to increase his or her knowledge of the methods of lens design.
Several different design programs will be used in these examples. A difficulty with this is that it may lead to some confusion as to the meaning of various program-dependent features that are necessary in working with a computer. The great advantage is that the essential nature of the optical problems will emerge. The design programs used, CODE V, OSLO, and ZEMAX, happen to be those used by the author for many years in teaching a course in lens design at the Optical Sciences Center.
Design and analysis of lens systems uses numerical calculations based upon geometrical optics. The calculation of the form of the image requires interpretation of these geometrical results by the use of physical optics. Closed-form mathematical solutions are available only in a few relatively simple cases, so all realistic lens design is based upon manipulation of computed evaluations of the imagery produced by a lens.
The geometrical optical model is sufficient to define the properties of image formation by a lens and to relate them to the construction parameters of the lens. The geometrical ray-based model permits determination of image location and aberrations, and enables calculation of the pupil function describing the wavefront. Evaluation of the image quality requires the introduction of concepts of physical optics and wave propagation. The intensity distribution in the image is calculated by applying a diffraction integral to the pupil function.
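That last step can be sketched numerically. In the Fraunhofer approximation the diffraction integral over an unaberrated circular pupil reduces to a Fourier transform of the pupil function, and the squared modulus gives the intensity point spread function. A minimal Python sketch; the grid size and sampling are arbitrary choices:

```python
import numpy as np

N = 512                                        # grid size (arbitrary sampling)
x = np.linspace(-2.0, 2.0, N)                  # pupil coordinates; pupil radius = 1
X, Y = np.meshgrid(x, x)
pupil = ((X**2 + Y**2) <= 1.0).astype(float)   # unaberrated circular pupil function

# Fraunhofer diffraction integral as an FFT; |.|^2 is the intensity PSF
field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
psf = np.abs(field)**2
psf /= psf.max()                               # normalized, Airy-pattern-like PSF

print(psf.shape, psf.max())
```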
The basic optics of image location and size is established by the application of paraxial or first-order optics. Ray tracing is the basic tool used in optical design. Aberrations are defined as the deviation of the ray path for real rays from the paraxial basis coordinates. Physical optics and beam propagation extend the geometrical model to include diffraction and interference. Investigation of these models permits determination of the limits on image formation.
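As a concrete instance of the paraxial model, the classic y–nu trace propagates a ray height y and reduced angle nu through alternating refractions and transfers. A minimal Python sketch for a single thin lens; the focal length, distances, and starting ray are illustrative:

```python
def ynu_trace(y, nu, surfaces):
    """Paraxial y-nu trace: each surface is (power, reduced thickness t/n to next).
    Refraction: nu' = nu - y*phi.  Transfer: y' = y + nu*(t/n)."""
    for power, t_over_n in surfaces:
        nu = nu - y * power
        y = y + nu * t_over_n
    return y, nu

# Thin lens of focal length 100 (power 0.01), then a transfer of 100 to the image:
y_img, nu_img = ynu_trace(y=10.0, nu=0.0, surfaces=[(0.01, 100.0)])
print(y_img, nu_img)   # y -> 0 at the paraxial focus for a parallel input ray
```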
Optical glasses are the most commonly used materials, although many other materials have become important in modern optical design.
We now turn to the problem of describing long-wavelength lattice vibrations in multilayered structures. At a microstructural level this problem is solved by using the techniques of lattice dynamics, which require intensive numerical computation. This approach is inconvenient, if not impracticable, at the macroscopic level, where the kinetics and dynamics of large numbers of particles need to be described. In this theoretical regime it is necessary to obtain models that transcend those at the level of individual atoms in order to describe electron and hole scattering, and all the energy and momentum relaxation processes that underlie the transport and optical properties of macroscopic structures. The basic problem is to make a bridge between the atomic crystal lattice and the classical continuum. We know from the theory of elasticity and the theory of acoustic waves, which hark back to the nineteenth century, that continuum theory works extremely well for long-wavelength acoustic waves. The case of optical vibrations is another matter. Here, the essence is one atom vibrating against another in a primitive unit cell, and it is by no means obvious that a continuum approach can work in this case. This has been highlighted by controversy concerning the boundary conditions that long-wave optical vibrations obey at each interface of a multilayer structure. On the other hand, no controversy attaches to acoustic waves.
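The contrast between the two cases is already visible in the textbook one-dimensional diatomic chain (masses m₁ and m₂ coupled by force constant C, cell length a), whose dispersion relation, a standard result quoted here for orientation, is

\[
\omega_{\pm}^{2} = C\left(\frac{1}{m_1}+\frac{1}{m_2}\right)
\pm C\sqrt{\left(\frac{1}{m_1}+\frac{1}{m_2}\right)^{2} - \frac{4\sin^{2}(ka/2)}{m_1 m_2}}.
\]

The lower (acoustic) branch is linear as k → 0, which is why the elastic continuum works so well; the upper (optical) branch remains near a finite frequency, with the two atoms of the cell vibrating against each other, and it is this regime in which the validity of a continuum description is at issue.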
It cannot be thus long, the sides of nature will not sustain it.
Antony and Cleopatra, W. Shakespeare
Introduction
This chapter deals with several topics. There is considerable interest in fabricating quasi-2D structures in which the electron–phonon interaction is reduced. Optical-phonon engineering is in its infancy, but already there have been investigations of the effect of incorporating monolayers and conducting layers. One of the first quasi-2D systems to be studied was the thin ionic slab, yet there are still problems connected with the description of optical modes in such structures. The increasing sophistication of microfabrication techniques has led to the creation of quasi-one-dimensional (quantum wires) and quasi-zero-dimensional (quantum dots) structures that are expected to have interesting physical properties. It is important to establish the mode structure, both electron and vibrational, in these systems. In this chapter we consider some of these topics briefly.
Monolayers
The study of short-period superlattices in electronic and optical devices has received considerable attention, and there are several reasons why this has been so. Ease of growth and reduction of interface roughness and residual impurities make for more perfect structures. Replacing random alloys, such as AlxGa1−xAs, with their ordered superlattice counterparts (GaAs)m/(AlAs)n eliminates alloy scattering. In the AlxGa1−xAs system there is the added advantage of avoiding the troublesome DX center. The replacement of random alloys by equivalent superlattices in bandgap engineering is unproblematic.
An Argument against Abolishing Christianity, J. Swift
Charged-Impurity Scattering
Introduction
Scattering of electrons by charged impurity atoms dominates the mobility at low temperatures in bulk material and is usually very significant at room temperature (Fig. 9.1). The technique of modulation doping in high-electron-mobility field-effect transistors (HEMTs) alleviates the effect of charged-impurity scattering but by no means eliminates it. It remains an important source of momentum relaxation (but not of energy relaxation because the collisions are essentially elastic). Though its importance has been recognized for a very long time, obtaining a reliable theoretical description has proved to be extremely difficult.
There are many problems. First of all there is the problem of the infinite range of the Coulomb potential surrounding a charge, which implies that an electron is scattered by a charged impurity however remote, leading to an infinite scattering cross-section for vanishingly small scattering angles. Intuitively, we would expect distant interactions with a population of charged impurities to time-average to zero, leaving only the less frequent, close collisions to determine the effective scattering rate. This intuition motivated the treatment by Conwell and Weisskopf (1950) in which the range of the Coulomb potential was limited to a radius equal to half the mean distance apart of the impurities. Setting an arbitrary limit of this sort was avoided by introducing the effect of screening by the population of mobile electrons as was done by Brooks and Herring (1951) for semiconductors, following the earlier approach by Mott (1936).
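The effect of screening on the small-angle divergence can be displayed explicitly. For the bare Coulomb potential the differential cross-section has the Rutherford form, proportional to 1/sin⁴(θ/2), which diverges as θ → 0. Replacing the bare potential by a screened, Yukawa-like potential V(r) ∝ e^(−r/λ)/r and evaluating the Born approximation gives the standard result (λ is the screening length and k the electron wavevector; quoted here for orientation, not as this text's own derivation)

\[
\frac{d\sigma}{d\Omega} \;\propto\; \frac{1}{\left[\sin^{2}(\theta/2) + \left(\frac{1}{2k\lambda}\right)^{2}\right]^{2}},
\]

which is finite at θ = 0 and reduces to the Rutherford form when sin(θ/2) ≫ 1/(2kλ).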