In chapter 9 we derived the laws of paraxial optics by developing the angle eikonal of a lens into a Taylor series and keeping the quadratic terms only. We now investigate the result of taking along the fourth degree terms as well. We base our treatment on T. Smith's celebrated 1921/2 paper ‘The changes in aberrations when the object and stop are moved’ [43]. Other approaches may be found in, for example, Herzberger [7], [8], Buchdahl [11], [12], and Luneburg [27].
A word about the notation. So far it has been shown explicitly how the refractive indices of the object space and the image space enter into the formulas. In this chapter we take a different approach. We assume that all distances in the object space, transverse as well as axial, are multiplied by the object space refractive index, and similarly that all distances in the image space are multiplied by the image space refractive index. In other words: lengths are expressed in units that are a constant multiple of the local wavelength. This is almost always a useful notation; the only exception (and the reason that we have not adhered to it throughout this book) is the paraxial calculations discussed in chapter 10. It is useful to introduce a reduced magnification G as well.
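To fix ideas (the formulas below merely spell out the convention; the definition of G is an assumption consistent with the text, not a quotation from it): writing z and x for true axial and transverse distances, and β′ = x′/x for the ordinary transverse magnification,
\[
\bar z = nz,\qquad \bar x = nx,\qquad \bar z' = n'z',\qquad \bar x' = n'x',\qquad G=\frac{\bar x'}{\bar x}=\frac{n'}{n}\,\beta' .
\]
Since the local wavelength is λ₀/n, a reduced length counted in vacuum wavelengths equals the true length counted in local wavelengths, which is the sense in which lengths are expressed in units that are a constant multiple of the local wavelength.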
In this chapter we begin to forge a connection between the ray theory and the wave theory of light, two topics that so far have been treated as entirely separate and disconnected. Fresnel showed in the early nineteenth century how the idea of straight line propagation can be reconciled with the wave theory by using what are now called Fresnel zones. His reasoning went as follows. A source point S radiates a spherical wavefront towards a circular aperture, as shown in fig. 16.1. To find the amplitude of the light wave at a point P beyond the aperture, each point in the part of the wavefront not stopped by the screen may be considered as a secondary source. The amplitude at P is the sum of the amplitudes contributed by each of these secondary sources. In this summation the relative phase of the contributions plays a crucial role.
To get a handle on the summation, Fresnel divided the wavefront into annular zones. These zones are bounded by circles, chosen such that successive distances SQ₁P, SQ₂P, SQ₃P, … differ by half a wavelength. It is not difficult to show that the areas of the zones so constructed are very nearly equal. So the waves arriving at P from two adjacent zones have the same amplitude, but a phase difference of 180° on account of the way in which the zones were constructed. The contributions from adjacent zones therefore cancel each other.
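The near-equality of the areas can be seen in a few lines (a sketch, taking S at a distance a in front of the aperture and P at a distance b behind it). A point Q at radius r in the plane of the aperture gives the path length
\[
SQP = \sqrt{a^2+r^2}+\sqrt{b^2+r^2}\;\approx\; a+b+\frac{r^2}{2}\Bigl(\frac{1}{a}+\frac{1}{b}\Bigr),
\]
so the defining condition SQ_mP − (a + b) = mλ/2 yields
\[
r_m^2 = \frac{m\lambda ab}{a+b},
\]
and the area of the m-th zone, π(r_m² − r_{m−1}²) = πλab/(a + b), is the same for every m to this order of approximation.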
In the previous chapters (sections 1.3, 6.6, 7.6) we have seen several cases of the image quality becoming progressively worse as the angles between the rays and the axis of the lens are increased. In this chapter we assume that the angles between the rays and the axis are so small that the images formed are essentially perfect. The resulting approximate theory of lenses is called the paraxial approximation, or Gaussian optics. We use in this chapter an abstract method based on eikonal function theory. In the next chapter we use a more down-to-earth approach, which links the paraxial properties of a lens to its radii, thicknesses, and refractive indices. Later on, when we deal with the problem of wave propagation through lenses, it will become clear why we need both these approaches.
The discussion will be restricted to lenses with axial symmetry around the z-axis. This restricts the possible forms of the eikonal functions, as we now demonstrate for the angle eikonal. As a first step we replace the four variables L, M, L′, and M′ by four angles. In the object space we use the slope angle ψ between the ray and the z-axis, and the azimuth angle φ between the x-axis and the projection of the ray onto the (x, y) plane. In the image space we use similar variables ψ′ and φ′.
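Explicitly (a standard substitution; in the reduced units of this chapter no refractive indices appear):
\[
L=\sin\psi\cos\varphi,\quad M=\sin\psi\sin\varphi,\quad L'=\sin\psi'\cos\varphi',\quad M'=\sin\psi'\sin\varphi' .
\]
Rotating the lens about the z-axis changes φ and φ′ by the same amount, so axial symmetry forces W to depend on the azimuths only through the difference φ′ − φ; if the symmetry also includes reflection in a meridional plane, as it does for ordinary lenses, W can depend only on cos(φ′ − φ), or equivalently on the three rotation invariants L² + M², LL′ + MM′, and L′² + M′².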
Geometrical optics is based on the concept that light travels along rays. Rays are lines in space that satisfy Fermat's principle, which states that light travels from one point to another along a path for which the travel time is stationary with respect to small variations in the shape of the path. The theory of geometrical optics can be founded on other ideas as well; for instance, Bruns based his classic paper [5] on the law of Malus and Dupin, which states that a fan of rays jointly perpendicular to a surface in the object space emerges from the lens with all its rays again jointly perpendicular to a surface. We could also simply start with Snell's law. All these starting points are logically equivalent; but broad insights into the properties and limitations of lenses can be obtained most easily by starting with Fermat's principle. One conclusion we can draw immediately: in a homogeneous medium light travels along straight lines.
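In formulas, with ds an element of arc length along a path C and n the refractive index (defined in the next paragraph), Fermat's principle reads
\[
\delta T = 0,\qquad T=\frac{1}{c}\int_C n\,ds .
\]
In a homogeneous medium n is constant, so T is proportional to the geometrical length of the path, and the path of stationary (here shortest) length between two points is the straight line joining them.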
The speed of light depends on the medium traversed. The ratio of the speed in vacuum to the speed in a medium is called the refractive index, usually denoted by the symbol n. In an inhomogeneous medium the speed, and therefore the refractive index, varies from point to point. In an anisotropic medium the speed depends on the direction of propagation, which makes the specification of the refractive index rather more complicated. Except for propagation in vacuum, the speed of light always depends on the wavelength.
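In symbols, n = c/v, where c is the speed in vacuum and v the speed in the medium. As a numerical illustration: for water at visible wavelengths n ≈ 1.33, so light travels through water at v = c/n ≈ 2.25 × 10⁸ m s⁻¹.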
It has been stressed in previous chapters that perfection at more than one magnification is impossible. Even so, lenses are often used for a variety of object and image distances. Camera lenses as well as enlarger lenses need to form sharp images over a wide range of conjugates. Even high power microscope objectives, notoriously sensitive to variations in object distance, are occasionally pressed into use for three-dimensional imaging. How can we deal with this paradox?
The explanation is that images need not be perfect. All we need is images that are sharp enough to utilize fully the finite resolution of the recording medium. Photographic film is limited by the size of the grain; CCDs are limited by the finite gate size; the retina of the eye is limited by the size of the rods and cones; etc.
For an analysis of incompatible lens requirements it is convenient to describe a lens by one of its eikonal functions. This provides all the information needed to calculate its aberrations at any magnification. The calculations are straightforward, at least in principle: choose a set of rays originating in a specified object point, determine their continuation in the image space, and see where they intersect the image plane.
Unfortunately these calculations can hardly ever be carried out in closed form. The central problem is to calculate x′, y′, L′, and M′ when x, y, L, and M are specified.
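To see where the difficulty lies, recall the defining property of the angle eikonal (the signs depend on the convention adopted; the form below is one common choice):
\[
x=-\frac{\partial W}{\partial L},\qquad y=-\frac{\partial W}{\partial M},\qquad x'=\frac{\partial W}{\partial L'},\qquad y'=\frac{\partial W}{\partial M'} .
\]
With x, y, L, and M specified, the first pair of equations must be solved for L′ and M′; for any realistic eikonal these two equations are transcendental, and this inversion is what blocks a closed-form solution. Once L′ and M′ are found, the second pair gives x′ and y′ by mere differentiation.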
Aberrations are deviations from perfect image formation. Ideally all the rays that come from any point in the object space should intersect at a single point in the image space. This is, however, not a realistic design goal, as was already intimated in chapter 1, and discussed in more detail in chapter 6. According to Maxwell's theorem, proved in section 6.5, it is incompatible with Fermat's principle for a lens with a finite power to form a perfectly sharp image of more than one object plane. We shall therefore call a lens ‘perfect’ if it forms a perfectly sharp image not of the entire object space, but of a single object surface, plane or curved. The image may be plane or curved as well.
The statement ‘it is incompatible with Fermat's principle that…’ leaves it unclear how the truth or falsity of the statement should be proved. It is preferable to use the assertion ‘no eikonal can be constructed so that …’. As an example we give a second proof of Maxwell's theorem. Take a lens with axial symmetry, and describe it by the angle eikonal W(L, M, L′, M′) from front focal plane to back focal plane. For a magnification β′ the object distance z, measured from the front focal plane, is n/(β′A), in which A is the power of the lens. The image distance z′, measured from the back focal plane, is −n′β′/A.
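As a check, these two distances satisfy Newton's equation: with the focal lengths f = n/A and f′ = n′/A,
\[
zz' = \frac{n}{\beta' A}\,\Bigl(-\frac{n'\beta'}{A}\Bigr) = -\frac{nn'}{A^{2}} = -ff' .
\]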
The theory of third order aberrations is based on the assumption that the aperture and the field of the lens are small enough to neglect terms of degree higher than four in the power series development of the eikonal. In practice this condition is rarely met, and yet the third order theory is quite important in the practice of lens design. The reason is that more often than not small changes in the construction parameters have a much greater effect on the third order aberrations than on the aberrations associated with the higher order terms in the eikonal. As a result it is often a useful design strategy first to make the third order aberrations zero, then to evaluate the magnitude of the higher order aberrations and to reduce them as far as possible, and finally to make minor changes in the construction parameters to introduce small amounts of third order aberration that compensate, as far as possible, for the higher order aberrations that are impervious to all attempts at correction. A numerical recipe to calculate the third order aberrations from the construction parameters is shown in appendix 2; practical routines for the fifth order aberrations may be found in [12]. The total aberrations of a lens are usually calculated by ray tracing, to be discussed in the next chapter. Several commercial computer codes are available to carry out all these calculations.
For certain lens types the series development used so far is wholly inappropriate.
In previous Chapters, transfer of energy to and from a permanent magnet, and the associated changes in the energy of the external field, have been discussed. For initial magnetization, the objective is to apply sufficient energy to the material to align its internal magnetization vectors M in a unique direction, whereupon the magnet is said to saturate at Msat. Optimum performance is achieved along this preferred axis, but the characteristics quantifying the material's properties have to be measured outside the magnet. The internal parameter M cannot in fact be measured directly, and although the intrinsic curve is the more fundamental characteristic of a magnet, it must be deduced from an external measurement of the normal B versus H loop.
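The deduction rests on the constitutive relation B = μ₀(H + M) in SI units, which can be inverted point by point along a measured loop. A minimal sketch in Python (the function name and the sample values are illustrative assumptions, not data from the text):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # permeability of free space, H/m

def intrinsic_from_normal(H, B):
    """Deduce the intrinsic curve M(H) from a measured normal B-H loop,
    using the SI constitutive relation B = mu0 * (H + M)."""
    return B / MU0 - H

# Illustrative second-quadrant samples of a normal demagnetization curve:
H = np.array([0.0, -1e5, -2e5, -3e5])   # applied field, A/m
B = np.array([1.20, 1.05, 0.90, 0.75])  # flux density, T

M = intrinsic_from_normal(H, B)         # magnetization, A/m
print(M)  # M stays near 1e6 A/m while B falls: the intrinsic curve is flatter
```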
This example illustrates a problem commonly encountered by users of permanent magnets: a device is designed to operate in a certain manner, but it later appears that the real magnet does not meet these expectations. Later in this Chapter, we discuss various techniques that are used for measuring magnetic parameters; their application to the properties of magnets themselves will provide the basis for quality control. Unfortunately, the magnitude of applied field that is required to saturate a particular magnet is a somewhat empirical quantity.
Permanent magnets have been employed in a wide range of electrical apparatus for a great many years, and it is well beyond the scope of this text to discuss their design for all current applications. However, the dramatic improvements in material properties that accompanied the evolution of rare earth magnets have focussed interest on certain electromechanical and electronic devices in which these materials may be applied to advantage. With this in mind, we discuss a wide variety of applications that reflect the scope of new design activity today. This includes devices whose extremely high production quantities continue to make low-cost ceramic ferrite the dominant material in today's market, as well as high-added-value products that exhibit significant performance benefits using high-energy rare earth magnets.
The most important application for permanent magnet materials is in direct current (d.c.) rotating electric motors. Ceramic ferrites have long been used in these machines to provide a steady magnetic field from their stators, but more recently rare earth magnets have been employed to particular advantage to promote the evolution of electronically commutated brushless d.c. motors, in which the permanent magnet assembly usually becomes the rotating component. The high energy of rare earth magnets is often used to produce a greater air gap flux density in a d.c. motor, which yields a corresponding improvement in the motor's output torque. The magnet's high coercivity is also attractive, because this improves its resistance to demagnetization from the motor's own armature winding.
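The torque benefit follows from the elementary d.c. machine relation (stated here as a reminder; Φ is the flux per pole, Ia the armature current, and k a machine constant):
\[
T = k\,\Phi\,I_a \;\propto\; B_g\,I_a ,
\]
so at a fixed armature current the output torque scales directly with the air gap flux density B_g: doubling B_g doubles T.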
The original inspiration to write this book came when, after an electrical engineering training in the late 1960s, I embarked upon the design of a variety of permanent magnet electrical machines. I needed to know more about the behavior and performance of the different magnet materials than the electromechanical design texts provided, and significantly more applications data than the scientific books on magnetism contained. This shortcoming was exacerbated in the early 1970s when an entirely new class of magnet - the rare earths - was discovered, offering a vast array of new opportunities for permanent magnet devices, and new challenges to designers such as myself. As these new materials were developed, their properties exhibited dramatic improvements from year to year, reaching maturity in the early 1990s as a full range of samarium-cobalt and neodymium-iron-boron magnets. Until this had happened, I felt that any attempt to produce a comprehensive text including a description of these materials would have been premature. Now, with first-hand experience in most cases, I am able to describe their selection and design for a wide range of important applications.
The material for this book has evolved from courses given to students and practicing engineers while I was at the University of Cambridge and the University of Southern California, and from a variety of assignments to develop and design permanent magnet materials.
When a designer specifies the use of a permanent magnet, he certainly hopes that its magnetization will indeed remain permanent, or at least a close approximation to this. Specifically, the designer requires the magnet's demagnetization curve, the second quadrant of the B versus H characteristic, to remain unchanged under normal operating conditions. Unfortunately, this is never the case; it is therefore important to understand the nature of the changes that may occur, so that any degradation of the magnetic properties reflected in the demagnetization curve may be accounted for in the design. Changes in a magnet after it has been manufactured and fully magnetized may be caused by any combination of external influences, such as temperature, pressure and applied field. These changes fall into three categories.
The first category comprises those effects that result in a permanent change in the demagnetization curve, which persists even if the magnet is fully remagnetized. One should either avoid selecting a particular magnet type for an environment in which it will be exposed to conditions known to cause a permanent change, or provide protection for the magnet from this environment. Consider the case of alnico magnets, which, as described in Chapter 2, undergo a critical segregation of the α1 and α2 phases during their heat treatment between 550 and 650 °C.