All media in use today for laser-assisted recording rely on the associated rise in temperature to achieve a local change in some physical property of the material. Therefore the first step in analyzing the write and erase processes is the computation of temperature profiles. Except in very simple situations, these calculations are done numerically using either the finite difference method or the finite element technique. Generally speaking, the two methods produce similar results in comparable CPU times. Most often one assumes a flat surface for the disk and circular symmetry for the beam, which then allows one to proceed with solving the heat diffusion equation in two spatial dimensions (r and z in cylindrical coordinates). For more realistic calculations when the disk is grooved and preformatted, or when the beam has asymmetry due to aberrations or otherwise, the heat absorption and diffusion equations must be solved in three-dimensional space.
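As a minimal illustration of the first of these approaches, the sketch below integrates the axisymmetric heat diffusion equation on an (r, z) grid with an explicit finite-difference scheme. The material constants, the Gaussian absorption profile, the boundary conditions, and the grid sizes are assumptions chosen only to show the mechanics of the calculation; they are not values from the text.

import numpy as np

# Illustrative material and beam parameters (assumed, not from the text)
kappa, rho, C = 10.0, 8000.0, 300.0        # W/m K, kg/m^3, J/kg K
D = kappa / (rho * C)                      # thermal diffusivity, m^2/s
P_abs, r0 = 5e-3, 0.5e-6                   # absorbed power (W), 1/e beam radius (m)
dr = dz = 20e-9                            # grid spacing (m)
nr, nz = 100, 50
dt = 0.2 * min(dr, dz)**2 / D              # stable explicit time step

r = (np.arange(nr) + 0.5) * dr             # cell-centred radii avoid the r = 0 singularity
T = np.zeros((nr, nz))                     # temperature rise above ambient

# Gaussian heat source deposited in the top layer of cells
q = np.zeros_like(T)
q[:, 0] = P_abs / (np.pi * r0**2 * dz) * np.exp(-(r / r0)**2)

for step in range(2000):                   # march the diffusion equation in time
    Tr_p = np.roll(T, -1, axis=0); Tr_p[-1, :] = T[-1, :]   # outer edge: insulated
    Tr_m = np.roll(T,  1, axis=0); Tr_m[0, :]  = T[0, :]    # axis: symmetry
    Tz_p = np.roll(T, -1, axis=1); Tz_p[:, -1] = 0.0        # bottom: heat sink
    Tz_m = np.roll(T,  1, axis=1); Tz_m[:, 0]  = T[:, 0]    # top: insulated
    lap = ((Tr_p - 2*T + Tr_m) / dr**2
           + (Tr_p - Tr_m) / (2 * dr * r[:, None])          # (1/r) dT/dr term
           + (Tz_p - 2*T + Tz_m) / dz**2)
    T += dt * (D * lap + q / (rho * C))

print("peak temperature rise: %.1f K" % T.max())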
A problem with the existing thermal models is that the optical and thermal parameters of the media are assumed to be independent of the local temperature. In practice, as the temperature changes, these parameters vary (some appreciably). While it is rather straightforward to incorporate such variations in the numerical models, at the present time it is difficult to obtain reliable data on the values of thermal parameters at a fixed temperature, let alone their temperature dependences. It is hoped that some effort in the future will be directed towards the accurate thermal characterization of thin-film media.
In this chapter, as a first application of the mathematical results derived in Chapter 3, we describe the far-field pattern obtained when a Gaussian beam is reflected from (or transmitted through) a surface with a sharp discontinuity in its reflection (or transmission) function. A good example of situations in which such phenomena occur is the knife-edge method of focus-error detection in optical disk systems; here a knife-edge partially blocks the column of light, allowing a split detector in the far field to sense the sign of the beam's curvature. Another example is the diffraction of the focused spot from the sharp edge of a groove. The far-field pattern in this case is used to derive the track-error signal, which drives the actuator responsible for track-following. Readback of the embossed pattern of information on the disk surface (e.g., data marks on CD and CD-ROM, preformat marks on WORM and magneto-optical media) also involves diffraction from the edges of small bumps and/or pits.
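A bare-bones numerical counterpart of this situation (not the formalism derived in Chapter 3) can be obtained by truncating a Gaussian amplitude distribution with a half-plane mask and taking the Fourier transform to reach the far field. In the sketch below, all parameters are illustrative assumptions; a quadratic phase stands in for defocus, and the sign of the resulting far-field imbalance mimics the split-detector (focus-error) signal of the knife-edge method.

import numpy as np

# 1-D sketch: Gaussian beam half-blocked by a knife edge; the far-field
# asymmetry changes sign with the sign of defocus (illustrative parameters).
N, L, w0 = 2048, 200e-6, 10e-6

x = (np.arange(N) - N // 2) * (L / N)

def split_signal(defocus_phase_peak):
    beam = np.exp(-(x / w0)**2) * np.exp(1j * defocus_phase_peak * (x / w0)**2)
    beam[x > 0] = 0.0                       # knife edge blocks half the beam
    far = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(beam)))
    I = np.abs(far)**2
    a, b = I[:N // 2].sum(), I[N // 2:].sum()
    return (a - b) / (a + b)                # normalized split-detector signal

for phi in (-1.0, 0.0, 1.0):                # curvature of the incident wavefront
    print("defocus phase %+.1f rad -> signal %+.3f" % (phi, split_signal(phi)))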
This chapter begins with a general description of the problem and a derivation of all relevant formulas in section 4.1. In subsequent sections several specific cases of the general problem are treated; the emphasis will be on those instances where diffraction from a sharp edge finds application in optical-disk data storage systems.
The optical theory of laser disk recording and readout, as well as that of the various focus-error and track-error detection schemes, has been developed over the past several years. Generally speaking, these theories describe the propagation of the laser beam in the optical head, its interaction with the storage medium, and its return to the photodetectors for final analysis and signal extraction. The work in this area has been based primarily on geometrical optics and the scalar theory of diffraction, an approach that has proven successful in describing a wide variety of observed phenomena.
The lasers for future generations of optical disk drives are expected to operate at shorter wavelengths; at the same time, the numerical aperture of the objective lens is likely to increase and the track-pitch will decrease. These developments will result in very small focused spots and shallow depths of focus. However, under these conditions the interaction between the light and the disk surface roughness increases, polarization-dependent effects (especially those for the marginal rays) gain significance, maintaining tight focus and accurate track position becomes exceedingly difficult, and, finally, small aberrations and/or misalignments deteriorate the quality of the readout and servo signals. Thus, attaining acceptable levels of performance and reliability in practice requires a thorough understanding of the details of operation of the system. In this respect, accurate modeling of the optical path is indispensable.
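The scaling that drives these trends follows from the usual scalar-diffraction estimates, a spot diameter of order λ/NA and a depth of focus of order λ/NA². The short calculation below, with assumed example values of wavelength and numerical aperture (not figures from the text), shows how a shorter wavelength and a larger aperture shrink both quantities.

# Illustrative estimates of focused-spot diameter and depth of focus
for lam, NA in [(0.78e-6, 0.55), (0.41e-6, 0.65)]:   # assumed example systems
    spot = lam / NA            # approximate spot diameter (scalar estimate)
    dof = lam / NA**2          # approximate depth of focus
    print("lambda=%.2f um, NA=%.2f: spot ~ %.2f um, DOF ~ %.2f um"
          % (lam * 1e6, NA, spot * 1e6, dof * 1e6))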
In Chapter 12 we described the mean-field theory of magnetization for ferromagnetic materials, which consist of only one type of magnetic species. The atoms (or ions) comprising a simple ferromagnet are coupled via exchange interactions to their neighbors, and the sign of the exchange integral ℐ is positive everywhere. In this chapter we develop the mean-field model of magnetization for amorphous ferrimagnetic materials, of which the rare earth–transition metal (RE–TM) alloys are the media of choice for thermomagnetic recording applications. Ferrimagnets are composed of at least two types of magnetic species; while the exchange integral for some pairs of ions is positive, there are other pairs for which the integral is negative. This leads to the formation of two or more subnetworks of magnetic species. When the ferrimagnet has a uniform temperature, each of its subnetworks will be uniformly magnetized, but the direction of magnetization will vary among the subnetworks. In simple ferrimagnets consisting of only two subnetworks, the magnetization directions of the two are antiparallel.
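In the standard two-subnetwork mean-field picture (written here in a common textbook notation as an illustrative sketch, not in the chapter's own notation), each subnetwork magnetization obeys a Brillouin-function equation driven by an effective field that includes the exchange coupling to both subnetworks:

M_i(T) = M_i(0)\, B_{S_i}\!\left( \frac{g_i \mu_B S_i \, H_i^{\mathrm{eff}}}{k_B T} \right),
\qquad
H_i^{\mathrm{eff}} = H + \lambda_{ii} M_i + \lambda_{ij} M_j,
\qquad i, j \in \{\mathrm{RE}, \mathrm{TM}\},

where the intra-subnetwork coefficients \lambda_{ii} are positive while the inter-subnetwork coefficient \lambda_{ij} is negative, reflecting the antiferromagnetic RE–TM exchange. Solving the two equations self-consistently yields antiparallel subnetwork magnetizations and a net moment proportional to |M_{\mathrm{TM}}(T) - M_{\mathrm{RE}}(T)|.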
In thin-film form, amorphous RE–TM alloys exhibit perpendicular magnetic anisotropy, which makes them particularly useful for polar Kerr (or Faraday) effect readout. Being ferrimagnetic, they possess a compensation point temperature, Tcomp, at which the net moment of the material becomes zero. Tcomp can be brought to the vicinity of the ambient temperature by proper choice of composition. This feature preserves uniform magnetic alignment in the perpendicular direction by preventing the magnetization from breaking up into domains.
It has been suggested that a focused beam in the neighborhood of the focal point behaves just like a plane wave. This misconception is based on the fact that the focused beam at its waist has a flat wave-front. The finite size of the beam, however, causes the plane-wave analogy to fail: Fourier analysis shows that a focused light spot is the superposition of a multitude of plane waves with varying directions of propagation. At the high values of numerical aperture typically used in optical recording, the fraction of light having a sizeable obliquity factor is simply too large to be ignored. The reflection coefficients and magneto-optical conversion factors of optical disks are rather strong functions of the angle of incidence; thus the presence of oblique rays within the angular spectrum of the focused light must influence the outcome of the readout process. The goal of the present chapter is to investigate the consequences of sharp focus in optical disk systems, and to clarify the extent of departure of the readout signals from those predicted in the preceding chapter.
The vector diffraction theory of Chapter 3 and the methods of Chapter 5 for computing multilayer reflection coefficients are used here to analyze the effects of high-NA focusing. The focused incident beam is decomposed into its spectrum of plane waves, and the reflected beam is obtained by the superposition of these plane waves after they are independently reflected from the multilayer surface.
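A stripped-down scalar version of this procedure is sketched below: the focused field is decomposed into plane waves with a 2-D FFT, each spectral component is multiplied by an angle-dependent reflection coefficient, and the result is transformed back. The reflection function r(θ) used here is only a placeholder; in the actual calculation it would come from the multilayer formulas of Chapter 5, and the polarization (vector) bookkeeping of Chapter 3 is omitted entirely.

import numpy as np

# Scalar sketch of the angular-spectrum method for a reflected focused beam.
# Grid, wavelength, NA, and the toy reflection function are assumptions.
N, L, lam, NA = 256, 40e-6, 0.8e-6, 0.6
x = (np.arange(N) - N // 2) * (L / N)
X, Y = np.meshgrid(x, x)
w0 = lam / (np.pi * NA)                       # rough waist of the focused spot
E_in = np.exp(-(X**2 + Y**2) / w0**2)         # focused field at the disk surface

# Plane-wave (angular) spectrum of the incident field
fx = np.fft.fftfreq(N, d=L / N)
FX, FY = np.meshgrid(fx, fx)
sin_theta = np.clip(lam * np.sqrt(FX**2 + FY**2), 0.0, 1.0)
theta = np.arcsin(sin_theta)                  # propagation angle of each plane wave

r_of_theta = 0.7 + 0.2 * np.cos(theta)        # placeholder angle-dependent reflectance
A_in = np.fft.fft2(E_in)
E_refl = np.fft.ifft2(A_in * r_of_theta)      # superpose the reflected plane waves

print("reflected power fraction: %.3f"
      % (np.sum(np.abs(E_refl)**2) / np.sum(np.abs(E_in)**2)))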
In magneto-optical (MO) data storage systems the readout of recorded information is achieved by means of a focused beam of polarized light. Conventional systems today utilize a linearly polarized beam, whereas some suggested alternative methods rely instead on circular or elliptical polarization for readout.
Although the Jones calculus is the standard vehicle for analyzing the polarization properties of optical systems, we believe the alternative approach used in this chapter provides a better, more intuitive explanation for the operation of the readout system. The chapter begins by introducing the two basic states of circular polarization: right (RCP) and left (LCP). It then proceeds to show that the other two polarization states, linear (LP) and elliptical (EP), may be constructed by superposition of the two circular states. The sections that follow describe the actions of a quarter-wave plate (QWP) and a polarizing beam-splitter (PBS) on the various states of polarization. These two optical elements are of primary importance in detection schemes aimed at probing the state of polarization of a beam.
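Since the chapter builds the linear and elliptical states from RCP and LCP, a small numerical check of that construction is given below in the conventional Jones-vector representation (the sign convention, and the choice of a quarter-wave plate with its fast axis along x, are assumptions of this sketch rather than the chapter's own treatment).

import numpy as np

# Jones-vector sketch: LP as a superposition of RCP and LCP, and the action
# of a quarter-wave plate (fast axis along x) and a polarizing beam-splitter.
rcp = np.array([1, -1j]) / np.sqrt(2)        # right circular polarization
lcp = np.array([1,  1j]) / np.sqrt(2)        # left circular polarization

lp_x = (rcp + lcp) / np.sqrt(2)              # equal superposition -> linear along x
print("LP from RCP + LCP:", np.round(lp_x, 3))

qwp = np.diag([1, 1j])                       # quarter-wave plate, fast axis along x
print("QWP acting on RCP gives LP at 45 deg:", np.round(qwp @ rcp, 3))

# A polarizing beam-splitter sends the x component to one detector and the
# y component to the other; detected powers are the squared magnitudes.
def pbs_powers(jones):
    return abs(jones[0])**2, abs(jones[1])**2

print("PBS powers for RCP:", pbs_powers(rcp))   # circular light splits 50/50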
The classical scheme of differential detection for MO readout will be analyzed in section 6.4, first in its simple form for detecting the polarization rotation angle and then, with the addition of phase-compensating elements, for detecting ellipticity as well. In section 6.5 we describe an extension of the differential detection scheme. This extended scheme is used for spectral characterization (i.e., measurement of the wavelength-dependence of the Kerr angle and the ellipticity) of MO media.
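In a much-simplified scalar model of differential detection (a common textbook description, offered here only to fix ideas ahead of the analysis in section 6.4), the returning beam carries a large component E_\parallel along the incident polarization and a small magneto-optically generated component E_\perp; with the beam-splitter axes at 45° to the incident polarization the two detector outputs and their difference are

I_{1,2} = \tfrac{1}{2} \left| E_\parallel \pm E_\perp \right|^2,
\qquad
\Delta I = I_1 - I_2 = 2\,\mathrm{Re}\!\left( E_\parallel E_\perp^{*} \right),

so that for a small Kerr rotation \theta_k, and with the relative phase between the two components compensated (the role of the phase-compensating element mentioned above), \Delta I \approx 2 |E_\parallel|^2 \theta_k: the difference signal is proportional to the rotation while common-mode intensity fluctuations cancel.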
Since the early 1940s, magnetic recording has been the mainstay of electronic information storage worldwide. Audio recording provided the first major application for the storage of information on magnetic media. Magnetic tape has been used extensively in consumer products such as audio cassettes and video cassette recorders (VCRs); it has also found application in the backup or archival storage of computer files, satellite images, medical records, etc. Large volumetric capacity and low cost are the hallmarks of tape data storage, although sequential access to the recorded information is perhaps the main drawback of this technology. Magnetic hard-disk drives have been used as mass-storage devices in the computer industry ever since their inception in 1957. With an areal density that has doubled roughly every other year, hard disks have been and remain the medium of choice for secondary storage in computers. Another magnetic data storage device, the floppy disk, has been successful in areas where compactness, removability, and fairly rapid access to recorded information have been of prime concern. In addition to providing backup and safe storage, the inexpensive floppies with their moderate capacities (2 Mbyte on a 3.5″-diameter platter is typical) and reasonable transfer rates have provided the crucial function of file/data transfer between isolated machines. All in all, it has been a great half-century of progress and market dominance for magnetic recording devices, which only now are beginning to face a potentially serious challenge from the technology of optical recording.
Thermomagnetic recording involves the laser-assisted nucleation and growth of reverse-magnetized domains under the influence of external and/or internal magnetic fields. The nature of nucleation, the speed and uniformity of growth, and the local pinning of domain boundaries due to structural inhomogeneities of the material are among the factors that determine the final shape and size of recorded domains. Since it is important to control domain size, to avoid nonuniformities and jagged boundaries, and to create stable domains, a knowledge of magnetization-reversal dynamics, including the effects of the nanostructure within the “amorphous” material, is desirable.
The observed magnetization reversal in RE–TM films is a nucleation and growth process. During bulk reversal (at temperatures not too close to the Curie point Tc) the nucleation sites are mostly reproducible. Rather than being driven by thermal fluctuations, nucleation is believed to be rooted in the nonuniform spatial distribution of the structural and magnetic properties of the media. The measured hysteresis loops generally have high squareness, which indicates that the coercivity of wall motion is less than the nucleation coercivity. In other words, once the nuclei are created at a certain applied field, they continue to grow without further hindrance. The recording and erasure processes rely on the creation and annihilation of reverse-magnetized domains. High densities are achieved when the recorded domains are small and uniform, have smooth boundaries, and are precisely positioned.
The reflection and refraction of light at plane surfaces and interfaces, and the absorption of light by thin-film media play important roles in optical data storage. The reflection of polarized light from the surface of a magnetic material, for instance, is accompanied by a change of the state of polarization, thus carrying information about the state of magnetization of the material. The interaction is known as the magneto-optical (MO) Kerr effect and is generally used in MO data storage systems. The absorption of light by the storage medium and the subsequent creation of a hot spot is another common occurrence in optical recording. Such hot spots are typically needed for effecting the recording process, be it ablation in WORM-type media, structural phase transformation in phase-change media, or thermomagnetic recording in MO media.
Both read and write processes can benefit from the incorporation of the storage layer in a multilayer (optical-interference-type) structure. In the case of readout, the multilayer enhances the signal contrast (or the signal-to-noise ratio) by utilizing constructive interference among the various information-carrying beams reflected at the interfaces. In the case of writing and erasure, antireflection structures improve the sensitivity of the medium and promote efficient utilization of the available laser power. These benefits are not mutually exclusive and, in fact, the multilayers in use today are designed to have good recording sensitivity as well as enhanced readout signal-to-noise ratio.
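The interference effect that underlies these designs can be quantified with the standard characteristic-matrix method for a stack of thin films at normal incidence. In the sketch below the quadrilayer indices and thicknesses are placeholders chosen only to show the mechanics of the calculation; they do not represent a design from the text.

import numpy as np

def stack_reflectance(n_layers, d_layers, n_inc, n_sub, lam):
    """Normal-incidence reflectance of a thin-film stack (characteristic matrices)."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2 * np.pi * n * d / lam              # phase thickness of the layer
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])
    r = (n_inc * B - C) / (n_inc * B + C)            # amplitude reflection coefficient
    return abs(r)**2

# Placeholder quadrilayer: dielectric / storage layer / dielectric / reflector on glass
lam = 0.8e-6
n_layers = [2.0, 3.5 + 3.5j, 2.0, 2.0 + 7.0j]        # assumed complex indices
d_layers = [90e-9, 20e-9, 30e-9, 60e-9]              # assumed thicknesses (m)
print("reflectance: %.3f"
      % stack_reflectance(n_layers, d_layers, n_inc=1.0, n_sub=1.5, lam=lam))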
In thermomagnetic recording a focused laser beam creates a hot spot and allows an external magnetic field to reverse the direction of local magnetization. Erasure is similar to recording, except that the external field is reversed, which enables the magnetization within the heated region to return to its original state. The technology of high-density magneto-optical data storage owes a large measure of its success to the accuracy, reliability, and repeatability of the thermomagnetic process. The recorded domains are highly regular and uniform, fully reversed, and free from instabilities. What is more, a given area of the storage medium can be erased and rewritten several million times without any degradation.
The present chapter is devoted to the analysis of the thermomagnetic process based on the physical principles of micromagnetics and domain dynamics. In section 17.1 we review some of the facts and experimental observations concerning the storage media and the recording process. This will help familiarize the reader with the variety of phenomena and the order of magnitude of the parameters involved, and will set the stage for in-depth analyses of thermomagnetic recording in subsequent sections. The discussion in section 17.2 revolves around the energetics of domain formation and the forces acting on the domain wall in its formative stages. This treatment of the problem, being based on the arguments described in section 13.3, is essentially a quasi-static treatment that ignores the dynamics of wall motion; moreover, it fails to properly account for the role of coercivity in the recording process.
We start by defining some important terms in digital filter theory.
A sampled data filter is an algorithm for converting a sequence of continuous-amplitude analogue samples into another (analogue) sequence. The filter output sequence could then be lowpass filtered to recover a continuous-time or analogue signal, and the overall filtering effect will be similar to that of conventional analogue filtering in the sense that we can associate a frequency response with the sampled data filter. In Section 4.1 we develop a formal discrete-time linear filter theory based upon the concept of sampled data filtering.
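The simplest such filter is a transversal (FIR) structure in which each output sample is a weighted sum of recent input samples, y[n] = Σ_k h[k] x[n−k]. The short sketch below, with arbitrarily chosen coefficients (they are not an example from the text), illustrates this difference equation ahead of the formal development in Section 4.1.

# Transversal (FIR) sampled data filter: y[n] = sum_k h[k] * x[n - k].
# The coefficients h are arbitrary illustrative values (a crude lowpass).
def fir_filter(x, h):
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, hk in enumerate(h):
            if n - k >= 0:
                acc += hk * x[n - k]     # weighted sum over the delay line
        y.append(acc)
    return y

h = [0.25, 0.5, 0.25]                    # assumed 3-tap coefficients
x = [0, 1, 0, 0, 0, 0]                   # delayed unit impulse
print(fir_filter(x, h))                  # impulse response reproduces h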
Usually the signal is both sampled and quantized and the filter input is actually a sequence of n-bit words. The filter is then referred to as a digital filter, and it could be implemented in digital hardware or as a software routine on a DSP chip. For generality, the basic digital filter theory developed in Section 4.1 neglects quantization effects and treats the digital filter as a sampled data filter.
A practical example of a true sampled data filter is the CCD transversal filter used for ghost reduction in television receivers. Here, CCD rather than digital technology is sometimes preferred because it offers a long delay time in a small chip area with low power consumption. Examples of digital filters abound. Adaptive digital filtering can be used to reduce the noise level in aircraft passenger compartments (using adaptive acoustic noise cancellation), and adaptive transversal digital filters can be used for equalization of telephone lines.
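A minimal sketch of the adaptive idea mentioned above follows: an LMS-updated transversal filter estimates, from a correlated reference input, the noise reaching the primary sensor and subtracts it. The signal model, filter length, and step size are assumptions chosen for illustration; this is not a model of the aircraft-cabin or telephone-line systems themselves.

import numpy as np

# LMS adaptive noise canceller: primary = signal + filtered noise, reference = noise.
rng = np.random.default_rng(0)
n_samp, n_taps, mu = 4000, 8, 0.01                  # assumed lengths and step size
noise = rng.standard_normal(n_samp)
signal = np.sin(2 * np.pi * 0.01 * np.arange(n_samp))
path = np.array([0.6, -0.3, 0.1])                   # unknown noise path (assumed)
primary = signal + np.convolve(noise, path)[:n_samp]

w = np.zeros(n_taps)                                # adaptive filter weights
err = np.zeros(n_samp)
for n in range(n_taps, n_samp):
    x_vec = noise[n - n_taps + 1:n + 1][::-1]       # reference delay line
    y = w @ x_vec                                   # estimate of the noise in primary
    err[n] = primary[n] - y                         # cancelled output ~ signal
    w += 2 * mu * err[n] * x_vec                    # LMS weight update

print("residual noise power (last 1000 samples): %.4f"
      % np.mean((err[-1000:] - signal[-1000:])**2))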
This book has been written for students who are in the middle or final years of a degree in electronic or communication engineering and who are following a course in signal coding and signal processing techniques. It should also be helpful to engineers and managers in industry who need an introductory or refresher course in the field.
About the book
Many textbooks are devoted either to signal coding, e.g. error control, or to signal processing, e.g. digital filters, simply because there is great breadth and depth in each area. On the other hand, practical systems invariably employ a combination of these two fields and a knowledge of both is often required. For example, a knowledge of digital filtering, fast Fourier transforms (FFTs) and forward error control would often be required when designing a satellite data modem. Similarly, a knowledge of discrete transforms is fundamental to the understanding of some video compression schemes (transform coding), and basic digital filter theory is required for some speech codecs. Also, many undergraduate courses give an introduction to both fields in the middle and final years of a degree.
The philosophy behind this book is therefore to provide a single text which introduces the major topics in signal coding and processing, and to illustrate how these two areas interact in practical systems. The book is a blend of theory and modern practice and is the result of some 12 years' lecturing and research experience in these two fields.