In the non-relativistic limit, the electronic structure of an atom is determined by the Coulomb interaction between the electrons and the nucleus and the Coulomb interaction among the electrons themselves. In the relativistic case, other interactions have to be added, of which the spin–orbit interaction represents the largest contribution. The complete and exact description of these forces in the atom follows from quantum electrodynamics, which is nowadays a well-established theory. Therefore, structure studies in atoms, as compared to other systems (nuclei or elementary particles), have the advantage of involving forces which are known exactly. However, even in this ideal case it is extremely difficult to calculate the atomic parameters accurately for a many-electron system. As an example, the ground-state wavefunction of the helium atom will be discussed, first within the independent-particle model and then for two types of wavefunction which take into account electron correlations, i.e., the correlated motions of the electrons. The fundamental features demonstrated for this relatively simple case can then also be applied to the more complicated dynamical process of photoionization. Here the observed effects of electron–electron interactions and their theoretical treatment brought about a renaissance of atomic physics, with exciting new insight into the structure and dynamics of atoms interacting with photons, and this aspect will appear in many places throughout the book.
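To make the two levels of description concrete, the sketch below contrasts the independent-particle product form with a correlated ansatz. The Hylleraas-type expansion shown is one standard example and may differ from the specific correlated wavefunctions adopted in the text; Z_eff, α and the coefficients c_lmn are illustrative parameters.

```latex
% Independent-particle (uncorrelated) ansatz for the helium ground state,
% in atomic units, with an effective nuclear charge Z_eff:
\Psi_{\mathrm{IPM}}(\mathbf{r}_1,\mathbf{r}_2)
  = \varphi_{1s}(\mathbf{r}_1)\,\varphi_{1s}(\mathbf{r}_2),
\qquad
\varphi_{1s}(\mathbf{r}) = \sqrt{\frac{Z_{\mathrm{eff}}^{3}}{\pi}}\,
  e^{-Z_{\mathrm{eff}}\,r}.

% A Hylleraas-type correlated ansatz introduces the interelectronic
% distance r_{12} = |\mathbf{r}_1 - \mathbf{r}_2| explicitly:
\Psi_{\mathrm{corr}}(\mathbf{r}_1,\mathbf{r}_2)
  = e^{-\alpha(r_1 + r_2)} \sum_{l,m,n} c_{lmn}\,
    (r_1 + r_2)^{\,l}\,(r_1 - r_2)^{\,2m}\, r_{12}^{\,n}.
```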
Atomic structure
In order to understand atomic structure, some results from quantum mechanics have to be recalled.
In this chapter the methodology presented in Chapters 3 and 4 is extended to include the processing of images using radial masks. The approach produces higher power (improved detectability) in most image processing operations at the small cost of increased processing time. In addition, the radial processing masks are less sensitive to the degree of correlation of the background noise.
Because of the similarity of some of the mathematical developments, many of the details of the radial mask operations are omitted. The analysis uses the Markov noise model, with the general results easily reduced to the independent-noise case by replacing the Markov dependence covariance matrix with a diagonal matrix.
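As an illustration of that reduction, the sketch below assumes a first-order Markov (AR(1)) dependence with correlation coefficient rho, so that Cov[i, j] = sigma2 * rho**|i - j|. The exact covariance model used in the analysis may differ, but setting rho = 0 collapses the matrix to the diagonal (independent-noise) case exactly as described.

```python
import numpy as np

def markov_covariance(n: int, sigma2: float, rho: float) -> np.ndarray:
    """First-order Markov (AR(1)) covariance: Cov[i, j] = sigma2 * rho**|i - j|."""
    idx = np.arange(n)
    return sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])

# Markov-dependent noise with correlation coefficient rho = 0.5.
C_markov = markov_covariance(n=5, sigma2=1.0, rho=0.5)

# Independent-noise case: rho = 0 reduces the covariance matrix to the
# diagonal matrix sigma2 * I, as noted in the text.
C_indep = markov_covariance(n=5, sigma2=1.0, rho=0.0)
assert np.allclose(C_indep, np.eye(5))
```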
In the first part of the chapter, potential features (line or edge elements) are extracted using a radial version of the masks for the one-way designs. Next, the symmetrical balanced incomplete block design (SBIB) technique is generalized to include radial processing. This improves the feature extraction process for a fixed false alarm rate.
The contrast function approach is also extended to include radial masking techniques. The resulting algorithm detects potential features and their locations simultaneously. The decision threshold is determined by the variance of the contrast function and the correlation coefficient of the noise.
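A minimal sketch of the thresholding logic follows. The contrast statistic and the (1 + rho) variance-inflation factor are illustrative placeholders, not the expressions derived in the chapter; only the overall structure (a threshold proportional to the standard deviation of the contrast, adjusted for the noise correlation) reflects the description above.

```python
import numpy as np

def contrast(window: np.ndarray, feature_mask: np.ndarray) -> float:
    """Contrast between hypothesized feature pixels and the local background."""
    return window[feature_mask].mean() - window[~feature_mask].mean()

def detect(window, feature_mask, sigma2, rho, k=3.0):
    # Variance of the contrast under independent noise, with a simple
    # (1 + rho) correlation-dependent inflation factor standing in for
    # the exact expression derived in the text.
    n_f, n_b = feature_mask.sum(), (~feature_mask).sum()
    var_c = sigma2 * (1.0 / n_f + 1.0 / n_b) * (1.0 + rho)
    c = contrast(window, feature_mask)
    return c, bool(c > k * np.sqrt(var_c))  # contrast value and detection flag
```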
The extraction of line features from two-dimensional digital images has been a topic of considerable interest over the past two decades because of its numerous applications in astronomy, remote sensing, and medicine. Typical problems in astronomy, for example, are the extraction of streaks corresponding to the trajectories of meteorites or satellites in space. In remote sensing, a major concern is to decipher from satellite images the network of roads and the boundaries between agricultural fields. A common difficulty in both cases is the nature of the scene itself, which is often noisy with a complex background structure.
Among the techniques used in line extraction are those based on the matched filter concept. Under favorable noise conditions, namely high SNR and i.i.d. Gaussian samples, the matched filter performs well. Unfortunately, typical noise conditions deviate from the Gaussian distribution or, even when the noise is Gaussian, its variance is unknown. In addition, the SNR is usually not very high and varies unpredictably over the scene under consideration. As a result, a simple thresholding scheme fails under these conditions.
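For reference, a minimal matched-filter sketch under the favorable conditions just described; the 5 × 5 vertical-line template is hypothetical, and the fixed threshold is exactly what breaks down once the variance is unknown or the SNR varies over the scene.

```python
import numpy as np
from scipy.signal import correlate2d

def matched_filter_response(image: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Correlate the image with a zero-mean line template (matched filter)."""
    t = template - template.mean()
    return correlate2d(image, t, mode="same", boundary="symm")

# Hypothetical 5 x 5 template for a vertical line, one pixel wide.
line_template = np.zeros((5, 5))
line_template[:, 2] = 1.0

# Under high SNR and i.i.d. Gaussian noise, a fixed threshold on the
# response suffices; under unknown variance or varying SNR it does not.
# response = matched_filter_response(image, line_template)
# detections = response > threshold
```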
Finally, the presence of structured backgrounds such as clouds or smoke will often hide parts of the line, and it is important to remove the background interference without disrupting the continuity of the line.
In this chapter, two different classes of image restoration approaches are presented. Though the stress is placed on images corrupted by so-called “salt and pepper” noise, represented by the mixture distribution model, the methodologies presented here are applicable whenever the background noise deviates significantly from the Gaussian. The image restoration is accomplished in two stages. In the first stage, the edge detectors introduced in Chapter 4 are used as preprocessors to establish the local orientation of potential edge points. In the second stage, a Robbins–Monro-type recursive estimator (see Chapter 8) is applied to remove the undesirable corruption of the image. Alternatively, the badly corrupted pixels are replaced by estimated values based on the missing value approach. Based on extensive simulation studies in various noise environments, the edge detection preprocessors found to be of practical use are 5 × 5 Graeco-Latin squares (GLS) (see Section 4.5.2) and 6 × 7 Youden squares (YS) (see Section 3.7.2).
Many image restoration procedures, such as averaging and median filters, represent a smoothing process and will cause blurring of the restored image. The averaging filter represents a sample mean and is not robust in salt and pepper noise because of the high variance of the latter.
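The following sketch, under an assumed 10% impulsive corruption rate, illustrates the contrast: the averaging (sample mean) filter is dragged by the extreme-valued impulses, while the median filter rejects them.

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter

rng = np.random.default_rng(0)
image = np.full((64, 64), 100.0)  # constant test image

# Mixture ("salt and pepper") corruption: replace 10% of pixels with extremes.
noisy = image.copy()
hits = rng.random(image.shape) < 0.10
noisy[hits] = rng.choice([0.0, 255.0], size=hits.sum())

mean_restored = uniform_filter(noisy, size=3)    # sample mean: blurs, not robust
median_restored = median_filter(noisy, size=3)   # robust to impulsive outliers

# The median filter suppresses the impulses far better than the mean,
# at the cost of some smoothing of genuine edges.
```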
The linear model takes the form y = Xᵀβ + e, where e is N(0, σ²). The parameter estimates may be obtained using the least squares approach. Toward that end, we minimize the functional Λ = (y − Xᵀβ)ᵀ(y − Xᵀβ) with respect to every unknown parameter (β₁, β₂, …, βₚ) of the vector β. The solution of the normal equations ∂Λ/∂βⱼ = 0 yields the LS estimator. Under the Gaussian assumption, this leads to the usual F-statistic. The approach remains valid for small to moderate deviations from the Gaussian distribution, as can be verified by extensive simulations; many were performed by the authors and by some of the senior author's doctoral students. One should also consult the reference.
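A minimal numerical sketch of the LS solution, written in the common y = Xβ convention; the design-matrix dimensions and parameter values are illustrative.

```python
import numpy as np

def lse(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Least squares estimate: solve the normal equations (X^T X) beta = X^T y."""
    # np.linalg.lstsq is numerically preferable to forming (X^T X)^{-1} directly.
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta_hat

# Example: y = X beta + e with Gaussian e ~ N(0, sigma^2 I).
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + 0.1 * rng.standard_normal(100)
print(lse(X, y))  # close to beta_true
```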
In the presence of impulsive noise, the LS approach is no longer applicable, and alternate techniques must be used. We consider two situations. In the first, the noise contamination is not severe, and the parameters of the linear model are estimated using the nonrobustized version of the Robbins–Monro stochastic approximation (RMSA) algorithm. In the second, the background noise is severe, and we use the robustized version of the RMSA estimator.
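A sketch of a Robbins–Monro-type recursion for the linear model is shown below. The a/n gain sequence and the clipping nonlinearity used to robustize it are common textbook choices and are assumptions here; the book's RMSA variants may use different gains and influence functions.

```python
import numpy as np

def rmsa(X, y, robust=False, clip=1.0, a=0.5):
    """Robbins-Monro-type recursive estimate of beta in y = X beta + e.

    With robust=True, the residual passes through a clipping (limiter)
    nonlinearity, one simple way to robustize the recursion against
    impulsive noise.
    """
    beta = np.zeros(X.shape[1])
    for n, (x_n, y_n) in enumerate(zip(X, y), start=1):
        r = y_n - x_n @ beta                # innovation (residual)
        if robust:
            r = np.clip(r, -clip, clip)     # bound the influence of outliers
        beta = beta + (a / n) * r * x_n     # gain a/n satisfies the RM conditions
    return beta
```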
One of the most interesting problems in image processing and computer vision is the detection of specific patterns or objects. The dimensionality of the problem is related to the primary needs of the experiment. For example, the problem is referred to as two-dimensional object detection in satellite picture processing problems and related areas. In this case, what is available is merely a projection of the object on a two-dimensional plane. Three-dimensional object detection relates to the case where multiple projections of the object are available, and one has to make a decision regardless of the viewing position. Examples of the latter class of problems are active vision problems such as those encountered in robotics, where the robot hand is to be directed to specific locations depending on the presence of targeted objects that are actively sought by imaging sensors, such as on-board cameras in the case of a moving robot. The shape of the object is presumably known and stored in a reference database.
The primary aim of such procedures for the detection of two-dimensional objects embedded in a noisy environment is satisfactory performance over a wide range of noise conditions, in contrast to traditional methods that rely on concepts such as the matched filter.
In some applications, such as feature detection, the initial step before detection is the segmentation of the image into various regions to separate the feature from the background; this procedure is commonly referred to as image segmentation. Depending on whether there are single or multiple features, the result is a partition of the image into a number of homogeneous regions, with each pixel of the image assigned to one of them. Typical criteria of region homogeneity are gray level intensity, color, and texture. Hence, image segmentation can be regarded as scene classification with respect to such criteria. The process is complicated most of the time by two problems: the nonuniformity of the gray level intensity of the image feature regions and the loss of contrast in some of the regions.
A popular approach to segmentation is based on region growing, which merges small uniform regions into larger ones without violating the uniformity of the combined regions. The result of the merging process depends on a suitable uniformity criterion; some techniques in this area are based on estimation theory. Region-based segmentation procedures fall into three basic categories: pure splitting, pure merging, and split-and-merge.
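A minimal region-growing sketch follows, using distance from the running region mean as the uniformity criterion; this particular criterion and the 4-neighbor connectivity are illustrative choices, not the specific procedures developed in the text.

```python
from collections import deque
import numpy as np

def grow_region(image: np.ndarray, seed: tuple, tol: float) -> np.ndarray:
    """Grow a region from a seed pixel; a neighbor joins while the region's
    gray-level uniformity (|pixel - region mean| <= tol) is not violated."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(image[seed]), 1
    queue = deque([seed])
    while queue:
        i, j = queue.popleft()
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-neighborhood
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and not mask[ni, nj]:
                if abs(image[ni, nj] - total / count) <= tol:
                    mask[ni, nj] = True
                    total += float(image[ni, nj])
                    count += 1
                    queue.append((ni, nj))
    return mask
```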
The philosophical approaches pursued in this book can be divided into two groups: imaging problems involving relatively high signal-to-noise ratio (SNR) environments and problems associated with environments in which the images are corrupted by severe, usually non-Gaussian noise. The former class of problems led to the development of numerous approaches involving the so-called experimental design techniques. The latter class of problems was addressed using techniques based on partition tests (Kurz, chapter III; Kersten and Kurz). The material in this book is based on experimental design techniques; it grew out of a graduate course offered by the senior author. The book covers the basic notions of experimental design techniques. It is hoped that, in addition to serving as a text for a graduate course, the book will generate interest among imaging engineers and scientists, leading to further development of algorithms and techniques for solving imaging problems.
The basic problems addressed in the book are line and edge detection, object detection, and image segmentation. The class of test statistics used is mainly based on various forms of the linear model involving analysis of variance (ANOVA) techniques in the framework of experimental designs. Though the statistical model is linear, the actual operations involving imaging data are nonlinear.
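As a representative form (the specific designs elaborated in later chapters generalize this), the one-way single-factor version of the ANOVA linear model can be written as follows.

```latex
% One-way (single-factor) linear model underlying the ANOVA tests:
% observation j under treatment (factor level) i
y_{ij} = \mu + \tau_i + e_{ij}, \qquad e_{ij} \sim N(0, \sigma^2),
% with the hypothesis of "no treatment effect" tested by the F-statistic:
H_0 : \tau_1 = \tau_2 = \cdots = \tau_k = 0.
```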
The application of the linear model in image processing, as developed by Aron and Kurz, involves the interpretation of the experimental data in terms of effects or treatments. Thus, the initial stage is always the selection of the important features, that is, the factors to be taken into account and eventually interpreted following the results of a statistical test based on the linear model. The next stage is the formulation of the hypotheses to be tested, based on the model that best fits the objectives, the selected factors, and the available data. Finally, the significance to be attached to the eventual results is delineated by means of confidence intervals.
Test statistics based on the theory of the ANOVA within the context of experimental design have been shown to maximize power for all alternatives among all tests invariant with respect to shifting, scaling, and orthogonal transformation of the data. Used in this context, they are generally referred to as uniformly most powerful (UMP) invariant tests.
Several books are devoted to the subject of the ANOVA and it would be rather meaningless to dwell on the theory in this chapter. Instead, we concentrate on the subject of applying ANOVA in image processing problems by providing simple steps to be followed to extract meaningful information from the available data.
In image processing problems, it is commonly required that certain basic patterns be extracted from a noisy and/or complex scene. The digital image processing techniques considered in this book generally involve two-dimensional data, which leads to applications in a broad spectrum of problems such as picture processing, medical data interpretation, underwater and earth sounding, trajectory detection, radargram enhancement, etc. If the noise conditions are favorable, involving a high signal-to-noise ratio (SNR) and Gaussian corrupting noise with independent identically distributed (i.i.d.) samples (pixels), the classical techniques based on matched filter theory are applicable. On the other hand, even a small deviation from the Gaussian assumption, or variability of the SNR over the image, severely degrades the performance of the matched filter. In image formation by certain optical systems, such as infrared sensors and detectors, unfavorable noise conditions prevail. As a matter of fact, the distribution function of image pixels contaminated by noise is seldom known in imaging problems.
Another difficulty associated with the processing of large two-dimensional images is that the SNR may vary significantly from region to region. In this situation the use of a simple thresholded matched filter yields false alarm rates and probabilities of detection that vary unpredictably over the scene.
One of the central questions of science is: how are complex things made from simple things? In many cases larger systems are more complicated than their smaller subsystems. In biology and chemistry the issue is how to understand large molecules in terms of atoms. In atomic physics one may strive to understand the properties of many-electron systems in terms of single-electron properties. The general theme is the interdependency of subsystems, or ‘correlation’.
Correlation may be regarded as a conceptual bridge from the properties of individuals to the properties of groups or families. In atoms and molecules correlation occurs because electrons interact with one another – the electrons are interdependent. This electron correlation determines much of the structure and dynamics of many-electron systems, i.e., how complex electronic systems are built from single electrons. Complexity is the more significant idea, but complexity is seldom, if ever, fully understood. Correlation is the key to complexity.
Understandably, much has been done on correlation in static systems. There are many excellent methods and computer codes for evaluating the energies and wavefunctions of complex atomic and molecular systems. However, the dynamics of these many-electron systems is less well understood. Hence, the dynamics of electron correlation is a central theme of this book.
The dynamics of electron correlation may affect single-electron transitions. However, this effect is sometimes difficult to separate from other effects.
This introductory chapter begins with a review of uncorrelated classical probabilities and then extends these concepts to correlated quantum systems. This is done both to establish notation and to provide a basis for those who are not experts to understand material in later chapters.
Probability of a transition
Seldom does one know with certainty what is going to happen on the atomic scale. What can be determined is the probability P that a particular outcome (i.e. atomic transition) will result when many atoms interact with photons, electrons or protons. A transition occurs in an atom when one or more electrons jump from their initial state to a different final state in the atom. The outcome of such an atomic transition is specified by the final state of the atom after the interaction occurs. Since there are usually many atoms in most systems of practical interest, we can usually determine with statistical reliability the rate at which various outcomes (or final states) occur. Thus, although one is unable to predict what will happen to any one atom, one may determine what happens to a large number of atoms.
1 Single particle probability
A simple analogy is the tossing of a coin or a die. Tossing a coin is analogous to an interaction with an atom. For a simple coin there are two outcomes: after the toss a given side of the coin (‘heads’) either occurs or it does not.
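In symbols, under the assumption of a fair coin and of N independent, identically prepared atoms:

```latex
% For a fair coin the two outcomes are equally likely:
P(\mathrm{heads}) = P(\mathrm{tails}) = \tfrac{1}{2}.
% Analogously, if each of N atoms undergoes a given transition with
% probability P, the expected number of observed transitions is
\langle n \rangle = N\,P,
% which becomes statistically reliable for large N even though the
% outcome for any single atom cannot be predicted.
```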