Quantum mechanics emerged as a natural extension of classical mechanics. As physics probed the microscopic realm, it was arguably almost impossible not to discover quantum mechanics. The spectra of atoms, the blackbody spectrum, the photoelectric effect and the behaviour of particles passing through an array of slits all had characteristically non-classical features. These phenomena were waiting for a theory to explain them. That does not diminish the huge scientific insights of the founders of the subject. In physics, the great accomplishments come more often than not from insight rather than foresight. Knowing what will be the right physics 50 years into the future is a game of speculation. Recognising what is the important physics in the present, and being able to explain it, is the work of scientific insight. Thus, whereas we might say Democritus had great foresight millennia ago to envision the discrete nature of matter, it was Albert Einstein, Max Planck, Niels Bohr, Erwin Schrödinger, Werner Heisenberg, Paul Dirac, Max Born and Wolfgang Pauli who had the insight to develop quantum mechanics. And since their foundational work, our understanding of the physical world has grown as never before.
Chapter 11 is a brief chapter on seldom encountered legal issues: ruses and perfidy. Acts that invite an enemy’s confidence that he is entitled to protection under the rules of LOAC, with an intent to betray that confidence, constitute the crime of perfidy. It has been a codified war crime since 1907, though seldom prosecuted. A false flag of truce, or informing an opponent that the war is over so that he will come out into the open, is perfidy, as is fighting in the enemy’s uniform. Feigning being wounded, however, is not perfidy, because it does not invite an enemy’s confidence. Related examples from recent years include operations in Colombia against the FARC and in the Falklands against the British. Ruses, on the other hand, are lawful: deceit employed in the interest of military operations for the purpose of misleading the enemy. They do not invite the confidence of the enemy with respect to the protection of LOAC. Dummy artillery pieces, inflatable “tanks” and mock operations by nonexistent troops are all lawful.
Chapter 15 draws on much of what has gone before: targeting of both objects and human beings, core principles, individual status, and more. Artificial intelligence is described as applied to autonomous weapons, then as applied to LOAC’s core principles – difficult values for autonomous weapons to meet. To whom does criminal liability attach should such weapons go awry? Designers? Builders? Users? These remain difficult LOAC issues that this chapter examines. Drones and their military use are discussed, including the American CIA’s use. Since CIA personnel are civilians, their involvement in targeting in armed conflict is unlawful, an issue discussed in this chapter. Targeted killing and its lawfulness are examined at length, as well as its relationship to assassination, an illegal act under US law. Targeted killing’s weak link, who decides which individuals should be killed, is also discussed. In the Cases and Materials section, the wrongful shooting down of an Iranian civilian airliner in 1988, which killed 290, is examined as a case study of autonomous weapons gone bad.
Building on what we have discussed in the previous two chapters, we now turn to the problem of adding two angular momenta. For example, we might wish to consider an electron which has both an intrinsic spin and some orbital angular momentum, as in a real hydrogen atom. Or we might have a system of two electrons and wish to know what possible values the total spin of the system can take.
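As a concrete illustration of what such a calculation yields, here is the standard counting for two spin-1/2 particles, a textbook result quoted without derivation:

```latex
% Two electrons: s_1 = s_2 = 1/2. The total spin quantum number s
% runs in integer steps from |s_1 - s_2| to s_1 + s_2:
\[
  |s_1 - s_2| \le s \le s_1 + s_2
  \quad\Longrightarrow\quad
  s \in \{0, 1\},
\]
% i.e. the four product states reorganise into one singlet (s = 0)
% and three triplet (s = 1) states:
\[
  2 \otimes 2 = 1 \oplus 3 .
\]
```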
Quantum mechanics describes the behaviour of matter and light at the atomic scale, where physical systems behave very differently from what we experience in everyday life – the laws of physics of the quantum world are different from the ones we have learned in classical mechanics. Despite this ‘unusual’ behaviour, the principles of scientific inquiry remain unchanged: the only way we can access natural phenomena is through experiment; therefore our task in these first chapters is to develop the tools that allow us to compute predictions for the outcome of experiments starting from the postulates of the theory. The new theory can then be tested by comparing theoretical predictions to experimental results. Even in the quantum world, computing and testing remain the workhorses of physics.
Using the commutation relations for the components of the angular momentum, we have found that the allowed eigenvalues of $\hat{J}^2$ are $\hbar^2 j(j+1)$, where $j = 0, \tfrac{1}{2}, 1, \tfrac{3}{2}, \ldots$. For each value of $j$, the eigenvalues of $\hat{J}_z$ are $m\hbar$, with $m = -j, -j+1, \ldots, j-1, j$.
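For orientation, a minimal worked instance of these eigenvalue formulas (standard values, independent of any particular physical system):

```latex
% For j = 1, the eigenvalue of J^2 is hbar^2 j(j+1) = 2 hbar^2, and
% J_z takes the three values m = -1, 0, +1 in units of hbar:
\[
  \hat{J}^2 \,|1, m\rangle = 2\hbar^2 \,|1, m\rangle ,
  \qquad
  \hat{J}_z \,|1, m\rangle = m\hbar \,|1, m\rangle ,
  \quad m \in \{-1, 0, 1\}.
\]
```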
Chapter 12 explores rules of engagement. ROE are neither LOAC nor mentioned in the Geneva Conventions, but they are the important means by which commanders control the use of deadly force by their subordinates. First appearing in the US–North Korean conflict of the 1950s, they initially made little sense to warfighters. Their formulation is explained step by step in this chapter. ROE never limit the exercise of self-defense, and they are not tactical in nature – they never instruct combatants in how a mission should be executed. Instead, they restrict the use of force in certain circumstances, putting some targets off-limits and restricting the use of force in some locations. Junior officers seldom see the full ROE and troops never do; instead they are given greatly distilled versions, often on pocket-size cards. Before a US infantryman may fire in Afghanistan, for example, his ROE require that he observe “hostile intent” or actually be the target of a “hostile act,” terms explained in this chapter. Other targets must be positively identified. It is understandable that infantrymen dislike ROE, but they are essential to the commander’s operational plans and helpful in the observance of LOAC.
This chapter surveys least squares and related methods for designing inverse, prediction, and interpolation filters. The name Wiener filter is associated with these methods. Correlations and autocorrelations play a central role in forming the normal equations for the filter coefficients. The underlying theory assumes that true correlations are available, but in practice correlations and autocorrelations are estimated from data. We will find the surprising result that a least squares inverse to a linear filter can be found from the autocorrelation of its impulse response, without knowledge of the impulse response itself. The least squares inverse filter is closely related to a filter for predicting future values of a time series. Similar ideas may be used to design interpolation filters and to develop linear filter models for time series. This chapter also reviews the correlation filters used in systems such as radar, sonar, and exploration seismology, and discusses how correlation filtering is the essential element enabling navigation with the Global Positioning System (GPS).
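A minimal numerical sketch of this result, assuming NumPy and SciPy; the three-term wavelet and the names `wavelet` and `n_inv` are illustrative choices, not the book’s. The normal equations have Toeplitz structure, so `scipy.linalg.solve_toeplitz` solves them directly from the autocorrelation:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

# An illustrative minimum-phase wavelet to invert. In practice only its
# autocorrelation would be known, which is the point of the result.
wavelet = np.array([1.0, -0.5, 0.25])

# One-sided autocorrelation r[m] = sum_t w[t] * w[t+m], zero-padded to
# the length of the desired inverse filter.
r = np.correlate(wavelet, wavelet, mode="full")[len(wavelet) - 1:]
n_inv = 16
r = np.concatenate([r, np.zeros(n_inv - len(r))])

# Normal equations R f = g, with Toeplitz R built from the autocorrelation
# and g a unit spike: f is the least squares (spiking) inverse filter,
# determined up to an overall scale factor.
g = np.zeros(n_inv)
g[0] = 1.0
f = solve_toeplitz(r, g)

# Check: convolving the wavelet with its inverse approximates a unit
# spike at zero lag.
print(np.round(np.convolve(wavelet, f)[:6], 4))
```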
A filter may be a physical system or a computational algorithm with an input and an output. If the filter is linear, then the relationship between input and output is the same regardless of the amplitude of the input. Throughout this chapter and the entire book we consider only time-invariant linear filters, that is, linear filters whose properties do not change with time. As a consequence, linear systems and filters obey a superposition principle: when two inputs are added together, the output is the sum of the outputs that would result from the separate inputs. Another consequence is that a single-frequency sinusoidal input produces a sinusoidal output at exactly the same frequency and no other. As a result, linear systems and filters are preferred models for physical processes and for data processing because they allow analysis and implementation in both the frequency and time domains. The DFT, presented in an earlier chapter, is the main tool for frequency domain analysis and implementation. This chapter develops important elements used in time domain implementation: digital filter equations and discrete convolution; the transfer function; and the impulse response. These concepts are extended to the properties of a cascade of linear filters (the successive application of several different filters to a time series) and are used to define the concept of an inverse filter. Example applications of linear filters in data processing, as models of physical processes, and in methods for finding practical inverse filters appear in later chapters.
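A short sketch of these time domain elements, assuming NumPy; the two illustrative filters below (a running average and a first difference) are mine, not the book’s:

```python
import numpy as np

x = np.array([0.0, 1.0, 0.0, 0.0, 0.0])   # unit impulse, delayed one sample
h1 = np.array([0.5, 0.5])                  # two-point running average
h2 = np.array([1.0, -1.0])                 # first difference

# A cascade (h1 followed by h2) is equivalent to a single filter whose
# impulse response is the convolution h1 * h2, because convolution is
# associative.
y_cascade = np.convolve(np.convolve(x, h1), h2)
h = np.convolve(h1, h2)
y_single = np.convolve(x, h)

print(np.allclose(y_cascade, y_single))    # True
print(h)                                   # impulse response of the cascade
```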
The power spectrum describes how the variance of a time series is distributed over frequency. The variance (mean squared signal value per time sample) is a broadband statistic measuring power, while the power spectrum at a given frequency is a statistic measuring the power in a narrow frequency band. Similarly, the coherence spectrum describes how the broadband correlation coefficient between two time series varies with frequency. The DFT is the main tool for estimating both the power and coherence spectra and will be our main focus, but we also compare the DFT (periodogram) results with estimates made using the prediction error filter (PEF) developed in an earlier chapter. Using examples, we show that PEF estimates tend to be smooth, but the choice of PEF order introduces some variability in these estimates. Periodogram spectrum estimates tend to be erratic but can be tamed at the expense of diminished frequency resolution. We describe standard methods of assigning confidence intervals to periodogram spectrum estimates.
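The contrast between raw and variance-reduced periodogram estimates can be seen in a few lines, assuming NumPy and SciPy; the test signal (a 12.5 Hz sinusoid in unit-variance white noise) is an illustrative choice:

```python
import numpy as np
from scipy.signal import periodogram, welch

rng = np.random.default_rng(0)
fs = 100.0                                  # sampling frequency, Hz
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 12.5 * t) + rng.standard_normal(t.size)

# Raw periodogram: fine frequency resolution but erratic (high variance).
f_raw, p_raw = periodogram(x, fs=fs)

# Segment averaging (Welch's method) tames the variance at the expense
# of frequency resolution, as described above.
f_avg, p_avg = welch(x, fs=fs, nperseg=256)

print(f_raw[np.argmax(p_raw)], f_avg[np.argmax(p_avg)])  # both near 12.5 Hz
```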
Chapter 20 studies gas, biological, chemical, and nuclear weapons, looking at each of these prohibited weapons in turn. Poisonous gases were first banned in 1925 – but only their use, not their production or sale. There have been numerous uses of poisonous gases, including today in Syria. Biological and toxin weapons were banned in 1972, although their use goes unmentioned in the 1972 Convention. The 1993 Chemical Weapons Convention, with a strict verification process, has been more effective. Russia’s continued poisoning of state critics is detailed as an example of state avoidance of the 1993 Convention. CS (“tear gas”) use is limited by the 1993 Convention but it is still employed as a riot control measure. There is no international law that makes nuclear weapons unlawful. Of the several multinational treaties that bear on nuclear weapons, the most significant is the 2017 Treaty on the Prohibition of Nuclear Weapons, which, so far, has but forty-four States Parties. None of the nine nuclear powers has joined, of course. In short, gas, biological, chemical, and nuclear weapons are abhorred and condemned until they are used; then the international community looks the other way.
There are many systems in nature that are made up of several particles of the same species. These particles all have the same mass, charge and spin, and need to be treated as identical particles. For instance, the electrons in an atom are identical particles. Identical particles cannot be distinguished by measuring their intrinsic properties. While this is also true for classical particles, the laws of classical mechanics allow us to follow the trajectory of each individual particle, i.e. its time evolution in space.
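A preview of where this observation leads (the symmetrisation postulate, a standard result quoted here without derivation): the state of two identical particles can change by at most a sign under exchange of the particle labels.

```latex
% Exchanging two identical particles can change the state by at most a
% sign: +1 for bosons, -1 for fermions.
\[
  \psi(x_2, x_1) = \pm\, \psi(x_1, x_2),
  \qquad
  \begin{cases}
    +1 & \text{bosons},\\
    -1 & \text{fermions}.
  \end{cases}
\]
```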
The least squares method is among the most widely used data analysis and parameter estimation tools. Its development is associated with the work of Gauss, the nineteenth-century German mathematician, who introduced many innovations in computation, geophysics, and mathematics, most of which continue to be in wide use today. We will introduce least squares from two viewpoints: one based on probability arguments, treating the data as contaminated with random errors, and the other based on linear algebra, involving the solution of simultaneous linear equations. We will employ these two viewpoints to develop the least squares approach, using, as an example, the fitting of polynomials to a time series. While this might appear to be a departure from linear filters and related topics, we show in subsequent chapters that least squares in fact serves as a powerful digital filter design tool. We will also find that a stochastic viewpoint (in which time series values are considered to be random variables) leads to the use of least squares in the development of prediction and interpolation filters.
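As a minimal sketch of the polynomial-fitting example, assuming NumPy; the quadratic trend and noise level are illustrative, not taken from the book:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 101)
y = 2.0 - 3.0 * t + 5.0 * t**2 + 0.1 * rng.standard_normal(t.size)

# Vandermonde matrix for a degree-2 polynomial: columns 1, t, t^2.
G = np.vander(t, 3, increasing=True)

# Solve the least squares problem min ||G c - y||^2.
coeffs, *_ = np.linalg.lstsq(G, y, rcond=None)

print(np.round(coeffs, 2))                  # close to [2., -3., 5.]
```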