A dynamic programming problem is an optimization problem in which decisions have to be taken sequentially over several time periods. To make the problem non-trivial, it is usually assumed that periods are “linked” in some fashion, viz., that actions taken in any particular period affect the decision environment (and thereby, the reward possibilities) in all future periods. In practice, this is typically achieved by positing the presence of a “state” variable, representing the environment, which restricts the set of actions available to the decision-maker at any point in time, but which also moves through time in response to the decision-maker's actions. These twin features of the state variable provide the problem with “bite”: actions that look attractive from the standpoint of immediate reward (for instance, a Caribbean vacation) might have the effect of forcing the state variable (the consumer's wealth or savings) into values from which the continuation possibilities (future consumption levels) are not as pleasant. The modelling and study of this trade-off between current payoffs and future rewards is the focus of the theory of dynamic programming.
In this book, we focus on two classes of dynamic programming problems—Finite-Horizon Markovian Dynamic Programming Problems, which are the subject of this chapter, and Infinite-Horizon Stationary Discounted Dynamic Programming Problems, which we examine in the next chapter.
The principal complication that arises in extending the results of the last chapter to dynamic programming problems with an infinite horizon is that infinite-horizon models lack a “last” period; this makes it impossible to use backwards induction techniques to derive an optimal strategy. In this chapter, we show that general conditions may, nonetheless, be described for the existence of an optimal strategy in such problems, although the process of actually deriving an optimal strategy is necessarily more complicated. A final section then studies the application of these results to obtaining and characterizing optimal strategies in the canonical model of dynamic economic theory: the one-sector model of economic growth.
Description of the Framework
A (deterministic) stationary discounted dynamic programming problem (henceforth, SDP) is specified by a tuple {S, A, Φ, f, r, δ}, where
S is the state space, or the set of environments, with generic element s. We assume that S ⊂ ℝⁿ for some n.
A is the action space, with typical element a. We assume that A ⊂ ℝᵏ for some k.
Φ: S → P(A) is the feasible action correspondence that specifies for each s ∈ S the set Φ(s) ⊂ A of actions that are available at s.
f : S × A → S is the transition function for the state, which specifies for each current state-action pair (s, a) the next-period state f(s, a) ∈ S.
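To make the framework concrete, the sketch below iterates the Bellman operator for a discretized SDP, taking the one-sector growth model mentioned above as the example. It assumes the standard completion of the tuple, with r : S × A → ℝ the one-period reward function and δ ∈ (0, 1) the discount factor, neither of which is spelled out in the excerpt; the log utility, Cobb-Douglas technology, grid sizes and δ = 0.95 are likewise illustrative choices, not values taken from the text. Because an infinite-horizon problem has no last period, the code does not work backwards; it simply applies the Bellman operator repeatedly until the value function stops changing.

import numpy as np

delta = 0.95                          # discount factor (the delta of the tuple)
alpha = 0.3                           # technology parameter in the transition f
S = np.linspace(0.05, 10.0, 200)      # discretized state space: current output levels
fracs = np.linspace(0.01, 0.99, 99)   # feasible actions Phi(s): consume a = frac * s

def f(s, a):
    # transition function: save s - a, produce (s - a)**alpha next period
    return (s - a) ** alpha

def r(s, a):
    # one-period reward: log utility of consumption a
    return np.log(a)

V = np.zeros(len(S))                  # initial guess for the value function
for _ in range(2000):                 # repeatedly apply the Bellman operator
    TV = np.empty_like(V)
    for i, s in enumerate(S):
        a = fracs * s                                       # feasible consumptions at s
        candidates = r(s, a) + delta * np.interp(f(s, a), S, V)
        TV[i] = candidates.max()                            # Bellman maximisation
    if np.max(np.abs(TV - V)) < 1e-8:                       # contraction has converged
        break
    V = TV

The maximising choice of a at each s, recorded at the final iteration, gives a stationary optimal strategy of the kind whose existence is discussed in this chapter.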
I remember that when someone had started to teach me about creation and annihilation operators, that this operator creates an electron, I said ‘How do you create an electron? It disagrees with conservation of charge’.
R. P. Feynman, Nobel lecture
The real Klein-Gordon field
We considered in Chapter 2 the simplest relativistic equation, the Klein-Gordon equation, as a single-particle equation, and found the following difficulties with it: (i) the occurrence of negative energy solutions; (ii) the current jμ does not give a positive definite probability density ρ, as the Schrödinger equation does. For these reasons we must abandon the interpretation of the Klein-Gordon equation as a single-particle equation. (Historically, this was the motive which led Dirac to his equation.) Can any sense then be made out of the Klein-Gordon equation? After all, spin 0 particles do exist (π, K, η, etc.), so, surely, there must be some interpretation of the equation which makes sense.
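For reference, both difficulties can be displayed explicitly. The formulas below are the standard ones, written in natural units (ħ = c = 1) with metric signature (+, −, −, −) and up to an overall normalisation of the current; these conventions are assumptions here, since the excerpt does not fix them:

\[
(\partial_\mu \partial^\mu + m^2)\,\varphi = 0, \qquad E = \pm\sqrt{\mathbf{p}^{2} + m^{2}},
\]
\[
j^{\mu} = i\bigl(\varphi^{*}\,\partial^{\mu}\varphi - \varphi\,\partial^{\mu}\varphi^{*}\bigr), \qquad
\rho = j^{0} = i\Bigl(\varphi^{*}\,\frac{\partial\varphi}{\partial t} - \varphi\,\frac{\partial\varphi^{*}}{\partial t}\Bigr).
\]

The ± sign in the energy is difficulty (i); and since φ and ∂φ/∂t may be prescribed independently at a given instant, ρ can be made negative, which is difficulty (ii).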
What we shall do first is to consider the Klein-Gordon equation as describing a field φ(x). Since the equation has no classical analogue, φ(x) is a strictly quantum field, but nevertheless we shall begin by treating it as a classical field, as we did in the last chapter, and shall find that the negative energy problem does not then exist. We shall then take seriously the fact that φ(x) is a quantum field by recognising that it should be treated as an operator, which is subject to various commutation relations analogous to those in ordinary quantum mechanics.
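As a preview of that quantisation step, the canonical procedure treats the field exactly like a collection of coordinates in ordinary quantum mechanics. A minimal sketch, assuming the usual real scalar Lagrangian density ℒ = ½∂_μφ ∂^μφ − ½m²φ² in natural units:

\[
\pi(\mathbf{x}, t) = \frac{\partial \mathcal{L}}{\partial \dot{\varphi}} = \dot{\varphi}(\mathbf{x}, t),
\]
\[
[\varphi(\mathbf{x}, t), \pi(\mathbf{y}, t)] = i\,\delta^{3}(\mathbf{x} - \mathbf{y}), \qquad
[\varphi(\mathbf{x}, t), \varphi(\mathbf{y}, t)] = [\pi(\mathbf{x}, t), \pi(\mathbf{y}, t)] = 0,
\]

the direct analogue of [q, p] = iħ for a particle, with the Kronecker delta replaced by a Dirac delta because the "coordinates" are now labelled by the continuous position x.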
If the doors of perception were cleansed everything would appear as it is, infinite.
William Blake, The Marriage of Heaven and Hell
We have seen in previous chapters that integration over internal loops in Feynman diagrams gives divergent results. Since our approach to field theory is based on perturbation theory, however, it is imperative that we make sense of the perturbation series – and that series is one in which higher order terms involve more and more internal integrations, and therefore the possibility of increasing degrees of divergence. It is obvious that, in order for a field theory to be at all sensible or believable, the problems raised by the divergences must be satisfactorily resolved. In this chapter we show how this is done for φ⁴ theory, electrodynamics (QED) and Yang-Mills theories (QCD). Our general approach is to proceed order by order in perturbation theory (actually in the loop expansion – see below), and show that at each order the quantities of physical interest (masses, coupling constants, Green's functions) can be renormalised to finite values. Then (for QED and QCD) we show that this is, in principle, possible to all orders; these theories are therefore renormalisable. (So is φ⁴ theory, but we do not prove that.) We begin with φ⁴ theory.
Divergences in φ⁴ theory
We saw in Chapter 6 that Δ(x − x) = Δ(0) is a divergent quantity, which modifies the free-particle propagator and contributes to the self-energy.
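To see the divergence explicitly, write the propagator at coincident points as a momentum integral. The expression below uses the standard momentum-space Feynman propagator; overall factors and sign conventions differ between texts, so treat it as a sketch of the ultraviolet behaviour rather than a quotation of the book's own formula:

\[
\Delta(0) = \int \frac{d^{4}k}{(2\pi)^{4}}\, \frac{1}{k^{2} - m^{2} + i\epsilon}.
\]

For large |k| the integrand falls off only as 1/k² while the volume element grows as k³ dk, so the integral diverges quadratically; this is the kind of ultraviolet divergence that renormalisation must absorb.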
For many years it was speculated that the ultimate state of subdivision of matter consisted of discrete, indivisible particles termed atoms. In the early nineteenth century John Dalton gave quantitative support to these ideas by assigning a self-consistent set of masses to atoms of elements. Elements are substances which cannot be decomposed by ordinary types of chemical change, or made by chemical union. However, subsequent work by Faraday, Crookes and Goldstein demonstrated the electrical nature of atoms and showed that they are divisible. The simplest picture of an atom now shows a central nucleus surrounded by a cloud of electrons.
Electrons
When a neutral atom is supplied with sufficient energy it will ionise, yielding positively and negatively charged fragments. The former, the positive ion, is characteristic of the particular parent atom, whilst the negative fragment, the electron, is the same irrespective of the parent atom. The charge and mass of electrons can be determined by studying their behaviour in electric and magnetic fields. Early experiments revealed the ratio of charge to mass, e/m, which for all electrons is 1.7588 × 10⁸ coulombs per gram (C g⁻¹).
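One standard arrangement for measuring this ratio is the crossed-field (Thomson-type) experiment; the excerpt does not spell out the method, so the relations below are offered only as an illustrative sketch. An electric field E and a magnetic field B are first balanced so that the beam is undeflected, which fixes the speed v; the magnetic field alone then bends the beam into a circle of radius r:

\[
eE = evB \;\Rightarrow\; v = \frac{E}{B}, \qquad
r = \frac{m v}{e B} \;\Rightarrow\; \frac{e}{m} = \frac{v}{B r} = \frac{E}{B^{2} r}.
\]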
To evaluate the mass of an electron, experiments were carried out to determine the applied potential required to prevent oil droplets carrying negative charges from settling due to gravity.
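The oil-drop experiment yields the electronic charge itself; combining it with the charge-to-mass ratio quoted above then gives the electron mass. Using the modern accepted value e ≈ 1.602 × 10⁻¹⁹ C (a value not quoted in the excerpt):

\[
m_{e} = \frac{e}{e/m} = \frac{1.602 \times 10^{-19}\ \mathrm{C}}{1.7588 \times 10^{8}\ \mathrm{C\,g^{-1}}} \approx 9.11 \times 10^{-28}\ \mathrm{g}.
\]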
Our knowledge about the elemental composition of the earth is restricted to the composition of the crust, atmosphere and oceans. Although the mantle and core represent over 99% of the earth's mass, their elemental compositions are not accurately known. Hence natural abundances of elements quoted in Table 3.1 refer only to the crust. Since the crust (35 km average thickness under continents and 10 km under oceans) is not homogeneous and different parts of the earth contain different minerals, these values are averages of a large number of estimations. The relative homogeneity of ocean waters (Tables 6.3 and 6.4) and the atmosphere (Table 3.2) renders their compositions far less variable for major components.
Oxygen is the major element in the crust, with silicon coming second in abundance. Formation processes may have enriched the crust with certain elements in comparison with the mantle and core. The water and air masses are very different in composition from the crust, and, although elemental abundances may vary slightly with water depth or atmospheric height, the overall relative proportions of the major elements remain almost constant in the oceans and in the lower and middle atmosphere.
Although many problems in environmental chemistry are related to the abundance of elements in the environment, the chemical behaviour and properties of elements are largely independent of their abundances.
The most important change that has been made for this edition is the addition of a chapter on supersymmetry. It was approximately twenty years ago that supersymmetry burst on the scene of high energy physics. Despite the fact that there is still almost no experimental evidence for this symmetry, its mathematical formulation continues to have appeal to many theoretical physicists justifying, I think, the inclusion of a chapter on supersymmetry in an introductory text. Beyond this, I have rewritten a few sections of the book and incorporated a large number of corrections. I am particularly grateful to Messrs Chris Chambers, Halvard Fausk, Stephen Lyle, Michael Ody, John Smith and Gerhard Soff for pointing out errors and misconceptions in the first edition. The impetus to prepare this second edition owes a lot to the encouragement and friendly advice of Rufus Neal of Cambridge University Press, to whom I should like to express my thanks. Finally, I should like to express my gratitude to Mrs Janet Pitcher for so expertly typing the new material for this edition.
All four authors of this book are involved in teaching and/or research in environmental science and ecology. In the course of this work, we have found no shortage of advanced books in specialised aspects of our subject, and the availability of teaching texts on specific aspects of environmental chemistry has improved markedly in recent years. There does, however, in our view continue to be a need for a basic book covering the chemistry necessary to comprehend more specialised books on chemical aspects of environmental science and ecology. This book is intended to fulfil that need.
In preparing the book, we have been struck by the enormous range of aspects of chemistry which are involved in studying the environment; all major branches of the subject are involved to some degree. This has made our task more difficult and explains the involvement of such a large band of authors, needed to give expert coverage to all the topics included. Although specific individuals have prepared first drafts of whole or part chapters, these have been reviewed by all other authors, and the finished work is a group effort.
The book is not aimed at specialist chemists, although we hope that they may find some of the more applied material useful. Rather, it is aimed at students specialising in environmental science or ecology, and requiring a grounding in basic chemical concepts to make chemical aspects of their studies accessible.