A common situation in physics is that in investigating phenomena on a certain distance scale, one sees no hint of those phenomena that happen at much shorter distance scales. In a classical situation this observation seems evident. For example, one can treat fluid dynamics without any knowledge of the atomic physics that generates the actual properties of the fluids. However, in a quantum field theory this decoupling of short-distance phenomena from long-distance phenomena is not self-evident at all.
Consider an e+e- annihilation experiment at a center-of-mass energy well below 10 GeV, the threshold for making hadrons containing the b-quark. There is, for practical (or experimental) purposes, no trace of the existence of this quark. However, the quark is present in Feynman graphs as a virtual particle, and can have an apparently significant effect on cross-sections. Our task in this chapter is therefore to prove what is known as the decoupling theorem. This states that a Feynman graph containing a propagator for a field whose mass is much greater than the external momenta of the graph is in fact suppressed by a power of the heavy mass. The physics at low energy is described by an effective low-energy theory, obtained by deleting all heavy fields from the original theory.
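Schematically (this display is for orientation only, not the text's own equation), the theorem asserts that for a Green's function of light fields with external momenta p much smaller than the heavy mass M,

```latex
\Gamma(p;\,M) \;=\; \Gamma_{\mathrm{eff}}(p) \;+\; O\!\left(\frac{p}{M}\right)^{k},
\qquad k > 0,
```

where \(\Gamma_{\mathrm{eff}}\) is computed in the effective low-energy theory with the heavy field deleted and the remaining couplings suitably adjusted.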
It is important to show that renormalization of a gauge theory can be accomplished without violating its gauge invariance. Gauge invariance is physically important; among other things it is used (via the Ward identities) to show that the unphysical states decouple ('t Hooft (1971a)).
In Chapter 9 we considered the case in which the basic Lagrangian of a theory is invariant under a global symmetry, as opposed to a gauge symmetry such as we will be investigating in this chapter. We showed that the counterterm Lagrangian is also invariant under the symmetry. Suppose now that the basic Lagrangian is invariant under a gauge symmetry. One might suppose that the counterterms are also invariant under the symmetry, just as for a global symmetry. This is not true, however, since the introduction of gauge fixing (as explained in Sections 2.12 and 2.13) destroys manifest gauge invariance of the Lagrangian. One might instead point out that the theory with gauge fixing is BRS invariant and deduce that the counterterms are BRS invariant. This deduction is false. To see this, we recall that an ordinary internal symmetry relates Green's functions with certain external fields to other Green's functions differing only by a change of symmetry labels. However, BRS symmetry relates a field to a composite field (2.13.1).
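For concreteness (these are the standard Yang-Mills formulae, quoted here for orientation rather than reproduced from the text), the BRS variation of the gauge field is nonlinear in the fields:

```latex
\delta_{\mathrm{BRS}} A_\mu^a = \partial_\mu c^a + g f^{abc} A_\mu^b c^c,
\qquad
\delta_{\mathrm{BRS}} c^a = -\tfrac{1}{2}\, g f^{abc} c^b c^c .
```

The variation of \(A_\mu^a\) is thus a composite operator, not a mere relabeling of fields, and this is why the global-symmetry argument for invariant counterterms does not carry over.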
In the previous chapters we set up renormalization theory in momentum space. In this chapter, we will give a treatment in coordinate space. Now, the utility of a momentum-space description, such as we gave in the earlier chapters, comes from the translation invariance of a problem. However, the momentum-space formulation rather obscures the fact that UV divergences arise from purely short-distance phenomena. Thus a coordinate-space treatment is useful from a fundamental point of view. There are also a number of situations, essentially external field problems, where a coordinate-space treatment is the most appropriate from a more practical point of view. A particular advantage is that the coordinate-space method makes it easy to see that the counterterms are the same as with no external field.
An important case, which we will treat in detail in this chapter, is that of thermal field theory at temperature T (Fetter & Walecka (1971), Bernard (1974), and references therein). There one works with imaginary time using periodic boundary conditions (period 1/T).
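To make the boundary condition explicit (standard formulae, given here for orientation rather than taken from the text): a bosonic field satisfies \(\phi(\tau + 1/T, \mathbf{x}) = \phi(\tau, \mathbf{x})\), so its Fourier expansion in imaginary time runs over the discrete Matsubara frequencies

```latex
\omega_n = 2\pi n T, \qquad n \in \mathbb{Z},
```

and loop integrals over the energy variable are replaced by sums \(T \sum_n\) over these frequencies.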
It is first necessary to work out the short-distance singularities of the free propagator. This we will do in Section 11.1. A number of forms of the propagator will be given, whose usefulness will become apparent when we treat some examples in Section 11.2.
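As a preview of the kind of result involved (the standard free-field expression, stated here for orientation rather than quoted from Section 11.1): the Euclidean propagator of a free massless scalar field in four dimensions is

```latex
\Delta(x) = \frac{1}{4\pi^2 x^2},
```

so the UV divergences of coordinate-space perturbation theory arise when products of such factors become non-integrable as \(x \to 0\).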
As we saw in Chapter 3, the renormalization procedure has considerable arbitrariness: the counterterm for a graph must cancel its divergence but may contain any amount of finite part. A rule for choosing the value of the counterterm we called a renormalization prescription. In one-loop order it was clear from the examples that a change in renormalization prescription can be cancelled by a change in the finite, renormalized couplings corresponding to each divergence. Thus a change in renormalization prescription does not change the theory but only the parametrization by renormalized coupling and mass. What is not so easy to see is that this property holds to all orders. This we will show in Section 7.1. The invariance of the theory under such transformations is called renormalization-group (RG) invariance.
A particularly useful type of change of renormalization prescription is to change the renormalization mass μ. Infinitesimal changes are conveniently described by a differential equation, called the renormalization-group equation, which is derived in Section 7.3. This leads to the concept of the effective momentum-dependent coupling. This concept is very useful in calculations of high-energy behavior, as explained in Section 7.4. The coefficients in the renormalization-group equation are called the renormalization-group coefficients and are important properties of a theory.
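For orientation (a generic form, given here for illustration; the precise equation is derived in Section 7.3), the renormalization-group equation for a renormalized n-point Green's function in, say, a massless theory reads

```latex
\left[\mu \frac{\partial}{\partial \mu}
 + \beta(g)\frac{\partial}{\partial g}
 - n\,\gamma(g)\right] G^{(n)}(p_1,\dots,p_n;\, g, \mu) = 0,
```

where \(\beta\) and \(\gamma\) are the renormalization-group coefficients referred to above.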
Most of the work in this book will be strictly perturbative. However, it is important not to consider perturbation theory as the be-all and end-all of field theory. Rather, it must be looked on only as a systematic method of approximating a complete quantum field theory, with the errors under control. So in this chapter we will review the foundations of quantum field theory starting from the functional integral.
The purpose of this review is partly to set out the results on which the rest of the book is based. It will also introduce our notation. We will also list a number of standard field theories which will be used throughout the book. Some examples are physical theories of the real world; others are simpler theories whose only purpose will be to illustrate methods in the absence of complications.
The use of functional integration is not absolutely essential. Its use is to provide a systematic basis for the rest of our work: the functional integral gives an explicit solution of any given field theory. Our task will be to investigate a certain class of properties of the solution.
For more details the reader should consult a standard textbook on field theory.
In this chapter I shall tie up a few loose ends and then have a look at the broader implications of all the detailed and complicated work I have described.
First I shall deal with quark confinement. The enormous accelerators at CERN and Fermilab have not produced quarks in any considerable numbers; in fact most people doubt whether any have been produced. So quarks are at least very difficult to extract from nucleons. They are bound (or confined) very strongly in them. Some eminent theoreticians think they are absolutely confined, others are not so sure. But, at least, they are confined to a very considerable extent. Yet when we examine the quarks inside a proton or neutron they appear to be moving around freely – ‘asymptotic freedom’, it is called. How can this be? The answer turned out to be easy, but surprising.
The first force field physicists learned about was gravity. Now the force between two gravitating objects decreases as we move them apart. It is a rapid decrease, getting smaller as the square of the distance between them. If we double the distance, the force is less by a factor of four. When the electrical force was investigated it was found to obey a similar law – the force between two electrical charges varies inversely as the square of the distance between them. There is a difference between the two fields: gravity is always attractive, while the electrical force can be either attractive or repulsive, depending on whether we are dealing with unlike or like charges. But whether it is attractive or repulsive it is always ‘inverse square’.
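The scaling just described is easy to check in a few lines of code (a toy illustration, not from the text; the field strength and distances are arbitrary):

```python
def inverse_square_force(strength, distance):
    """Force obeying an inverse-square law: F = strength / r**2."""
    return strength / distance**2

f_near = inverse_square_force(1.0, 1.0)  # force at unit separation
f_far = inverse_square_force(1.0, 2.0)   # force at double the separation

print(f_near / f_far)  # doubling the distance weakens the force by a factor of 4
```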
In December 1947 I was invited to a small conference in the Dublin Institute for Advanced Studies. The Dublin Institute had been founded by the Irish government before World War II. The Irish saw the plight of Jewish and liberal scientists in Central Europe at that time. Eamon de Valera, the Taoiseach, a mathematician, saw he could help some of the scientists, and his own country, at the same time. So the Institute was started, with two schools, one for Theoretical Physics and one for Celtic Studies. Both have been very successful.
The Theoretical Physics school was established around two of the leaders of the early days of quantum mechanics, Erwin Schrödinger and Walter Heitler. After World War II de Valera decided to extend his very successful Institute by the addition of a School of Cosmic Physics, comprising three sections, meteorology, astronomy and cosmic radiation. He asked Lajos Janossy, another refugee who had spent World War II working in P. M. S. Blackett's laboratory in Manchester, to take charge of the cosmic ray section. Janossy had been one of my supervisors when I took my Master's degree at Manchester and he asked me would I be interested in the Assistant Professorship. So I went to Dublin to be inspected (and to inspect) as well as to attend the small conference.
One of the leading speakers at the conference was George Rochester, my other supervisor. He showed two Wilson cloud chamber pictures that he and Clifford Butler had taken with Blackett's magnetic cloud chamber.
When Gell-Mann and Zweig put forward their quark hypothesis, the maximum energy available at accelerators was 30 GeV (at Brookhaven National Laboratory, New York). This machine accelerated protons to this energy and then collided them against a stationary target. Machines of this type have now (May 1982) reached energies of 500 GeV (NAL) and 300 GeV (CERN). NAL is the National Accelerator Laboratory (often called Fermilab) at Batavia, Illinois and CERN is the Conseil Européen pour la Recherche Nucléaire at Geneva, Switzerland. But there is a class of accelerator, called colliding-beam accelerators, in which, instead of colliding an accelerated beam of particles with a stationary target, we collide a beam with a beam. The first large machine of this type is the Intersecting Storage Rings (ISR) device at CERN. It collides two beams of 30 GeV protons. Because of relativity effects this is equivalent to colliding a beam of ~2000 GeV protons with a stationary target. Recently, CERN has brought its SPS device into operation. This collides two beams, one of protons, the other of anti-protons, at energies of 250 GeV each. This is equivalent to an energy rather greater than 100000 GeV in a machine with a stationary target.
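The 'relativity effects' mentioned above can be made quantitative (an illustrative sketch, not the author's own calculation; it assumes ultra-relativistic beams and equates the invariant mass of a symmetric collider, s = (2E)^2, with that of a fixed-target collision, s ≈ 2·E_lab·m):

```python
M_PROTON = 0.938  # proton rest mass in GeV (assumed value)

def equivalent_fixed_target_energy(beam_energy_gev):
    """Fixed-target beam energy giving the same invariant mass as a
    symmetric collider with two beams of the given energy each.
    From (2E)^2 = 2 * E_lab * m  =>  E_lab = 2 * E^2 / m."""
    return 2.0 * beam_energy_gev**2 / M_PROTON

print(round(equivalent_fixed_target_energy(30)))   # ISR: roughly 2000 GeV
print(round(equivalent_fixed_target_energy(250)))  # SPS collider: well over 100000 GeV
```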
In 1963, 30 GeV seemed a large energy – quite enough to liberate a quark from a proton. One method of checking the quark hypothesis was to collide the protons and search the debris for fractionally-charged particles, with their characteristic ‘low ionisation’ signature. Another method was to examine the protons, looking for their internal structure, in the same way that Rutherford and his co-workers had looked at the structure of atoms.