Where is the frontier of physics? Some would say 10⁻³³ cm, some 10⁻¹⁵ cm and some 10⁺²⁸ cm. My vote is for 10⁻⁶ cm. Two of the greatest puzzles of our age have their origins at this interface between the macroscopic and microscopic worlds. The older mystery is the thermodynamic arrow of time, the way that (mostly) time-symmetric microscopic laws acquire a manifest asymmetry at larger scales. And then there's the superposition principle of quantum mechanics, a profound revolution of the twentieth century. When this principle is extrapolated to macroscopic scales, its predictions seem wildly at odds with ordinary experience.
This book deals with both these ‘mysteries,’ the foundations of statistical mechanics and the foundations of quantum mechanics. It is my thesis that they are related. Moreover, I have teased the reader with the word ‘foundations,’ a term that many of our hardheaded colleagues view with disdain. I think that new experimental techniques will soon subject these ‘foundations’ to the usual scrutiny, provided the right questions and tests can be formulated. Historically, it is controlled observation that transforms philosophy into science, and I am optimistic that the time has come for speculations on these two important issues to undergo that transformation.
In the next few pages I provide previews of the book: Section 1.1 is a statement of the main ideas. Section 1.2 is a chapter by chapter guide.
There are two principal themes: time's arrows and quantum measurement. In both areas I will make significant statements about what are usually called their foundations. These statements are related, and involve modification of the underlying hypotheses of statistical mechanics. The modified statistical mechanics contains notions that are at variance with certain primitive intuitions, but it is consistent with all known experiments.
I will try to present these ideas as intellectually attractive, but this virtue will be offered as a reason for study, not as a reason for belief. Historically, intellectual satisfaction has not been a reliable guide to scientific truth. For this reason I have striven to provide experimental and observational tests where I could, even where such experiments are not feasible today. The need for this hardheaded, or perhaps intellectually humble, approach is particularly felt in the two areas that I will address. The foundations of thermodynamics and the foundations of quantum mechanics have been among the most contentious areas of physics; indeed, some would deny their place within that discipline. In my opinion, this situation is a result of the paucity of relevant experiment.
In the last chapter we enumerated ‘arrows of time.’ A subtheme concerned which candidates made it onto the list and which didn't, as we tried to eliminate arrows that were immediate consequences of others. Now the subtheme becomes the theme.
We are concerned with correlating arrows of time. Our most important conclusion will be that the thermodynamic arrow of time is a consequence of the expansion of the universe. Coffee cools because the quasar 3C273 grows more distant. We will discuss other arrows, in particular the radiative and the biological, but for them the discussion is a matter of proving (or perhaps formulating) what you already believe. For the thermo/cosmo connection there remains significant controversy.
As far as I know it was Thomas Gold who proposed that the thermodynamic arrow of time had its origins in cosmology, in particular in the expansion of the universe. Certainly there had been a lot of discussion of arrows of time before his proposal, but in much of this discussion you could easily get lost, not knowing whether someone was making a definition or solving a problem. Now I'm sure the following statement slights many deep thinkers, but I would say that prior to Gold's idea the best candidate for an explanation of the thermodynamic arrow was that there had been an enormous fluctuation. If you have a big enough volume and wait long enough, you will get a fluctuation big enough for life on earth.
If even some of the ideas presented here are correct, the world is different from what it seems. The major theses of the book, on time's arrows and on quantum measurement theory, are unified by the notion of cryptic constraints. We see, sense, and specify macroscopic states, but what we predict about these states depends on an important assumption concerning their microscopic situation. The assumption is that the actual microstate is equally likely to be any of those consistent with the macrostate. And I say, not so. For various reasons, both classical and quantum, many otherwise-possible microstates are eliminated. As presented in detail in previous chapters, such elimination impacts many areas of physics, from the cosmos to the atom. But we also make the point that this elimination can be difficult to notice and in particular there is no experimental evidence that confirms the usual assumption. By an explicit example in a model (Fig. 4.3.2), we show that a future constraint eliminating 98% of the microstates can go completely unnoticed.
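The invisibility of such a future constraint can be illustrated with a toy computation. The model below is a stand-in chosen for illustration, not the actual model of Fig. 4.3.2: a lazy nearest-neighbour random walk on a ring of 50 sites, started uniformly and conditioned to end on a single site, so that 49/50 = 98% of the microstates are eliminated. Because the walk mixes long before the final time, the conditioned and unconditioned statistics at early times are numerically indistinguishable:

```python
import numpy as np

# Lazy nearest-neighbour walk on a ring of 50 sites.
n, T = 50, 5000
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] = 0.25
    P[i, (i + 1) % n] = 0.25

prior = np.full(n, 1 / n)                 # unconditioned distribution at t = 0

# Future condition: the walk must sit on site 0 at time T.
# hit[i] = P(X_T = 0 | X_0 = i); Bayes gives the conditioned distribution.
hit = np.linalg.matrix_power(P, T)[:, 0]
posterior = prior * hit
posterior /= posterior.sum()

# Total variation distance between conditioned and unconditioned statistics.
tv = 0.5 * np.abs(posterior - prior).sum()
print(tv)   # tiny: eliminating 98% of paths is invisible at early times
```

The point carried over from the text: when the dynamics relax faster than the interval to the future condition, the constraint leaves essentially no trace on present statistics.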
My expectation is that this fundamental change in the foundations of statistical mechanics is needed. Whether or not it takes the forms I've proposed will be determined by future investigations.
In the next section I will review open problems for the program implicit in this book. The tone will be that used in speaking with colleagues: an attempt to be frank about difficulties and a willingness to be wildly speculative.
The existence of ‘special’ states can be established with ordinary quantum mechanics. We seek particular microscopic states of large systems that have the property that they evolve to only one or another macroscopic outcome, when other microstates of a more common sort (having the same initial macrostate) would have given grotesque states. Justifying the hypothesis that Nature chooses these special states as initial conditions is another matter. In this chapter we stick to the narrower issue of whether there exist states that can do the job, irrespective of whether they occur in practice.
In Section 7.1 we give several explicit examples. In Section 6.2 we exhibited an apparatus model and its special states. Here we look at the decay of an unstable quantum state, not as a single degree of freedom in a potential, but with elements of the environment taken into account as well. We also study another popular many-body system, the spin boson model. This has extensive physical applications and, especially with respect to Josephson junctions, has been used to address quantum measurement questions. A single degree of freedom in a potential, by the way, does not generally lend itself to ‘specializing,’ and an example is shown below.
In recent years, exotic non-local effects of quantum mechanics have been exhibited experimentally. Behind many of these lies entanglement, the property of a wave function of several variables that it does not factor into a product of functions of these variables separately.
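Whether a given two-particle state is entangled in this sense can be checked numerically: a bipartite state factors into a product exactly when the matrix of its coefficients has a single nonzero singular value (Schmidt rank one). A minimal sketch, with the two test states chosen for illustration:

```python
import numpy as np

def schmidt_rank(psi, dim_a, dim_b, tol=1e-12):
    """Number of nonzero Schmidt coefficients of a bipartite state vector.
    Rank 1 means the state factors; rank > 1 means entanglement."""
    coeffs = psi.reshape(dim_a, dim_b)          # psi_{ij} = coefficient of |i>|j>
    singular_values = np.linalg.svd(coeffs, compute_uv=False)
    return int(np.sum(singular_values > tol))

# Product state |0>|0>: factors, Schmidt rank 1.
product = np.kron([1.0, 0.0], [1.0, 0.0])
# Bell state (|00> + |11>)/sqrt(2): does not factor, rank 2.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

print(schmidt_rank(product, 2, 2))  # 1
print(schmidt_rank(bell, 2, 2))     # 2
```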
Quantum measurement theory addresses several problems. All of them arise from applying a microscopically valid theory to the macroscopic domain. The most famous is the Schrödinger cat example, in which the ordinary use of quantum rules suggests a superposition of macroscopically different states, something we do not seem to experience. Another problem is the Einstein-Podolsky-Rosen (EPR) paradox, in which a fundamental quantum concept, entanglement, creates subtle and non-classical correlations among remote sets of measurements. It may seem mere word play to observe that such an apparent micro-macro conflict ought to be viewed as a problem in statistical mechanics, by virtue of the way that discipline is defined; yet until recently the importance of this observation was seldom recognized.
The founders, Bohr, Schrödinger, Heisenberg and Einstein, did not emphasize this direction, but over the years the realization that measurement necessarily involves macroscopic objects—objects with potentially mischievous degrees of freedom of their own—began to be felt. For me this was brought home by the now classic fourth chapter of Gottfried's text on quantum mechanics. He takes up the following problem. The density matrix ρ₀ for a normalized pure state is ρ₀ = |ψ〉〈ψ|. It follows that Tr ρ₀ = Tr ρ₀² = 1. After a non-trivial measurement, the density matrix ρ is supposed to be diagonal in the basis defined by the measured observable, with Tr ρ = 1 but Tr ρ² < 1.
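Gottfried's trace observation is easy to verify numerically. The sketch below uses an arbitrary two-level superposition; the amplitudes are illustrative, not taken from the text:

```python
import numpy as np

# Pure state |psi> = a|0> + b|1>; amplitudes illustrative, normalized.
a, b = 0.6, 0.8
psi = np.array([a, b])
rho0 = np.outer(psi, psi.conj())      # rho_0 = |psi><psi|

# For a pure state: Tr rho_0 = Tr rho_0^2 = 1.
print(np.trace(rho0), np.trace(rho0 @ rho0))

# A measurement in the {|0>,|1>} basis removes the off-diagonal terms,
# leaving a mixture: Tr rho = 1 but Tr rho^2 = a^4 + b^4 < 1.
rho = np.diag(np.diag(rho0))
print(np.trace(rho), np.trace(rho @ rho))
```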
In our experience, time is not symmetric. From cradle to grave things happen that cannot be undone. We remember the past, predict the future. These arrows, while not always—or even now—deemed suitable for scientific investigation, have been recognized since the dawn of thought. Technology and statistical mechanics give us a precise characterization of the thermodynamic arrow. That's what the previous chapter was about. But the biological arrow (memory, etc.) is elusive. Then we come to arrows that only recently have been recognized. The greatest of these is the fact that the universe is expanding, not contracting. This is the cosmological arrow. Related, perhaps a consequence, is the radiative arrow. Roughly, this is the fact that one uses outgoing wave boundary conditions for electromagnetic radiation, that retarded Green's functions should be used for ordinary calculations, that radiation reaction has a certain sign, that more radiation escapes to the cosmos than comes in. Yet more recently, the phenomenon of CP violation was discovered in the decay of K mesons. As a consequence of CPT invariance, and some say by independent deduction, there is violation of T, time reversal invariance. This CP arrow could be called the strange arrow of time, not only because it was discovered by means of ‘strange’ particles, but because its rationale and consequences remain obscure. There is another phenomenon often associated with an arrow, the change in a quantum system resulting from a measurement.
Why should special states occur as initial conditions in every physical situation in which they are needed? Half this book has been devoted to making the points that initial conditions may not be as controllable as they seem; that there may be constraints on microscopic initial conditions; that this would not have been noticed; that such constraints can arise from two-time or future conditioning; that in our universe such future conditioning may well be present, although, as remarked, cryptic. In this chapter I will take up more detailed questions: what future conditions could give rise to the need for our ‘special’ states, and why should those particular future conditions be imposed.
Before going into this there is a point that needs to be made. Everything in the present chapter could be wrong and the thesis of Chapter 6 nevertheless correct. It is one thing to avoid grotesque states (and solve the quantum measurement problem) by means of special states and it is another to provide a rationale for their occurrence. I say this not only to highlight the conceptual dependencies of the theses in this book, but also because there is a good deal of hand waving in the coming chapter and I don't want it to reflect unfavorably on the basic proposal. As pointed out earlier, the usual thermodynamic arrow of time can be phrased as follows: initial states are arbitrary, final states special.
Pure quantum evolution is deterministic, ψ → exp(−iHt/ħ)ψ, but, as in classical mechanics, probability enters because a given macroscopic initial condition contains microscopic states that lead to different outcomes; the relative probability of those outcomes equals the relative abundance of the microscopic states for each outcome. This is the postulated basis for the recovery of the usual quantum probabilities, as discussed in Chapter 6. In this chapter we take up the question of whether the allowable microstates (the ‘special’ states) do indeed come with the correct abundance. To recap: ‘special’ states are microstates not leading to superpositions of macroscopically different states (‘grotesque’ states). For a given experiment and for each macroscopically distinct outcome of that experiment these states form a subspace. We wish to show that the dimension of that subspace is the relative probability of that outcome.
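The role of subspace dimension as a measure can be illustrated with a small numerical check: for a random unit vector in an n-dimensional space, the expected squared projection onto a d-dimensional subspace is d/n, so uniform sampling over microstates weights each outcome by the dimension of its subspace. A toy sketch, with the dimensions (n = 4, d = 3) purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim_a = 4, 3   # 4-d state space; outcome A's subspace spans the first 3 axes

# Random unit vectors, uniform on the sphere in R^n.
psi = rng.standard_normal((100_000, n))
psi /= np.linalg.norm(psi, axis=1, keepdims=True)

# Mean squared projection onto the subspace approaches dim_a / n.
weight_a = np.mean(np.sum(psi[:, :dim_a] ** 2, axis=1))
print(weight_a)   # close to 3/4, the dimension ratio
```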
This is an ambitious goal, especially considering the effort needed to establish that there are any special states—the subject of Chapter 7. As remarked there, the special states exhibited are likely to be only a small and atypical fraction of all special states in the physical apparatus being modeled (e.g., the cloud chamber). In one example (the decay model) there is a remarkable matching of dimension and conventional probability, but I would not make too much of that. What is especially challenging about the present task is that we seek a universal distribution.
In Chapter 6 I presented a proposal for how and why grotesque states do not occur in Nature. In subsequent chapters I explored consequences and found subsidiary requirements, such as Cauchy distributed kicks. To find out whether all or part of our scheme is the way Nature works, we turn to experiment. How to turn to experiment is not so obvious, since the basic dynamical law, ψ → exp(−iHt/ħ)ψ, is the same as for most other theories. Our basic assertion concerns not the dynamical law but the selection of states. Therefore it is that assertion that must be tested. For example, one way is to set up a situation where the states we demand, the ‘special’ states, cannot occur. Then what happens? Another part of our theory is the probability postulate, and this deals not only with the existence of special states but with their abundance. It enters in the recovery of standard probabilities but has far reaching consequences that may well lead to the best tests of the theory. Such tests arise in the context of EPR situations.
The experimental tests fall into the following categories.
Precluding a class of special states. This should prevent a class of outcomes. If the changes in the system (due to precluding the class of special states) do not change the predictions of the Copenhagen interpretation, then this provides a test. In particular, with a class of special states precluded, our theory forbids the associated outcome.
Although the variational principles of classical mechanics lead to two-time boundary value problems, when dealing with the real world everyone knows you should use initial conditions. Not surprisingly, the eighteenth-century statements of classical variational principles were invested with religious notions of teleology: guidance from future paths not taken could only be divine. The Feynman path integral formulation of quantum mechanics makes it less preposterous that a particle can explore non-extremal paths; moreover, it is most naturally formulated using information at two times. In the previous chapter, use of the two-time boundary value problem was proposed as a logical prerequisite for considering arrow-of-time questions. Perhaps this is no less teleological than Maupertuis's principle, except that we remain neutral on how and why those conditions are imposed.
In this chapter we deal with the technical aspects of solving two-time boundary value problems. In classical mechanics you get into rich existence questions—sometimes there is no solution, sometimes many. For stochastic dynamics the formulation is perhaps easiest, which is odd considering that this is the language most suited to irreversible behavior. Our ultimate interest is the quantum two-time problem and this is nearly intractable.
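For the classical case, a standard numerical approach to a two-time boundary value problem is shooting: guess the unknown initial velocity, integrate forward, and bisect until the trajectory meets the prescribed final position. A minimal sketch for a harmonic oscillator, with illustrative boundary data (and mindful of the existence questions just mentioned: when the interval is a multiple of the half-period, the problem may have no solution or infinitely many):

```python
import numpy as np

def integrate(x0, v0, t_final, steps=2000, omega=1.0):
    """Leapfrog integration of x'' = -omega^2 x from t = 0 to t_final;
    returns the final position."""
    dt = t_final / steps
    x, v = x0, v0
    for _ in range(steps):
        v += -0.5 * dt * omega**2 * x
        x += dt * v
        v += -0.5 * dt * omega**2 * x
    return x

def shoot(x0, x1, t_final, v_lo=-10.0, v_hi=10.0):
    """Bisect on the unknown initial velocity until x(t_final) = x1."""
    f = lambda v: integrate(x0, v, t_final) - x1
    f_lo = f(v_lo)
    assert f_lo * f(v_hi) < 0, "initial bracket must straddle a solution"
    for _ in range(50):
        v_mid = 0.5 * (v_lo + v_hi)
        if f_lo * f(v_mid) <= 0:
            v_hi = v_mid
        else:
            v_lo, f_lo = v_mid, f(v_mid)
    return 0.5 * (v_lo + v_hi)

# Boundary data: x(0) = 1, x(pi/2) = 0.5.  Exact solution for omega = 1:
# x(t) = cos t + v0 sin t, so x(pi/2) = v0, giving v0 = 0.5.
v0 = shoot(1.0, 0.5, np.pi / 2)
print(v0)   # close to 0.5
```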
Later in the book I will propose that the universe is most simply described as the solution of a two-time boundary value problem. A natural reaction is to wonder whether this is too constraining. Given that our (lower entropy) past already cuts down the number of possible microstates, are there sufficient microstates to meet a future condition as well?
Things happen. That may seem obvious, but it has also been maintained that all the ‘happening’ does not signify change and that the way of the world is periodic and repetitious:
One generation passeth away, and another generation cometh…
The sun… riseth, and the sun goeth down,
And there is nothing new under the sun.
—Ecclesiastes, Chapter 1
I won't discuss the profound aspects of this passage, but I will do post-industrial age nitpicking. Only in the past century has humanity understood a distinction that exists among these cyclic behaviors. For the rising and setting of the sun, there is indeed little that is happening. To a good approximation this is non-dissipative. But as to the coming and going of generations, with the benefit of wisdom gained in building steam engines, we recognize that birth and death can occur only so long as there is a source of negative entropy.
I could continue in this vein and discuss how the failure to distinguish between free and frictional motion confused humanity's greatest minds as they grappled with elementary mechanics. But I wish to begin a technical discussion of irreversibility and only want to draw a lesson of humility from the historical perspective. Until the past few centuries, humanity failed to appreciate the most manifest of time's arrows, the second law of thermodynamics. Unless one realizes that Nature's dynamics are mostly time symmetric one does not know that there is a problem.
In this chapter we return to the discussion of quantum gravity which we began in Chapter 4. In the first section we describe some of the technical problems that are encountered in constructing a theory of quantum gravity and some of the ideas that may go into their resolution. We then give a definition of simplicial gravity in arbitrary dimensions and describe a representative sample of the numerical results that have been obtained. It is often convenient to consider the theory in a fixed dimension larger than two. We shall discuss the four-dimensional case since it is physically the most relevant, and will only occasionally consider three-dimensional gravity.
Basic problems in quantum gravity
Formulating a theory of quantum gravity in dimensions higher than two leads to a number of basic questions, some of which go beyond those encountered in dimension two. Among these are the following:
(i) What are the implications of the unboundedness from below of the Einstein–Hilbert action?
(ii) Is the non-renormalizability of the gravitational coupling a genuine obstacle to making sense of quantum gravity?
(iii) What is the relation between Euclidean and Lorentzian signatures and do there exist analogues of the Osterwalder–Schrader axioms allowing analytic continuation from Euclidean space to Lorentzian space-time?
(iv) What is the role of topology in view, for instance, of the fact that higher-dimensional topologies cannot be classified?
We do not have answers to these questions and our inability to deal with them may be an indication that there exists no theory of Euclidean quantum gravity in four dimensions or, possibly, that quantum gravity only makes sense when embedded in a larger theory such as string theory.
The idea of describing the physical world entirely in terms of geometry has a history dating back to Einstein and Klein in the early decades of the century. This approach to physics had early success in general relativity but the appearance of quantum mechanics guided the development of theoretical physics in a different direction for a long time. During the past quarter of a century the programme of Einstein and Klein has seen a renaissance embodied in gauge theories and, more recently, superstring theory. During this time we have also witnessed the happy marriage of statistical mechanics and quantum field theory in the subject of Euclidean quantum field theory, a development which could hardly have taken place without Feynman's path integral formulation of quantization. In this book we shall work almost exclusively in the Euclidean framework.
The unifying theme of the present work is the study of quantum field theories which have a natural representation as functional integrals or, if one prefers, statistical sums, over geometric objects: paths, surfaces and higher-dimensional manifolds. Our philosophy is to work directly with the geometry as far as possible and avoid parametrization and discretizations that break the natural invariances. In this introductory chapter we give an overview of the subject, put it in perspective and discuss its main ideas briefly.
Lagrangian field theories whose action can be expressed entirely in terms of geometrical quantities such as volume and curvature have a special beauty and simplicity.