In this chapter, we discuss the decision-theoretic framework of statistical estimation and introduce several important examples. Section 28.1 presents the basic elements of a statistical experiment and of statistical estimation. Section 28.3 introduces the Bayes risk (average-case) and the minimax risk (worst-case) as the respective fundamental limits of statistical estimation in the Bayesian and frequentist settings, with the latter being our primary focus in this part. We discuss several versions of the minimax theorem (and prove a simple one), which equate the minimax risk with the worst-case Bayes risk. Two variants are introduced next that extend a basic statistical experiment to either large sample size or large dimension: Section 28.4 on independent observations and Section 28.5 on tensorization of experiments. Throughout this chapter the Gaussian location model (GLM), introduced in Section 28.2, serves as a running example, with the focus shifting from place to place (the role of loss functions, parameter spaces, low versus high dimensions, etc.). In Section 28.6, we discuss a key result known as Anderson’s lemma for determining the exact minimax risk of the (unconstrained) GLM in any dimension for a broad class of loss functions, which provides a benchmark for the more general techniques introduced in later chapters.
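To fix ideas, the two fundamental limits can be sketched as follows (the notation below is a generic convention, not necessarily the chapter’s): for a statistical experiment $\{P_\theta : \theta \in \Theta\}$, loss function $\ell$, and prior $\pi$, the minimax risk and the Bayes risk are

```latex
R^* \;=\; \inf_{\hat\theta}\, \sup_{\theta \in \Theta}\,
      \mathbb{E}_{X \sim P_\theta}\!\left[\ell\big(\theta, \hat\theta(X)\big)\right],
\qquad
R^*_\pi \;=\; \inf_{\hat\theta}\, \mathbb{E}_{\theta \sim \pi}\,
      \mathbb{E}_{X \sim P_\theta}\!\left[\ell\big(\theta, \hat\theta(X)\big)\right],
```

and a minimax theorem asserts, under suitable conditions, that $R^* = \sup_\pi R^*_\pi$, i.e. the minimax risk equals the worst-case Bayes risk.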
In Chapter 12, we shall examine results for a large class of processes with memory, known as ergodic processes. We start this chapter with a quick review of the main concepts of ergodic theory, then state our main results: the Shannon–McMillan theorem, the compression limit, and the asymptotic equipartition property (AEP). Subsequent sections are dedicated to proofs of the Shannon–McMillan and ergodic theorems. Finally, in the last section we introduce Kolmogorov–Sinai entropy, which associates with a fully deterministic transformation a measure of how “chaotic” it is. This concept plays a very important role in formalizing an apparent paradox: large mechanical systems (such as collections of gas particles) are on the one hand fully deterministic (described by Newton’s laws of motion) and on the other hand exhibit many probabilistic properties (the Maxwell distribution of velocities, fluctuations, etc.). Kolmogorov–Sinai entropy shows how these two notions can coexist. In addition, it was used to resolve a long-standing open problem in dynamical systems regarding the isomorphism of Bernoulli shifts.
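As a hedged sketch in standard notation (not quoted from the chapter): for a measure-preserving transformation $T$ and a finite measurable partition $\mathcal{P}$, the Kolmogorov–Sinai entropy is

```latex
h(T) \;=\; \sup_{\mathcal{P}} \; \lim_{n \to \infty} \frac{1}{n}\,
  H\!\left(\mathcal{P} \vee T^{-1}\mathcal{P} \vee \cdots \vee T^{-(n-1)}\mathcal{P}\right),
```

so a fully deterministic map can nevertheless generate a positive amount of entropy per iteration, which is the sense in which determinism and probabilistic behavior coexist.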
This chapter discusses perturbation theory, applied to the λϕ4 model, with a focus on dimensional regularization. It characterizes different types of Feynman diagrams. We explain the meaning of renormalization and discuss the conditions for renormalizability.
This chapter deals with global symmetries and their spontaneous breaking, particularly referring to sigma-models. We consider two theorems about the emergence of Nambu–Goldstone bosons. This takes us to the structure of low-energy effective theories, and to the hierarchy problem in the Higgs sector of the Standard Model. In that context, we further address triviality, the electroweak phase transition in the early Universe, and the extension to two Higgs doublets.
In Chapter 14 we first define a performance metric giving a full description of the binary hypothesis testing (BHT) problem. A key result in this theory, the Neyman–Pearson lemma, determines the form of the optimal test and at the same time characterizes the given performance metric. We then specialize to the setting of iid observations and consider two types of asymptotics: Stein’s regime (where the type-I error is held constant) and Chernoff’s regime (where errors of both types are required to decay exponentially). In this chapter we only discuss Stein’s regime and find that the fundamental limit is given by the KL divergence. Subsequent chapters will address Chernoff’s regime.
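In the notation commonly used for this result (assumed here, not quoted from the chapter), with $\beta_n(\epsilon)$ denoting the smallest type-II error achievable from $n$ iid observations subject to type-I error at most $\epsilon$, Stein’s regime gives

```latex
\lim_{n \to \infty} -\frac{1}{n} \log \beta_n(\epsilon) \;=\; D(P \,\|\, Q)
\qquad \text{for every } \epsilon \in (0,1),
```

i.e. the optimal type-II error decays exponentially at a rate given by the KL divergence between the two hypotheses.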
This chapter provides an extensive discussion of Grand Unified Theories (GUTs) and related subjects. It begins with the SU(5) GUT, its fermion multiplets, and the resulting transitions between leptons and quarks, which enable in particular proton decay. In this context, we discuss the baryon asymmetry in the Universe, as well as possible topological defects dating back to the early Universe, according to the Kibble mechanism, such as domain walls, cosmic strings, or magnetic monopoles. That takes us to a review of Dirac and ‘t Hooft–Polyakov monopoles, Julia–Zee dyons, and the effects named after Callan–Rubakov and Witten. Next we discuss fermion masses in the framework of the GUTs with the gauge groups SU(5) and Spin(10). Then we consider small unified theories (without QCD) with a variety of gauge groups. Finally, we summarize the status and prospects of the GUT approach.
Our discussion of Beauvoir’s theory introduced the possibility of a tyrant who valued dominating others, not as a means to realizing other values, but rather as an ultimate end. Such a figure, you might have thought, appears only in works of fiction as the personification of evil. Yet he is a model of nobility in Nietzsche’s philosophy. Indeed, Beauvoir, in remaking existentialist ethics into a teleological theory, took Nietzsche as an arch opponent whose glorification of man’s will to power, to use his famous trope, nudged existentialism into solipsism. But in this regard she was mistaken. Nietzsche’s extolling of powerful, masterful men as the highest specimens of humanity was neither grounded in nor a springboard for solipsism. Rather it was a distinct echo of Thrasymachus’ views in the first book of the Republic. Like Thrasymachus, Nietzsche separated mankind into the few who were strong and the many who were weak, and like Thrasymachus too Nietzsche had only contempt for the latter and for their appeal to justice as a leveler that raises their fortunes and lowers the fortunes of the former. But unlike Thrasymachus, Nietzsche did not give an argument for his belief that the truly admirable man lives free of the restraints of justice and the other requirements of morality that would keep his desires for self-advancement in check. Thrasymachus’ error was to yield to Socrates’ insistence that he explain his views and submit them to an examination. His error was to put himself on the plane of reason, so to speak, where he was outmaneuvered by Socrates. Nietzsche took a different tack.
First, non-Abelian gauge fields are quantized canonically. The Faddeev–Popov ghost fields implement gauge fixing; then we review the BRST symmetry. Next, we proceed to the lattice regularization and then from Abelian to non-Abelian gauge fields. We stress that the compact lattice functional integral formulation does not require gauge fixing.
We construct mass terms for the Standard Model fermions of the first generation. This includes the neutrino, for which we either invoke a dimension-5 term or add a right-handed neutrino field. We reconsider the CP symmetry, the fate of baryon and lepton numbers, and the quantization of the electric charge. The question of the mass hierarchy takes us to the seesaw mass-by-mixing mechanism. As a peculiarity, we finally revisit these properties in the scenario without colors (Nc=1), which allows leptons and baryons to mix.
In the previous chapter we introduced the concept of variable-length compression and studied its fundamental limits (with and without the prefix-free condition). In some situations, however, one may desire that the output of the compressor always has a fixed length, say, k bits. Unless k is unreasonably large, this will require relaxing the losslessness condition. This is the focus of Chapter 11: compression in the presence of a (typically vanishingly small) probability of error. It turns out that allowing even a very small error enables several beautiful effects: the possibility to compress data via matrix multiplication over finite fields (linear compression); the possibility to reduce the compression length if side information is available at the decompressor (Slepian–Wolf); and the possibility to reduce the compression length if access to a compressed representation of side information is available at the decompressor (Ahlswede–Körner–Wyner).
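A toy sketch of the first effect, linear compression, under assumptions of our own (a random matrix and a small set of "likely" strings; this is an illustration, not the book's construction): a random k × n binary matrix H maps n-bit strings to k bits by matrix multiplication over GF(2), and a lookup-table decoder succeeds whenever H is injective on the likely set.

```python
# Hypothetical toy illustration of linear compression over GF(2):
# compress n-bit strings down to k bits via a random binary matrix.
import random

def rand_matrix(k, n, rng):
    # Random k x n matrix with entries in {0, 1}.
    return [[rng.randrange(2) for _ in range(n)] for _ in range(k)]

def compress(H, x):
    # Matrix-vector product over GF(2): each output bit is a parity check.
    return tuple(sum(h * xi for h, xi in zip(row, x)) % 2 for row in H)

n, k = 16, 12
rng = random.Random(1)
H = rand_matrix(k, n, rng)

# A small set S standing in for the "high-probability" source strings.
S = [tuple(rng.randrange(2) for _ in range(n)) for _ in range(20)]

# Decoder: a lookup table from compressed outputs back to strings;
# it recovers every string in S exactly when no two of them collide under H.
codebook = {compress(H, x): x for x in S}
collisions = len(S) - len({compress(H, x) for x in S})
```

Globally the map is many-to-one (k < n), yet on a small likely set S collisions under a random H are rare, which is the small-error regime the chapter describes.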
Could a European swallow fly a coconut from the African continent to the British Isles? You can think of bagging as convening a committee of general experts to answer some questions, perhaps questions involving aerodynamics (the study of flying), carpology (the study of seeds and fruits like coconuts), and ornithology (the study of birds like swallows).
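The committee analogy can be sketched in code. Below is a minimal, hypothetical bagging example (the toy data set, the threshold-stump "experts", and all function names are our own illustration): each expert is trained on a bootstrap resample of the data, and the committee answers by majority vote.

```python
# A minimal sketch of bagging on a toy 1-D classification problem.
import random
from collections import Counter

def stump_predict(stump, x):
    # A "stump" expert is a threshold t with a polarity: pol if x > t.
    t, pol = stump
    return pol if x > t else 1 - pol

def train_stump(sample):
    # Fit the threshold/polarity minimizing training error on the sample.
    best, best_err = None, float("inf")
    for t, _ in sample:
        for pol in (0, 1):
            err = sum(stump_predict((t, pol), x) != y for x, y in sample)
            if err < best_err:
                best, best_err = (t, pol), err
    return best

def bagging_fit(data, n_experts=25, seed=0):
    # Convene the committee: each expert sees its own bootstrap resample.
    rng = random.Random(seed)
    return [train_stump([rng.choice(data) for _ in data])
            for _ in range(n_experts)]

def bagging_predict(experts, x):
    # The committee answers by majority vote.
    votes = Counter(stump_predict(s, x) for s in experts)
    return votes.most_common(1)[0][0]

data = [(x, int(x > 5)) for x in range(11)]  # label is 1 iff x > 5
committee = bagging_fit(data)
```

Individual experts can be wrong on their resamples' blind spots, but the majority vote of the committee is far more stable, which is the point of the analogy.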
In Chapter 19 we apply methods developed in the previous chapters (namely the weak converse and the random/maximal coding achievability) to compute the channel capacity. The latter notion quantifies the maximal number of (data) bits that can be reliably communicated per channel use in the limit of using the channel many times. Formalizing this statement will require introducing the concept of a communication channel. Then, for special kinds of channels (the memoryless and the information-stable ones), we will show that computing the channel capacity reduces to maximizing the (sequence of the) mutual information. This result, known as Shannon’s noisy channel coding theorem, is very special as it relates the value of a (discrete, combinatorial) optimization problem over codebooks to that of a (convex) optimization problem over information measures. It builds a bridge between the abstraction of information measures (Part I) and practical engineering problems.
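As a small numerical illustration of this reduction (our own sketch; the binary symmetric channel is a standard example, not necessarily the chapter's): for a memoryless channel, capacity is the maximum of the mutual information I(X; Y) over input distributions, which for the BSC with crossover probability p is attained at the uniform input and equals 1 − h(p) bits per channel use.

```python
# Capacity of the binary symmetric channel via brute-force maximization
# of mutual information over the input distribution P(X=1) = q.
from math import log2

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def mutual_information_bsc(p, q):
    """I(X; Y) = H(Y) - H(Y|X) for BSC(p) with input P(X=1) = q."""
    py1 = q * (1 - p) + (1 - q) * p  # P(Y = 1)
    return h(py1) - h(p)

def bsc_capacity(p, grid=10001):
    # Grid search over q in [0, 1]; the maximizer is q = 1/2.
    return max(mutual_information_bsc(p, k / (grid - 1)) for k in range(grid))
```

The grid search recovers the closed form 1 − h(p), a tiny instance of replacing a combinatorial question about codebooks by a convex optimization over distributions.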
Eudaimonism was the dominant theory in ancient Greek ethics. The name derives from the Greek word ‘eudaimonia’, which is often translated as ‘happiness’ but is sometimes translated as ‘flourishing.’ Many scholars in fact prefer the latter translation because they believe it better captures the concern of the ancient Greeks with the idea of living well. This preference suggests that a useful way of distinguishing between eudaimonism and egoism is to observe, when formulating their fundamental principles, the distinction between well-being and happiness that we drew in Chapter 2. Accordingly, the fundamental principle of eudaimonism is that the highest good for each person is his or her well-being; the fundamental principle of egoism remains, as before, that the highest good for a person is his or her happiness. Admittedly, this way of distinguishing between the two theories would be theoretically pointless if the determinants of how happy a person was were the same as the determinants of how high a level of well-being the person had achieved. Thus, in particular, when hedonism is the favored theory of well-being, this way of distinguishing between eudaimonism and egoism comes to nothing. It fails in this case to capture any real difference between them. For when hedonism is the favored theory of well-being, determinations of how happy a person is exactly match the determinations of how high a level of well-being a person has achieved.
Both egoism and eudaimonism share an outlook of self-concern. They both identify the perspective from which a person judges what ought to be done as that of someone concerned with how best to promote his own good. On either theory, then, the highest good for a person is that person’s own good, whether this be his own happiness or his own well-being. Hence, on either theory, ethical considerations are understood to have the backing of reason insofar as they help to advance this good.