Free fermion fields are canonically quantized, proceeding from Weyl to Dirac and Majorana fermions, and from the massless to the massive case. We discuss properties like chirality, helicity, and the fermion number, as well as the behavior under parity and charge conjugation transformations. Fermionic statistics is applied to the cosmic neutrino background.
Scalar quantum field theory is introduced in the functional integral formulation, starting from classical field theory and quantum mechanics. We consider Euclidean time and relate the system in the lattice regularization to classical statistical mechanics.
Consider the following problem: Given a stream of independent Ber(p) bits, with p unknown, we want to turn them into pure random bits, that is, independent Ber(1/2) bits. Our goal is to find a universal way to extract as many bits as possible; in other words, we want to extract as many fair coin flips as possible from possibly biased coin flips, without knowing the actual bias. In 1951 von Neumann proposed the following scheme: Divide the stream into pairs of bits, output 0 if the pair is 10, output 1 if it is 01, and otherwise output nothing and move on to the next pair. Since both 01 and 10 occur with probability pq, where q = 1 − p, regardless of the value of p, the output bits are indeed fair coin flips. To measure the efficiency of von Neumann’s scheme, note that, on average, 2n input bits yield 2pqn output bits, so the efficiency (rate) is pq. The question is: Can we do better? It turns out that the fundamental limit (maximal efficiency) is given by the binary entropy $h(p)$. In this chapter we discuss two optimal randomness extractors, due to Elias and Peres, and several related problems.
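As a quick illustration (a minimal sketch of our own, not part of the chapter), the following Python snippet implements von Neumann's pairing scheme and compares its empirical output rate with pq and with the entropy bound $h(p)$; the bias value and function names are hypothetical choices for the demo.

```python
import random
from math import log2

def von_neumann_extract(bits):
    """von Neumann's scheme: scan disjoint pairs; 10 -> 0, 01 -> 1, otherwise skip."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(0 if (a, b) == (1, 0) else 1)
    return out

def binary_entropy(p):
    return -p * log2(p) - (1 - p) * log2(1 - p)

p = 0.3                                   # bias, used only to simulate the source
n = 200_000
bits = [1 if random.random() < p else 0 for _ in range(n)]
out = von_neumann_extract(bits)

print(f"empirical rate : {len(out) / n:.3f}")       # ~ pq = 0.21
print(f"pq             : {p * (1 - p):.3f}")
print(f"entropy bound  : {binary_entropy(p):.3f}")  # ~ 0.881 bits per input bit
```

The gap between the last two numbers is exactly what the Elias and Peres extractors close.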
In Chapter 6 we start by explaining an important property of mutual information known as tensorization (or single-letterization), which reduces maximizing or minimizing the mutual information between two high-dimensional vectors to single-coordinate problems. Next, we extend the information measures defined in previous chapters for random variables to random processes by introducing the concepts of entropy rate (for a stochastic process) and mutual information rate (for a pair of stochastic processes).
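As a small worked example of the entropy-rate concept (our own illustration; the chain below is hypothetical and not taken from the chapter), recall that a stationary Markov chain with transition matrix $P$ and stationary distribution $\pi$ has entropy rate $-\sum_i \pi_i \sum_j P_{ij} \log P_{ij}$:

```python
import numpy as np

def markov_entropy_rate(P):
    """Entropy rate (bits/symbol) of a stationary Markov chain with transition matrix P."""
    # Stationary distribution: left eigenvector of P associated with eigenvalue 1.
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    pi = pi / pi.sum()
    logP = np.log2(np.where(P > 0, P, 1.0))   # zero entries contribute 0 to the sum
    return float(-np.sum(pi[:, None] * P * logP))

# Hypothetical two-state chain that stays in its current state with probability 0.9.
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])
print(markov_entropy_rate(P))   # = h(0.1) ≈ 0.469 bits per symbol
```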
Here is the game. You are presented with 3 doors. You can’t see what is behind them, but you have been truthfully told that behind 1 door is a fabulous luxury car, which you want. Behind the other 2 doors are goats, which you don’t want. You get to choose any 1 of the doors and receive the prize behind it.
This enthusiastic introduction to the fundamentals of information theory builds from classical Shannon theory through to modern applications in statistical learning, equipping students with a uniquely well-rounded and rigorous foundation for further study. The book introduces core topics such as data compression, channel coding, and rate-distortion theory using a unique finite blocklength approach. With over 210 end-of-part exercises and numerous examples, students are introduced to contemporary applications in statistics, machine learning, and modern communication theory. This textbook presents information-theoretic methods with applications in statistical learning and computer science, such as f-divergences, PAC-Bayes and variational principle, Kolmogorov’s metric entropy, strong data-processing inequalities, and entropic upper bounds for statistical estimation. Accompanied by additional stand-alone chapters on more specialized topics in information theory, this is the ideal introductory textbook for senior undergraduate and graduate students in electrical engineering, statistics, and computer science.
There are four fundamental optimization problems arising in information theory: I-projection, maximum likelihood, rate-distortion, and capacity. In Chapter 5 we show that all of these problems have convex/concave objective functions, discuss iterative algorithms for solving them, and study the capacity problem in more detail. As an application, we show that the Gaussian distribution extremizes mutual information in various problems with second-moment constraints.
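As one concrete instance of such an iterative algorithm, here is a sketch of the classical Blahut–Arimoto iteration for the capacity of a discrete memoryless channel (our own illustration; the chapter's treatment may differ, and the binary symmetric channel example is hypothetical):

```python
import numpy as np

def blahut_arimoto(W, tol=1e-9, max_iter=10_000):
    """Capacity (bits/use) of a DMC with transition matrix W[x, y] = P(y | x)."""
    nx = W.shape[0]
    r = np.full(nx, 1.0 / nx)                 # input distribution, start uniform
    for _ in range(max_iter):
        q = r @ W                             # induced output distribution q(y)
        ratio = np.where(W > 0, W / q, 1.0)
        d = np.sum(W * np.log2(ratio), axis=1)  # D(W(.|x) || q) in bits, per input x
        c = 2.0 ** d
        lower = np.log2(r @ c)                # lower bound on capacity
        upper = np.log2(c.max())              # upper bound on capacity
        r = r * c / (r @ c)                   # Blahut-Arimoto update of the input law
        if upper - lower < tol:
            break
    return lower

# Hypothetical example: binary symmetric channel with crossover probability 0.1.
p = 0.1
W = np.array([[1 - p, p],
              [p, 1 - p]])
print(blahut_arimoto(W))                      # ≈ 1 - h(0.1) ≈ 0.531 bits per channel use
```

The alternating maximization squeezes the lower and upper bounds together, which is what makes the stopping criterion reliable.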
Chapter 22 is a survey of results on fundamental limits obtained in the 75 years since Shannon. We discuss topics such as the strong converse, channel dispersion, error exponents, and finite-blocklength bounds. In particular, error exponents quantify how fast the probability of error converges to 0 or to 1, depending on whether the coding rate is below or above capacity. Finite-blocklength results aim to prove computationally efficient bounds that give tight characterizations of non-asymptotic fundamental limits.
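For context on what channel dispersion quantifies (a standard normal-approximation statement, included here as background rather than drawn from the chapter), the maximal number of messages $M^*(n,\epsilon)$ at blocklength $n$ and error probability $\epsilon$ satisfies $\log M^*(n,\epsilon) \approx nC - \sqrt{nV}\,Q^{-1}(\epsilon)$, where $C$ is the capacity and $V$ the dispersion. The sketch below evaluates this approximation for a hypothetical binary symmetric channel:

```python
from math import log2, sqrt
from statistics import NormalDist

def h2(p):
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_normal_approx(n, eps, p):
    """Normal approximation nC - sqrt(nV) Q^{-1}(eps) for a BSC(p), in bits."""
    C = 1 - h2(p)                              # capacity
    V = p * (1 - p) * log2((1 - p) / p) ** 2   # channel dispersion
    Qinv = NormalDist().inv_cdf(1 - eps)       # Q^{-1}(eps)
    return n * C - sqrt(n * V) * Qinv

# Hypothetical numbers: blocklength 1000, error probability 1e-3, crossover 0.11.
print(bsc_normal_approx(1000, 1e-3, 0.11) / 1000)   # rate ≈ 0.41 vs capacity ≈ 0.50
```

The sizeable gap between the approximation and capacity at moderate blocklengths is precisely the non-asymptotic effect these finite-blocklength results capture.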