This lively introduction to measure-theoretic probability theory covers laws of large numbers, central limit theorems, random walks, martingales, Markov chains, ergodic theorems, and Brownian motion. Concentrating on the results that are most useful for applications, this comprehensive treatment is a rigorous graduate text and reference. Operating under the philosophy that the best way to learn probability is to see it in action, the book contains extended examples that apply the theory to concrete applications. This fifth edition contains a new chapter on multidimensional Brownian motion and its relationship to partial differential equations (PDEs), an advanced topic that is finding new applications. Setting the foundation for this expansion, Chapter 7 now features a proof of Itô's formula. Key exercises that were previously proofs left to the reader have been inserted directly into the text as lemmas. The new edition reinstates the discussion of the central limit theorem for martingales and stationary sequences.
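For orientation, Itô's formula in its basic one-dimensional form (stated here in a standard way for illustration; the edition's own formulation may differ in generality) says that for a twice continuously differentiable function f and a standard Brownian motion B_t,

\[
f(B_t) = f(B_0) + \int_0^t f'(B_s)\,dB_s + \frac{1}{2}\int_0^t f''(B_s)\,ds.
\]

The second integral is the correction term that links Brownian motion to second-order PDEs such as the heat equation.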
Professor Itô is one of the most distinguished probability theorists in the world, and in this modern, concise introduction to the subject he explains basic probabilistic concepts rigorously while at the same time conveying an intuitive understanding of random phenomena. In the first chapter he considers finite situations, but from an advanced standpoint that makes the transition to greater generality easier. Chapter 2 deals with probability measures and includes a discussion of the fundamental concepts of probability theory; these concepts are formulated abstractly, but without sacrificing intuition. The last chapter is devoted to infinite sums of independent real random variables. Each chapter is divided into sections that end with a set of problems, with hints for their solution. This textbook will be particularly valuable to students of mathematics taking courses in probability theory who need a modern introduction to the subject, one that does not allow an overemphasis on abstraction to cloud the issues involved.
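As a sample of the material in the final chapter (stated in a standard textbook form, not quoted from this book), Kolmogorov's three-series theorem characterizes when a sum of independent real random variables X_1, X_2, ... converges almost surely: the series \sum_n X_n converges a.s. if and only if, for some (equivalently, every) constant c > 0, the following three series all converge:

\[
\sum_n P\big(|X_n| > c\big), \qquad \sum_n E\big[X_n \mathbf{1}_{\{|X_n|\le c\}}\big], \qquad \sum_n \mathrm{Var}\big(X_n \mathbf{1}_{\{|X_n|\le c\}}\big).
\]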
High-dimensional probability offers insight into the behavior of random vectors, random matrices, random subspaces, and the objects used to quantify uncertainty in high dimensions. Drawing on ideas from probability, analysis, and geometry, it lends itself to applications in mathematics, statistics, theoretical computer science, signal processing, optimization, and more. This book is the first to integrate the theory, key tools, and modern applications of high-dimensional probability. Concentration inequalities form its core, and the book covers both classical results, such as Hoeffding's and Chernoff's inequalities, and modern developments, such as the matrix Bernstein inequality. It then introduces powerful methods based on stochastic processes, including tools such as Slepian's, Sudakov's, and Dudley's inequalities, as well as generic chaining and bounds based on VC dimension. A broad range of illustrations is embedded throughout, including classical and modern results for covariance estimation, clustering, networks, semidefinite programming, coding, dimension reduction, matrix completion, machine learning, compressed sensing, and sparse regression.
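To give the flavor of the classical concentration results mentioned above (stated in a standard form for illustration, not quoted from the book), Hoeffding's inequality says that if X_1, ..., X_N are independent random variables with a_i \le X_i \le b_i almost surely, then for every t > 0,

\[
P\left(\sum_{i=1}^N \big(X_i - E X_i\big) \ge t\right) \le \exp\left(-\frac{2t^2}{\sum_{i=1}^N (b_i - a_i)^2}\right).
\]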