This extensive revision of the 2007 book 'Random Graph Dynamics,' covering the current state of mathematical research in the field, is ideal for researchers and graduate students. It considers a small number of graph types, primarily the configuration model and inhomogeneous random graphs, but investigates a wide variety of dynamics. The author describes results on convergence to equilibrium for random walks on random graphs, as well as topics that have matured into research areas of their own since the publication of the first edition, such as epidemics, the contact process, voter models, and coalescing random walks. Chapter 8 discusses a challenging and largely uncharted new direction: systems in which the graph and the states of its vertices coevolve.
A graduate-level introduction to advanced topics in Markov chain Monte Carlo (MCMC), as applied broadly in the Bayesian computational context. The topics covered, many of which have emerged only in the last decade, include stochastic gradient MCMC, non-reversible MCMC, continuous-time MCMC, and new techniques for convergence assessment. A particular focus is on cutting-edge methods that are scalable with respect to either the amount of data or the data dimension, motivated by emerging high-priority application areas in machine learning and AI. Examples are woven throughout the text to demonstrate how scalable Bayesian learning methods can be implemented. This text could form the basis for a course and is sure to be an invaluable resource for researchers in the field.
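To give a flavour of one such method (a standard formulation, not drawn from the book's text), stochastic gradient Langevin dynamics replaces the full-data gradient with a minibatch estimate: for a minibatch $S_t$ of size $n$ from $N$ observations, $\theta_{t+1} = \theta_t + \frac{\epsilon_t}{2}\big(\nabla \log p(\theta_t) + \frac{N}{n} \sum_{i \in S_t} \nabla \log p(x_i \mid \theta_t)\big) + \eta_t$ with $\eta_t \sim \mathcal{N}(0, \epsilon_t I)$, so each step costs only a minibatch gradient plus injected Gaussian noise.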
Bringing together years of research into one useful resource, this text empowers the reader to creatively construct their own dependence models. Intended for senior undergraduate and postgraduate students, it takes a step-by-step look at the construction of specific dependence models, including exchangeable, Markov, moving-average and, in general, spatio-temporal models. All constructions maintain the desired property of pre-specifying the marginal distribution and keeping it invariant: they do not separate the dependence from the marginals, and the mechanisms used to induce dependence are general enough to apply to a very large class of parametric distributions. All constructions are based on appropriate definitions of three building blocks of Bayesian analysis: the prior distribution, the likelihood function, and the posterior distribution. All results are illustrated with examples and graphical representations. Applications with data and code are interspersed throughout the book, covering fields including insurance and epidemiology.
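One such mechanism, sketched here in general terms (the book's own constructions may differ in detail): fix a marginal $\pi(x) = \int f(x \mid \theta)\, p(\theta)\, d\theta$; given $X_t = x$, draw $\theta$ from the posterior $p(\theta \mid x) \propto f(x \mid \theta)\, p(\theta)$, then draw $X_{t+1} \sim f(\cdot \mid \theta)$. Since $\int \pi(x)\, p(\theta \mid x)\, dx = p(\theta)$, the chain $(X_t)$ is stationary with the pre-specified marginal $\pi$, dependence being induced entirely through the shared latent $\theta$.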
Brownian motion is an important topic in various applied fields where the analysis of random events is necessary. Introducing Brownian motion from a statistical viewpoint, this detailed text examines the distribution of quadratic plus linear or bilinear functionals of Brownian motion and demonstrates the utility of this approach for time series analysis. It also offers the first comprehensive guide on deriving the Fredholm determinant and the resolvent associated with such statistics. Presuming only a familiarity with standard statistical theory and the basics of stochastic processes, this book brings together a set of important statistical tools in one accessible resource for researchers and graduate students. Readers also benefit from online appendices, which provide probability density graphs and solutions to the chapter problems.
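As a concrete illustration of the kind of quadratic functional involved (a classical fact, not a quotation from the book), the Cameron-Martin formula gives $E\big[\exp\big(-\tfrac{\lambda^2}{2} \int_0^1 W(t)^2\, dt\big)\big] = (\cosh \lambda)^{-1/2}$, the quantity $\cosh \lambda$ arising as the Fredholm determinant, evaluated at $-\lambda^2$, of the Brownian covariance kernel $\min(s,t)$.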
An emerging field in statistics, distributional regression facilitates the modelling of the complete conditional distribution, rather than just the mean. This book introduces generalized additive models for location, scale and shape (GAMLSS) – one of the most important classes of distributional regression. Taking a broad perspective, the authors consider penalized likelihood inference, Bayesian inference, and boosting as potential ways of estimating models and illustrate their usage in complex applications. Written by the international team who developed GAMLSS, the text's focus on practical questions and problems sets it apart. Case studies demonstrate how researchers in statistics and other data-rich disciplines can use the model in their work, exploring examples ranging from fetal ultrasounds to social media performance metrics. The R code and data sets for the case studies are available on the book's companion website, allowing for replication and further study.
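Schematically (a standard statement of the model class, not a quotation from the book), a GAMLSS assumes $y_i \sim \mathcal{D}(\mu_i, \sigma_i, \nu_i, \tau_i)$ and links each distribution parameter to its own additive predictor, $g_k(\theta_{ik}) = \beta_{0k} + \sum_j s_{jk}(x_{ij})$, with smooth functions $s_{jk}$, so that location, scale and shape can all vary with covariates.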
Complex networks are key to describing the connected nature of the society we live in. This book, the second of two volumes, describes the local structure of random graph models for real-world networks and determines when these models have a giant component and when they are small worlds or ultra-small worlds. It is the first book to cover the theory and implications of local convergence, a crucial technique in the analysis of sparse random graphs. Suitable as a resource for researchers and PhD-level courses, it uses examples of real-world networks, such as the Internet and citation networks, as motivation for the models discussed, and includes exercises at the end of each chapter to develop intuition. The book closes with an extensive discussion of related models and problems that demonstrate modern approaches to network theory, such as community structure and directed models.
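As one concrete instance of the giant-component question (the classical Molloy-Reed criterion, stated here independently of the book): under mild regularity conditions, the configuration model with limiting degree variable $D$ has a giant component precisely when $E[D(D-2)] > 0$, that is, when $E[D^2] > 2E[D]$.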
Providing a graduate-level introduction to discrete probability and its applications, this book develops a toolkit of essential techniques for analysing stochastic processes on graphs, other random discrete structures, and algorithms. Topics covered include the first and second moment methods, concentration inequalities, coupling and stochastic domination, martingales and potential theory, spectral methods, and branching processes. Each chapter expands on a fundamental technique, outlining common uses and showing them in action on simple examples and more substantial classical results. The focus is predominantly on non-asymptotic methods and results. All chapters provide a detailed background review section, plus exercises and signposts to the wider literature. Readers are assumed to have undergraduate-level linear algebra and basic real analysis, while prior exposure to graduate-level probability is recommended. This much-needed broad overview of discrete probability could serve as a textbook or as a reference for researchers in mathematics, statistics, data science, computer science and engineering.
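To give one example of the toolkit's flavour (standard statements, not excerpts from the book): for a nonnegative random variable $X$, the first moment method bounds $P(X \ge 1) \le E[X]$ via Markov's inequality, while the second moment method uses the Cauchy-Schwarz inequality to give $P(X > 0) \ge (E[X])^2 / E[X^2]$; together they are used throughout discrete probability to show that random structures do, or do not, contain a given substructure.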
Actuaries must pass exams, but more than that: they must put knowledge into practice. This coherent book supports the Society of Actuaries' short-term actuarial mathematics syllabus while emphasizing the concepts and practical application of nonlife actuarial models. A class-tested textbook for undergraduate courses in actuarial science, it is also ideal for those approaching their professional exams. Key topics covered include loss modelling, risk and ruin theory, credibility theory and applications, and empirical implementation of loss models. Revised and updated to reflect curriculum changes, this second edition includes two brand new chapters on loss reserving and ratemaking. R replaces Excel as the computation tool used throughout – the featured R code is available on the book's webpage, as are lecture slides. Numerous examples and exercises are provided, with many questions adapted from past Society of Actuaries exams.
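As a taste of the credibility material (a standard result, not a quotation from the book), the Bühlmann credibility premium blends a policyholder's own experience $\bar X$ with the collective mean $\mu$: $P = Z \bar X + (1 - Z)\mu$ with $Z = n/(n + k)$, where $n$ is the number of observations and $k$ is the ratio of the expected process variance to the variance of the hypothetical means.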
Now in its second edition, this accessible text presents a unified Bayesian treatment of state-of-the-art filtering, smoothing, and parameter estimation algorithms for non-linear state space models. The book focuses on discrete-time state space models and carefully introduces fundamental aspects related to optimal filtering and smoothing. In particular, it covers a range of efficient non-linear Gaussian filtering and smoothing algorithms, as well as Monte Carlo-based algorithms. This updated edition features new chapters on constructing state space models of practical systems, the discretization of continuous-time state space models, Gaussian filtering by enabling approximations, posterior linearization filtering, and the corresponding smoothers. Coverage of key topics, including extended Kalman filtering and smoothing and parameter estimation, is expanded. The book's practical, algorithmic approach assumes only modest mathematical prerequisites, making it suitable for graduate and advanced undergraduate students. Many examples are included, with Matlab and Python code available online, enabling readers to implement the algorithms in their own projects.
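As a flavour of the building blocks (the classical linear-Gaussian special case, stated here independently of the book), for the model $x_k = A x_{k-1} + q_k$, $y_k = H x_k + r_k$ with $q_k \sim \mathcal{N}(0, Q)$ and $r_k \sim \mathcal{N}(0, R)$, the Kalman filter alternates prediction, $m_k^- = A m_{k-1}$, $P_k^- = A P_{k-1} A^\top + Q$, and update, $K_k = P_k^- H^\top (H P_k^- H^\top + R)^{-1}$, $m_k = m_k^- + K_k (y_k - H m_k^-)$, $P_k = P_k^- - K_k H P_k^-$; the non-linear Gaussian filters in the book replace these exact computations with approximations.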
While the Poisson distribution is a classical statistical model for count data, it hinges on the constraining property that its mean equals its variance. This text instead introduces the Conway-Maxwell-Poisson distribution and motivates its use in developing flexible statistical methods based on its distributional form. This two-parameter model not only contains the Poisson distribution as a special case but, through its ability to account for data over- or under-dispersion, also encompasses the geometric and Bernoulli distributions. The resulting statistical methods serve in a multitude of ways, from exploratory data analysis tools to flexible models for a variety of count-data settings. The first comprehensive reference on the subject, this text contains numerous illustrative examples demonstrating R code and output. It is essential reading for academics in statistics and data science, as well as quantitative researchers and data analysts in economics, biostatistics and other applied disciplines.
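For concreteness (a standard parameterization, not a quotation from the book), the Conway-Maxwell-Poisson probability mass function is $P(X = x) = \lambda^x / \big((x!)^\nu Z(\lambda, \nu)\big)$ for $x = 0, 1, 2, \ldots$, with normalizing constant $Z(\lambda, \nu) = \sum_{j=0}^\infty \lambda^j / (j!)^\nu$; here $\nu = 1$ recovers the Poisson, $\nu = 0$ with $\lambda < 1$ gives the geometric, and $\nu \to \infty$ yields a Bernoulli distribution with success probability $\lambda/(1+\lambda)$.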
During the past half-century, exponential families have attained a position at the center of parametric statistical inference. Theoretical advances have been matched, and more than matched, in the world of applications, where logistic regression by itself has become the go-to methodology in medical statistics, computer-based prediction algorithms, and the social sciences. This book is based on a one-semester graduate course for first-year Ph.D. and advanced master's students. After presenting the basic structure of univariate and multivariate exponential families, their application to generalized linear models, including logistic and Poisson regression, is described in detail, emphasizing geometrical ideas, computational practice, and the analogy with ordinary linear regression. Connections are made with a variety of current statistical methodologies: missing data, survival analysis and proportional hazards, false discovery rates, bootstrapping, and empirical Bayes analysis. The book connects exponential family theory with its applications in a way that doesn't require advanced mathematical preparation.
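In one common notation (not necessarily the book's own), a $p$-parameter exponential family has densities $f_\beta(x) = e^{\beta^\top y(x) - \psi(\beta)} f_0(x)$, with natural parameter $\beta$, sufficient statistic $y(x)$, cumulant function $\psi(\beta)$, and carrier $f_0$; logistic and Poisson regression both fit this template.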
As a result of the COVID-19 pandemic, medical statistics and public health data have become staples of newsfeeds worldwide, with infection rates, deaths, case fatality rates and the mysterious 'R' figure (the reproduction number) featuring regularly. However, we don't all have the statistical background needed to translate this information into knowledge. In this lively account, Stephen Senn explains these statistical phenomena and demonstrates how statistics is essential to making rational decisions about medical care. The second edition has been thoroughly updated to cover developments of the last two decades and includes a new chapter on the medical statistical challenges of COVID-19, along with additional material on infectious disease modelling and the representation of women in clinical trials. Senn entertains with anecdotes, puzzles and paradoxes, while tackling big themes including clinical trials and the development of medicines, life tables, vaccines and their risks (or lack of them), smoking and lung cancer, and even the power of prayer.
This well-balanced introduction to enterprise risk management integrates quantitative and qualitative approaches and motivates key mathematical and statistical methods with abundant real-world cases, both successes and failures. Worked examples and end-of-chapter exercises support readers in consolidating what they learn. The mathematical level, suitable for graduate and senior undergraduate students in quantitative programs, is pitched to give readers a solid understanding of the concepts and principles involved without diving too deeply into more complex theory. To reveal the connections between different topics, and their relevance to the real world, the presentation follows a coherent narrative arc: from risk governance, through risk identification, risk modelling, and risk mitigation, to the holistic topics of regulation, behavioural biases, and crisis management that influence the whole structure of ERM. The result is a text and reference ideal for students, risk managers in industry, and anyone preparing for ERM actuarial exams.
This compact course is written for the mathematically literate reader who wants to learn to analyze data in a principled fashion. The language of mathematics enables clear exposition that can go quite deep, quite quickly, and naturally supports an axiomatic and inductive approach to data analysis. Starting with a good grounding in probability, the reader moves to statistical inference via topics of great practical importance – simulation and sampling, as well as experimental design and data collection – that are typically absent from introductory accounts. The core of the book then covers both standard methods and such advanced topics as multiple testing, meta-analysis, and causal inference.
Heavy tails – extreme events or values more common than expected – emerge everywhere: the economy, natural events, and social and information networks are just a few examples. Yet after decades of progress, they are still treated as mysterious, surprising, and even controversial, primarily because the necessary mathematical models and statistical methods are not widely known. This book, for the first time, provides a rigorous introduction to heavy-tailed distributions accessible to anyone who knows elementary probability. It tackles and tames the zoo of terminology for models and properties, demystifying topics such as the generalized central limit theorem and regular variation. It tracks the natural emergence of heavy-tailed distributions from a wide variety of general processes, building intuition. And it reveals the controversy surrounding heavy tails to be the result of flawed statistics, then equips readers to identify and estimate heavy tails with confidence. Over 100 exercises complete this engaging package.
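For orientation (a standard definition, not an excerpt from the book), a distribution with survival function $\bar{F}$ has a regularly varying tail with index $\alpha > 0$ if $\lim_{x \to \infty} \bar{F}(tx)/\bar{F}(x) = t^{-\alpha}$ for all $t > 0$, the Pareto tail $\bar{F}(x) = x^{-\alpha}$ being the prototypical example.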
Fay and Brittain present statistical hypothesis testing and compatible confidence intervals, focusing on application and proper interpretation. The emphasis is on equipping applied statisticians with enough tools - and advice on choosing among them - to find reasonable methods for almost any problem and enough theory to tackle new problems by modifying existing methods. After covering the basic mathematical theory and scientific principles, tests and confidence intervals are developed for specific types of data. Essential methods for applications are covered, such as general procedures for creating tests (e.g., likelihood ratio, bootstrap, permutation, testing from models), adjustments for multiple testing, clustering, stratification, causality, censoring, missing data, group sequential tests, and non-inferiority tests. New methods developed by the authors are included throughout, such as melded confidence intervals for comparing two samples and confidence intervals associated with Wilcoxon-Mann-Whitney tests and Kaplan-Meier estimates. Examples, exercises, and the R package asht support practical use.
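The organizing duality behind such compatible pairs (a standard fact, not the authors' wording) is that a $100(1-\alpha)\%$ confidence set can be obtained by inverting a family of level-$\alpha$ tests, $C(X) = \{\theta_0 : \text{the test of } H_0\!: \theta = \theta_0 \text{ does not reject at level } \alpha\}$, and conversely a test can be read off from a confidence procedure.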
Stable Lévy processes lie at the intersection of Lévy processes and self-similar Markov processes. Processes in the latter class enjoy a Lamperti-type representation as the space-time path transformation of so-called Markov additive processes (MAPs). This completely new mathematical treatment takes advantage of the fact that the underlying MAP for stable processes can be described explicitly in one dimension and semi-explicitly in higher dimensions, and uses this approach to catalogue a large number of explicit results describing the path fluctuations of stable Lévy processes in one and higher dimensions. Written for graduate students and researchers in the field, this book systematically establishes many classical results as well as presenting many recent ones from the last decade, including previously unpublished material. Topics explored include first hitting laws for a variety of sets, path conditionings, law-preserving path transformations, the distribution of extremal points, growth envelopes and winding behaviour.
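In outline (stated generally; the book's notation may differ), the Lamperti-type representation expresses the radial part of a self-similar Markov process of index $\alpha$ started at $x > 0$ as a time-changed exponential of the MAP's ordinate $\xi$: $|X_t| = x \exp\big(\xi_{\tau(t x^{-\alpha})}\big)$, where $\tau(s) = \inf\{u \ge 0 : \int_0^u e^{\alpha \xi_v}\, dv > s\}$, with an accompanying angular component in higher dimensions.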
The twenty-first century has seen a breathtaking expansion of statistical methodology, both in scope and influence. 'Data science' and 'machine learning' have become familiar terms in the news, as statistical methods are brought to bear upon the enormous data sets of modern science and commerce. How did we get here? And where are we going? How does it all fit together? Now in paperback and fortified with exercises, this book delivers a concentrated course in modern statistical thinking. Beginning with classical inferential theories - Bayesian, frequentist, Fisherian - individual chapters take up a series of influential topics: survival analysis, logistic regression, empirical Bayes, the jackknife and bootstrap, random forests, neural networks, Markov chain Monte Carlo, inference after model selection, and dozens more. The distinctly modern approach integrates methodology and algorithms with statistical inference. Each chapter ends with class-tested exercises, and the book concludes with speculation on the future direction of statistics and data science.
Applications of queueing network models have multiplied in the last generation, including scheduling of large manufacturing systems, control of patient flow in health systems, load balancing in cloud computing, and matching in ride-sharing. These problems are too large and complex for exact solution, but their scale allows approximation. This book is the first comprehensive treatment of fluid scaling, diffusion scaling, and many-server scaling in a single text, presented at a level suitable for graduate students. Fluid scaling is used to verify stability, in particular treating max-weight policies, and to study optimal control of transient queueing networks. Diffusion scaling is used to control systems in balanced heavy traffic, by solving for optimal scheduling, admission control, and routing in Brownian networks. Many-server scaling is studied in the quality- and efficiency-driven Halfin–Whitt regime and applied to load balancing in the supermarket model and to bipartite matching in ride-sharing applications.
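To fix ideas (standard definitions, not quotations from the book): for a queue-length process $X$ with fluid limit $\bar{x}$, fluid scaling studies $\bar{X}^n(t) = n^{-1} X(nt)$, whose limits are deterministic paths used for stability and transient control, while diffusion scaling studies the centred process $\hat{X}^n(t) = n^{-1/2}\big(X(nt) - n\bar{x}(t)\big)$, whose Brownian limits underpin control in balanced heavy traffic.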
In nonparametric and high-dimensional statistical models, the classical Gauss–Fisher–Le Cam theory of the optimality of maximum likelihood estimators and Bayesian posterior inference does not apply, and new foundations and ideas have been developed in the past several decades. This book gives a coherent account of the statistical theory in infinite-dimensional parameter spaces. The mathematical foundations include self-contained 'mini-courses' on the theory of Gaussian and empirical processes, approximation and wavelet theory, and the basic theory of function spaces. The theory of statistical inference in such models - hypothesis testing, estimation and confidence sets - is presented within the minimax paradigm of decision theory. This includes the basic theory of convolution kernel and projection estimation, but also Bayesian nonparametrics and nonparametric maximum likelihood estimation. In a final chapter the theory of adaptive inference in nonparametric models is developed, including Lepski's method, wavelet thresholding, and adaptive inference for self-similar functions. Winner of the 2017 PROSE Award for Mathematics.
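Within this paradigm (a standard formulation, not an excerpt from the book), performance is benchmarked by the minimax risk $R^*(\Theta) = \inf_{\tilde{\theta}} \sup_{\theta \in \Theta} E_\theta\, \ell(\tilde{\theta}, \theta)$ over the infinite-dimensional parameter set $\Theta$; adaptive procedures aim to attain it, up to constants, simultaneously over a scale of such sets without knowing which one contains the truth.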