The hard-core model has as its configurations the independent sets of some graph instance $G$. The probability distribution on independent sets is controlled by a ‘fugacity’ $\lambda \gt 0$, with higher $\lambda$ leading to denser configurations. We investigate the mixing time of Glauber (single-site) dynamics for the hard-core model on restricted classes of bounded-degree graphs in which a particular graph $H$ is excluded as an induced subgraph. If $H$ is a subdivided claw then, for all $\lambda$, the mixing time is $O(n\log n)$, where $n$ is the order of $G$. This extends a result of Chen and Gu for claw-free graphs. When $H$ is a path, the set of possible instances is finite. For all other $H$, the mixing time is exponential in $n$ for sufficiently large $\lambda$, depending on $H$ and the maximum degree of $G$.
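As a concrete reference for the dynamics analysed above, the following is a minimal sketch of the standard heat-bath (Glauber) update for the hard-core model on an arbitrary graph; the adjacency dictionary, the fugacity value, and the number of updates are illustrative choices, not taken from the paper.

```python
import random

def glauber_step(adj, occupied, lam, rng=random):
    """One single-site (heat-bath) update of the hard-core Glauber dynamics.

    adj:      dict mapping each vertex to the set of its neighbours
    occupied: current independent set (a set of vertices), modified in place
    lam:      fugacity lambda > 0; larger values favour denser configurations
    """
    v = rng.choice(list(adj))              # pick a uniformly random vertex
    occupied.discard(v)                    # temporarily vacate v
    if all(u not in occupied for u in adj[v]):
        # v is unblocked: re-occupy it with probability lambda / (1 + lambda)
        if rng.random() < lam / (1.0 + lam):
            occupied.add(v)
    return occupied

# Illustrative run on a 4-cycle, starting from the empty independent set.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
config = set()
for _ in range(10_000):
    config = glauber_step(adj, config, lam=1.5)
print(sorted(config))
```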
We consider the problem of sequential matching in a stochastic block model with several classes of nodes and generic compatibility constraints. When the probabilities of connection do not scale with the size of the graph, we show that, under the Ncond condition, a simple max-weight type policy attains an asymptotically perfect matching, while no sequential algorithm attains a perfect matching otherwise. The proof relies on a specific Markovian representation of the dynamics, combined with Lyapunov techniques.
The gambler’s ruin problem for correlated random walks (CRWs), both with and without delays, is addressed using the optional stopping theorem for martingales. We derive closed-form expressions for the ruin probabilities and the expected game duration for CRWs with increments $\{1,-1\}$ and for symmetric CRWs with increments $\{1,0,-1\}$ (CRWs with delays). Additionally, a martingale technique is developed for general CRWs with delays. The gambler’s ruin probability for a game involving bets on two arbitrary patterns is also examined.
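For readers who want a numerical sanity check of quantities of this kind, here is a small Monte Carlo sketch for a correlated random walk with increments $\{1,-1\}$; the persistence parameterisation (each step repeats the previous increment with probability `p_repeat`) and all numerical values are assumptions for illustration and need not match the paper's conventions.

```python
import random

def crw_ruin_mc(start, target, p_repeat, first_up=0.5, n_runs=100_000, rng=random):
    """Monte Carlo estimate of the ruin probability and mean game duration for
    a correlated random walk with increments {+1, -1}, absorbed at 0 (ruin)
    and at `target` (win).  Each step repeats the previous increment with
    probability `p_repeat`; the first increment is +1 with probability `first_up`.
    """
    ruins, total_steps = 0, 0
    for _ in range(n_runs):
        x, steps = start, 0
        step = 1 if rng.random() < first_up else -1
        while 0 < x < target:
            x += step
            steps += 1
            if rng.random() >= p_repeat:    # with probability 1 - p_repeat, reverse
                step = -step
        ruins += (x == 0)
        total_steps += steps
    return ruins / n_runs, total_steps / n_runs

print(crw_ruin_mc(start=3, target=10, p_repeat=0.7))
```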
In their celebrated paper [CLR10], Caputo, Liggett and Richthammer proved Aldous’ conjecture and showed that for an arbitrary finite graph, the spectral gap of the interchange process is equal to the spectral gap of the underlying random walk. A crucial ingredient in the proof was the Octopus Inequality — a certain inequality of operators in the group ring $\mathbb{R}\left[{\mathrm{Sym}}_{n}\right]$ of the symmetric group. Here we generalise the Octopus Inequality and apply it to extend the Caputo–Liggett–Richthammer Theorem to certain hypergraphs, proving some cases of a conjecture of Caputo.
We show that, for $\lambda\in[0,{m_1}/({1+\sqrt{1-{1}/{m_1}}})]$, the speed of the $\lambda$-biased random walk on a Galton–Watson tree without leaves is strictly decreasing in $\lambda$, where $m_1\geq 2$. Our result extends the interval of $\lambda$ on which the speed on a Galton–Watson tree is known to be monotone.
This paper characterizes irreducible phase-type representations for exponential distributions. Bean and Green (2000) gave a set of necessary and sufficient conditions for a phase-type distribution with an irreducible generator matrix to be exponential. We extend these conditions to irreducible representations, and we thus give a characterization of all irreducible phase-type representations for exponential distributions. We consider the results in relation to time-reversal of phase-type distributions, PH-simplicity, and the algebraic degree of a phase-type distribution, and we give applications of the results. In particular we give the conditions under which a Coxian distribution becomes exponential, and we construct bivariate exponential distributions. Finally, we translate the main findings to the discrete case of geometric distributions.
We consider the hard-core model on a finite square grid graph with stochastic Glauber dynamics parametrized by the inverse temperature $\beta$. We investigate how the transition between its two maximum-occupancy configurations takes place in the low-temperature regime $\beta \to \infty$ in the case of periodic boundary conditions. The hard-core constraints and the grid symmetry make the structure of the critical configurations for this transition, also known as essential saddles, very rich and complex. We provide a comprehensive geometrical characterization of these configurations that together constitute a bottleneck for the Glauber dynamics in the low-temperature limit. In particular, we develop a novel isoperimetric inequality for hard-core configurations with a fixed number of particles and show how the essential saddles are characterized not only by the number of particles but also by their geometry.
The embedding problem of Markov chains examines whether a stochastic matrix $\mathbf{P}$ can arise as the transition matrix from time 0 to time 1 of a continuous-time Markov chain. When the chain is homogeneous, it checks if $\mathbf{P}=\exp(\mathbf{Q})$ for a rate matrix $\mathbf{Q}$ with zero row sums and non-negative off-diagonal elements, called a Markov generator. It is known that a Markov generator may not always exist or be unique. This paper addresses finding $\mathbf{Q}$, assuming that the process has at most one jump per unit time interval, and focuses on the problem of aligning the conditional one-jump transition matrix from time 0 to time 1 with $\mathbf{P}$. We derive a formula for this matrix in terms of $\mathbf{Q}$ and establish that for any $\mathbf{P}$ with non-zero diagonal entries, a unique $\mathbf{Q}$, called the $\mathbb{1}$-generator, exists. We compare the $\mathbb{1}$-generator with the one-jump rate matrix from Jarrow, Lando, and Turnbull (1997), showing which is a better approximate Markov generator of $\mathbf{P}$ in some practical cases.
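As a hedged illustration of the embedding problem (not of the paper's $\mathbb{1}$-generator), the sketch below computes the principal matrix logarithm of $\mathbf{P}$ and tests whether it is a valid Markov generator; the example matrix and tolerance are arbitrary, and a negative answer for the principal branch does not by itself rule out embeddability via another branch.

```python
import numpy as np
from scipy.linalg import logm, expm

def principal_log_generator(P, tol=1e-10):
    """Candidate generator Q = log(P) (principal branch) and a check that Q is
    a Markov generator: zero row sums and non-negative off-diagonal entries."""
    Q = np.real(logm(P))
    zero_row_sums = np.allclose(Q.sum(axis=1), 0.0, atol=tol)
    off_diag = Q - np.diag(np.diag(Q))
    nonneg_off_diag = bool(np.all(off_diag >= -tol))
    return Q, zero_row_sums and nonneg_off_diag

# A 2x2 example that is embeddable.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
Q, ok = principal_log_generator(P)
print(ok)                        # True: the principal logarithm is a generator
print(np.round(expm(Q), 6))      # reproduces P up to rounding
```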
We consider the performance of Glauber dynamics for the random cluster model with real parameter $q\gt 1$ and temperature $\beta \gt 0$. Recent work by Helmuth, Jenssen, and Perkins detailed the ordered/disordered transition of the model on random $\Delta$-regular graphs for all sufficiently large $q$ and obtained an efficient sampling algorithm for all temperatures $\beta$ using cluster expansion methods. Despite this major progress, the performance of natural Markov chains, including Glauber dynamics, is not yet well understood on the random regular graph, partly because of the non-local nature of the model (especially at low temperatures) and partly because of severe bottleneck phenomena that emerge in a window around the ordered/disordered transition. Nevertheless, it is widely conjectured that the bottleneck phenomena that impede mixing from worst-case starting configurations can be avoided by initialising the chain more judiciously. Our main result establishes this conjecture for all sufficiently large $q$ (with respect to $\Delta$). Specifically, we consider the mixing time of Glauber dynamics initialised from the two extreme configurations, the all-in and all-out, and obtain a pair of fast mixing bounds which cover all temperatures $\beta$, including in particular the bottleneck window. Our result is inspired by the recent approach of Gheissari and Sinclair for the Ising model, who obtained a mixing-time bound of a similar flavour on the random regular graph for sufficiently low temperatures. To cover all temperatures in the random cluster model, we refine appropriately the structural results of Helmuth, Jenssen and Perkins about the ordered/disordered transition and show spatial mixing properties ‘within the phase’, which are then related to the evolution of the chain.
We study the mixing time of the single-site update Markov chain, known as the Glauber dynamics, for generating a random independent set of a tree. Our focus is obtaining optimal convergence results for arbitrary trees. We consider the more general problem of sampling from the Gibbs distribution in the hard-core model where independent sets are weighted by a parameter $\lambda \gt 0$; the special case $\lambda =1$ corresponds to the uniform distribution over all independent sets. Previous work of Martinelli, Sinclair and Weitz (2004) obtained optimal mixing time bounds for the complete $\Delta$-regular tree for all $\lambda$. However, Restrepo, Stefankovic, Vera, Vigoda, and Yang (2014) showed that for sufficiently large $\lambda$ there are bounded-degree trees where optimal mixing does not hold. Recent work of Eppstein and Frishberg (2022) proved a polynomial mixing time bound for the Glauber dynamics for arbitrary trees, and more generally for graphs of bounded tree-width.
We establish an optimal bound on the relaxation time (i.e., inverse spectral gap) of $O(n)$ for the Glauber dynamics for unweighted independent sets on arbitrary trees. We stress that our results hold for arbitrary trees and there is no dependence on the maximum degree $\Delta$. Interestingly, our results extend (far) beyond the uniqueness threshold, which is of order $\lambda =O(1/\Delta )$. Our proof approach is inspired by recent work on spectral independence. In fact, we prove that spectral independence holds with a constant independent of the maximum degree for any tree, but this does not imply mixing for general trees as the optimal mixing results of Chen, Liu, and Vigoda (2021) only apply for bounded-degree graphs. We instead utilize the combinatorial nature of independent sets to directly prove approximate tensorization of variance via a non-trivial inductive proof.
We consider linear-fractional branching processes (one-type and two-type) with immigration in varying environments. For $n\ge0$, let $Z_n$ count the number of individuals of the $n$th generation, which excludes the immigrant who enters the system at time $n$. We call $n$ a regeneration time if $Z_n=0$. For both the one-type and two-type cases, we give criteria for the finiteness or infiniteness of the number of regeneration times. We then construct some concrete examples to exhibit the strange phenomena caused by the so-called varying environments. For example, it may happen that the process is extinct, but there are only finitely many regeneration times. We also study the asymptotics of the number of regeneration times of the model in the example.
For a partially specified stochastic matrix, we consider the problem of completing it so as to minimize Kemeny’s constant. We prove that for any partially specified stochastic matrix for which the problem is well defined, there is a minimizing completion that is as sparse as possible. We also find the minimum value of Kemeny’s constant in two special cases: when the diagonal has been specified and when all specified entries lie in a common row.
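For concreteness, a minimal sketch of how Kemeny's constant can be evaluated for a fully specified irreducible stochastic matrix is given below (conventions differ by an additive constant; this version equals $\sum_j \pi_j m_{ij}$ with $m_{ii}=0$, the same for every starting state $i$). The example matrix is an arbitrary illustration, not one of the partially specified matrices studied in the paper.

```python
import numpy as np

def kemeny_constant(P):
    """Kemeny's constant of an irreducible stochastic matrix P, computed as the
    sum of 1/(1 - lambda) over the eigenvalues lambda of P other than 1."""
    eigvals = np.linalg.eigvals(P)
    perron = np.argmin(np.abs(eigvals - 1.0))   # the single eigenvalue equal to 1
    rest = np.delete(eigvals, perron)
    return float(np.real(np.sum(1.0 / (1.0 - rest))))

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
print(kemeny_constant(P))
```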
We review criteria for comparing the efficiency of Markov chain Monte Carlo (MCMC) methods with respect to the asymptotic variance of estimates of expectations of functions of state, and show how such criteria can justify ways of combining improvements to MCMC methods. We say that a chain on a finite state space with transition matrix $P$ efficiency-dominates one with transition matrix $Q$ if for every function of state it has lower (or equal) asymptotic variance. We give elementary proofs of some previous results regarding efficiency dominance, leading to a self-contained demonstration that a reversible chain with transition matrix $P$ efficiency-dominates a reversible chain with transition matrix $Q$ if and only if none of the eigenvalues of $Q-P$ are negative. This allows us to conclude that modifying a reversible MCMC method to improve its efficiency will also improve the efficiency of a method that randomly chooses either this or some other reversible method, and to conclude that improving the efficiency of a reversible update for one component of state (as in Gibbs sampling) will improve the overall efficiency of a reversible method that combines this and other updates. It also explains how antithetic MCMC can be more efficient than independent and identically distributed sampling. We also establish conditions that can guarantee that a method is not efficiency-dominated by any other method.
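The eigenvalue criterion above lends itself to a direct numerical check. The following sketch symmetrises $Q-P$ with respect to the stationary distribution of $P$ and inspects its spectrum; it assumes both chains are reversible with the same stationary distribution (this is not verified), and the example matrices are illustrative only.

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution of an irreducible stochastic matrix P
    (left Perron eigenvector, normalised to sum to one)."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

def efficiency_dominates(P, Q, tol=1e-10):
    """True if no eigenvalue of Q - P is negative, i.e. (for reversible P and Q
    with the same stationary distribution) P efficiency-dominates Q."""
    d = np.sqrt(stationary_distribution(P))
    S = (d[:, None] * (Q - P)) / d[None, :]     # similarity transform; symmetric
    eigs = np.linalg.eigvalsh((S + S.T) / 2)    # symmetrise against round-off
    return bool(np.all(eigs >= -tol))

# A chain efficiency-dominates its lazy version Q = (P + I)/2,
# since Q - P = (I - P)/2 has non-negative eigenvalues.
P = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
Q = 0.5 * P + 0.5 * np.eye(3)
print(efficiency_dominates(P, Q))   # True
```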
We consider a Markov control model with Borel state space, compact metric action space, and transitions assumed to have a density function with respect to some probability measure satisfying some continuity conditions. We study the optimization problem of maximizing the probability of visiting some subset of the state space infinitely often, and we show that there exists an optimal stationary Markov policy for this problem. We endow the set of stationary Markov policies and the family of strategic probability measures with adequate topologies (namely, the narrow topology for Young measures and the $ws^\infty$-topology, respectively) to obtain compactness and continuity properties, which allow us to obtain our main results.
Eaton (1992) considered a general parametric statistical model paired with an improper prior distribution for the parameter and proved that if a certain Markov chain, constructed using the model and the prior, is recurrent, then the improper prior is strongly admissible, which (roughly speaking) means that the generalized Bayes estimators derived from the corresponding posterior distribution are admissible. Hobert and Robert (1999) proved that Eaton’s Markov chain is recurrent if and only if its so-called conjugate Markov chain is recurrent. The focus of this paper is a family of Markov chains that contains all of the conjugate chains that arise in the context of a Poisson model paired with an arbitrary improper prior for the mean parameter. Sufficient conditions for recurrence and transience are developed and these are used to establish new results concerning the strong admissibility of non-conjugate improper priors for the Poisson mean.
In this paper, we introduce a slight variation of the dominated-coupling-from-the-past (DCFTP) algorithm of Kendall, for bounded Markov chains. It is based on the control of a (typically non-monotonic) stochastic recursion by another (typically monotonic) one. We show that this algorithm is particularly suitable for stochastic matching models with bounded patience, a class of models for which the steady-state distribution of the system is in general unknown in closed form. We first show that the Markov chain of this model can easily be controlled by an infinite-server queue. We then investigate the particular case where patience times are deterministic, and this control argument may fail. In that case we resort to an ad-hoc technique that can also be seen as a control (this time, by the arrival sequence). We then compare this algorithm to the primitive coupling-from-the-past (CFTP) algorithm and to control by an infinite-server queue, and show how our perfect simulation results can be used to estimate and compare, for instance, the loss probabilities of various systems in equilibrium.
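To fix ideas about the ‘primitive’ CFTP algorithm mentioned above (not the DCFTP variant developed in the paper), here is a textbook Propp–Wilson sketch for a monotone bounded chain; the birth–death chain, its parameters, and the seed are all illustrative assumptions.

```python
import random

def cftp_birth_death(n_states, p_up, seed=0):
    """Coupling from the past for a monotone birth--death chain on
    {0, ..., n_states - 1}: from each state, move up with probability p_up and
    down otherwise, reflecting at the boundaries.  Returns one exact sample
    from the stationary distribution."""
    rng = random.Random(seed)
    uniforms = []                          # shared randomness, reused as we go further back
    T = 1
    while True:
        while len(uniforms) < T:           # uniforms[k] drives the update at time -(k+1)
            uniforms.append(rng.random())
        lo, hi = 0, n_states - 1           # coupled chains started from the two extremes
        for k in range(T - 1, -1, -1):     # run from time -T up to time 0
            u = uniforms[k]
            lo = min(lo + 1, n_states - 1) if u < p_up else max(lo - 1, 0)
            hi = min(hi + 1, n_states - 1) if u < p_up else max(hi - 1, 0)
        if lo == hi:                       # coalescence: the common value is exact
            return lo
        T *= 2                             # otherwise restart from further in the past

print([cftp_birth_death(10, 0.55, seed=s) for s in range(5)])
```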
The purpose of this study is to present a subgeometric convergence formula for the stationary distribution of the finite-level M/G/1-type Markov chain when taking its infinite-level limit, where the upper boundary level goes to infinity. This study is carried out using the fundamental deviation matrix, which is a block-decomposition-friendly solution to the Poisson equation of the deviation matrix. The fundamental deviation matrix provides a difference formula for the respective stationary distributions of the finite-level chain and the corresponding infinite-level chain. The difference formula plays a crucial role in the derivation of the main result of this paper, and the main result is used, for example, to derive an asymptotic formula for the loss probability in the MAP/GI/1/N queue.
Inaccuracy and information measures based on cumulative residual entropy are quite useful and have received considerable attention in many fields, such as statistics, probability, and reliability theory. In particular, many authors have studied cumulative residual inaccuracy between coherent systems based on system lifetimes. In a previous paper (Bueno and Balakrishnan, Prob. Eng. Inf. Sci. 36, 2022), we discussed a cumulative residual inaccuracy measure for coherent systems at component level, that is, based on the common, stochastically dependent component lifetimes observed under a non-homogeneous Poisson process. In this paper, using a point process martingale approach, we extend this concept to a cumulative residual inaccuracy measure between non-explosive point processes and then specialize the results to Markov occurrence times. If the processes satisfy the proportional risk hazard process property, then the measure determines the Markov chain uniquely. Several examples are presented, including birth-and-death processes and pure birth processes, and then the results are applied to coherent systems at component level subject to Markov failure and repair processes.
For an $n$-element subset $U$ of $\mathbb{Z}^2$, select $x$ from $U$ according to harmonic measure from infinity, remove $x$ from $U$, and start a random walk from $x$. If the walk is at $y$ just before it first enters the rest of $U$, add $y$ to the set. Iterating this procedure constitutes the process we call harmonic activation and transport (HAT).
HAT exhibits a phenomenon we refer to as collapse: Informally, the diameter shrinks to its logarithm over a number of steps which is comparable to this logarithm. Collapse implies the existence of the stationary distribution of HAT, where configurations are viewed up to translation, and the exponential tightness of the diameter at stationarity. Additionally, collapse produces a renewal structure with which we establish that the center of mass process, properly rescaled, converges in distribution to two-dimensional Brownian motion.
To characterize the phenomenon of collapse, we address fundamental questions about the extremal behavior of harmonic measure and escape probabilities. Among $n$-element subsets of $\mathbb{Z}^2$, what is the least positive value of harmonic measure? What is the probability of escape from the set to a distance of, say, $d$? Concerning the former, examples abound for which the harmonic measure is exponentially small in $n$. We prove that it can be no smaller than exponential in $n \log n$. Regarding the latter, the escape probability is at most the reciprocal of $\log d$, up to a constant factor. We prove it is always at least this much, up to an $n$-dependent factor.
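A rough simulation sketch of one HAT step, under stated assumptions: harmonic measure from infinity is approximated by releasing a simple random walk from a distant circle (with a restart rule to keep run times finite), the set is assumed to contain at least two points near the origin, and the radii are illustrative choices rather than part of the model.

```python
import math
import random

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def walk_until_hit(z, targets, rng, max_radius=None):
    """Simple random walk from z until it enters `targets` (or, if max_radius
    is given, until it leaves the box of that radius).  Returns the hitting
    point and the point occupied just before hitting, or (None, None) if the
    walk escaped first."""
    prev = None
    while z not in targets:
        if max_radius is not None and max(abs(z[0]), abs(z[1])) > max_radius:
            return None, None
        prev = z
        dx, dy = rng.choice(MOVES)
        z = (z[0] + dx, z[1] + dy)
    return z, prev

def hat_step(U, start_radius=30, rng=random):
    """One HAT step on Z^2; harmonic measure from infinity is crudely
    approximated by a walk released from a random point on a distant circle,
    re-released whenever it wanders ten times farther out."""
    x = None
    while x is None:
        theta = rng.random() * 2 * math.pi
        far = (round(start_radius * math.cos(theta)),
               round(start_radius * math.sin(theta)))
        x, _ = walk_until_hit(far, U, rng, max_radius=10 * start_radius)
    # Activation: remove x.  Transport: walk from x until it first enters the
    # rest of U, and add the site from which that entering step was made.
    rest = set(U) - {x}
    _, y = walk_until_hit(x, rest, rng)
    return rest | {y}

U = {(0, 0), (1, 0), (4, 0), (0, 3)}
for _ in range(3):
    U = hat_step(U)
print(sorted(U))
```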