In Chapter 6 we present a general approach, relying on the diffusion approximation, to proving renewal theorems for Markov chains; accordingly, we consider Markov chains which may be approximated by a diffusion process. For a transient Markov chain with asymptotically zero drift, the average time spent by the chain in a unit interval is, roughly speaking, the reciprocal of the drift.
We apply a martingale-type technique and show that the asymptotic behaviour of the renewal measure depends heavily on the rate at which the drift vanishes. As in the previous two chapters, two main cases are distinguished: either the drift of the chain decreases as 1/x, or much more slowly than that. In contrast with the case of an asymptotically positive drift considered in Chapter 10, the case of vanishing drift is quite tricky to analyse, since the Markov chain tends to infinity rather slowly.
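To make the reciprocal-of-drift heuristic concrete, here is a small illustrative simulation (our own sketch, with function names and parameter values of our choosing, not taken from the book): a nearest-neighbour chain on the positive integers with up-probability p(x) = (1 + c/√x)/2, so that the drift μ(x) = c/√x vanishes more slowly than 1/x and the chain is transient. The heuristic predicts that the expected number of visits to a site y is approximately 1/μ(y) = √y/c.

```python
import random

# Sketch of the reciprocal-of-drift heuristic for the renewal measure
# (illustrative only; names and parameters are ours, not the book's).
# Nearest-neighbour chain on {1, 2, ...} with drift mu(x) = c/sqrt(x) > 0,
# which vanishes more slowly than 1/x, so the chain is transient.

def visits_to(y, c, x0, ceiling, rng):
    """Run one path from x0 until it first exceeds `ceiling`;
    return the number of visits to the site y."""
    x, count = x0, 0
    while x <= ceiling:
        if x == y:
            count += 1
        p_up = 0.5 * (1.0 + c / x ** 0.5)   # drift p_up - p_down = c/sqrt(x)
        x += 1 if rng.random() < p_up else -1
        x = max(x, 1)                        # reflect at the lower boundary
    return count

rng = random.Random(42)
n_paths, y, c = 500, 49, 0.5
mean_visits = sum(visits_to(y, c, 9, 200, rng) for _ in range(n_paths)) / n_paths
prediction = y ** 0.5 / c                    # 1/mu(y) = sqrt(y)/c = 14
print(f"simulated mean visits to {y}: {mean_visits:.1f}, heuristic: {prediction:.1f}")
```

With c = 0.5 and y = 49 the heuristic predicts 14 visits on average; a run over a few hundred paths typically lands close to that value.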
In Chapter 3 we consider (right) transient Markov chains taking values in R. We are interested in down-crossing probabilities for them. These clearly depend on the asymptotic properties of the chain drift at infinity.
In Chapter 9 we consider a recurrent Markov chain possessing an invariant measure which is either probabilistic in the case of positive recurrence or σ-finite in the case of null recurrence. Our main aim here is to describe the asymptotic behaviour of the invariant distribution tail for a class of Markov chains with asymptotically zero drift going to zero more slowly than 1/x. We start with a result which states that a typical stationary Markov chain with asymptotically zero drift always generates a heavy-tailed invariant distribution, in sharp contrast with the case of Markov chains with asymptotically negative drift bounded away from zero. Then we develop techniques needed for deriving precise tail asymptotics of Weibullian type.
The main goal of Chapter 11 is to demonstrate how the theory developed in the previous chapters can be used in the study of various Markov models that give rise to Markov chains with asymptotically zero drift. Some of these models are popular in stochastic modelling: random walks conditioned to stay positive, state-dependent branching processes or branching processes with migration, and stochastic difference equations. In contrast with the general approach discussed here, the methods available in the literature for the investigation of these models are mostly model-tailored. We also introduce some new models to which our approach is applicable. For example, we introduce a risk process with a surplus-dependent premium rate which converges to the critical threshold in the net-profit condition. Furthermore, we introduce a new class of branching processes with migration and with state-dependent offspring distributions.
In Chapter 8 we consider a recurrent Markov chain possessing an invariant measure which is either probabilistic in the case of positive recurrence or σ-finite in the case of null recurrence. Our main aim here is to describe the asymptotic behaviour of the invariant distribution tail for a class of Markov chains with asymptotically zero drift proportional to 1/x. We start with a result which states that a typical stationary Markov chain with asymptotically zero drift always generates a heavy-tailed invariant distribution, in sharp contrast with the case of Markov chains with asymptotically negative drift bounded away from zero. Then we develop techniques needed for deriving precise tail asymptotics of power type.
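In the 1/x drift regime, the power-type tail can be checked by hand in the nearest-neighbour case, where the invariant measure has an explicit product form. The following sketch (our own toy computation with parameters chosen by us, not a statement of the book's theorem) builds a birth-death chain with drift −c/x and fits the tail exponent of its stationary measure, which should be close to −2c.

```python
import math

# Toy birth-death illustration of a power-type stationary tail (our sketch):
# nearest-neighbour chain with up-probability p_up(x) = (1 - c/x)/2 for x >= 2,
# i.e. drift mu(x) = -c/x. Its stationary measure has the product form
# pi(x) / pi(x-1) = p_up(x-1) / p_down(x), and should decay like x^(-2c).

c = 1.5  # drift constant; 2c = 3 > 1, so the chain is positive recurrent

def p_up(x):
    return 0.5 * (1.0 - c / x) if x >= 2 else 1.0   # reflect at x = 1

def p_down(x):
    return 1.0 - p_up(x)

log_pi = [0.0]                      # log pi(1), up to an additive constant
for x in range(2, 5001):
    log_pi.append(log_pi[-1] + math.log(p_up(x - 1)) - math.log(p_down(x)))

# fitted local power-law exponent between x = 1000 and x = 5000
exponent = (log_pi[4999] - log_pi[999]) / (math.log(5000) - math.log(1000))
print(f"fitted tail exponent ≈ {exponent:.2f} (prediction: -2c = {-2 * c})")
```

The fitted exponent lands very close to −3 here, matching the −2c prediction for this toy chain.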
In the Introduction we mostly discuss nearest-neighbour Markov chains, which represent one of the two classes of Markov chains for which either the invariant measure (in the case of positive recurrence) or the Green function (in the case of transience) is available in closed form. The closed form makes a direct analysis of such Markov chains possible: classification, and tail asymptotics of the invariant probabilities or of the Green function. This discussion sheds some light on what we may expect for general Markov chains. The other class is provided by diffusion processes, which are also discussed in the Introduction.
Chapters 4 and 5 of the present monograph deal comprehensively with limit theorems for transient Markov chains. In Chapter 4 we consider drifts of order 1/x, and prove limit theorems including convergence to a Γ-distribution and functional convergence to a Bessel process. We also study the asymptotic behaviour of the renewal measure, which is not straightforward as there is no law of large numbers owing to the comparable contributions of the drift and fluctuations.
In Chapter 10 we consider Markov chains with asymptotically constant (non-zero) drift. As shown in the previous chapter, the more slowly the drift decreases to zero, the higher are the moments that should behave regularly at infinity; this is needed to make it possible to describe the asymptotic tail behaviour of the invariant measure. Therefore, it is not surprising that in the case of an asymptotically negative drift bounded away from zero we need to assume that the distribution of jumps converges weakly at infinity. This corresponds, roughly speaking, to the assumption that all moments behave regularly at infinity. In this chapter we slightly extend the notion of an asymptotically homogeneous Markov chain by allowing extended limiting random variables.
In Chapter 2 we introduce a classification of Markov chains with asymptotically zero drift, which relies on relations between the drift and the second moment of jumps, with many improvements on the results known in the literature. Additional assumptions are expressed in terms of truncated moments of higher orders and tail probabilities of jumps. Another, more important, contrast with previous results on recurrence/transience is the fact that we do not use concrete Lyapunov test functions (quadratic or similar). Instead, we construct an abstract Lyapunov function which is motivated by the harmonic function of a diffusion process with the same drift and diffusion coefficient.
Chapters 4 and 5 of the present monograph deal comprehensively with limit theorems for transient Markov chains. In Chapter 5 we consider drifts decreasing more slowly than 1/x and prove limit theorems including weak and strong laws of large numbers, convergence to the normal distribution, functional convergence to Brownian motion, and the asymptotic behaviour of the renewal measure.
Chapter 7 is the most conceptual part of the book. Our purpose here is to describe, without superfluous details, a change-of-measure strategy which allows us to transform a recurrent chain into a transient one, and vice versa. It is motivated by the exponential change of measure technique which goes back to Cramér. In the context of large deviations in collective risk theory, this technique allows us to transform a negatively drifted random walk into one with positive drift. Doob’s h-transform is the most natural substitute for an exponential change of measure in the context of Lamperti’s problem, that is, in the context of Markov chains with asymptotically zero drift.
Such transformations naturally connect the previous chapters, on the asymptotic behaviour of transient chains, with the subsequent chapters, which are devoted to recurrent chains. A very important novelty, in comparison with the classical Doob h-transform, is that we use weight functions which are not necessarily harmonic, only asymptotically harmonic at infinity. The main challenge is to identify such functions under various drift scenarios.
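A standard textbook instance of this h-transform connection (not the book's general construction) is the simple symmetric walk on the positive integers killed at 0, with harmonic function h(x) = x: the transformed chain is the walk conditioned to stay positive, and its drift is exactly 1/x, i.e. a Lamperti-type chain.

```python
from fractions import Fraction

# Classic illustration of Doob's h-transform (a standard example, not the
# book's construction): the simple symmetric walk on {1, 2, ...} killed at 0,
# with harmonic function h(x) = x. The transformed chain is the walk
# conditioned to stay positive, with drift exactly 1/x.

def h(x):
    return Fraction(x)

def transformed_step_probs(x):
    # p_hat(x, y) = p(x, y) * h(y) / h(x), with p the symmetric walk kernel
    up = Fraction(1, 2) * h(x + 1) / h(x)
    down = Fraction(1, 2) * h(x - 1) / h(x)
    return up, down

for x in [1, 2, 10, 100]:
    up, down = transformed_step_probs(x)
    assert up + down == 1              # h harmonic => probabilities sum to 1
    drift = up - down
    assert drift == Fraction(1, x)     # asymptotically zero drift 1/x
    print(f"x={x}: p_up={up}, p_down={down}, drift={drift}")
```

Here harmonicity of h is exact, so the transformed transition probabilities sum to one; the book's point is precisely that the weight functions need only be asymptotically harmonic at infinity, which this exact example cannot show.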
This text examines Markov chains whose drift tends to zero at infinity, a topic sometimes labelled as 'Lamperti's problem'. It can be considered a subcategory of random walks, which are helpful in studying stochastic models like branching processes and queueing systems. Drawing on Doob's h-transform and other tools, the authors present novel results and techniques, including a change-of-measure technique for near-critical Markov chains. The final chapter presents a range of applications where these special types of Markov chains occur naturally, featuring a new risk process with surplus-dependent premium rate. This will be a valuable resource for researchers and graduate students working in probability theory and stochastic processes.
Convolutions of long-tailed and subexponential distributions play a major role in the analysis of many stochastic systems. We study these convolutions, proving some important new results through a simple and coherent approach, and also showing that the standard properties of such convolutions follow as easy consequences.
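As a quick numerical illustration of the subexponential property P(X1 + X2 > t) ~ 2 P(X1 > t) behind these convolution results (a Monte Carlo sketch with parameters of our choosing, not part of the text): for a Pareto tail P(X > t) = t^(−2), t ≥ 1, the ratio is already of order one at moderate t, but approaches 1 only slowly as t grows.

```python
import random

# Monte Carlo sketch (our own illustration) of the subexponential relation
# P(X1 + X2 > t) ~ 2 P(X1 > t) for a Pareto distribution with
# P(X > t) = t**(-alpha) for t >= 1.

rng = random.Random(7)

def pareto(alpha):
    # inverse-transform sampling: if U ~ Uniform(0,1), U**(-1/alpha) is Pareto
    return rng.random() ** (-1.0 / alpha)

alpha, t, n = 2.0, 10.0, 200_000
exceed = sum(pareto(alpha) + pareto(alpha) > t for _ in range(n)) / n
tail = t ** -alpha
ratio = exceed / (2 * tail)
print(f"P(X1+X2 > {t}) / (2 P(X > {t})) ≈ {ratio:.2f}  (tends to 1 as t grows)")
```

At t = 10 the ratio is still visibly above 1; the single-big-jump asymptotics only take over for much larger t.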