In this paper we provide an introduction to statistical inference for the classical linear birth‒death process, focusing on computational aspects of the problem in the setting of discretely observed processes. The basic probabilistic properties are given in Section 2, with emphasis on the computation of the transition functions. This is followed by a brief discussion of simulation methods in Section 3, and of frequentist methods in Section 4. Section 5 is devoted to Bayesian methods, from rejection sampling to Markov chain Monte Carlo and approximate Bayesian computation. In Section 6 we consider the time-inhomogeneous case. The paper ends with a brief discussion in Section 7.
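As a concrete companion to the simulation methods mentioned for Section 3, here is a minimal Python sketch (not the paper's code) that simulates a linear birth-death process by the standard Gillespie construction, with per-individual birth rate lam and death rate mu (hypothetical parameter names), and then records the state on an equally spaced grid to mimic discrete observation.

```python
import numpy as np

def simulate_linear_birth_death(n0, lam, mu, t_max, rng=None):
    """Gillespie-style simulation of a linear birth-death process.

    Each of the n current individuals gives birth at rate lam and dies at
    rate mu, so the total event rate is n * (lam + mu).  Returns the jump
    times and the population sizes up to time t_max."""
    rng = np.random.default_rng(rng)
    t, n = 0.0, n0
    times, sizes = [t], [n]
    while t < t_max and n > 0:
        total_rate = n * (lam + mu)
        t += rng.exponential(1.0 / total_rate)
        if t > t_max:
            break
        # A birth occurs with probability lam / (lam + mu), otherwise a death.
        n += 1 if rng.random() < lam / (lam + mu) else -1
        times.append(t)
        sizes.append(n)
    return np.array(times), np.array(sizes)

# Discrete observation: record the state at equally spaced time points.
times, sizes = simulate_linear_birth_death(n0=10, lam=0.6, mu=0.5, t_max=20.0, rng=1)
obs_times = np.arange(0.0, 20.0, 1.0)
observations = sizes[np.searchsorted(times, obs_times, side="right") - 1]
```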
In this paper we study the Robbins–Monro procedure $X_{n+1} = X_n - a n^{-1} Y_n$ with some fixed number $a > 0$ and establish the moderate deviation principle of the process $\{X_n\}$.
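For concreteness, the following short sketch runs the Robbins–Monro recursion $X_{n+1} = X_n - (a/n) Y_n$ on a toy root-finding problem; the function and parameter names (noisy_f, x0, a) are illustrative choices and not taken from the paper.

```python
import numpy as np

def robbins_monro(noisy_f, x0, a, n_iter, rng=None):
    """Robbins-Monro recursion X_{n+1} = X_n - (a / n) * Y_n, where
    Y_n = noisy_f(X_n) is an unbiased but noisy evaluation of the
    regression function whose root is sought."""
    rng = np.random.default_rng(rng)
    x = x0
    path = [x]
    for n in range(1, n_iter + 1):
        y = noisy_f(x, rng)
        x = x - (a / n) * y
        path.append(x)
    return np.array(path)

# Toy example: find the root of f(x) = x - 2 observed with Gaussian noise.
path = robbins_monro(lambda x, rng: (x - 2.0) + rng.normal(scale=0.5),
                     x0=0.0, a=1.0, n_iter=5000, rng=0)
print(path[-1])  # close to the root x* = 2
```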
In this paper we present results on the concentration properties of the smoothing and filtering distributions of some partially observed chaotic dynamical systems. We show that, rather surprisingly, for the geometric model of the Lorenz equations, as well as some other chaotic dynamical systems, the smoothing and filtering distributions do not concentrate around the true position of the signal, as the number of observations tends to ∞. Instead, under various assumptions on the observation noise, we show that the expected value of the diameter of the support of the smoothing and filtering distributions remains lower bounded by a constant multiple of the standard deviation of the noise, independently of the number of observations. Conversely, under rather general conditions, the diameters of the supports of the smoothing and filtering distributions are upper bounded by a constant multiple of the standard deviation of the noise. We also consider, to some extent, applications to the three-dimensional Lorenz 63 model and to the Lorenz 96 model of arbitrarily large dimension.
A prevalent problem in general state-space models is the approximation of the smoothing distribution of a state conditional on the observations from the past, the present, and the future. The aim of this paper is to provide a rigorous analysis of such approximations of smoothing distributions provided by two-filter algorithms. We extend the results available for the approximation of smoothing distributions to these two-filter approaches, which combine a forward filter approximating the filtering distributions with a backward information filter approximating a quantity proportional to the posterior distribution of the state given future observations.
Multiplicative noise removal is a challenging problem in image restoration. In this paper, by applying the Box-Cox transformation, we convert the multiplicative noise removal problem into an additive noise removal problem, to which the block-matching three-dimensional (BM3D) method, an effective denoiser for additive Gaussian white noise, is applied to obtain the recovered image. A maximum likelihood method is designed to determine the parameter of the Box-Cox transformation, and we also present the unbiased inverse of the Box-Cox transformation, which is needed to map the denoised result back to the image domain without bias. Both theoretical analysis and experimental results show that the proposed method removes multiplicative noise well, especially when the noise is heavy, and that it outperforms the existing multiplicative noise removal methods in the literature.
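The overall pipeline can be sketched in a few lines of Python, with heavy caveats: the Box-Cox transformation is taken at its logarithmic limit, a generic Gaussian smoother from SciPy stands in for BM3D (which is not implemented here), the noise is assumed to be unit-mean lognormal with known log-domain standard deviation sigma_log, and neither the paper's maximum likelihood parameter selection nor its exact unbiased inverse is reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise_multiplicative(noisy, sigma_log, smoothing=2.0):
    """Illustrative pipeline: a log transform (Box-Cox with lambda = 0)
    converts multiplicative noise into additive noise in the log domain,
    a Gaussian smoother stands in for BM3D, and the inverse transform
    applies a lognormal bias correction.

    Assumes noisy = clean * eta with unit-mean lognormal noise eta, so
    that E[log(noisy)] = log(clean) - sigma_log**2 / 2."""
    denoised_log = gaussian_filter(np.log(noisy), sigma=smoothing)
    return np.exp(denoised_log + 0.5 * sigma_log**2)
```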
Importance sampling has become an important tool for the computation of extreme quantiles and tail-based risk measures. For estimation of such nonlinear functionals of the underlying distribution, the standard efficiency analysis is not necessarily applicable. In this paper we therefore study importance sampling algorithms by considering moderate deviations of the associated weighted empirical processes. Using a delta method for large deviations, combined with classical large deviation techniques, the moderate deviation principle is obtained for importance sampling estimators of two of the most common risk measures: value at risk and expected shortfall.
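As a toy illustration of importance sampling for these two risk measures, the sketch below estimates the value at risk and expected shortfall of a standard normal loss by sampling from a normal distribution shifted into the tail and taking a weighted empirical quantile; the shift theta and the self-normalised construction are illustrative choices rather than the estimators analysed in the paper.

```python
import numpy as np

def is_var_es(alpha, theta, n, rng=None):
    """Importance sampling estimates of value at risk (VaR) and expected
    shortfall (ES) at level alpha for a standard normal loss, sampling
    from a normal distribution shifted by theta into the right tail."""
    rng = np.random.default_rng(rng)
    y = rng.normal(loc=theta, size=n)
    w = np.exp(-theta * y + 0.5 * theta**2)     # likelihood ratio dN(0,1)/dN(theta,1)
    order = np.argsort(y)
    y, w = y[order], w[order]
    cdf = np.cumsum(w) / np.sum(w)              # weighted empirical CDF
    var_hat = y[np.searchsorted(cdf, alpha)]    # weighted alpha-quantile
    tail = y >= var_hat
    es_hat = np.sum(w[tail] * y[tail]) / np.sum(w[tail])
    return var_hat, es_hat

print(is_var_es(alpha=0.999, theta=3.0, n=100_000, rng=0))
# For the standard normal: VaR(0.999) ≈ 3.090, ES(0.999) ≈ 3.367
```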
We derive some limit theorems associated with the Ewens sampling formula when its parameter increases together with the sample size. Moreover, the limit results are applied to investigate asymptotic properties of the maximum likelihood estimator.
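A minimal simulation sketch of this setting, using two standard facts about the Ewens sampling formula: a sample can be generated through the Chinese restaurant process, and the number of distinct types K_n is sufficient for the parameter θ, with maximum likelihood estimator solving K_n = Σ_{i=0}^{n-1} θ/(θ+i). The helper names are hypothetical and the degenerate cases K_n = 1 and K_n = n are ignored.

```python
import numpy as np
from scipy.optimize import brentq

def simulate_crp(n, theta, rng=None):
    """Number of distinct types K_n in an Ewens(theta) sample of size n,
    simulated via the Chinese restaurant process: the (i+1)-th observation
    starts a new type with probability theta / (theta + i)."""
    rng = np.random.default_rng(rng)
    return sum(rng.random() < theta / (theta + i) for i in range(n))

def theta_mle(k, n):
    """MLE of theta given K_n = k, i.e. the root of
    sum_{i=0}^{n-1} theta / (theta + i) - k = 0."""
    eq = lambda t: sum(t / (t + i) for i in range(n)) - k
    return brentq(eq, 1e-8, 1e8)

k = simulate_crp(n=2000, theta=50.0, rng=0)
print(k, theta_mle(k, n=2000))
```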
In this paper we consider the following modification of a discrete-time branching process with stationary immigration. In each generation a binomially distributed subset of the population is observed. The numbers of observed individuals constitute a partially observed branching process. After inspection, both observed and unobserved individuals may change their offspring distributions. In the subcritical case we investigate the possibility of using the known estimators for the offspring mean and for the mean of the stationary-limiting distribution of the process when the observation of the population sizes is restricted. We prove that, if both the population and the number of immigrants are partially observed, the estimators are still strongly consistent. We also prove that the ‘skipped’ version of the estimator for the offspring mean is asymptotically normal and that, under additional assumptions, the estimator of the mean of the stationary distribution is asymptotically normal as well.
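The observation scheme can be made concrete with a small simulation sketch, here assuming Poisson offspring, Poisson immigration, and a fixed observation probability p (all illustrative choices); the paper's estimators, including the ‘skipped’ version, are not reproduced.

```python
import numpy as np

def simulate_partially_observed_gwi(n, m, lam, p, x0=0, rng=None):
    """Galton-Watson process with immigration (Poisson(m) offspring,
    Poisson(lam) immigration) together with a binomially thinned
    observation: each individual of a generation is observed
    independently with probability p."""
    rng = np.random.default_rng(rng)
    x = np.empty(n + 1, dtype=int)   # true population sizes
    y = np.empty(n + 1, dtype=int)   # partially observed sizes
    x[0] = x0
    y[0] = rng.binomial(x0, p)
    for k in range(n):
        x[k + 1] = rng.poisson(m, size=x[k]).sum() + rng.poisson(lam)
        y[k + 1] = rng.binomial(x[k + 1], p)
    return x, y

x, y = simulate_partially_observed_gwi(n=5000, m=0.7, lam=2.0, p=0.5, rng=0)
```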
An urn contains black and red balls. Let $Z_n$ be the proportion of black balls at time $n$, and let $0 \le L < U \le 1$ be random barriers. At each time $n$, a ball $b_n$ is drawn. If $b_n$ is black and $Z_{n-1} < U$, then $b_n$ is replaced together with a random number $B_n$ of black balls. If $b_n$ is red and $Z_{n-1} > L$, then $b_n$ is replaced together with a random number $R_n$ of red balls. Otherwise, no additional balls are added, and $b_n$ alone is replaced. In this paper we assume that $R_n = B_n$. Then, under mild conditions, it is shown that $Z_n \xrightarrow{\text{a.s.}} Z$ for some random variable $Z$, and that $D_n := \sqrt{n}\,(Z_n - Z) \to \mathcal{N}(0, \sigma^2)$ conditionally almost surely (a.s.), where $\sigma^2$ is a certain random variance. Almost sure conditional convergence means that $\mathbb{P}(D_n \in \cdot \mid \mathcal{G}_n) \xrightarrow{w} \mathcal{N}(0, \sigma^2)$ a.s., where $\mathbb{P}(D_n \in \cdot \mid \mathcal{G}_n)$ is a regular version of the conditional distribution of $D_n$ given the past $\mathcal{G}_n$. Thus, in particular, one obtains $D_n \to \mathcal{N}(0, \sigma^2)$ stably. It is also shown that $L < Z < U$ a.s. and that $Z$ has a nonatomic distribution.
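A quick way to see this behaviour is to simulate the urn. The sketch below uses fixed barriers L and U and reinforcements B_n = R_n drawn uniformly from {1, 2, 3} for simplicity, whereas the paper allows random barriers and more general reinforcement distributions.

```python
import numpy as np

def urn_with_barriers(n_steps, L, U, black0=1, red0=1, rng=None):
    """Randomly reinforced urn with barriers L < U: a black draw is
    reinforced with B_n extra black balls only while the black proportion
    is below U, a red draw with R_n = B_n extra red balls only while the
    proportion is above L; otherwise the drawn ball is simply replaced."""
    rng = np.random.default_rng(rng)
    black, red = float(black0), float(red0)
    z = np.empty(n_steps)
    for n in range(n_steps):
        prop = black / (black + red)
        reinforcement = rng.integers(1, 4)      # B_n = R_n, uniform on {1, 2, 3}
        if rng.random() < prop:                 # black ball drawn
            if prop < U:
                black += reinforcement
        elif prop > L:                          # red ball drawn
            red += reinforcement
        z[n] = black / (black + red)
    return z

z = urn_with_barriers(n_steps=100_000, L=0.3, U=0.7, rng=0)
print(z[-1])  # Z_n settles near a random limit Z with L < Z < U
```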
We connect known results about diffusion limits of Markov chain Monte Carlo (MCMC) algorithms to the computer science notion of algorithm complexity. Our main result states that any weak limit of a Markov process implies a corresponding complexity bound (in an appropriate metric). We then combine this result with previously known MCMC diffusion limit results to prove that, under appropriate assumptions, the random-walk Metropolis algorithm in d dimensions takes O(d) iterations to converge to stationarity, while the Metropolis-adjusted Langevin algorithm takes O(d^{1/3}) iterations to converge to stationarity.
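To make the scaling statement concrete, the sketch below runs a plain random-walk Metropolis chain on a d-dimensional standard Gaussian target with the classical proposal scaling 2.38/√d; it illustrates only the algorithm whose complexity is being bounded, not the diffusion-limit argument itself.

```python
import numpy as np

def rwm(log_target, x0, n_iter, step, rng=None):
    """Random-walk Metropolis: propose x + step * N(0, I) and accept
    with probability min(1, target(proposal) / target(current))."""
    rng = np.random.default_rng(rng)
    x = np.array(x0, dtype=float)
    lp = log_target(x)
    chain = np.empty((n_iter, x.size))
    for i in range(n_iter):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = log_target(prop)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

d = 50
log_target = lambda x: -0.5 * np.sum(x**2)   # standard Gaussian in d dimensions
# The optimal proposal scale shrinks like d**(-1/2), which is what leads
# to the O(d) iterations-to-convergence behaviour discussed above.
chain = rwm(log_target, x0=np.zeros(d), n_iter=10_000, step=2.38 / np.sqrt(d), rng=0)
```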
We study the asymptotic behavior of a new particle filter approach for the estimation of hidden Markov models. In particular, we develop an algorithm where the latent-state sequence is segmented into multiple shorter portions, with an estimation technique based upon a separate particle filter in each portion. The partitioning facilitates the use of parallel processing, which reduces the wall-clock computational time. Based upon this approach, we introduce new estimators of the latent states and of the likelihood which have similar or better variance properties compared to estimators derived from standard particle filters. We show that the likelihood estimator is unbiased and establish the asymptotic normality of the underlying estimators.
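For reference, the sketch below is a standard single bootstrap particle filter for a toy linear-Gaussian model, returning the usual unbiased likelihood estimator mentioned above; the paper's segmented, parallel construction is not implemented here.

```python
import numpy as np

def bootstrap_particle_filter(y, n_particles, phi, sigma_x, sigma_y, rng=None):
    """Bootstrap particle filter for the linear-Gaussian model
        X_t = phi * X_{t-1} + sigma_x * V_t,   Y_t = X_t + sigma_y * W_t,
    returning filtered state means and the log of the unbiased
    likelihood estimator (requires |phi| < 1 for the stationary start)."""
    rng = np.random.default_rng(rng)
    x = rng.normal(scale=sigma_x / np.sqrt(1.0 - phi**2), size=n_particles)
    log_lik, means = 0.0, []
    for yt in y:
        x = phi * x + sigma_x * rng.standard_normal(n_particles)    # propagate
        logw = (-0.5 * np.log(2 * np.pi) - np.log(sigma_y)
                - 0.5 * ((yt - x) / sigma_y) ** 2)                   # observation density
        m = logw.max()
        w = np.exp(logw - m)
        log_lik += m + np.log(w.mean())        # increment of the likelihood estimate
        w /= w.sum()
        means.append(np.sum(w * x))
        x = x[rng.choice(n_particles, size=n_particles, p=w)]        # multinomial resampling
    return np.array(means), log_lik
```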