We study the Wiener disorder detection problem where each observation is associated with a positive cost. In this setting, a strategy is a pair consisting of a sequence of observation times and a stopping time corresponding to the declaration of disorder. We characterize the minimal cost of the disorder problem with costly observations as the unique fixed point of a certain jump operator, and we determine the optimal strategy.
Evaluative studies of inductive inferences have been pursued extensively with mathematical rigor in many disciplines, such as statistics, econometrics, computer science, and formal epistemology. Attempts have been made in those disciplines to justify many different kinds of inductive inferences, to varying extents. But somehow those disciplines have said almost nothing to justify a most familiar kind of induction, an example of which is this: “We’ve seen this many ravens and they all are black, so all ravens are black.” This is enumerative induction in its full strength. For it does not settle for a weaker conclusion (such as “the ravens observed in the future will all be black”); nor does it proceed with any additional premise (such as the statistical IID assumption). The goal of this paper is to take some initial steps toward a justification for the full version of enumerative induction, against counterinduction, and against the skeptical policy. The idea is to explore various epistemic ideals, mathematically defined as different modes of convergence to the truth, and look for one that is weak enough to be achievable and strong enough to justify a norm that governs both the long run and the short run. So the proposal is learning-theoretic in essence, but a Bayesian version is developed as well.
Pitman (2003), and subsequently Gnedin and Pitman (2006), showed that a large class of random partitions of the integers derived from a stable subordinator of index $\alpha\in(0,1)$ have infinite Gibbs (product) structure as a characterizing feature. The most notable cases are random partitions derived from the two-parameter Poisson–Dirichlet distribution, $\textrm{PD}(\alpha,\theta)$, whose corresponding $\alpha$-diversity/local time has a generalized Mittag–Leffler distribution, denoted by $\textrm{ML}(\alpha,\theta)$. Our aim in this work is to provide indications of the utility of the wider class of Gibbs partitions as it relates to a study of Riemann–Liouville fractional integrals and size-biased sampling, to decompositions of special functions, and to its potential use in understanding various constructions of more exotic processes. We provide characterizations of general laws associated with nested families of $\textrm{PD}(\alpha,\theta)$ mass partitions that are constructed from the fragmentation operations described in Dong et al. (2014). These operations are known to be related in distribution to various constructions of discrete random trees/graphs in $[n]$, and to their scaling limits. A centerpiece of our work comprises results related to Mittag–Leffler functions, which play a key role in fractional calculus and are otherwise Laplace transforms of the $\textrm{ML}(\alpha,\theta)$ variables. Notably, this leads to an interpretation within the context of $\textrm{PD}(\alpha,\theta)$ laws conditioned on Poisson point process counts over intervals of scaled lengths of the $\alpha$-diversity.
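For reference, the one-parameter Mittag–Leffler function appearing here is $E_\alpha(z)=\sum_{k\geq 0} z^{k}/\Gamma(\alpha k+1)$; a minimal truncated-series evaluation (the function name and truncation level are our own, and the plain series is only adequate for moderate $|z|$):

```python
from math import gamma

def mittag_leffler(alpha: float, z: float, terms: int = 100) -> float:
    """Truncated series for the one-parameter Mittag-Leffler function
    E_alpha(z) = sum_{k>=0} z^k / Gamma(alpha*k + 1).
    Suitable only for moderate |z|; large |z| needs other methods."""
    return sum(z**k / gamma(alpha * k + 1) for k in range(terms))
```

For $\alpha=1$ the series collapses to the exponential, $E_1(z)=e^{z}$, which gives a convenient sanity check.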
In this article we prove new central limit theorems (CLTs) for several coupled particle filters (CPFs). CPFs are used for the sequential estimation of the difference of expectations with respect to filters which are in some sense close. Examples include the estimation of the filtering distribution associated with different parameters (finite-difference estimation), filters associated with partially observed discretized diffusion processes (PODDP), and the implementation of the multilevel Monte Carlo (MLMC) identity. We develop new theory for CPFs and, based upon several results, we propose a new CPF which approximates the maximal coupling (MCPF) of a pair of predictor distributions. In the context of ML estimation associated with PODDP with time-discretization $\Delta_l=2^{-l}$, $l\in\{0,1,\dots\}$, we show that the MCPF and the approach of Jasra, Ballesio, et al. (2018) have, under certain assumptions, an asymptotic variance that is bounded above by an expression that is of (almost) the order of $\Delta_l$ ($\mathcal{O}(\Delta_l)$), uniformly in time. The $\mathcal{O}(\Delta_l)$ bound preserves the so-called forward rate of the diffusion in some scenarios, which is not the case for the CPF in Jasra et al. (2017).
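To fix ideas (this is a generic illustration, not the paper's CPF): the MLMC identity telescopes $\mathbb{E}[\varphi(X^L)]=\mathbb{E}[\varphi(X^0)]+\sum_{l=1}^{L}\mathbb{E}[\varphi(X^{l})-\varphi(X^{l-1})]$, and each difference is estimated from a coupled pair of Euler–Maruyama discretizations driven by shared Brownian increments. A minimal sketch for geometric Brownian motion, with illustrative parameters and function names of our own:

```python
import math, random

def coupled_level(phi, l, n_samples, mu=0.05, sigma=0.2, x0=1.0, T=1.0, rng=None):
    """Estimate E[phi(X^l_T) - phi(X^{l-1}_T)] for Euler-Maruyama
    discretizations of dX = mu X dt + sigma X dW with step 2^-l,
    coupling fine and coarse paths via shared Brownian increments."""
    rng = rng or random.Random()
    n_fine = 2 ** l
    h_fine = T / n_fine
    acc = 0.0
    for _ in range(n_samples):
        xf = xc = x0
        for _step in range(n_fine // 2 if l > 0 else n_fine):
            if l == 0:
                dw = rng.gauss(0.0, math.sqrt(h_fine))
                xf += mu * xf * h_fine + sigma * xf * dw
            else:
                # two fine steps share their increments with one coarse step
                dw1 = rng.gauss(0.0, math.sqrt(h_fine))
                dw2 = rng.gauss(0.0, math.sqrt(h_fine))
                xf += mu * xf * h_fine + sigma * xf * dw1
                xf += mu * xf * h_fine + sigma * xf * dw2
                h_coarse = 2 * h_fine
                xc += mu * xc * h_coarse + sigma * xc * (dw1 + dw2)
        acc += phi(xf) - (phi(xc) if l > 0 else 0.0)
    return acc / n_samples

def mlmc_estimate(phi, L, n_samples, rng=None):
    """Telescoping MLMC estimator of E[phi(X_T)]."""
    return sum(coupled_level(phi, l, n_samples, rng=rng) for l in range(L + 1))
```

The shared increments make $\varphi(X^{l})-\varphi(X^{l-1})$ small, so deeper levels need far fewer samples; the particle-filter setting of the abstract couples entire filtering distributions rather than single paths.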
Both sequential Monte Carlo (SMC) methods (a.k.a. ‘particle filters’) and sequential Markov chain Monte Carlo (sequential MCMC) methods constitute classes of algorithms which can be used to approximate expectations with respect to (a sequence of) probability distributions and their normalising constants. While SMC methods sample particles conditionally independently at each time step, sequential MCMC methods sample particles according to a Markov chain Monte Carlo (MCMC) kernel. Introduced over twenty years ago in [6], sequential MCMC methods have attracted renewed interest recently as they empirically outperform SMC methods in some applications. We establish an $\mathbb{L}_r$-inequality (which implies a strong law of large numbers) and a central limit theorem for sequential MCMC methods, and provide conditions under which errors can be controlled uniformly in time. In the context of state-space models, we also provide conditions under which sequential MCMC methods can indeed outperform standard SMC methods in terms of the asymptotic variance of the corresponding Monte Carlo estimators.
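As background, the SMC side of this comparison can be illustrated by the standard bootstrap particle filter for a linear-Gaussian state-space model (the model and parameter values are illustrative assumptions, not from the paper):

```python
import math, random

def bootstrap_pf(ys, n_particles, a=0.9, sx=1.0, sy=0.5, rng=None):
    """Bootstrap particle filter for x_t = a*x_{t-1} + N(0, sx^2),
    y_t = x_t + N(0, sy^2). Returns the filtering means E[x_t | y_{1:t}]."""
    rng = rng or random.Random()
    parts = [rng.gauss(0.0, sx) for _ in range(n_particles)]
    means = []
    for y in ys:
        # propagate each particle through the state transition
        parts = [a * x + rng.gauss(0.0, sx) for x in parts]
        # weight by the Gaussian observation density
        ws = [math.exp(-0.5 * ((y - x) / sy) ** 2) for x in parts]
        tot = sum(ws)
        means.append(sum(w * x for w, x in zip(ws, parts)) / tot)
        # multinomial resampling: particles are conditionally independent
        parts = rng.choices(parts, weights=ws, k=n_particles)
    return means
```

A sequential MCMC method would instead replace the conditionally independent propagation/resampling step with moves of an MCMC kernel targeting the same filtering distribution.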
Let $X_1, X_2,\dots$ be a short-memory linear process of random variables. For $1\leq q<2$, let ${\mathcal{F}}$ be a bounded set of real-valued functions on $[0,1]$ with finite $q$-variation. It is proved that $\{n^{-1/2}\sum_{i=1}^nX_i\,f(i/n)\colon f\in{\mathcal{F}}\}$ converges in outer distribution in the Banach space of bounded functions on ${\mathcal{F}}$ as $n\to\infty$. Several applications to a regression model and a multiple change point model are given.
We extend the construction principle of phase-type (PH) distributions to allow for inhomogeneous transition rates and show that this naturally leads to direct probabilistic descriptions of certain transformations of PH distributions. In particular, the resulting matrix distributions enable the carrying over of fitting properties of PH distributions to distributions with heavy tails, providing a general modelling framework for heavy-tail phenomena. We also illustrate the versatility and parsimony of the proposed approach in modelling a real-world heavy-tailed fire insurance dataset.
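As background, a (time-homogeneous) PH distribution is the law of the time to absorption of a finite-state Markov jump process, specified by an initial law $\alpha$ over the transient states and a sub-intensity matrix $T$; a minimal sampler under that standard representation (function and parameter names are ours):

```python
import random

def sample_phase_type(alpha, T, rng=None):
    """Sample the absorption time of a Markov jump process with
    initial law `alpha` over the transient states and sub-intensity
    matrix `T` (each T[i][i] < 0, off-diagonal entries >= 0).
    The exit (absorption) rate from state i is -sum(T[i])."""
    rng = rng or random.Random()
    n = len(alpha)
    state = rng.choices(range(n), weights=alpha, k=1)[0]
    t = 0.0
    while state is not None:
        rate = -T[state][state]
        t += rng.expovariate(rate)  # exponential holding time
        exit_rate = -sum(T[state])
        # jump weights: transient targets, then absorption
        weights = [T[state][j] if j != state else 0.0 for j in range(n)]
        weights.append(exit_rate)
        nxt = rng.choices(range(n + 1), weights=weights, k=1)[0]
        state = None if nxt == n else nxt
    return t
```

The inhomogeneous extension referred to above lets the transition rates depend on time (loosely, $T$ becomes $T(t)$), which is how the resulting matrix distributions can capture heavy tails.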
In this paper, we study a large multi-server loss model under the SQ(d) routeing scheme when the service time distributions are general with finite mean. Previous works have addressed the exponential service time case when the number of servers goes to infinity, giving rise to a mean field model. The fixed point of the limiting mean field equations (MFEs) was seen in simulations to be insensitive to the service time distribution, but no proof was available. While insensitivity is well known for loss systems, those models, even with state-dependent inputs, belong to the class of linear Markov models. In the context of SQ(d) routeing, the resulting model belongs to the class of nonlinear Markov processes (processes whose generator itself depends on the distribution), for which traditional arguments do not directly apply. Showing insensitivity to general service time distributions has thus remained an open problem. Obtaining the MFEs in this case poses a challenge because the resulting Markov description of the system lives in the positive orthant, as opposed to a finite chain in the exponential case. In this paper, we first obtain the MFEs and then show that they have a unique fixed point that coincides with the fixed point in the exponential case, thus establishing insensitivity. The approach is via a measure-valued Markov process representation and the martingale problem to establish the mean field limit.
We consider the problem of estimating the rate of defects (mean number of defects per item), given the counts of defects detected by two independent imperfect inspectors on one sample of items. In contrast with the setting for the well-known method of Capture–Recapture, we do not have information regarding the number of defects jointly detected by both inspectors. We solve this problem by constructing two types of estimators—a simple moment-type estimator, and a complicated maximum-likelihood (ML) estimator. The performance of these estimators is studied analytically and by means of simulations. It is shown that the ML estimator is superior to the moment-type estimator. A systematic comparison with the Capture–Recapture method is also made.
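For contrast, the classical capture–recapture (Lincoln–Petersen) estimator relies on the joint detection count, which is precisely the quantity unavailable in the setting above; a minimal sketch of that classical estimator (the function name is ours):

```python
def lincoln_petersen(n1: int, n2: int, m: int) -> float:
    """Classical capture-recapture estimate of the total count:
    n1, n2 = detections by each inspector, m = joint detections.
    This requires m, which is not observed in the setting above."""
    if m <= 0:
        raise ValueError("joint detection count must be positive")
    return n1 * n2 / m
```

When the joint count is missing, as in the abstract's setting, one instead needs estimators built from the marginal counts alone, such as the moment-type and ML estimators the paper constructs.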
where $0 \leq \theta_j \leq 1$, $S_n=\sum_{j=1}^n X_j$, and ${\cal F}_n=\sigma\{X_1,\ldots, X_n\}$. The aim of this paper is to establish a strong law of large numbers, extending some known results, and to prove a moderate deviation principle for this correlated Bernoulli model.