We investigate the large deviation properties of the maximum likelihood estimators for the Ornstein–Uhlenbeck process with shift. We propose a new approach to establishing large deviation principles which allows us, via a suitable transformation, to circumvent the classical nonsteepness problem. We estimate the drift and shift parameters simultaneously. On the one hand, we prove a large deviation principle for the maximum likelihood estimators of the drift and shift parameters. Surprisingly, we find that the drift estimator satisfies the same large deviation principle as the estimator previously established for the Ornstein–Uhlenbeck process without shift. Sharp large deviation principles are also provided. On the other hand, we show that the maximum likelihood estimator of the shift parameter satisfies a large deviation principle with a very unusual implicit rate function.
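As an illustrative sketch (not taken from the paper), the drift and shift parameters of a shifted Ornstein–Uhlenbeck process can be estimated jointly from a discretely simulated path. The model form dX_t = (θX_t + γ) dt + dW_t and all parameter values below are assumptions for the example; on a fine grid the continuous-time likelihood equations reduce to a linear least-squares problem.

```python
import numpy as np

# Simulate a shifted OU process dX_t = (theta*X_t + gamma) dt + dW_t
# by Euler-Maruyama (model form and parameters are illustrative choices).
rng = np.random.default_rng(0)
theta_true, gamma_true = -1.0, 0.5
dt, n = 0.02, 100_000
noise = np.sqrt(dt) * rng.standard_normal(n)
x = np.empty(n + 1)
x[0] = 0.0
for k in range(n):
    x[k + 1] = x[k] + (theta_true * x[k] + gamma_true) * dt + noise[k]

# On a fine grid the MLE solves a 2x2 linear system: regress the
# increments dX on (X dt, dt) to recover (theta, gamma) jointly.
dX = np.diff(x)
A = np.column_stack([x[:-1] * dt, np.full(n, dt)])
theta_hat, gamma_hat = np.linalg.lstsq(A, dX, rcond=None)[0]
print(theta_hat, gamma_hat)
```

The discretized estimator carries an O(dt) bias in addition to the statistical error, so the grid must be fine relative to 1/|θ|.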
We consider the behaviour over time of the sequential Monte Carlo estimate of the backward interpretation of Feynman–Kac formulae. This is of particular interest in the context of performing smoothing for hidden Markov models. We prove a central limit theorem under weaker assumptions than those adopted in the literature. We then show that the associated asymptotic variance expression for additive functionals grows at most linearly in time, under hypotheses that are weaker than those currently existing in the literature. The assumptions are verified for some hidden Markov models.
Financial data are as a rule asymmetric, although most econometric models are symmetric. This applies also to continuous-time models for high-frequency and irregularly spaced data. We discuss some asymmetric versions of the continuous-time GARCH model, then concentrate on the GJR-COGARCH model. We calculate higher-order moments and extend the first-jump approximation. These results are prerequisites for moment estimation and pseudo-maximum-likelihood estimation of the GJR-COGARCH model parameters, respectively, which we derive in detail.
We consider Markov chain Monte Carlo algorithms which combine Gibbs updates with Metropolis-Hastings updates, resulting in a conditional Metropolis-Hastings sampler (CMH sampler). We develop conditions under which the CMH sampler will be geometrically or uniformly ergodic. We illustrate our results by analysing a CMH sampler used for drawing Bayesian inferences about the entire sample path of a diffusion process, based only upon discrete observations.
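A minimal sketch of a conditional Metropolis–Hastings sampler, on a toy bivariate normal target rather than the diffusion application of the abstract: one coordinate is updated by an exact Gibbs draw from its full conditional, the other by a random-walk Metropolis–Hastings step targeting its full conditional. The correlation ρ and the proposal scale are illustrative assumptions.

```python
import numpy as np

# CMH sampler on a bivariate normal with correlation rho (toy target).
rng = np.random.default_rng(1)
rho = 0.8
n_iter = 50_000
x, y = 0.0, 0.0
samples = np.empty((n_iter, 2))
for t in range(n_iter):
    # Gibbs update: x | y ~ N(rho * y, 1 - rho^2), drawn exactly.
    x = rho * y + np.sqrt(1 - rho**2) * rng.standard_normal()
    # MH update for y | x ~ N(rho * x, 1 - rho^2), random-walk proposal.
    y_prop = y + 0.8 * rng.standard_normal()
    log_ratio = ((y - rho * x) ** 2 - (y_prop - rho * x) ** 2) / (2 * (1 - rho**2))
    if np.log(rng.random()) < log_ratio:
        y = y_prop
    samples[t] = (x, y)

# After burn-in, the empirical correlation should be close to rho.
corr = np.corrcoef(samples[5000:].T)[0, 1]
print(corr)
```

Replacing the exact conditional draw for y by an MH step is exactly the structure whose geometric or uniform ergodicity the paper studies.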
In this paper we develop a collection of results associated to the analysis of the sequential Monte Carlo (SMC) samplers algorithm, in the context of high-dimensional independent and identically distributed target probabilities. The SMC samplers algorithm can be designed to sample from a single probability distribution, using Monte Carlo to approximate expectations with respect to this law. Given a target density in d dimensions, our results are concerned with d → ∞ while the number of Monte Carlo samples, N, remains fixed. We deduce an explicit bound on the Monte Carlo error for estimates derived using the SMC sampler, and the exact asymptotic relative L₂-error of the estimate of the normalising constant associated to the target. We also establish marginal propagation-of-chaos properties of the algorithm. These results are deduced when the cost of the algorithm is O(Nd²).
Self-exciting point processes (SEPPs), or Hawkes processes, have found applications in a wide range of fields, such as epidemiology, seismology, neuroscience, engineering, and more recently financial econometrics and social interactions. In traditional SEPP models, the baseline intensity is assumed to be constant. This restricts the application of SEPPs in situations where there is clearly a self-exciting phenomenon but a constant baseline intensity is inappropriate. In this paper, to model point processes with varying baseline intensity, we introduce SEPP models with time-varying background intensities (SEPPVB, for short). We show that SEPPVB models are competitive with autoregressive conditional SEPP models (Engle and Russell 1998) for modeling ultra-high frequency data. We also develop asymptotic theory for maximum-likelihood-based inference of parametric SEPP models, including SEPPVB. We illustrate applications to ultra-high frequency financial data analysis, and we compare performance with the autoregressive conditional duration models.
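A hedged sketch of simulating such a process by Ogata's thinning algorithm, with intensity λ(t) = μ(t) + Σ_{t_i < t} α e^{−β(t − t_i)}. The sinusoidal baseline μ(t), the exponential kernel, and all parameter values are illustrative assumptions, not the specification used in the paper.

```python
import numpy as np

# Ogata thinning for a Hawkes process with time-varying baseline.
rng = np.random.default_rng(2)
alpha, beta, T = 0.5, 1.0, 200.0
mu = lambda t: 1.0 + 0.5 * np.sin(2 * np.pi * t / 50.0)  # bounded by 1.5

def intensity(t, events):
    return mu(t) + alpha * np.exp(-beta * (t - events)).sum()

events = np.array([])
t = 0.0
while t < T:
    # Valid local upper bound: mu(t) <= 1.5 everywhere and the
    # excitation term only decays until the next accepted event.
    excite = alpha * np.exp(-beta * (t - events)).sum() if events.size else 0.0
    lam_bar = 1.5 + excite
    t += rng.exponential(1.0 / lam_bar)
    if t < T and rng.random() < intensity(t, events) / lam_bar:
        events = np.append(events, t)

print(len(events))
```

With branching ratio α/β = 0.5 and mean baseline 1, the long-run mean intensity is about 2, so roughly 400 events are expected on [0, 200].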
In this paper we study asymptotic consistency of law invariant convex risk measures and the corresponding risk averse stochastic programming problems for independent, identically distributed data. Under mild regularity conditions, we prove a law of large numbers and epiconvergence of the corresponding statistical estimators. This can be applied in a straightforward way to establish convergence with probability 1 of sample-based estimators of risk averse stochastic programming problems.
Importance sampling is a widely used variance reduction technique to compute sample quantiles such as value at risk. The variance of the weighted sample quantile estimator is usually a difficult quantity to compute. In this paper we present the exact convergence rate and asymptotic distributions of the bootstrap variance estimators for quantiles of weighted empirical distributions. Under regularity conditions, we show that the bootstrap variance estimator is asymptotically normal and has relative standard deviation of order O(n^{−1/4}).
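An illustrative sketch (with assumed distributions, not the paper's setting): the 99% quantile of N(0,1) estimated by importance sampling from a shifted proposal N(2,1), with the weighted empirical quantile bootstrapped to estimate its variance.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 20_000, 0.99
z = rng.normal(2.0, 1.0, n)        # proposal draws, N(2, 1)
w = np.exp(-2.0 * z + 2.0)         # likelihood ratio N(0,1)/N(2,1)

def weighted_quantile(x, w, p):
    # p-th quantile of the weighted (self-normalised) empirical distribution
    order = np.argsort(x)
    cw = np.cumsum(w[order]) / w.sum()
    return x[order][np.searchsorted(cw, p)]

q_hat = weighted_quantile(z, w, p)

# Bootstrap estimate of the variance of the weighted quantile estimator.
B = 200
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)
    boot[b] = weighted_quantile(z[idx], w[idx], p)
print(q_hat, boot.std())
```

The true 0.99 quantile of N(0,1) is about 2.326; `boot.std()` is the bootstrap standard deviation whose convergence rate the paper characterises.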
In this paper we study the asymptotic properties of the canonical plug-in estimates for law-invariant coherent risk measures. Under rather mild conditions not relying on the explicit representation of the risk measure under consideration, we first prove a central limit theorem for independent and identically distributed data, and then extend it to the case of weakly dependent data. Finally, a number of illustrative examples are presented.
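For a concrete (assumed) example of such a plug-in estimate: Average Value-at-Risk (expected shortfall), a standard law-invariant coherent risk measure, evaluated at the empirical distribution of i.i.d. data.

```python
import numpy as np

# Plug-in estimate of AVaR_alpha: the average of the worst (1 - alpha)
# fraction of losses under the empirical distribution.
rng = np.random.default_rng(4)
alpha = 0.95

def avar_plugin(x, alpha):
    x = np.sort(x)
    k = int(np.ceil(alpha * len(x)))
    return x[k:].mean()

# For N(0,1) losses, AVaR_0.95 = phi(z_0.95)/0.05, approximately 2.0627.
x = rng.standard_normal(200_000)
avar_hat = avar_plugin(x, alpha)
print(avar_hat)
```

The central limit theorems of the paper describe the fluctuations of exactly such plug-in quantities around their population values.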
Results on asymptotic normality for the maximum likelihood estimate in hidden Markov models are extended in two directions. The stationarity assumption is relaxed, which allows for a covariate process influencing the hidden Markov process. Furthermore, a class of estimating equations is considered instead of the maximum likelihood estimate. The basic ingredients are mixing properties of the process and a general central limit theorem for weakly dependent variables.
This paper is concerned with statistical inference for both continuous and discrete phase-type distributions. We consider maximum likelihood estimation, where traditionally the expectation-maximization (EM) algorithm has been employed. Certain numerical aspects of this method are revised and we provide an alternative method for dealing with the E-step. We also compare the EM algorithm to a direct Newton–Raphson optimization of the likelihood function. As one of the main contributions of the paper, we provide formulae for calculating the Fisher information matrix both for the EM algorithm and Newton–Raphson approach. The inverse of the Fisher information matrix provides the variances and covariances of the estimated parameters.
Let (X_n) be a sequence of integrable real random variables, adapted to a filtration (G_n). Define C_n = √n {(1/n) ∑_{k=1}^{n} X_k − E(X_{n+1} | G_n)} and D_n = √n {E(X_{n+1} | G_n) − Z}, where Z is the almost-sure limit of E(X_{n+1} | G_n) (assumed to exist). Conditions for (C_n, D_n) → N(0, U) × N(0, V) stably are given, where U and V are certain random variables. In particular, under such conditions, we obtain √n {(1/n) ∑_{k=1}^{n} X_k − Z} = C_n + D_n → N(0, U + V) stably. This central limit theorem has natural applications to Bayesian statistics and urn problems. The latter are investigated, by paying special attention to multicolor randomly reinforced urns.
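A hedged illustration with the classical two-colour Pólya urn, a special case of the reinforced urns mentioned above: X_k indicates drawing a red ball, E(X_{n+1} | G_n) is the current proportion of red balls, and the almost-sure limit Z is Beta-distributed, uniform on (0, 1) when starting from one ball of each colour.

```python
import numpy as np

# Simulate many independent Polya urns, each started with 1 red and
# 1 black ball; each draw returns the drawn ball plus one copy.
rng = np.random.default_rng(5)
n_urns, n_draws = 2_000, 2_000
red = np.ones(n_urns)
total = 2.0 * np.ones(n_urns)
for _ in range(n_draws):
    draw = rng.random(n_urns) < red / total   # X_k for each urn
    red += draw
    total += 1.0

# The proportions approximate the random limits Z, which should look
# uniform: mean 1/2 and standard deviation sqrt(1/12) ~ 0.2887.
props = red / total
print(props.mean(), props.std())
```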
In this paper we present an application of the read-once coupling from the past algorithm to problems in Bayesian inference for latent statistical models. We describe a method for perfect simulation from the posterior distribution of the unknown mixture weights in a mixture model. Our method is extended to a more general mixture problem, where unknown parameters exist for the mixture components, and to a hidden Markov model.
The secretary problem for selecting one item so as to minimize its expected rank, based on observing the relative ranks only, is revisited. A simple suboptimal rule, which performs almost as well as the optimal rule, is given. The rule stops with the smallest i such that R_i ≤ ic/(n + 1 − i) for a given constant c, where R_i is the relative rank of the ith observation and n is the total number of items. This rule has added flexibility. A curtailed version thereof can be used to select an item with a given probability P, P < 1. The rule can be used to select two or more items. The problem of selecting a fixed percentage, α, 0 < α < 1, of n, is also treated. Numerical results are included to illustrate the findings.
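The stopping rule can be sketched by direct simulation; the constant c = 1.0 and the horizon n = 50 below are illustrative choices, not values from the paper.

```python
import numpy as np

# Simulate the rule: stop at the smallest i with R_i <= i*c/(n+1-i),
# where R_i is the relative rank of the i-th arrival among the first i.
rng = np.random.default_rng(6)

def expected_rank(n, c, n_trials=5_000):
    ranks = np.empty(n_trials)
    for t in range(n_trials):
        perm = rng.permutation(n) + 1              # true ranks in arrival order
        for i in range(1, n + 1):
            r_i = (perm[:i] <= perm[i - 1]).sum()  # relative rank of item i
            if r_i <= i * c / (n + 1 - i) or i == n:
                ranks[t] = perm[i - 1]             # true rank of selected item
                break
    return ranks.mean()

er = expected_rank(50, 1.0)
print(er)
```

Note how the threshold ic/(n + 1 − i) grows with i: the rule rejects everything early on and becomes progressively less demanding as the horizon approaches, always stopping by i = n.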
We compute the posterior distributions of the initial population and parameter of binary branching processes in the limit of a large number of generations. We compare this Bayesian procedure with a more naïve one, based on hitting times of some random walks. In both cases, central limit theorems are available, with explicit variances.