1 INTRODUCTION
Many economic and financial time series exhibit explosive behavior, characterized by rapid and often unsustainable growth or decline. For example, Campbell and Yogo (2006) provided evidence that
$95\%$
confidence intervals for the AR coefficient of the S$\&$P 500 dividend-price ratio over a long historical period include explosive roots. To detect possible explosive behavior in economic and financial time series, extensive testing methods have been developed in recent decades under various innovations. For a discussion of these methods, we refer to the review paper by Skrobotov (2023), together with the references therein. Among these existing methods, the most popular approach was presented in Phillips, Wu, and Yu (2011) and Phillips, Shi, and Yu (2014, 2015a, 2015b). In particular, Phillips et al. (2011) proposed recursive tests for the presence of rational bubbles in asset prices, and Phillips et al. (2014) analyzed and compared the limit theory of Phillips et al. (2011) under different assumptions and model specifications. Moreover, a primary technique used in the cited articles relies on the least squares (LS) estimation theory for the so-called mildly explosive process introduced in Phillips and Magdalinos (2007a, hereafter PM), defined by
$$ \begin{align} y_k=\rho_n\, y_{k-1}+u_k, \qquad \rho_n=1+\frac{\tau}{k_n}, \quad \tau>0, \quad k=1,\ldots, n, \end{align} $$
with innovation sequence
$\{u_k\}_{k\ge 1}$
and initialization
$y_0=O_P(1)$
, where
$ 0<k_n \to \infty $
satisfying
$\lim _{n\to \infty }n/k_n=\infty $
.
Denote by
$\widehat \rho _n$
the LS estimator of
$\rho _n$
in model (1.1), which is given by the formula
$$ \begin{align*} \widehat \rho_n = \big(\sum_{k=1}^n y_{k-1}^2\big)^{-1}\,\sum_{k=1}^n y_{k-1}y_k. \end{align*} $$
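The estimator and the Cauchy-type normalization can be illustrated with a small simulation. The following Python sketch is our own illustration, not code from the article; the function names and parameter choices ($\tau=1$, $k_n=n^{0.7}$, i.i.d. standard normal innovations) are hypothetical:

```python
import numpy as np

def ls_estimate(y):
    """LS estimator of rho_n in model (1.1): sum of y_{k-1} y_k over sum of y_{k-1}^2."""
    y_lag, y_cur = y[:-1], y[1:]
    return np.dot(y_lag, y_cur) / np.dot(y_lag, y_lag)

def simulate_mildly_explosive(n, tau=1.0, alpha=0.7, seed=0):
    """Generate y_k = rho_n y_{k-1} + u_k with rho_n = 1 + tau / n**alpha
    and i.i.d. N(0,1) innovations (illustrative choices), starting from y_0 = 0."""
    rng = np.random.default_rng(seed)
    rho_n = 1.0 + tau / n ** alpha
    u = rng.standard_normal(n)
    y = np.empty(n + 1)
    y[0] = 0.0
    for k in range(1, n + 1):
        y[k] = rho_n * y[k - 1] + u[k - 1]
    return y, rho_n

n = 800
y, rho_n = simulate_mildly_explosive(n)
rho_hat = ls_estimate(y)
# The statistic rho_n^n / (rho_n^2 - 1) * (rho_hat - rho_n) is approximately Cauchy.
stat = rho_n ** n / (rho_n ** 2 - 1) * (rho_hat - rho_n)
```

Because $\rho_n^n/(\rho_n^2-1)$ grows rapidly, $\widehat\rho_n$ is numerically very close to $\rho_n$ even at moderate sample sizes; repeating the simulation over many seeds produces the heavy-tailed Cauchy spread of `stat`.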
When
$u_k$
is a sequence of independent and identically distributed (i.i.d.) random variables with
$\mathbb {E}u_1=0$
and
$\mathbb {E}u_1^2<\infty $
, the asymptotics of
$\widehat \rho _n$
was first studied in PM in the case where
$k_n=n^\alpha $
with
$\alpha \in (0,1)$
. As in the earlier work of White (1958) and Anderson (1959) on purely explosive counterparts (i.e.,
$\rho _n\equiv \rho>1$
), PM derived a Cauchy limit theory, that is,
$$ \begin{align} \frac {\rho_n^n}{\rho_n^2-1} \big(\widehat \rho_n- \rho_n\big) \to_D {\cal C}, \end{align} $$
where
${\cal C}$
denotes a Cauchy random variable, i.e.,
${\cal C}=_{D} X/Y$
, where X and Y are independent standard normal variates. A notable aspect of the asymptotic results from PM is that this Cauchy limit theory remains invariant to both the innovations
$u_k$
and the initialization
$y_0$
of the mildly explosive process, under certain constraints. The pivotal Cauchy distribution is particularly appealing for empirical applications, prompting extensive efforts to investigate models (1.1) with general dependent innovations. It is now established that the innovations
$u_k$
could be weakly dependent as in Phillips and Magdalinos (2007b), Oh, Lee, and Chan (2018), and Liu et al. (2022); or long memory as in Magdalinos (2012); or could involve conditional heteroskedasticity as in Lee (2018) and Arvanitis and Magdalinos (2018). See also Aue and Horváth (2007) for i.i.d. innovation sequences belonging to the domain of attraction of a stable law and Magdalinos and Phillips (2009) for general cointegrated systems. More recently, Liu, Xiao, and Yu (2021) considered anti-persistent innovations by using the two-step approach developed in Phillips, Magdalinos, and Giraitis (2010). The two-step approach makes use of
$\rho _{n,m}=1+ m/n$
instead of
$\rho _n=1+\tau /k_n$
in model (1.1). When
$u_k$
is a sequence of i.i.d. random variables with
$\mathbb {E}u_1=0$
and
$\mathbb {E}u_1^2<\infty $
, it is well known (e.g., Phillips, 1987) that, for each fixed m,
$$ \begin{align*} X_{n,m}:=\frac n{2 m} e^{ m} (\widehat \rho_n- \rho_{n,m}) \to_D X_m:= \frac {e^{- m} \int_0^1 J_{ m}(s) dB_s}{2m e^{-2 m} \int_0^1 J_{ m}^2(s) ds}, \end{align*} $$
where
$B=\{B_s\}_{s\ge 0}$
is a standard Brownian motion and
$J_{ m}(s)=\int _0^s e^{ m(s-r)}dB_r$
. Since
$X_m\to _D {\cal C}$
(e.g., Phillips et al., 2010, p. 275) as
$m\to \infty $
, intuitively, we should have that
$X_{n, \tau n/k_n}\to _D {\cal C}$
or, equivalently,
$\frac {\rho _n^n}{\rho _n^2-1} \big (\widehat \rho _n- \rho _n\big )\to _D {\cal C} $
. A genuine proof of such a claim, however, requires showing that, for each
$\epsilon>0$
,
$$ \begin{align} \lim_{m\to\infty}\,\limsup_{n\to\infty}\, \mathbb{P}\big\{d\big(X_{n,m},\, X_{n, \tau n/k_n}\big)\ge \epsilon\big\}=0, \end{align} $$
where
$d(\cdot,\cdot)$
denotes a metric that measures the distance between
$X_{n,m}$
and
$X_{n, {\tau n/k_n}}$
(see, for instance, Billingsley (1968, Thm. 4.2)). The proof of (1.3) has not appeared in existing work and seems difficult to carry out.
This article has a similar goal to the aforementioned works. Unlike the existing literature, where the asymptotic theory typically depends on the individual structure of the innovations, this article provides a uniform Cauchy limit theory. Our first result (Theorem 2.1) allows
$\{u_k\}_{k\ge 1}$
to be a general linear process with martingale difference innovations, ensuring that the Cauchy limit theory in (1.2) holds for long memory, short memory, and anti-persistent innovations within a unified framework. In particular, the Cauchy limit theory is shown to be invariant for ARIMA
$(p, d, q)$
innovations with the differencing parameter
$d\in (-\frac 12, \frac 12)$
. As it is well-known in the literature, for an ARIMA
$(p, d, q)$
process, the cases
$0<d<1/2$, $d=0$,
and
$-1/2<d<0$
correspond to long memory, short memory, and anti-persistent innovations, respectively. Furthermore, except in the case of anti-persistent innovations, our Cauchy limit theory holds for any regression coefficient
$\rho _n=1+\tau /k_n$
with
$0<k_n\to \infty $
satisfying
$n/k_n\to \infty $
, rather than requiring
$k_n=n^\alpha $
for some
$\alpha \in (0,1)$
, as considered in previous works. In situations involving anti-persistent innovations, the Cauchy limit theory may be violated when
$\rho _n=1+\tau /k_n$
is very close to the local to unity range such as
$k_n =n/m_n$
with
$m_n=c_0 \log n$
for some small
$0< c_0<\infty $
In our second result (Theorem 2.2), we establish the Cauchy limit theory when
$\{u_k\}_{k\ge 1}$
is a wide class of nonlinear processes such as stationary causal processes and nonlinear autoregressive time series (e.g., threshold autoregressive (TAR) models and bilinear models). Some of these results are new to the literature.
This article is organized as follows. Section 2 presents our main results. The extension to a varying drift model is investigated in Section 3, where we provide a necessary and sufficient condition under which the Cauchy limit theory of Section 2 remains invariant to the unknown drift. Section 4 concludes. All proofs are provided in Section 5. Throughout the article, we denote positive constants by
$C, C_1, C_2,\ldots $
which may be different at each appearance. Other notations are standard in the literature.
2 MAIN RESULTS
Suppose that
$\big \{\epsilon _k, {\cal F}_k\big \}_{k\in \mathbb {Z}}$
is a sequence of martingale differences satisfying one of the following conditions:
-
C1.
$\mathbb {E} \epsilon _k^2=\sigma ^2$
for all
$k\in \mathbb {Z}$
,
$\{\epsilon _k^2\}_{k\in \mathbb {Z}}$
is uniformly integrable and
$\max _k \mathbb {E} \eta _k^2<\infty $
and
$\max _{|j-k|\ge K}|cov (\eta _j, \eta _k)|\to 0$
, as
$K\to \infty $
, where
$\eta _k=\mathbb {E} (\epsilon _k^2|{\cal F}_{k-1})$
; -
C2.
$\big \{\epsilon _k\big \}_{k\in \mathbb {Z}}$
is stationary and ergodic with
$\mathbb {E} \epsilon _1^2=\sigma ^2$
.
This section will establish the Cauchy limit theory in (1.2) when
$u_k$
in model (1.1) is a linear process generated by
$\epsilon _k$
and when
$u_k$
is a sequence of random variables that may be restructured as
$u_k=\epsilon _k+z_{k-1}-z_k$
for some
$z_k$
satisfying
$\sup _{k\ge 0}\mathbb {E} z_k^2<\infty $
.
2.1 Cauchy Limit Theory for General Linear Processes
Consider a linear process
$\{u_k\}_{ k\in \mathbb {Z}} $
defined by
$u_k=\sum _{j=0}^\infty \psi _j \epsilon _{k-j}$
, where
${\sum _{j=0}^\infty \psi _j^2<\infty }$
. Denote by
$\gamma (j), j\ge 0, $
the covariances of
$\{u_k\}_{ k\in \mathbb {Z}} $
, i.e.,
$$ \begin{align*} \gamma (j) = \mathbb{E}\big(u_0 u_j\big)=\sigma^2\,\sum_{k=0}^\infty \psi_k\psi_{j+k}, \quad j=0,1,\ldots \end{align*} $$
Write
$\Gamma _n=\frac 12 {\gamma (0)}+ \sum _{j=1}^n \rho _n^{-j}\gamma (j)$
and
$\sigma _n^2= \tau ^{-1} \, k_n\, \Gamma _n=(\rho _n-1)^{-1}\, \Gamma _n,$
where
${\tau>0}$
,
$ 0<k_n \to \infty $
satisfying
$\lim _{n\to \infty }n/k_n=\infty $
and
$\rho _n=1+\tau /k_n$
as given in model (1.1). Our first result on the Cauchy limit theory is as follows.
Theorem 2.1. Suppose that
-
(a)
$\sigma _n^2\to \infty $
; -
(b)
$\sum _{j=n-k_n}^{n}|\gamma (j)|=O\big ( \sum _{j=0}^{n-k_n} |\gamma (j)|\big )$
; and -
(c)
$\sum _{j=0}^{n} (n-j) \rho _n^{j-n}\,|\gamma (j)|=o(\sigma _n^2)$
.
Then, (1.2) holds.
Theorem 2.1 provides a quite general framework for a Cauchy limit theory when
$u_k$
is a general linear process with martingale difference innovations. Condition
$\sigma _n^2\to \infty $
is necessary to establish (1.2). Indeed, as seen in Lemma 5.2,
$ var \big (\sum _{j=1}^{n} \rho _n^{-j}u_j\big )=\big [1+o(1)\big ]\,\sigma _n^2$
under minor additional conditions, indicating that (1.2) will fail if
$\sigma _n^2$
is finite. Since
$k_n/n\to 0$
, condition (b) holds trivially in many empirical examples, such as covariances
$\gamma (j)=\mathbb {E} (u_0u_j), j\ge 0, $
satisfying
$\sum _{j=0}^\infty |\gamma (j)|<\infty $
or
$\gamma (j)\sim C\,j^{-\beta }$
for any
$\beta>0$
. Condition (c) implies that
$n\rho _n^{-n}=o(\sigma _n^2)$
due to
$\gamma (0)>0$
, providing a trade-off between
$\rho _n$
(or equivalently
$k_n$
) and
$\gamma (j)$
. The Cauchy limit theory in (1.2) depends on the asymptotic independence between
$\sum _{j=1}^{n} \rho _n^{-j}u_j$
and
$\sum _{j=1}^{n} \rho _n^{j-n}u_j$
, which in turn requires the fact:
$$ \begin{align} {cov} \big(\sum_{j=1}^{n} \rho_n^{-j}u_j, \sum_{j=1}^{n} \rho_n^{j-n}u_j\big) \sim \sum_{j=0}^{n}(n-j) \rho_n^{j-n}\,\gamma(j) = o(\sigma_n^2). \end{align} $$
See the proof of Lemma 5.2 for details. In terms of (2.1), condition (c) is necessary when
$\gamma (j)\ge 0 $
for all
$ j\ge 0$
. To illustrate the applications of Theorem 2.1, we introduce the following conditions on the covariances
$\gamma (j)=\mathbb {E} (u_0u_j), j\ge 0. $
-
C3.
-
(i)
$\gamma (j) \sim j^{-\beta }\,l(j)$
for some
$0<\beta <1$
, where
$l(x)$
is a slowly varying function at infinity and eventually continuous as
$x\to \infty $
; -
(ii)
$\sum _{j=0}^\infty |\gamma (j)|<\infty $
and
$\sum _{j\in \mathbb {Z}} \, \gamma (j) \not =0$
; -
(iii)
$\sum _{j\in \mathbb {Z}} \, \gamma (j)=0$
,
$\sum _{j=0}^\infty |\gamma (j)|<\infty $
and
$\gamma (j) \sim C_\beta \, j^{-1-\beta }$
for some
${0<\beta <1}$
.
-
Corollary 2.1. If C3(i) or C3(ii) holds, we have (1.2) for any
$ {0<k_n \to \infty} $
satisfying
$\lim _{n\to \infty }n/k_n=\infty $
. If C3(iii) holds, we still have (1.2) for any
$ 0<k_n \to \infty $
satisfying
$nk_n^{\beta -1}e^{-\tau n/k_n}\to 0$
.
The proof of Corollary 2.1 depends on the estimation of
$\sigma _n^2$
or equivalently
$\Gamma _n$
under C3. The required results are summarized in the following proposition, which is of interest in its own right.
Proposition 2.1. If C3(i) holds, then
$$ \begin{align} \frac {l^{-1}(k_n)}{\, k_n^{1-\beta}}\,\,\Gamma_n \to \int_{0}^\infty e^{-\tau x} x^{-\beta}dx. \end{align} $$
If C3(ii) holds, then
$$ \begin{align} \Gamma_n\to \frac 12 \sum_{j\in \mathbb{Z}} \, \gamma(j) =\frac 12\gamma(0)+ \sum_{j=1}^\infty \gamma(j). \end{align} $$
If C3(iii) holds, then
$$ \begin{align} k_n^{\beta}\, \Gamma_n \to C_\beta\, \int_{0}^\infty \big(e^{-\tau x}-1\big)\, x^{-1-\beta}\,dx. \end{align} $$
Moreover, if
$\gamma (j)$
satisfies one of C3(i)–(iii), then
$$ \begin{align} \frac 12 \gamma(0)+\frac 1n\, \sum_{j=1}^n (n-j)\gamma(j) = o(n \Gamma_n/k_n). \end{align} $$
Remark 2.1. It is well known that
$u_k$
is a long memory, short memory, or anti-persistent process if
$\gamma (k)=\mathbb {E} (u_0u_k)$, $k\ge 0,$
satisfies C3(i), C3(ii), or C3(iii), respectively. The Cauchy limit theory for long and short memory processes has been considered with
$k_n=n^{\alpha }$
for some
$\alpha \in (0,1)$
in PM, Magdalinos (2012), and Arvanitis and Magdalinos (2018). Corollary 2.1 generalizes these existing results by allowing for any
$ 0<k_n \to \infty $
satisfying
$\lim _{n\to \infty }n/k_n=\infty $
. Note that, for the long memory process
$u_k=\sum _{i=0}^\infty \psi _i \epsilon _{k-i}$
with
$\psi _i=i^{-\kappa }L(i),$
where
$\kappa \in (1/2, 1)$
,
$$ \begin{align*} \gamma(j) =\sigma^2\, \sum_{i=1}^\infty \psi_i \psi_{i+j}\ \sim\ \sigma^2\, j^{1-2\kappa} L^2(j) \int_0^\infty (y+1)^{-\kappa}y^{-\kappa} dy. \end{align*} $$
An application of (2.2) yields Lemma 1(ii) of Magdalinos (2012), i.e.,
$$ \begin{align} \frac {\sigma_n^2}{k_n^{3-2\kappa}L^2(k_n)} &= \frac {\tau^{-1}\Gamma_n}{k_n^{2-2\kappa}L^2(k_n)} \nonumber\\ &\to \sigma^2\, \tau^{2\kappa-3}\,\int_0^{\infty} e^{- x} x^{1-2\kappa}dx \int_0^\infty (y+1)^{-\kappa}y^{-\kappa} dy. \end{align} $$
Remark 2.2. Using the two-step approach developed in Phillips et al. (2010), Liu et al. (2021) considered the Cauchy limit theory for anti-persistent innovations under C3(iii), but requiring an additional proof as explained in Section 1. There is an additional restriction on the
$k_n\to \infty $
under C3(iii), i.e.,
${nk_n^{\beta -1}e^{-\tau n/k_n}\to 0}$
or equivalently
$n\rho _n^{-n} =o(\sigma _n^2)$
due to (2.4). Note that, under C3(iii),
$n\rho _n^{-n} =o(\sigma _n^2)$
if and only if
$\sum _{j=0}^{n}(n-j) \rho _n^{j-n}\,\gamma (j) = o(\sigma _n^2)$
.
In terms of (2.1), this additional restriction
$nk_n^{\beta -1}e^{-\tau n/k_n}\to 0$
, or equivalently
$\sum _{j=0}^{n}(n-j) \rho _n^{j-n}\,\gamma (j) = o(\sigma _n^2)$
, is necessary to establish (1.2), indicating that the Cauchy limit theory for anti-persistent innovations under C3(iii) is violated when
$\rho _n$
is very close to the local to unity range:
$k_n =n/m_n$
with
$m_n=\frac {1+\tau _0}\tau \log n$
for any
$-1<\tau _0< 1-\beta $
. This fact seems new to the literature.
Remark 2.3. A typical example of the covariances
$\gamma (j)=\mathbb {E} (u_0u_j), j\ge 0, $
satisfying C3 is the
$ARIMA(p,d,q)$
process
$\{u_k\}_{k\ge 1}$
defined by
$$ \begin{align} (1-B)^{d}\, u_k = \eta_k, \qquad \phi(B)\, \eta_k=\theta(B)\,\epsilon_k, \end{align} $$
where
$d\in (-\frac 12, \frac 12)$
; B is a back-shift operator and
$\epsilon _k$
are i.i.d. random variables with zero mean and finite variance;
$\phi (B)$
and
$\theta (B)$
are polynomial functions of B with order p and q, respectively, and both of them only have roots outside the unit circle, i.e., the
$ARMA(p,q)$
process
$\eta _k$
is taken to be stationary and invertible. The fractional difference operator
$(1-B)^{\gamma }$
is defined by its Maclaurin series (by its binomial expansion, if
$\gamma $
is an integer):
$$ \begin{align*} (1-B)^{\gamma}=\sum_{j=0}^{\infty}\frac {\Gamma(-\gamma+j)}{\Gamma(-\gamma)\Gamma(j+1)}B^j\quad \text{where} \ \ \Gamma(z)=\Big \{\begin{array}{ll} \int_0^{\infty}s^{z-1}e^{-s}ds &\text{if } z>0\\ \infty &\text{if } z=0. \end{array} \Big. \end{align*} $$
If
$z<0$
,
$\Gamma (z)$
is defined by the recursion formula
$ z\Gamma (z)=\Gamma (z+1). $
It is well-known that if
$d=0$
,
$\gamma (k)=\mathbb {E} (u_0u_k)$
satisfies
$\sum _{k=0}^\infty |\gamma (k)|<\infty $
, i.e., C3(ii) is satisfied. Moreover, for some constant
$C_d$
depending only on d, we have
$$ \begin{align*} \gamma(j) \sim C_d\, j^{2d-1} \quad \text{as } j\to\infty \ \ \text{if } d\neq 0, \end{align*} $$
and
$\sum _{j\in \mathbb {Z}} \, \gamma (j)=0$
if
$d<0$
(see, e.g., Hosking (1981) or Kokoszka and Taqqu (1995)). Based on these facts, the following result follows directly from Corollary 2.1.
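The coefficients of $(1-B)^{\gamma}$ above admit a simple ratio recursion, since $c_j=\Gamma(-\gamma+j)/\big(\Gamma(-\gamma)\Gamma(j+1)\big)$ satisfies $c_0=1$ and $c_j=c_{j-1}(j-1-\gamma)/j$. A minimal Python sketch of this device (our own illustration, not code from the article):

```python
import numpy as np

def frac_diff_coeffs(gamma, n_terms):
    """Coefficients c_j of (1 - B)^gamma = sum_j c_j B^j, via the ratio
    recursion c_0 = 1, c_j = c_{j-1} * (j - 1 - gamma) / j, which is
    equivalent to Gamma(-gamma + j) / (Gamma(-gamma) Gamma(j + 1))."""
    c = np.empty(n_terms)
    c[0] = 1.0
    for j in range(1, n_terms):
        c[j] = c[j - 1] * (j - 1 - gamma) / j
    return c
```

For an integer exponent the recursion reproduces the binomial expansion: with $\gamma=1$, the coefficients are $1, -1, 0, 0, \ldots$, i.e., $(1-B)^1=1-B$.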
Corollary 2.2. Suppose the innovation sequence
$\{u_k\}_{k\ge 1}$
in model (1.1) is given by (2.7).
-
(a) If
$0\le d<1/2$
, we have (1.2) for any
$ 0<k_n \to \infty $
satisfying
$\lim _{n\to \infty }n/k_n=\infty $
. -
(b) If
$-1/2< d<0$
, we have (1.2) for any
$ 0<k_n \to \infty $
satisfying
$$ \begin{align*}\liminf_{n\to\infty}n\log ^{-1}n /k_n> (2+d)/\tau.\end{align*} $$
-
(c) Moreover, the Cauchy limit theory in (1.2) is invariant with
$d\in (-\frac 12, \frac 12)$
for any
$0<k_n\to \infty $
satisfying
$\liminf _{n\to \infty }n\log ^{-1}n /k_n>3\tau ^{-1}/2$
.
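Corollary 2.2 can be explored numerically. The sketch below is our own illustration (function names, truncation length, and parameter values are hypothetical): it builds ARFIMA$(0,d,0)$ innovations $u_k=(1-B)^{-d}\epsilon_k$ by a truncated MA($\infty$) filter and feeds them into model (1.1); for $d$ on both sides of zero, $\widehat\rho_n$ stays close to $\rho_n$.

```python
import numpy as np

def arfima_innovations(n, d, rng, n_lags=200):
    """Truncated MA filter for u_k = (1 - B)^{-d} eps_k, using the
    recursion psi_0 = 1, psi_j = psi_{j-1} * (j - 1 + d) / j."""
    psi = np.empty(n_lags)
    psi[0] = 1.0
    for j in range(1, n_lags):
        psi[j] = psi[j - 1] * (j - 1 + d) / j
    eps = rng.standard_normal(n + n_lags)
    # element m of the full convolution is sum_j psi[j] * eps[m - j];
    # keep the last n values so each u_k uses the full psi history
    return np.convolve(eps, psi, mode="full")[n_lags : n_lags + n]

def estimate_rho(n=600, d=0.3, tau=1.0, alpha=0.7, seed=3):
    """Simulate model (1.1) with ARFIMA(0,d,0) innovations and return (rho_hat, rho_n)."""
    rng = np.random.default_rng(seed)
    rho_n = 1.0 + tau / n ** alpha
    u = arfima_innovations(n, d, rng)
    y = np.empty(n + 1)
    y[0] = 0.0
    for k in range(1, n + 1):
        y[k] = rho_n * y[k - 1] + u[k - 1]
    rho_hat = np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])
    return rho_hat, rho_n
```

This is only a finite-sample check, not a verification of the limit theory; the truncation at `n_lags` terms approximates the infinite-order filter.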
Remark 2.4. It would be interesting to explore the asymptotic theory of
$\widehat \rho _n$
in model (1.1) when
$\rho _n$
is very close to the local to unity range, particularly in the case where the Cauchy limit theory is violated for the anti-persistent innovations (e.g.,
$-1/2<d<0$
in the context of an
$ARIMA(p,d,q)$
process). Since the asymptotic behavior depends not only on the choice of
$k_n$
, but also on the specified innovation structure, this topic seems to be extremely challenging and beyond the scope of this article, and thus it is left for future research.
2.2 Cauchy Limit Theory for Other Dependent Processes
While Theorem 2.1 is sufficiently general in allowing
$u_k$
in model (1.1) to be a linear process defined as
$u_k=\sum _{j=0}^\infty \psi _j \epsilon _{k-j}$
, where
$\sum _{j=0}^\infty \psi _j^2<\infty $
, it does not encompass many important practical models, such as those with innovations exhibiting nonlinear structures. To fill the gap, this section considers the model (1.1) with
$u_k$
defined by
$$ \begin{align} u_k=\epsilon_k+z_{k-1}-z_k, \end{align} $$
where
$\big \{\epsilon _k, {\cal F}_k\big \}_{k\ge 1}$
is a sequence of martingale differences satisfying one of C1 or C2, and
$z_k$
is an arbitrary random variable satisfying
$\sup _{k\ge 0}\mathbb {E} z_k^2<\infty $
. The following theorem shows that the Cauchy limit theory still holds for such dependent processes. This result seems new to the literature.
Theorem 2.2. For any
$ 0<k_n \to \infty $
satisfying
$\lim _{n\to \infty }n/k_n=\infty $
, (1.2) holds under model (1.1) with innovation sequence
$\{u_k\}_{k\ge 1}$
defined by (2.8).
A wide class of dependent processes
$u_k$
can be expressed in a form as in (2.8). To illustrate, let
$ \eta _i, i\in \mathbb {Z},$
be i.i.d. random variables with
$\mathbb {E} \eta _0=0$
and
$\mathbb {E} \eta _0^2=1$
. Consider a stationary causal process
$u_k$
defined by
$$ \begin{align} u_k = F(\ldots, \eta_{k-1}, \eta_k), \end{align} $$
where F is a measurable function such that
$\mathbb {E}u_0=0$
and
$0<\mathbb {E}u_0^2<\infty $
. Write
${\cal F}_k=\sigma (\eta _i, i\le k)$
,
$|| Z ||_p=(\mathbb {E}|Z|^p)^{1/p}$
and denote
${\cal P}_k Z= \mathbb {E} (Z|{\cal F}_k)-\mathbb {E} (Z|{\cal F}_{k-1})$
for any
$\mathbb {E}|Z|<\infty $
. Set
$$ \begin{align*} \epsilon_{k}=\sum\limits_{i=0}^\infty {\cal P}_k u_{i+k}, \quad z_{k}=\sum\limits_{i=1}^\infty \mathbb{E} (u_{i+k}|\mathcal{F}_k). \end{align*} $$
Proposition 2.2. Suppose that
$\sum \limits _{k=1}^{\infty } k\, {|| {\cal P}_0 u_{k}||_2}<\infty $
. We have
$u_k= \epsilon _k+z_{k-1}-z_k$
and that
-
(a)
$\{\epsilon _k, {\cal F}_k\}_{k\ge 1}$
is a sequence of stationary martingale differences with
$\mathbb {E} \epsilon _k^2<\infty $
, and -
(b)
$\{z_k\}_{k\ge 1}$
is a stationary process with
$\mathbb {E} z_k=0$
and
$\mathbb {E} z_k^2<\infty $
.
Proof. It follows from Lemma 7 of Wu and Min (2005) (i.e., (35) in the cited paper) that
$\mathbb {E} (\epsilon _k^2+z_k^2)<\infty $
. Other facts are obvious and hence the details are omitted.
Utilizing Proposition 2.2, we can conclude from Theorem 2.2 that the Cauchy limit theory is valid in the model (1.1), where the innovations
$u_k$
exhibit the nonlinear structures illustrated in the following examples. For a comprehensive discussion of causal processes and related examples, we refer to Wu (2005, 2007) and Wu and Min (2005).
Example 2.1. (TAR model)
$$ \begin{align*} u_k=\phi_1\, u_{k-1}^{+}+\phi_2\, u_{k-1}^{-}+\eta_k, \qquad u^{+}=\max(u,0),\ \ u^{-}=\max(-u,0), \end{align*} $$
with
$\max (|\phi _1|,|\phi _2|)<1$
and
$\mathbb {E} (|\eta _0|^q)<\infty $
for some
$q>0$
.
Example 2.2. (Bilinear model)
$$ \begin{align*} u_k=(\alpha_1+\beta_1\, \eta_{k})\, u_{k-1}+\eta_k, \end{align*} $$
where
$\alpha _1$
and
$\beta _1$
are real parameters so that
$\mathbb {E}|\alpha _1+\beta _1 \eta _0|^q<1$
for some
$q>0$
.
Example 2.3. (GARCH model)
$$ \begin{align*} u_k=\sqrt{h_k}\,\eta_k ,\quad h_k=\alpha_0+\sum_{i=1}^m \alpha_i u_{k-i}^2+\sum_{j=1}^l \beta_j h_{k-j}, \end{align*} $$
where
$\alpha _0>0$
,
$\alpha _i\geq 0$
for
$1\leq i\leq m$
,
$\beta _j\geq 0$
for
$1\leq j\leq l$
, and
$\sum \limits _{i=1}^m \alpha _i +\sum \limits _{j=1}^l \beta _j<1$
.
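As a concrete check that such a recursion generates a valid innovation sequence, here is a minimal Python simulation of a GARCH(1,1) special case of the specification above (the function name and parameter values are our own illustrative choices):

```python
import numpy as np

def garch11(n, alpha0=0.1, alpha1=0.2, beta1=0.5, seed=1):
    """Simulate GARCH(1,1) innovations u_k = sqrt(h_k) * eta_k with
    h_k = alpha0 + alpha1 * u_{k-1}^2 + beta1 * h_{k-1}."""
    assert alpha1 + beta1 < 1  # the summability condition from the text
    rng = np.random.default_rng(seed)
    h = alpha0 / (1.0 - alpha1 - beta1)  # start at the unconditional variance
    u = np.empty(n)
    for k in range(n):
        eta = rng.standard_normal()
        u[k] = np.sqrt(h) * eta
        h = alpha0 + alpha1 * u[k] ** 2 + beta1 * h
    return u
```

The condition $\sum_i\alpha_i+\sum_j\beta_j<1$ keeps the conditional variance recursion stable, so the simulated innovations have mean zero and finite variance, as required of $u_k$ in model (1.1).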
Remark 2.5. The decomposition (2.8) (hence the Cauchy limit theory) still holds if
$u_k$
is a sequence of stationary
$\alpha $
-mixing random variables with
$\mathbb {E}u_1=0$
,
$\mathbb {E}u_1^2<\infty $
and coefficient
$\alpha (n)=O(n^{-2-\epsilon })$
for some
$\epsilon>0$
(see Davidson (1994, Thm. 16.6)). For the Cauchy limit theory with mixing innovations, related results can be found in Phillips and Magdalinos (2007b), Oh et al. (2018), and Liu et al. (2022).
3 EXTENSION TO MODELS WITH A VARYING DRIFT
Consider the mildly explosive process with a varying drift:
$$ \begin{align} y_k=\alpha_n+\rho_n\, y_{k-1}+u_k, \qquad k=1,\ldots, n, \end{align} $$
where
$y_0=O_P(1)$
,
$ 0<k_n \to \infty $
satisfying
$\lim _{n\to \infty }n/k_n=\infty $
and the drift
$\alpha _n$
may depend on n. When the drift
$\alpha _n$
is unknown, the LS estimator for (
$\alpha _n, \rho _n)$
is given by
$$ \begin{align*} \begin{bmatrix} \widehat \alpha_n\\ \widehat \rho_n \end{bmatrix} = \begin{bmatrix} n & \sum_{k=1}^ny_{k-1}\\ \sum_{k=1}^ny_{k-1} & \sum_{k=1}^ny_{k-1}^2 \end{bmatrix}^{-1}\, \begin{bmatrix} \sum_{k=1}^ny_{k}\\ \sum_{k=1}^ny_{k-1}y_k \end{bmatrix}. \end{align*} $$
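In code, the joint estimator amounts to solving this $2\times 2$ system of normal equations. A sketch (ours; the helper name is hypothetical), which recovers the pair exactly on noise-free data:

```python
import numpy as np

def ls_drift_estimate(y):
    """Joint LS estimator (alpha_hat, rho_hat) in the drift model
    y_k = alpha_n + rho_n y_{k-1} + u_k, via the 2x2 normal equations."""
    y_lag, y_cur = y[:-1], y[1:]
    n = len(y_cur)
    A = np.array([[n, y_lag.sum()],
                  [y_lag.sum(), (y_lag ** 2).sum()]])
    b = np.array([y_cur.sum(), (y_lag * y_cur).sum()])
    return np.linalg.solve(A, b)  # [alpha_hat, rho_hat]
```

With zero innovations the fit is exact, which is a convenient sanity check on the algebra before adding random $u_k$.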
In addition to Theorems 2.1 and 2.2, this section investigates the impact of the drift
$\alpha _n$
on the asymptotics of
$\widehat \rho _n$
, as shown in the following theorems. Unless mentioned explicitly, the notation is the same as in Section 2.
Theorem 3.1. Suppose that
$u_k$
is a linear process as given in Section 2.1. Write
$$ \begin{align*} l_n=\tau^{-1}k_n\alpha_n/\sigma_n, \qquad \widetilde l_n= \left\{\begin{array}{@{}ll} l_n & \text{if } |l_n|\to \infty, \\ 1 & \text{if } l_n\to l \text{ with } |l|<\infty \end{array}\right. \end{align*} $$
and
$\Gamma _n^*=\frac 12\gamma (0)+\frac 1n \sum _{j=1}^n (n-j)\gamma (j).$
If, in addition to the conditions of Theorem 2.1,
$k_n\Gamma _n^*=o(n \Gamma _n)$
, then
$$ \begin{align} \frac {\widetilde l_n\,\rho_n^n}{\rho_n^2-1} \, \big(\widehat \rho_n- \rho_n\big) \to_D \left\{\begin{array}{@{}lll} X, &\text{if } |l_n|\to \infty \\ X/(Y+l), &\text{if } l_n\to l \text{ with } |l|<\infty \end{array}\right.\!\!\!\!, \end{align} $$
where X and Y are two independent standard normal variates.
Theorem 3.2. Suppose that
$u_k$
is defined as in (2.8). Write
$ l_n=\sqrt 2 \,\sigma^{-1} \tau ^{-1/2}k_n^{1/2}\alpha _n$
and
$$ \begin{align*} \widetilde l_n= \left\{\begin{array}{@{}ll} l_n & \text{if } |l_n|\to \infty, \\ 1 & \text{if } l_n\to l \text{ with } |l|<\infty.\end{array}\right. \end{align*} $$
Then, (3.2) holds for any
$k_n\to \infty $
satisfying
$n/k_n\to \infty $
.
Remark 3.1. Theorems 3.1 and 3.2 suggest that the Cauchy limit theory given in Theorems 2.1 and 2.2 is invariant to the unknown drift
$\alpha _n$
if and only if
$l_n\to 0$
, i.e.,
-
•
$\alpha _n=o\big [(\Gamma _n/k_n)^{1/2}\big ]$
if
$u_k$
is a linear process as given in Section 2.1
; -
•
$\alpha _n= o(k_n^{-1/2})$
if
$u_k$
is defined as in (2.8).
An autoregressive model with a varying drift has been considered in Phillips and Jin (2014) for testing the martingale hypothesis, and in Phillips et al. (2015a) for financial bubbles. Liu and Peng (2019) considered a similar result to (3.2) with specified
$\alpha _n=c_1/n^{\alpha _1}, 0<\alpha _1<1$
and
$\rho _n=1+c_2/n^{\alpha _2}$
(i.e.,
$k_n=n^{\alpha _2}$
),
$0<\alpha _2<1$
, for i.i.d. innovations
$u_k$
.
Remark 3.2. In comparison with Theorems 2.1 and 2.2, a minor additional condition
$k_n\Gamma _n^*=o(n \Gamma _n)$
is used in Theorem 3.1. This additional condition is easy to verify. Indeed, as seen in (2.5) of Proposition 2.1, if C3 is satisfied, then
$k_n\Gamma _n^*=o(n \Gamma _n)$
for any
$ 0<k_n \to \infty $
satisfying
$\lim _{n\to \infty }n/k_n=\infty $
.
Remark 3.3. Note that
$\widehat \alpha _n=\frac 1n \sum _{k=1}^n y_k- \frac {\widehat \rho _n}n \sum _{k=1}^n y_{k-1}.$
It is easy to see that
$$ \begin{align*} \widehat \alpha_n -\alpha_n =\frac 1n \sum_{k=1}^n u_k -\frac {\widehat \rho_n-\rho_n}n\sum_{k=1}^n y_{k-1} \end{align*} $$
so that
$\sqrt {n/\Gamma _n^*}\big (\widehat \alpha _n -\alpha _n\big ) \to _D N(0,1)$
.
4 CONCLUSION
Since the pioneering work of Phillips and Magdalinos (2007a), the asymptotic theory for mildly explosive autoregression has been extensively explored under a variety of innovation structures. This study offers a unifying framework that encompasses much of the existing literature, demonstrating that the Cauchy limit theory remains invariant across a broad class of error processes. These include general linear processes with martingale difference innovations, stationary causal processes, and nonlinear autoregressive models such as TAR and bilinear models. Our results unify the Cauchy limit theory under long memory, short memory, and anti-persistent innovations within a single theoretical framework. In particular, we show that the theory holds for innovations generated by ARIMA
$(p,d,q)$
processes when the differencing parameter
$d\in (-1/2, 1/2)$
. The author hopes that these findings will prove useful in related areas, especially in detecting explosive behavior in economic and financial time series.
5 PROOFS
We start with some preliminaries in Section 5.1; the proof of a key lemma (Lemma 5.2) is given in Section 5.2. The proofs of the main results are given in Sections 5.3–5.8.
5.1 Preliminary Lemmas
As in Section 2.1, suppose that
$\big \{\epsilon _k, {\cal F}_k\big \}_{k\in \mathbb {Z}}$
is a sequence of martingale differences satisfying one of C1 or C2, and
$u_k=\sum _{j=0}^\infty \psi _j \epsilon _{k-j}$
, where
$\sum _{j=0}^\infty \psi _j^2<\infty $
. For each
$1\le l\le m,$
let
$$ \begin{align*} Z_{n, l} = \sum_{k=1}^n b_{nk, l} u_k \quad \text{and}\quad \sigma_{n, l}^2 =var(Z_{n, l}), \end{align*} $$
where
$b_{nk, l}$
is a sequence of constants. First, Lemma 5.1 is a corollary of Theorem 2.3 in Abadir et al. (2014).
Lemma 5.1. Suppose that, for each
$1\le l\le m,$
$$ \begin{align} |b_{n1, l}|+\sum_{k=2}^n |b_{nk, l}-b_{nk-1, l}| =o(\sigma_{n, l}), \end{align} $$
and there exists a positive-definite matrix
$\Omega=(\omega_{ll'})_{1\le l, l'\le m}$
so that
$$ \begin{align} \frac {cov\big(Z_{n, l},\, Z_{n, l'}\big)}{\sigma_{n, l}\,\sigma_{n, l'}} \to \omega_{ll'}, \qquad 1\le l, l'\le m. \end{align} $$
Then, as
$n\to \infty $
, we have
$$ \begin{align} \Big(\frac {Z_{n, 1}}{\sigma_{n, 1}}, \ldots, \frac {Z_{n, m}}{\sigma_{n, m}}\Big) \to_D N(0, \Omega). \end{align} $$
We next introduce our second lemma. Its proof is given in Section 5.2. Recall that
$\rho _n=1+\frac {\tau }{k_n}$
, where
$\tau>0$
and
$ 0<k_n\to \infty $
satisfying
$\lim _{n\to \infty }n/k_n=\infty $
and we still use the notations
$\gamma (k), \Gamma _n$
and
$\sigma _n^2$
as in Section 2.1. Write
$$ \begin{align*} S_{n, 1} = \sum_{k=1}^{M_{1n}} \rho_n^{-k}\, u_k \quad \text{and}\quad S_{n, 2}=\sum_{k=M_{2n}}^{n} \rho_n^{k-n}\, u_k , \end{align*} $$
where
$M_{1n}$
and
$M_{2n}$
are two sequences of positive integers satisfying that
$1\le M_{in}\le n$
for
$i=1$
and
$2$
.
Lemma 5.2. Under the conditions of Theorem 2.1, for any
$M_{1n}/k_n\to \infty $
and
$M_{2n}/n\to 0$
, we have
$$ \begin{align} var(S_{n, 1}) = \big[1+o(1)\big]\, \sigma_n^2 \quad \text{and}\quad var(S_{n, 2}) = \big[1+o(1)\big]\, \sigma_n^2, \end{align} $$
and
$$ \begin{align} \frac 1{\sigma_n}\,\big(S_{n, 1},\, S_{n, 2}\big) \to_D (S_1, S_2), \end{align} $$
where
$S_1, S_2\sim N(0,1)$
and
$S_1$
is independent of
$S_2$
.
Remark 5.1. (5.4) implies that, for any
$m_n$
satisfying
$m_n/k_n\to \infty $
and
$m_n/n\to 0$
,
$$ \begin{align} \widehat S_{n, 1} = o_P(\sigma_n) \quad \text{and}\quad \widehat S_{n, 2} = o_P(\sigma_n), \end{align} $$
where
$ \widehat S_{n, 1} =\sum _{k=m_n}^{n} \rho _n^{-k}\, u_k $
and
$ \widehat S_{n, 2}=\sum _{k=1}^{m_n} \rho _n^{k-n}\, u_k. $
Furthermore, the same arguments given in the proof of (5.4) yield that, uniformly for
$1\le j\le n$
,
$$ \begin{align} \mathbb{E} a_j^2 \le A_0\, \sigma_n^2 \quad \text{and}\quad \mathbb{E} b_j^2 \le A_0\, \rho_n^{-2j}\, \sigma_n^2, \end{align} $$
where
$a_j=\sum _{k=1}^{j} \rho _n^{-k}\, u_k$
,
$b_j=\sum _{k=j}^{n} \rho _n^{-k}\, u_k$
, and
$A_0$
is an absolute constant.
Remark 5.2. Results (5.4)–(5.7) still hold if
$u_k$
is replaced by
$w_k=u_k+z_{k-1}-z_k$
, where
$z_k$
is an arbitrary random variable satisfying
$\sup _{k\ge 1}\mathbb {E} z_k^2<\infty $
. To illustrate, we consider (5.4). In fact, for any
$M\ge 1$
, we have
$$ \begin{align*} A_M:=\sum_{k=1}^{M} \rho_n^{-k} (z_{k-1}-z_k) = \rho_n^{-M} z_M +(\rho_n^{-1}-1)\sum_{k=1}^{M-1} \rho_n^{-k} z_k. \end{align*} $$
Note that, by Hölder's inequality,
$$ \begin{align*} \mathbb EA_M^2&\le \mathbb{E} z_1^2 + (\rho_n^{-1}-1)^2 \sum_{k=1}^{M-1}\rho_n^{-k} \sum_{k=1}^{M-1} \rho_n^{-k} \mathbb E z_k^2 \nonumber\\ &\le \mathbb{E} z_1^2 + C\, (\rho_n^{-1}-1)^2 \big(\sum_{k=1}^{M-1}\rho_n^{-k}\big)^2 \le C\, \end{align*} $$
uniformly for
$1\le M\le n$
. It follows from
$\sum _{k=1}^{M_{1n}} \rho _n^{-k}\, w_k = S_{n,1}+A_{M_{1n}}$
that
$$ \begin{align*} var\big(\sum_{k=1}^{M_{1n}} \rho_n^{-k}\, w_k\big) &= var (S_{n,1}) +var (A_{M_{1n}}) +2 cov (S_{n1}, A_{M_{1n}}) \nonumber\\ &= var (S_{n, 1}) +o(\sigma_n^2) =\big[1+o(1)\big]\, \sigma_n^2. \end{align*} $$
Similarly, we have
$$ \begin{align*} var\big(\sum_{k=M_{2n}}^n \rho_n^{k-n}\, w_k \big) = var(S_{n,2})+o(\sigma_n^2)=\big[1+o(1)\big]\, \sigma_n^2. \end{align*} $$
Hence, (5.4) holds if
$u_k$
is replaced by
$w_k=u_k+z_{k-1}-z_k$
.
5.2 Proof of Lemma 5.2
First note that, for any
$M_{1n}/k_n\to \infty $
and
$M_{2n}/n\to 0$
,
$$ \begin{align} \frac 1{k_n}\sum_{k=1}^{M_{1n}} \rho_n^{-2k} =\frac {1- \rho_n^{-2M_{1n}} }{k_n ( \rho_n^2-1)} \to (2\tau)^{-1} \end{align} $$
and
$$ \begin{align} \frac 1{k_n}\sum_{k=M_{2n}}^n \rho_n^{2(k-n)} =\frac 1{k_n}\sum_{k=0}^{n-M_{2n}} \rho_n^{-2k} \to (2\tau)^{-1}. \end{align} $$
Without loss of generality, assume that
$M_{1n}=n$
and
$M_{2n}=1$
in the proof for the convenience of notation. It is readily seen from (5.8) that
$$ \begin{align} var(S_{n, 1}) &= \sum_{k=1}^{n}\rho_n^{-2k}\, \gamma(0)+2\sum_{k=1}^{n} \sum_{j=k+1}^{n} \rho_n^{-k}\rho_n^{-j} \gamma(j-k) \nonumber\\ &= \sum_{k=1}^{n}\rho_n^{-2k}\, \gamma(0)+2\sum_{k=1}^{n} \rho_n^{-2k}\sum_{j=1}^{{n}-k} \rho_n^{-j} \gamma(j) \nonumber\\ &=2\, \Gamma_n\,\sum_{k=1}^{n}\rho_n^{-2k}\,-R_n = \big[1+o(1)\big]\, \sigma_n^2 -R_n, \end{align} $$
where
$R_n = 2\, \sum _{k=1}^{n} \rho _n^{-2k} \,\sum _{j={n}-k+1}^{n} \rho _n^{-j} \gamma (j).$
We may write
$$ \begin{align*} R_n &=2\,\sum_{j=1}^{n} \rho_n^{-j} \gamma(j) \sum_{k=n-j+1}^n \rho_n^{-2k} =2 \rho_n^{-2n}\, \sum_{j=1}^{n} \rho_n^{-j} \gamma(j) \sum_{k=0}^{j-1} \rho_n^{2k} \nonumber\\ &=\frac {2\rho_n^{-2n}}{\rho_n^2-1}\,\big[\sum_{j=1}^{n} \rho_n^{j} \gamma(j) - \sum_{j=1}^{n} \rho_n^{-j} \gamma(j) \big]. \end{align*} $$
It follows from conditions (a) and (b) of Theorem 2.1 and
$k_n/n\to 0$
that
$$ \begin{align*} |R_n| &\le C\, k_n\, \rho_n^{-n}\sum_{j=1}^{n} |\gamma(j)| \le C_1\,k_n\, \rho_n^{-n}\sum_{j=1}^{n-k_n} |\gamma(j)| \\ &\le C\, \sum_{j=1}^{n-k_n} (n-j)\rho_n^{j-n}|\gamma(j)| =o(\sigma_n^2). \end{align*} $$
Taking this estimate into (5.10), we obtain
$ var(S_{n, 1})= \big [1+o(1)\big ]\, \sigma _n^2$
. Similarly, we have
$ var(S_{n, 2}) = \big [1+o(1)\big ]\, \sigma _n^2$
. This proves (5.4).
We next prove (5.5). Let
$b_{nk, 1}=\rho _n^{-k}$
and
$b_{nk, 2}=\rho _n^{k-n}$
. It is routine to see that, for
$l=1$
and
$2$
,
$$ \begin{align*} |b_{n1, l}|+\sum_{k=2}^n |b_{nk, l}-b_{nk-1, l}| \le \rho_n^{-1}+ (1-\rho_n^{-1})\, \sum_{k=1}^n \rho_n^{-k} \le C<\infty. \end{align*} $$
By using Lemma 5.1, result (5.5) will follow if we prove
$$ \begin{align} \mathbb{E}\big(S_{n, 1}\, S_{n, 2}\big) = o(\sigma_n^2). \end{align} $$
In fact, by noting
$\gamma (j)=\gamma (-j)$
and recalling
$\sum _{k=0}^{n} (n-k) \rho _n^{k-n}\,|\gamma (k)|=o(\sigma _n^2)$
, simple calculation shows that
$$ \begin{align*} E \big(\ S_{n, 1} S_{n, 2} \big) &=\sum_{k=1}^{n} \sum_{j=1}^{n} \rho_n^{k-j-n}\,\gamma(k-j)= \sum_{j=1}^{n} \sum_{k=1-j}^{n-j} \rho_n^{k-n}\,\gamma(k)\nonumber\\ &= \sum_{j=1}^n \sum_{k=1}^{n-j} \rho_n^{k-n}\,\gamma(k)+ \sum_{j=1}^n \sum_{k=0}^{j-1} \rho_n^{-k-n}\,\gamma(k)\nonumber\\ &= \sum_{k=0}^{n-1}(n-k) \rho_n^{k-n}\,\gamma(k)+ \sum_{k=0}^{n-1} (n-k)\rho_n^{-k-n}\,\gamma(k) \nonumber\\ &= o(\sigma_n^2). \end{align*} $$
This proves (5.11) and also completes the proof of Lemma 5.2.
5.3 Proof of Theorem 2.1
Without loss of generality, assume
$y_0=0$
. First note that, by squaring (1.1) and summing over
$k\in \{1, \ldots ,n\}$
,
$$ \begin{align} (\rho_n^2-1 )\, \sum_{k=1}^n y_{k-1}^2 = y_n^2 - 2\rho_n \sum_{k=1}^n y_{k-1} u_k- \sum_{k=1}^n u_{k}^2. \end{align} $$
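On a single simulated path this telescoping identity can be verified exactly. A minimal sketch with illustrative choices of $n$, $\rho_n$, and i.i.d. standard normal innovations; note that the exact identity carries the factor $\rho_n$ on the cross term, which is negligible in the limit since $\rho_n\to 1$.

```python
# Exact check of the telescoping identity behind (5.12) on one simulated path:
#   (rho^2 - 1) * sum_k y_{k-1}^2 = y_n^2 - 2*rho*sum_k y_{k-1} u_k - sum_k u_k^2,
# with y_0 = 0 (the factor rho on the cross term is asymptotically negligible
# since rho_n -> 1). Illustrative choices: n = 400, rho = 1.01, N(0,1) innovations.
import random
random.seed(0)
n, rho = 400, 1.01
y, s_yy, s_yu, s_uu = 0.0, 0.0, 0.0, 0.0
for _ in range(n):
    u = random.gauss(0.0, 1.0)
    s_yy += y * y        # sum of y_{k-1}^2
    s_yu += y * u        # sum of y_{k-1} u_k
    s_uu += u * u        # sum of u_k^2
    y = rho * y + u      # y_k = rho y_{k-1} + u_k
lhs = (rho ** 2 - 1.0) * s_yy
rhs = y * y - 2.0 * rho * s_yu - s_uu
```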
Furthermore, by taking
$M=m_n$
so that
$m_n/k_n\to \infty $
and
$m_n/n\to 0$
, we may write
$$ \begin{align} \sum_{k=1}^n y_{k-1} u_k &= \sum_{k=1}^M y_{k-1} u_k + \sum_{k=M}^n \rho_n^{k-1} u_k \sum_{i=M+1}^{k-1} \rho_n^{-i} u_i + \sum_{k=M}^n \rho_n^{k-1} u_k \sum_{i=0}^M \rho_n^{-i} u_i \nonumber\\ &:=\Delta_{n1, M}+\Delta_{n2, M}+\Delta_{n3, M}. \end{align} $$
Since
$y_n=\rho _n^n\, \sum _{i=1}^n\rho _n^{-i} u_i$
and
$$ \begin{align} \frac 1 {\sigma_n}\sum_{i=1}^n\rho_n^{-i} u_i = \frac 1{\sigma_n}\sum_{i=0}^M \rho_n^{-i} u_i +o_P(1), \quad \frac 1 {\sigma_n}\sum_{k=1}^n \rho_n^{k-n} u_k=\frac 1 {\sigma_n} \sum_{k=M}^n \rho_n^{k-n} u_k+o_P(1), \end{align} $$
by using (5.6), it follows from Lemma 5.2 and the continuous mapping theorem that
$$ \begin{align} \Big( \rho_n^{-n}\, \sigma_n^{-2}\, \Delta_{n3, M},\ \, \rho_n^{-2n}\, \sigma_n^{-2}\, y_n^2 \Big) \to_D \big( XY,\ Y^2 \big), \end{align} $$
where
$X, Y\sim N(0,1)$
and X and Y are independent. In terms of (5.12)–(5.15), to show (1.2), it suffices to show that
$$ \begin{align} \frac {\rho_n^{-n}}{\sigma_n^2} \,\big(\Delta_{1n, M}+\Delta_{2n, M}\big) = o_P(1). \end{align} $$
Indeed, by noting
$\sum _{k=1}^nu_k^2=O_P(n)$
and recalling
$n\rho _n^{-n}=o(\sigma _n^2)$
, simple calculation from (5.13), (5.15), and (5.16) shows that
$$ \begin{align} \rho_n^{-n}\sigma_n^{-2} \sum_{k=1}^n y_{k-1} u_k = \rho_n^{-n} \sigma_n^{-2}\, \Delta_{n3, M} +o_P(1)=O_P(1) \end{align} $$
and
$ \rho _n^{-2n}\, \sigma _n^{-2} (\rho _n^2-1 )\, \sum _{k=1}^n y_{k-1}^2 =\rho _n^{-2n}\, \sigma _n^{-2} y_n^2 +o_P(1). $
Consequently, we have
$$ \begin{align*} \frac {\rho_n^{n}}{\rho_n^2-1} (\widehat \rho_n-\rho_n) &= \big[\rho_n^{-2n}\, \sigma_n^{-2} (\rho_n^2-1 )\,\sum_{k=1}^n y_{k-1}^2\big]^{-1}\,\rho_n^{-n}\sigma_n^{-2} \sum_{k=1}^n y_{k-1} u_k \nonumber\\ &= \frac { \rho_n^{-n} \sigma_n^{-2}\, \Delta_{n3, M} +o_P(1)}{\rho_n^{-2n}\, \sigma_n^{-2} y_n^2 +o_P(1)} \to_D X/Y=_D{\cal C}, \end{align*} $$
as required.
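The Cauchy limit can be illustrated by Monte Carlo. A minimal sketch under i.i.d. $N(0,1)$ innovations, with $n$, $k_n=n^{0.7}$, $\tau=1$, and the replication count all illustrative choices: the sample median of the normalized statistic should be near $0$ and its interquartile range near $2$, as for a standard Cauchy variate.

```python
# Monte Carlo sketch of the Cauchy limit: rho_n^n/(rho_n^2 - 1) * (hat_rho_n - rho_n)
# should be approximately standard Cauchy. Illustrative choices: i.i.d. N(0,1)
# innovations, n = 400, k_n = n**0.7, tau = 1, 2000 replications.
import random
random.seed(1)
n = 400
rho = 1.0 + 1.0 / n ** 0.7
reps = 2000
stats = []
for _ in range(reps):
    y, s_yy, s_yu = 0.0, 0.0, 0.0
    for _ in range(n):
        u = random.gauss(0.0, 1.0)
        s_yy += y * y            # accumulates sum of y_{k-1}^2
        s_yu += y * u            # accumulates sum of y_{k-1} u_k
        y = rho * y + u          # y_k = rho_n y_{k-1} + u_k, y_0 = 0
    rho_hat = rho + s_yu / s_yy  # LS estimator of rho_n
    stats.append(rho ** n / (rho ** 2 - 1.0) * (rho_hat - rho))
stats.sort()
median = stats[reps // 2]
iqr = stats[3 * reps // 4] - stats[reps // 4]
```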
We next prove (5.16). Let
$a_{j}=\sum _{i=1}^j \rho _n^{-i}\, u_i$
and
$a_0=0$
. Note that
$$ \begin{align*} 2\rho_n\, \Delta_{1n, M} &= \sum_{k=1}^M \rho_n^{2k} \big(a_k^2-a_{k-1}^2\big) -\sum_{k=1}^M u_k^2 \nonumber\\ &= \rho_n^{2M}a_M^2 +\sum_{k=1}^{M-1} \big(\rho_n^{2k}-\rho_n^{2k+2}\big)a_k^2-\sum_{k=1}^M u_k^2. \end{align*} $$
It follows from (5.7),
$m_n/n\to 0$
and
$n\rho _n^{-n}/\sigma _n^2\to 0$
that
$$ \begin{align*} 2\rho_n\, \Delta_{1n, M} = O_P\big(\sigma_n^2\, \rho_n^{2M}\big)+O_P(M) =o_P\big(\sigma_n^2\rho_n^n\big), \end{align*} $$
i.e.,
$\Delta _{1n, M}=o_P\big (\sigma _n^2\rho _n^n\big )$
.
The proof for
$\Delta _{2n, M}=o_P\big (\sigma _n^2\rho _n^n\big )$
is more laborious. To this end, let
$ b_k=\sum _{i=k+1}^n \rho _n^{-i} u_i$
and
$\widetilde b_k=\sum _{i=M+1}^k \rho _n^{-i} u_i$
. We have
$$ \begin{align*} 2 \rho_n\, \Delta_{2n, M} &= \sum_{k=M+1}^n \rho_n^{2k} \big(\widetilde b_k^2-\widetilde b_{k-1}^2\big)-\sum_{k=M+1}^n u_k^2 \nonumber\\ &= \rho_n^{2n} \,\widetilde b_n^2 +\sum_{k=M+1}^{n-1} \big(\rho_n^{2k}- \rho_n^{2k+2}\big)\widetilde b_k^2-\sum_{k=M+1}^n u_k^2 \nonumber\\ &= \rho_n^{2(M+1)} \,\widetilde b_n^2 +\sum_{k=M+1}^{n-1} \big(\rho_n^{2k}- \rho_n^{2k+2}\big) (\widetilde b_k^2-\widetilde b_n^2)-\sum_{k=M+1}^n u_k^2. \end{align*} $$
Note that
$\rho _n^{2(M+1)} \,\widetilde b_n^2-\sum _{k=M+1}^n u_k^2=o_P\big (\sigma _n^2\rho _n^n\big )$
as in the proof of
$\Delta _{1n, M}=o_P\big (\sigma _n^2\rho _n^n\big )$
. It suffices to show that
$$ \begin{align} {\cal L}_n :=\sum_{k=M+1}^{n-1} \big(\rho_n^{2k}- \rho_n^{2k+2}\big) (\widetilde b_k^2-\widetilde b_n^2) =o_P\big(\sigma_n^2\rho_n^n\big). \end{align} $$
Let
${\cal L}_{1n}=(1-\rho _n^2)\sum _{k=M+1}^{n-1} \rho _n^{2k} \, b_k^2$
. We may write
$$ \begin{align} {\cal L}_n &=(1-\rho_n^2)\sum_{k=M+1}^{n-1} \rho_n^{2k} \big[(\widetilde b_n- b_k)^2-\widetilde b_n^2\big]\nonumber\\ &= -2\widetilde b_n\, (1-\rho_n^2)\,\sum_{k=M+1}^{n-1} \rho_n^{2k}\, b_k \, +{\cal L}_{1n}\nonumber\\ &= -2\widetilde b_n\,(1-\rho_n^2)\sum_{i=M+2}^n \rho_n^{-i} u_i \sum_{k=M+1}^{i-1}\rho_n^{2k}+{\cal L}_{1n} \nonumber\\ &=-2\widetilde b_n\,\sum_{i=M+2}^n \rho_n^{-i} u_i \big[\rho_n^{2(M+1)}-\rho_n^{2i}\big]+{\cal L}_{1n} \nonumber\\ &=-2\,\rho_n^{2(M+1)}\, \widetilde b_n \, b_{M+1} +2 \rho_n^n \, \widetilde b_n\, \widehat b_M +{\cal L}_{1n} , \end{align} $$
where
$\widehat b_M=\sum _{i=M+2}^n \rho _n^{i-n} u_i$
. Using (5.4) and (5.6), we have
$$ \begin{align*} \sigma_n^{-2}\, E |\widetilde b_n \, b_{M+1}| &\le\sigma_n^{-2}\, (E\widetilde b_n^2)^{1/2} (Eb_{M+1}^2)^{1/2} \to 0, \nonumber\\ \sigma_n^{-2}\,E |\widetilde b_n\, \widehat b_M | &\le\sigma_n^{-2}\, (E\widetilde b_n^2)^{1/2} (E \widehat b_{M}^2)^{1/2} \to 0. \end{align*} $$
On the other hand, it follows from (5.7) that, for each
$1\le k\le n$
,
$$ \begin{align*} \rho_n^{2k} Eb_k^2 = E \big(\sum_{i=1}^{n-k}\rho_n^{-i}u_{i+k}\big)^2=E \big(\sum_{i=1}^{n-k}\rho_n^{-i}u_{i}\big)^2 \le C\sigma_n^2, \end{align*} $$
indicating that
$$ \begin{align*} {\cal L}_{1n} = O_P\big(\sigma_n^2\, n/k_n\big)=o_P\big(\sigma_n^2\rho_n^n\big). \end{align*} $$
Taking these estimates into (5.19), we obtain
$$ \begin{align*} {\cal L}_n = o_P\big(\sigma_n^2\rho_n^{2M}\big)+o_P\big(\sigma_n^2\rho_n^{n}\big)+{\cal L}_{1n} = o_P\big(\sigma_n^2\rho_n^n\big), \end{align*} $$
implying (5.18). This proves
$\Delta _{2n, M}=o_P\big (\sigma _n^2\rho _n^n\big )$
and hence completes the proof of (5.16). The proof of Theorem 2.1 is now complete.
5.4 Proof of Theorem 2.2
As noticed in Remark 4.2, (5.4)–(5.7) still hold if
$u_k=\sum _{j=0}^\infty \psi _j\epsilon _{k-j}$
is replaced by
$u_k=\epsilon _k+z_{k-1}-z_k$
, and the proof follows the same lines as that of Theorem 2.1; hence the details are omitted.
5.5 Proof of Proposition 2.1
Recall that
$\Gamma _n = \frac 12 {\gamma (0)}+ \sum _{j=1}^n \rho _n^{-j}\gamma (j)$
, where
$\gamma (j)=\gamma (-j)$
and
$\rho _n=1+\tau /k_n$
with
$\tau>0$
and
$ 0<k_n \to \infty $
satisfying
$\lim _{n\to \infty }n/k_n=\infty $
. We start with (2.2) and, without loss of generality, assume that
$\gamma (j)= j^{-\beta }\,l(j)$
. First note that
$l(a k_n)/l(k_n)\to 1$
for any
$0<a<\infty $
and, as
$M\to \infty $
,
$$ \begin{align*} \sum_{j=Mk_n}^n \rho_n^{-j}|\gamma(j)| &\le C\, \sum_{j=Mk_n}^n e^{-\tau j/k_n} |l(j)|j^{-\beta} \le C\, l(k_n) k_n^{-\beta} \sum_{j=Mk_n}^n e^{-\tau j/k_n} \nonumber\\ &\le Ce^{-\tau M} l(k_n) k_n^{1-\beta}=o\big[ l(k_n) k_n^{1-\beta}\big], \end{align*} $$
and
$$ \begin{align*} \sum_{j=1}^{k_n/M} \rho_n^{-j}|\gamma(j)| &\le C\, \sum_{j=1}^{k_n/M} |l(j)|j^{-\beta} \le C\, l(k_n/M) (k_n/M)^{1-\beta} \nonumber\\ &\le CM^{\beta-1} |l(k_n)| k_n^{1-\beta}=o\big[ l(k_n) k_n^{1-\beta}\big]. \end{align*} $$
It suffices to show that, for each
$M\ge 1$
,
$$ \begin{align} \frac {l^{-1}(k_n)}{\, k_n^{1-\beta}}\,\,\sum_{j=k_n/M}^{Mk_n} \rho_n^{-j}\gamma(j) \to \int_{1/M}^M e^{-\tau x} x^{-\beta}dx , \quad \text{as } n\to\infty. \end{align} $$
This is simple. Indeed, writing $[y]$ for the integer part of $y$ and noting that $y\mapsto [y]$ is right continuous, we have
$\rho _n^{-[k_n x]}\to e^{-\tau x}$
and
$$ \begin{align*}\frac { l([k_n x]) [k_nx]^{-\beta} }{l(k_n) k_n^{-\beta} x^{-\beta}} \to 1,\end{align*} $$
uniformly for
$1/M\le x\le M$
. As a consequence, simple calculation shows that
$$ \begin{align*} \frac {l^{-1}(k_n)}{\, k_n^{1-\beta}} \sum_{j=k_n/M}^{Mk_n} \rho_n^{-j}\gamma(j) &= \int_{1/M}^M \rho_n^{-[k_n x]}\, \frac { l([k_n x]) [k_nx]^{-\beta} }{l(k_n) k_n^{-\beta} x^{-\beta}}\, x^{-\beta} dx + o(1) \nonumber\\ &\to \int_{1/M}^M e^{-\tau x} x^{-\beta}dx, \end{align*} $$
as required.
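Result (2.2) is easy to check numerically. A minimal sketch with the illustrative choices $l\equiv 1$, $\gamma(j)=j^{-\beta}$, $\tau=1$, $\beta=1/2$, using $\int_0^\infty e^{-\tau x}x^{-\beta}dx=\tau^{\beta-1}\Gamma(1-\beta)$:

```python
# Numerical check of (2.2): sum_{j=1}^n rho_n^{-j} gamma(j) ~ l(k_n) k_n^{1-beta} * I,
# where I = int_0^infty e^{-tau x} x^{-beta} dx = tau**(beta-1) * Gamma(1-beta).
# Illustrative choices: l(j) = 1, gamma(j) = j**(-beta), tau = 1, beta = 0.5, k_n = 2000.
import math
tau, beta = 1.0, 0.5
kn = 2000
n = 400 * kn                       # n/k_n large, as required
rho = 1.0 + tau / kn
s = sum(rho ** (-j) * j ** (-beta) for j in range(1, n + 1))
limit = kn ** (1.0 - beta) * tau ** (beta - 1.0) * math.gamma(1.0 - beta)
ratio = s / limit                  # should be close to 1 for large k_n and n/k_n
```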
The proof of (2.3) is simple and hence the details are omitted.
We next prove (2.4). Since
$\sum _{j\in \mathbb {Z}} \, \gamma (j)=0$
and
$n/k_n\to \infty $
, we have
$$ \begin{align*} \gamma(0)+2 \sum_{j=1}^n \gamma(j) = -2C_\beta\, \sum_{j=n+1}^\infty j^{-1-\beta}= o(k_n^{-\beta}). \end{align*} $$
It suffices to show that
$$ \begin{align} \widetilde \Gamma_n &:= k_n ^\beta\, \big[ \Gamma_n- \frac 12 \gamma(0)- \sum_{j=1}^n \gamma(j) \big] =k_n^{\beta}\, \sum_{j=1}^n \gamma(j)\big( \rho_n^{-j}-1\big) \nonumber\\ &\to C_\beta\, \int_{0}^\infty (e^{-\tau x}-1) \, x^{-1-\beta} dx. \end{align} $$
For
$M\ge 1$
, we may write
$$ \begin{align} \widetilde \Gamma_n &= C_\beta\, k_n^\beta\, \Big(\sum_{j=1}^{k_n/M}+\sum_{j=k_n/M+1}^{Mk_n}+\sum_{j=Mk_n+1}^n \Big)\, \big( \rho_n^{-j}-1\big) \, j^{-1-\beta}\nonumber\\ &:= \widetilde \Gamma_{1n} + \widetilde \Gamma_{2n} +\widetilde \Gamma_{3n}. \end{align} $$
As in the proof of (2.2), for each
$M\ge 1$
, we have
$$ \begin{align*} \widetilde \Gamma_{2n} \to C_\beta\int_{1/M}^M (e^{-\tau x}-1) \, x^{-1-\beta} dx, \quad \text{as } n\to\infty. \end{align*} $$
On the other hand, it is readily seen that
$$ \begin{align*} |\widetilde \Gamma_{3n}| \le C\,k_n^\beta\, \sum_{j=Mk_n+1}^n \, j^{-1-\beta} \le C/M^{\beta}. \end{align*} $$
As for
$\widetilde \Gamma _{1n}$
, by noting that
$ 1-\rho _n^{-j} \le 2j\tau /k_n $
for
$1\le j\le k_n/M$
, we have
$$ \begin{align*} |\widetilde \Gamma_{1n}| \le C\, k_n^{-1+\beta}\, \sum_{j=1}^{k_n/M} j^{-\beta} \le CM^{\beta-1}. \end{align*} $$
Taking these estimates into (5.22), we establish (5.21) by
$n\to \infty $
first and then
$M\to \infty $
.
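Limit (5.21) can be checked the same way. A minimal sketch with the illustrative choices $C_\beta=1$, $\tau=1$, $\beta=1/2$, using $\int_0^\infty(e^{-\tau x}-1)x^{-1-\beta}dx=-\Gamma(1-\beta)\tau^\beta/\beta$ for $0<\beta<1$:

```python
# Numerical check of (5.21): k_n^beta * sum_{j=1}^n (rho_n^{-j} - 1) j^{-1-beta}
# should approach int_0^infty (e^{-tau x} - 1) x^{-1-beta} dx
#   = -Gamma(1-beta) * tau**beta / beta.
# Illustrative choices: C_beta = 1, tau = 1, beta = 0.5, k_n = 1000, n = 1600 * k_n.
import math
tau, beta = 1.0, 0.5
kn = 1000
n = 1600 * kn
rho = 1.0 + tau / kn
s = kn ** beta * sum((rho ** (-j) - 1.0) * j ** (-1.0 - beta) for j in range(1, n + 1))
limit = -math.gamma(1.0 - beta) * tau ** beta / beta
```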
We finally prove (2.5). We only verify (2.5) under
$\sum _{j\in \mathbb {Z}} \, \gamma (j)=0$
and
$\gamma (j) \sim C_\beta \, j^{-1-\beta }$
for some
$0<\beta <1$
. The other cases are obvious and the details are omitted. In fact, under the given conditions, it follows from
$\gamma (j)=\gamma (-j)$
and (2.4) that
$$ \begin{align*} \frac 12 \gamma(0)+\frac 1n \sum_{j=1}^n (n-j)\gamma(j) &\le C\,\sum_{j={n+1}}^\infty j^{-1-\beta}+\frac Cn\sum_{j=1}^n j^{-\beta} \nonumber\\ & \le C\, n^{-\beta} \le C\, (n \Gamma_n/k_n)\, (k_n/n)^{1-\beta} =o (n \Gamma_n/k_n), \end{align*} $$
as required. The proof of Proposition 2.1 is now complete.
5.6 Proof of Corollary 2.1
We only verify condition (c) of Theorem 2.1 under C3(i) and C3(iii). In terms of Proposition 2.1, the other conditions in Theorem 2.1 are obvious and hence the details are omitted. We start with the verification under C3(i) and, without loss of generality, assume that
$\gamma (j)= j^{-\beta }\,l(j)$
for
$j\ge 1$
. In fact, for each
$1\le M<n/k_n$
, standard arguments as in the proof of Proposition 2.1 yield
$$ \begin{align} R_{1M}&:=\sum_{j=Mk_n}^{n} (n-j) \rho_n^{j-n}\,j^{-\beta}|l(j)| \le C\, M^{-\beta} k_n^{-\beta}l(M\,k_n) \sum_{j=1}^{n}j \rho_n^{-j} \nonumber\\ &\le C\, M^{-\tilde \beta} k_n^{-\beta}l(k_n)\int_1^n x e^{-\tau x/k_n} dx \quad (\text{where } 0<\tilde \beta<\beta)\nonumber\\ &\le CM^{-\tilde \beta} k_n^{2-\beta}l(k_n). \end{align} $$
Similarly, by noting
$\rho _n^{M k_n}\le C\,e^{M\tau }$
, we have
$$ \begin{align} R_{2M} &:=\sum_{j=1}^{Mk_n} (n-j) \rho_n^{j-n}\,j^{-\beta}|l(j)| \le C\, n\rho_n^{-n}\, e^{M\tau}\, \sum_{j=1}^{Mk_n} j^{-\beta}|l(j)| \nonumber\\ &\le C\,M\,e^{M\tau}\, (n/k_n)\, e^{-\tau n/k_n}\, k_n^{2-\beta} l(k_n). \end{align} $$
Recall that
$\sigma _n^2 \asymp k_n^{2-\beta } l(k_n)$
under C3(i) and
$(n/k_n)\, e^{-\tau n/k_n}\to 0$
due to
$n/k_n\to \infty $
. It follows from (5.23) and (5.24) by taking
$M=M_n\to \infty $
but slow enough so that
$M\,e^{M\tau }\, (n/k_n)\, e^{-\tau n/k_n}\to 0$
that
$$ \begin{align*} \sum_{j=0}^{n} (n-j) \rho_n^{j-n}\,|\gamma(j)| \le n\rho_n^{-n}+R_{1M}+R_{2M} =o(\sigma_n^2), \end{align*} $$
as required.
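Condition (c) under C3(i) can be eyeballed numerically: the sum $\sum_{j}(n-j)\rho_n^{j-n}|\gamma(j)|$ is roughly of order $(k_n/n)^{\beta}k_n^{2-\beta}$, so its ratio to $k_n^{2-\beta}$ (the order of $\sigma_n^2$ when $l\equiv 1$) decays as $n$ grows. A sketch with the illustrative choices $\tau=1$, $\beta=1/2$, $k_n=n^{0.7}$:

```python
# Numerical sketch for condition (c) under C3(i) with l = 1: the ratio
# D(n) / k_n^(2-beta), with D(n) = sum_{j=1}^{n-1} (n-j) rho_n^(j-n) j^(-beta),
# decays as n grows (roughly like (k_n/n)^beta). Illustrative choices:
# tau = 1, beta = 0.5, k_n = n**0.7.
tau, beta = 1.0, 0.5

def ratio(n):
    kn = n ** 0.7
    rho = 1.0 + tau / kn
    D = sum((n - j) * rho ** (j - n) * j ** (-beta) for j in range(1, n))
    return D / kn ** (2.0 - beta)

r1, r2 = ratio(20000), ratio(160000)
```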
We next consider the verification under C3(iii). In this case, for each
$1\le M<n/k_n$
, we have
$$ \begin{align} R_{3M}&:=\sum_{j=Mk_n}^{n} (n-j) \rho_n^{j-n}\,j^{-1-\beta} \le C\, M^{-1-\beta} k_n^{-1-\beta}\, \sum_{j=1}^{n}j \rho_n^{-j} \nonumber\\ &\le C\,M^{-1-\beta} k_n^{-1-\beta}\, \, \int_1^n x e^{-\tau x/k_n} dx \le CM^{-1} k_n^{1-\beta}, \end{align} $$
and
$$ \begin{align} R_{4M} &:=\sum_{j=1}^{Mk_n} (n-j) \rho_n^{j-n}\,j^{-1-\beta} \le C\, n\rho_n^{Mk_n-n} \nonumber\\ &\le C\,e^{M\tau}\,n\rho_n^{-n}. \end{align} $$
It follows from (5.25) and (5.26) by taking
$M=M_n\to \infty $
but slow enough so that
${e^{M\tau }}\,n\rho _n^{-n}=o(\sigma _n^2)$
, i.e.,
${e^{M\tau }}\, nk_n^{\beta -1}e^{-\tau n/k_n}\to 0$
that
$$ \begin{align*} \sum_{j=0}^{n} (n-j) \rho_n^{j-n}\,|\gamma(j)| \le n\rho_n^{-n}+R_{3M}+R_{4M} =o(\sigma_n^2), \end{align*} $$
under C3(iii) and the additional condition
$n\rho _n^{-n}=o(\sigma _n^2)$
or
$nk_n^{\beta -1}e^{-\tau n/k_n}\to 0$
. The proof of Corollary 2.1 is now complete.
5.7 Proof of Theorem 3.1
Without loss of generality, assume
$y_0=0$
. We may write
$$ \begin{align*} \begin{bmatrix} \widehat \alpha_n\\ \widehat \rho_n \end{bmatrix}-\begin{bmatrix} \alpha_n\\ \rho_n \end{bmatrix} = \begin{bmatrix} n & \sum_{t=1}^ny_{t-1}\\ \sum_{t=1}^ny_{t-1} & \sum_{t=1}^ny_{t-1}^2 \end{bmatrix}^{-1}\, \begin{bmatrix} \sum_{t=1}^nu_{t}\\ \sum_{t=1}^ny_{t-1}u_t \end{bmatrix}, \end{align*} $$
indicating that
$$ \begin{align} \widehat \rho_n -\rho_n =\frac {n \sum_{k=1}^n y_{k-1} u_{k} -\sum_{k=1}^n y_{k-1}\,\sum_{k=1}^n u_{k}}{ n\sum_{k=1}^n y_{k-1}^2 -\big(\sum_{k=1}^n y_{k-1}\big)^2}. \end{align} $$
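Formula (5.27) is just the closed-form slope error from the $2\times 2$ normal equations displayed above. A quick numerical sanity check on a simulated path of (3.1), with all parameter choices illustrative:

```python
# Sanity check of (5.27): in the intercept model y_k = alpha_n + rho_n y_{k-1} + u_k,
# the OLS slope minus rho_n equals (n*S_yu - S_y*S_u) / (n*S_yy - S_y**2) exactly.
# Illustrative choices: n = 200, rho = 1.02, alpha = 0.3, i.i.d. N(0,1) innovations.
import random
random.seed(2)
n, rho, alpha = 200, 1.02, 0.3
ys, us = [0.0], []
for _ in range(n):
    u = random.gauss(0.0, 1.0)
    us.append(u)
    ys.append(alpha + rho * ys[-1] + u)
ylag = ys[:-1]                                  # y_0, ..., y_{n-1}
S_y = sum(ylag)
S_yy = sum(v * v for v in ylag)
S_u = sum(us)
S_yu = sum(v * u for v, u in zip(ylag, us))
# Right-hand side of (5.27)
diff_formula = (n * S_yu - S_y * S_u) / (n * S_yy - S_y ** 2)
# OLS slope of y_k on (1, y_{k-1}) from the normal equations, minus rho
S_xy = sum(v * w for v, w in zip(ylag, ys[1:]))
slope = (n * S_xy - S_y * sum(ys[1:])) / (n * S_yy - S_y ** 2)
diff_direct = slope - rho
```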
To prove (3.2), first note that
$var\big (\sum _{k=1}^n u_{k}\big )=2n \Gamma _n^*$
. This, together with the conditions that
$\sigma _n^2=\tau ^{-1}k_n\Gamma _n$
and
$k_n\Gamma _n^*=o(n \Gamma _n)$
, implies that
$$ \begin{align} \frac {k_n}{n\sigma_n}\sum_{k=1}^n u_{k} =o_P(1). \end{align} $$
On the other hand, by summing (3.1) over
$k\in \{1, \ldots ,n\}$
, we have
$$ \begin{align} \frac {\tau}{k_n}\,\sum_{k=1}^n y_{k-1} =(\rho_n-1)\,\sum_{k=1}^n y_{k-1} = y_n -n\alpha_n- \sum_{k=1}^n u_k \end{align} $$
and, for
$k\ge 1$
,
$$ \begin{align} y_k = \alpha_n \sum_{j=0}^{k-1}\rho_n^j + \sum_{j=1}^{k}\rho_n^{k-j}u_j =\tilde \alpha_{nk}+ \tilde y_{k} , \end{align} $$
where
$ \tilde y_k =\rho _n \tilde y_{k-1}+ u_k$
and
$\tilde \alpha _{nk}=\tau ^{-1}\,k_n\,\alpha _n (\rho _n^k-1)=l_n\sigma _n(\rho _n^k-1). $
Since
$\tilde y_n=O_P(\sigma _n\rho _n^n)$
by (5.4), it follows from (5.28)–(5.30) and
$(n/k_n)\rho _n^{-n}\to 0$
that
$$ \begin{align} \frac 1{k_n}\, \frac 1{\rho_n^n\sigma_n}\,\sum_{k=1}^n y_{k-1} &= \frac {\tilde y_n}{\rho_n^n\sigma_n}+l_n -\frac 1{k_n}\,\frac 1{\rho_n^n\sigma_n}\,\Big(n\alpha_n+ \sum_{k=1}^n u_k\Big) \nonumber\\ &=O_P(1+|l_n|). \end{align} $$
Since (5.28) and (5.31) imply that
$$ \begin{align} \frac 1n \, \frac 1{\rho_n^n\sigma_n^2}\, \sum_{k=1}^n y_{k-1}\,\sum_{k=1}^n u_{k}=o_P (1+|l_n|) \end{align} $$
and
$$ \begin{align} \frac { \rho_n^2-1 }{\rho_n^{2n}\, \sigma_n^{2}}\, \frac 1 n\,\big(\sum_{k=1}^n y_{k-1}\big)^2 &= \frac {k_n^2 (\rho_n^2-1 )}n\, \Big[\frac 1{k_n}\, \frac 1{\rho_n^n\sigma_n}\,\sum_{k=1}^n y_{k-1}\Big]^2\nonumber\\ &=o_P \big[(1+|l_n|)^2\big] \end{align} $$
due to
$\rho _n^2-1\le Ck_n^{-1}$
and
$k_n/n\to 0$
, result (3.2) will follow if we prove
$$ \begin{align} \rho_n^{-n}\, \sigma_n^{-2}\, \sum_{k=1}^n y_{k-1} u_{k} &=\Big[ \sigma_n^{-1} \sum_{i=0}^n \rho_n^{-i} u_{i} +l_n\Big]\, \sigma_n^{-1}\sum_{k=1}^n \rho_n^{k-n} u_{k} \nonumber\\ &\qquad +o_P\big(1+|l_n|\big) \end{align} $$
and
$$ \begin{align} \rho_n^{-2n}\, \sigma_n^{-2} (\rho_n^2-1 )\,\sum_{k=1}^n y_{k-1}^2 =\Big( \sigma_n^{-1}\sum_{k=1}^n \rho_n^{-k} u_{k}+l_n \Big)^2 + o_P(1+|l_n|^2){.} \end{align} $$
Indeed, if
$|l_n|\to \infty $
, it follows from (5.32)–(5.35) that
$$ \begin{align*} &\frac { l_n\,\rho_n^n}{\rho_n^2-1} \, \big(\widehat \rho_n- \rho_n\big) \nonumber\\ &= \frac {\frac {l_n^{-1}}{\rho_n^n\sigma_n^2}\, \sum_{k=1}^n y_{k-1} u_{k} -\frac 1n \, \frac {l_n^{-1}}{\rho_n^n\sigma_n^2}\, \sum_{k=1}^n y_{k-1}\,\sum_{k=1}^n u_{k}} {\frac {l_n^{-2}( \rho_n^2-1) }{\rho_n^{2n}\, \sigma_n^{2}}\, \,\sum_{k=1}^n y_{k-1}^2-\frac {l_n^{-2}( \rho_n^2-1) }{\rho_n^{2n}\, \sigma_n^{2}}\, \frac 1 n\,\big(\sum_{k=1}^n y_{k-1}\big)^2} \\ &=\frac { \sigma_n^{-1}\sum_{k=1}^n \rho_n^{k-n} u_{k} +o_P(1)}{1+o_P(1)}\to_D N(0, 1), \end{align*} $$
where we have used (5.5) in Lemma 5.2. Similarly, if
$l_n\to l$
with
$|l|<\infty $
, then
$$ \begin{align*} &\frac { \rho_n^n}{\rho_n^2-1} \, \big(\widehat \rho_n- \rho_n\big) \nonumber\\ &= \frac {\frac {1}{\rho_n^n\sigma_n^2}\, \sum_{k=1}^n y_{k-1} u_{k} -\frac 1n \, \frac {1}{\rho_n^n\sigma_n^2}\, \sum_{k=1}^n y_{k-1}\,\sum_{k=1}^n u_{k}} {\frac { \rho_n^2-1}{\rho_n^{2n}\, \sigma_n^{2}}\, \,\sum_{k=1}^n y_{k-1}^2-\frac { \rho_n^2-1 }{\rho_n^{2n}\, \sigma_n^{2}}\, \frac 1 n\,\big(\sum_{k=1}^n y_{k-1}\big)^2} \\ &=\frac {\Big( \sigma_n^{-1}\sum_{k=1}^n \rho_n^{-k} u_{k}+l \Big)\, \sigma_n^{-1}\sum_{k=1}^n \rho_n^{k-n} u_{k} +o_P(1)}{\Big( \sigma_n^{-1}\sum_{k=1}^n \rho_n^{-k} u_{k}+l \Big)^2+o_P(1)}\\ &\to_D X/(Y+l), \end{align*} $$
where X and Y are two independent standard normal variates, as required.
We next prove (5.34) and (5.35), starting with (5.34). It follows from (5.30) that
$$ \begin{align*} \sum_{k=1}^n y_{k-1} u_{k} &=\sum_{k=1}^n \tilde y_{k-1} u_{k} +\sum_{k=1}^n \tilde \alpha_{n, k-1} u_{k} \nonumber\\ &= \sum_{k=1}^n \tilde y_{k-1} u_{k} + \tau^{-1}\,k_n\,\alpha_n\,\Big[ \sum_{k=1}^n \rho_n^{k-1} u_{k}- \sum_{k=1}^n u_{k} \Big]. \end{align*} $$
Recall
$ \tilde y_k =\rho _n \tilde y_{k-1}+ u_k$
. We have
$$ \begin{align} \rho_n^{-n}\, \sigma_n^{-2}\sum_{k=1}^n \tilde y_{k-1} u_{k} =\Big[ \sigma_n^{-1} \sum_{i=0}^n \rho_n^{-i} u_{i} +l_n\Big]\, \sigma_n^{-1}\sum_{k=1}^n \rho_n^{k-n} u_{k} +o_P(1). \end{align} $$
See (5.14) and (5.17) in the proof of Theorem 2.1. This yields by using (5.28) and
$(n/k_n)\rho _n^{-n}\to 0$
that
$$ \begin{align*} \rho_n^{-n}\, \sigma_n^{-2}\, \sum_{k=1}^n y_{k-1} u_{k} &= \rho_n^{-n}\, \sigma_n^{-2}\sum_{k=1}^n \tilde y_{k-1} u_{k} +\rho_n^{-n}\, \sigma_n^{-2} \tau^{-1}\,k_n\,\alpha_n\,\Big[ \sum_{k=1}^n \rho_n^{k-1} u_{k}- \sum_{k=1}^n u_{k} \Big] \nonumber\\ &= \rho_n^{-n}\, \sigma_n^{-2}\sum_{k=1}^n \tilde y_{k-1} u_{k} + l_n\sigma_n^{-1}\, \sum_{k=1}^n \rho_n^{k-n} u_{k} +o_P( 1+|l_n| )\nonumber\\ &=\Big[ \sigma_n^{-1} \sum_{i=0}^n \rho_n^{-i} u_{i} +l_n\Big]\, \sigma_n^{-1}\sum_{k=1}^n \rho_n^{k-n} u_{k} +o_P(1+|l_n|). \end{align*} $$
That is, (5.34) holds. To prove (5.35), we observe that
$ y_k = \rho _n y_{k-1}+(u_k+\alpha _n).$
Squaring
$y_k$
and summing over
$k\in \{1, \ldots ,n\}$
yield
$$ \begin{align*} (\rho_n^2-1 )\, \sum_{k=1}^n y_{k-1}^2 &= y_n^2 - 2\rho_n \sum_{k=1}^n y_{k-1} (u_k+\alpha_n)- \sum_{k=1}^n (u_{k}+\alpha_n)^2 \\ &= y_n^2 - 2\rho_n \sum_{k=1}^n y_{k-1} u_k- 2\rho_n\,\alpha_n \sum_{k=1}^n y_{k-1}- \sum_{k=1}^n (u_{k}+\alpha_n)^2. \end{align*} $$
In terms of (5.31), (5.34), and
$\sum _{k=1}^n (u_{k}+\alpha _n)^2=O_P\big [n(1+\alpha _n^2)\big ]$
, simple calculation shows that
$$ \begin{align*} \rho_n^{-2n}\, \sigma_n^{-2} (\rho_n^2-1 )\,\sum_{k=1}^n y_{k-1}^2 &=\big( \frac { y_n}{\rho_n^n\sigma_n}\big)^2 + o_P(1+|l_n|^2)\\ &=\Big( \sigma_n^{-1}\sum_{k=1}^n \rho_n^{-k} u_{k}+l_n \Big)^2 + o_P(1+|l_n|^2), \end{align*} $$
as required in (5.35).
5.8 Proof of Theorem 3.2
This follows from the same arguments as in the proof of Theorem 3.1, by noting that the key intermediate steps (5.28), (5.31), and (5.36) do not change if we replace
${u_k=\sum _{j=0}^\infty \psi _j\epsilon _{k-j}}$
by
$u_k=\epsilon _k+z_{k-1}-z_k$
, as pointed out in the proof of Theorem 2.2.