1. Introduction
1.1. The model and the main result
Hierarchical lattices were studied in the physics literature [Reference Berker and Ostlund5, Reference Derrida, De Seze and Itzykson7, Reference Griffiths and Kaufman13, Reference Tremblay and Southern24] as lattices tailored so that real-space renormalization techniques become exact; in the absence of disorder, they allow one to calculate critical points and exponents exactly, by analysing the neighbourhood of fixed points of some dynamical systems. In the disordered case, as in the problem of directed polymers or interfaces in a random medium [Reference Clark6, Reference Derrida and Griffiths8, Reference Giacomin, Lacoin and Toninelli11] as well as their simplified versions [Reference Derrida and Retaux9, Reference Hu, Mallein and Pain16], disordered spin models [Reference Pinho, Haddad and Salinas22], or percolation clusters [Reference Hambly and Kumagai15], the hierarchical structure allows one to formulate the problem as a recursion for a probability distribution. Despite this simple and elegant formulation, these recursions are often hard to analyse, especially near transition points. This is why most models with disorder on hierarchical lattices remain unsolved.
The series–parallel random graph, investigated by Hambly and Jordan [Reference Hambly and Jordan14], is an example of a hierarchical lattice with disorder. The original motivation was to view this graph as a random environment which can be studied as a dynamical system on probability measures; see the introduction in [Reference Hambly and Jordan14] for more details.
To define the series–parallel random graph, we fix a parameter
$p\in [0, \, 1]$
, and consider two vertices denoted by
$\mathtt{a}$
and
$\mathtt{z}$
, respectively. The graph
${\mathtt{Graph}}_0(p)$
is simply composed of vertices
$\mathtt{a}$
and
$\mathtt{z}$
, as well as a (non-oriented) edge connecting
$\mathtt{a}$
and
$\mathtt{z}$
. We now define
${\mathtt{Graph}}_n(p)$
,
$n\in \mathbb N \;:\!=\; \{ 0, 1, 2, \ldots\}$
, recursively as follows:
${\mathtt{Graph}}_{n+1}(p)$
is obtained by replacing independently each edge of
${\mathtt{Graph}}_n(p)$
, either by two edges in series with probability p, or by two parallel edges with probability
$1 - p$
. See Figure 1.

Figure 1. At each step of the construction of the hierarchical lattice, each edge of the lattice is replaced by two edges in series with probability p or by two edges in parallel with probability
$1 - p$
. The left shows graph
${\mathtt{Graph}}_0(p)$
and the right shows the two possibilities for the graph
${\mathtt{Graph}}_1(p)$
.
The graph
${\mathtt{Graph}}_n(p)$
is called the series–parallel random graph of order n. See Figure 2 for an example.

Figure 2. An example of the first four graphs in the sequence
$({\mathtt{Graph}}_n(p))_{n\geq 0}$
.
Let
$D_n(p)$
denote the graph distance between vertices
$\mathtt{a}$
and
$\mathtt{z}$
on
${\mathtt{Graph}}_n(p)$
, i.e. the minimal number of edges necessary to connect
$\mathtt{a}$
and
$\mathtt{z}$
on
${\mathtt{Graph}}_n(p)$
. Since
$n\mapsto D_n(p)$
is non-decreasing, we have
$D_n(p) \uparrow D_\infty (p)\;:\!=\; \sup_{k\ge 1} D_k (p)$
(which may be infinite). It is known [Reference Hambly and Jordan14] that
$D_\infty (p)<\infty$
almost surely (a.s.) for
$p\in [0, \, \frac12)$
, and
$D_\infty (p)=\infty$
a.s. for
$p\in [\frac12, \, 1]$
. As such, there is a phase transition for
$D_n(p)$
at
$p= p_c \;:\!=\; \frac12$
. In this article we focus on the slightly supercritical case:
$p=p_c+\varepsilon$
when
$\varepsilon>0$
is sufficiently small.
Let us fix
$p\in [0, \, 1]$
for the time being. There is a simple recursive formula for the law of
$D_n(p)$
. In fact,
$D_0(p)=1$
; for
$n\in \mathbb{N}$
, by considering the two edges of
${\mathtt{Graph}}_1(p)$
we have
\begin{equation} D_{n+1}(p) \,{\buildrel \textrm{law} \over =}\, \mathscr{E}_n \big( D_n(p) + \widehat{D}_n(p) \big) + (1- \mathscr{E}_n) \min \big\{ D_n(p), \, \widehat{D}_n(p) \big\} , \end{equation}
where “
$\,{\buildrel \textrm{law} \over =}\,$
” denotes identity in distribution,
$\widehat{D}_n(p)$
is an independent copy of
$D_n(p)$
,
$\mathscr{E}_n$
is a Bernoulli(p) random variable
${{\mathbb P}}(\mathscr{E}_n=1) = p = 1- {{\mathbb P}}(\mathscr{E}_n=0)$
, and
$D_n(p)$
,
$\widehat{D}_n(p)$
, and
$\mathscr{E}_n$
are assumed to be independent. Equation (1.1) defines a random iterative system. There is a substantial literature on iterated random functions; see, for example, [Reference Goldstein12, Reference Jordan17, Reference Li and Rogers19, Reference Shneiberg23, Reference Wehr and Woo25]. Equation (1.1) is also an interesting addition to the list of recursive distributional equations analysed in the seminal paper [Reference Aldous and Bandyopadhyay3].
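The recursion can be simulated directly. Below is a minimal sketch in Python (the function name `sample_D` and all parameter values are ours, not from the paper); it unfolds (1.1) down to the $2^n$ base edges, so it is only practical for moderate n:

```python
import random

def sample_D(n, p, rng=random):
    """Draw one sample of D_n(p) by unfolding the recursion (1.1):
    with probability p the edge was replaced in series (lengths add),
    with probability 1 - p in parallel (the shorter route wins).
    Cost is exponential in n, so keep n moderate."""
    if n == 0:
        return 1
    d, d_hat = sample_D(n - 1, p, rng), sample_D(n - 1, p, rng)
    return d + d_hat if rng.random() < p else min(d, d_hat)

random.seed(0)
est = sum(sample_D(10, 0.6) for _ in range(200)) / 200
print(est)  # crude Monte Carlo estimate of E[D_10(0.6)]
```

The extreme cases make a quick sanity check: with $p=1$ every replacement is in series and $D_n = 2^n$, while with $p=0$ the minimum keeps the distance at 1.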
Let us briefly explain that the limit
$\lim_{n\to \infty} \frac{1}{n} \log {{\mathbb{E}}} [D_n(p)]\;=\!:\;\alpha (p)$
exists. To this end, we fix
$m,n\in \mathbb{N}$
. By definition, at step n, there exists a certain path
$\Gamma_n$
on
${\mathtt{Graph}}_n(p)$
of length
$D_n(p)$
(i.e. number of edges) connecting vertices
$\mathtt{a}$
and
$\mathtt{z}$
. After m additional steps, edges in
$\Gamma_n$
are transformed into independent copies of
${\mathtt{Graph}}_m(p)$
. Accordingly,
$$D_{n+m} (p)\le \sum_{i=1}^{D_n(p)} \Delta_i,$$
where, given
$D_n(p)$
, the
$\Delta_i$
are (conditionally) independent having the same law as
$D_m(p)$
. Taking expectation, we get that
${{\mathbb{E}}}[D_{n+m}(p)] \le {{\mathbb{E}}}[D_n(p)]\, {{\mathbb{E}}}[D_m(p)]$
. Since
$D_n(p)\leq 2^n$
and by Fekete's lemma on subadditive sequences (applied to $n \mapsto \log {{\mathbb{E}}}[D_n(p)]$), we obtain that the limit
\begin{equation} \alpha (p) \;:\!=\; \lim_{n\to \infty} \frac{1}{n} \log {{\mathbb{E}}} [D_n(p)] \end{equation}
exists. An easy coupling argument also implies that
$p\mapsto \alpha(p)$
is non-decreasing on
$[0, \, 1]$
. The main result of the paper concerns the behaviour of the exponent
$\alpha(p_c + \varepsilon)$
, when
$\varepsilon>0$
is in the neighbourhood of 0. Let us mention here that the results stated in this article concern the law of the
$D_n(p)$
only and that they hold true for any sequence of random variables satisfying the distributional equation (1.1).
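Since $D_n(p)$ takes finitely many values (at most $2^n$), its law can be computed exactly by iterating (1.1) on probability mass functions. The sketch below (helper names are ours) checks the submultiplicativity ${{\mathbb{E}}}[D_{n+m}(p)] \le {{\mathbb{E}}}[D_n(p)]\,{{\mathbb{E}}}[D_m(p)]$ obtained above:

```python
def step(pmf, p):
    """One application of (1.1) to an exact pmf {value: probability}."""
    out = {}
    for k1, q1 in pmf.items():
        for k2, q2 in pmf.items():
            out[k1 + k2] = out.get(k1 + k2, 0.0) + p * q1 * q2              # series
            m = min(k1, k2)
            out[m] = out.get(m, 0.0) + (1 - p) * q1 * q2                    # parallel
    return out

def mean(pmf):
    return sum(k * q for k, q in pmf.items())

p = 0.55
laws = [{1: 1.0}]          # law of D_0(p) = delta_1
for _ in range(8):
    laws.append(step(laws[-1], p))

for n in range(9):
    print(n, mean(laws[n]))
```

The sequence $\frac{1}{n}\log {{\mathbb{E}}}[D_n(p)]$ computed this way gives a (slowly converging) numerical handle on $\alpha(p)$.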
Theorem 1. Let
$(D_n(p))_{n\in \mathbb N}$
satisfy (1.1) with
$D_0(p) = 1$
. Let
$\alpha({\cdot})$
be as in (1.2). Then
\begin{equation} \lim_{\varepsilon\to 0^+} \frac{\alpha\Big(\frac12 + \varepsilon\Big)}{\sqrt{\varepsilon}} = \frac{\pi}{\sqrt{6}}. \end{equation}
We have not been able to prove a convergence of
$\frac{1}{n}\log D_n$
when
$p\in \left(\frac12, 1\right)$
. However, we prove the following theorem, which provides a partial answer as
$p\downarrow p_c = \frac12$
.
Theorem 2. Let
$(D_n(p))_{n\in \mathbb N}$
satisfy (1.1) with
$D_0(p) = 1$
. Then there exists a function
$\widetilde{\alpha}\,:\, [\frac{1}{2}, 1] \to [0, \log 2]$
such that
$\lim_{{\varepsilon}\to 0^+} \widetilde{\alpha}(p_c + {\varepsilon}) / \sqrt{{\varepsilon}}= \pi / \sqrt{6}$
and
${{\mathbb P}}$
-a.s.
Thus,
${{\mathbb P}}$
-a.s.
Proof. See Section 5.
1.2. Existing results and other problems
Considering
${\mathtt{Graph}}_n(p)$
as an electric network (as in [Reference Doyle and Snell10] or [Reference Lyons and Peres20]) and assigning a unit resistance to each edge, the effective resistance
$R_n(p)$
of
${\mathtt{Graph}}_n(p)$
from
$\mathtt{a}$
to
$\mathtt{z}$
is such that
$R_0(p)=1$
and that for each
$n\ge 0$
,
\begin{equation} R_{n+1}(p) \,{\buildrel \textrm{law} \over =}\, \mathscr{E}_n \big( R_n(p) + \widehat{R}_n(p) \big) + (1- \mathscr{E}_n)\, \frac{R_n(p)\, \widehat{R}_n(p)}{R_n(p) + \widehat{R}_n(p)} , \end{equation}
where
$\widehat{R}_n(p)$
denotes an independent copy of
$R_n(p)$
. It is known [Reference Hambly and Jordan14] that
${{\mathbb{E}}} [R_n(p)] \to \infty$
for
$p>\frac12$
and
${{\mathbb{E}}} [R_n(p)] \to 0$
for
$p<\frac12$
. As such, the effective resistance has a phase transition at
$p=\frac12$
, as for the graph distance (Hambly and Jordan [Reference Hambly and Jordan14] proved that
$p=\frac12$
is also critical for the Cheeger constant of
${\mathtt{Graph}}_n(p)$
, though we study neither the resistance nor the Cheeger constant in this article). On the other hand, it is a straightforward observation that at
$p=p_c$
,
$R_n(p_c)$
has the same distribution as
$1/R_n(p_c)$
. Hambly and Jordan [Reference Hambly and Jordan14] predicted that for all
$y\ge x>0$
,
Even though (1.6) looks quite plausible, no rigorous proof has been made available yet: it remains an open problem. Let us also mention that a more quantitative prediction has been made by Addario-Berry et al. [Reference Addario-Berry, Cairns, Devroye, Kerriou and Mitchell2] (see also [Reference Addario-Berry, Beckman and Lin1]): at
$p= p_c$
,
$\frac{|\log R_n(p_c)|}{c\, n^{1/3}}$
would converge weakly to a (shifted) beta distribution, with an appropriate but unknown constant
$c\in (0, \, \infty)$
.
For all
$p\in [0, \, 1]$
, Hambly and Jordan [Reference Hambly and Jordan14] also conjectured the existence of a constant
$\theta(p) \in \mathbb{R}$
such that
$\frac{1}{n}\log R_n (p) \to \theta(p)$
in probability.
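The resistance recursion is just as easy to simulate as (1.1): in series, resistances add; in parallel, they combine as $R\widehat{R}/(R+\widehat{R})$. A small Monte Carlo sketch (names and parameters ours, cost again exponential in n):

```python
import random

def sample_R(n, p, rng=random):
    """Sample the effective resistance R_n(p) with unit resistance per edge:
    series resistances add, parallel ones combine harmonically."""
    if n == 0:
        return 1.0
    r1, r2 = sample_R(n - 1, p, rng), sample_R(n - 1, p, rng)
    return r1 + r2 if rng.random() < p else r1 * r2 / (r1 + r2)

random.seed(1)
print(sum(sample_R(8, 0.5) for _ in range(100)) / 100)
```

At $p = p_c$ one can check empirically that $\log R_n$ is roughly symmetric around 0, consistent with $R_n(p_c)$ having the same law as $1/R_n(p_c)$; the deterministic extremes ($p=1$ gives $R_n = 2^n$, $p=0$ gives $R_n = 2^{-n}$) serve as sanity checks.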
Hambly and Jordan [Reference Hambly and Jordan14] also studied the first passage percolation problem on
${\mathtt{Graph}}_n(p)$
, which amounts to studying the same recursion (1.1) as for the graph distance
$D_n(p)$
, but with a more general initial distribution instead of
$D_0(p)=1$
. As far as the distance
$D_n(p)$
is concerned, we mention that weak convergence for
$D_n(p)$
at criticality
$p=p_c \;:\!=\;\frac12$
has been investigated by Auffinger and Cable [Reference Auffinger and Cable4]. Let us also mention that the hierarchical structure in
${\mathtt{Graph}}_n(p)$
turned out to be very convenient for the study of the ‘ant problem’ for reinforcement [Reference Kious, Mailler and Schapira18].
The rest of the article is structured as follows. In Section 2, we present some heuristics leading to Theorem 1, and give an outline of the proof. The lower and upper bounds in Theorem 1 are proved in Sections 3 and 4, respectively.
2. Heuristics and Description of the Approach
The aim of this short section is twofold: the first part describes the heuristics that led us to the conclusion of Theorem 1, and the second part provides an outline of the proof of Theorem 1.
2.1. Heuristics for Theorem 1
Let
$p\in (0, \, 1)$
Write
$$a_n(k) \;:\!=\; {{\mathbb P}} \big( D_n(p) = k \big) , \qquad k\geq 1 .$$
By (1.1), we get, for
$k\ge 1$
,
\begin{equation} a_{n+1} (k)=p \sum_{1\leq i<k} a_n(i) a_n(k-i) + (1-p) \Bigg( 2a_n(k) \Bigg( 1-\sum_{1\leq i<k} a_n(i) \Bigg) - a_n(k)^2 \Bigg) . \end{equation}
Let
$p=p_c+\varepsilon = \frac12 + \varepsilon$
and assume that for large n and for k such that
$\log k = \mathcal{O} (\sqrt{n})$
,
$a_n(k)$
takes the following scaling form:
with an appropriate bivariate function
$(t, \, x) \mapsto f(t, \, x)$
. This scaling was already used by Auffinger and Cable [Reference Auffinger and Cable4] in the case
$p=p_c$
. Using the fact that
we show that (2.1) implies heuristically the following equation (the details of the derivation of (2.3) from (2.1) are given in Appendix A)
where
$$K \;:\!=\; \int_0^1 \frac{\log \left(\frac{1}{1-u}\right)}{u} \, \textrm{d} u = \frac{\pi^2}{6} .$$
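The constant K can be checked numerically. Expanding $\log\frac{1}{1-u} = \sum_{k\ge 1} u^k/k$ term by term gives $\int_0^r \frac{\log(1/(1-u))}{u}\,\textrm{d}u = \sum_{k\ge 1} r^k/k^2$, which at $r=1$ is $\zeta(2) = \pi^2/6$. A small sketch (helper names ours) compares a midpoint-rule evaluation with the series:

```python
import math

def integral(r, steps=200000):
    """Midpoint rule for int_0^r log(1/(1-u))/u du; the integrand tends to 1 as u -> 0."""
    h = r / steps
    total = 0.0
    for i in range(steps):
        u = (i + 0.5) * h
        total += -math.log(1.0 - u) / u
    return total * h

def kappa(r, terms=4000):
    """Partial sum of sum_{k>=1} r^k / k^2 (the dilogarithm series)."""
    return sum(r ** k / k ** 2 for k in range(1, terms + 1))

print(integral(0.9), kappa(0.9))               # the two sides agree
print(kappa(1.0, 500000), math.pi ** 2 / 6)    # K = zeta(2) = pi^2 / 6
```

The same series identity reappears as the function $\kappa(r)$ in Lemma 3 below.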
For
$\varepsilon=0$
one can solve (2.3) exactly for an arbitrary initial condition,
$F_0(x) \equiv f(t_0,x)$
at time
$t_0>0$
. It is in fact easy to write the solution in a parametric form at arbitrary time
$t \ge t_0$
:
\begin{equation}f(t, \, x)= \sqrt{\frac{t}{t_0}} F_0 (y) ; \quad x= K \left( \sqrt{\frac{t}{t_0}} -\sqrt{\frac{t_0}{t}} \right) F_0(y) + y \sqrt{\frac{t_0}{t}} .\end{equation}
If one writes
\begin{equation}\textrm{d} x = \Bigg[ K \Bigg(\sqrt{\frac{t}{t_0}} -\sqrt{\frac{t_0}{t}}\, \Bigg) F_0'(y) + \sqrt{\frac{t_0}{t}} \Bigg] \, \textrm{d} y ,\end{equation}
and if
$F_0(y)$
vanishes at the boundaries, then one can see that
$$\int f(t, \, x) \, \textrm{d} x = \int F_0 (y) \, \textrm{d} y ,$$
so that the normalization of
$F_0$
is conserved. However, after a finite time
$t^*>t_0$
, given by the first time for which
${\textrm{d} x}/{\textrm{d} y}$
as in (2.5) vanishes for some y, i.e.
$$t^*= t_0 \Bigg[ 1- \frac1K \max_y \frac{1}{F_0'(y)} \Bigg] ,$$
the solution f of (2.4) becomes multi-valued (see Figure 3 below). Then the solution is still given by (2.4) for some ranges of x, with one or several shocks as on the right of Figure 3. In the example of Figure 3, if
$f_a(t,x) > f_b(t,x) > f_c(t,x)$
are the three expressions of f(t, x) given by the parametric form (2.4) in the region
$(x_1(t), \, x_2(t))$
where
$f(t, \, x)$
is multi-valued, the true solution is
$$ f(t,x) = \begin{cases}f_a(t,x) & \quad\text{for $x_1< x < x_*$,}\\[5pt] f_c(t,x) & \quad\text{for $x_* < x <x_2$,}\end{cases}$$
with
$x_*(t)$
determined by
With this choice of
$x_*(t)$
the normalization (2.6) of f is preserved.

Figure 3. On the left, solution
$f(t,\, x)$
using the parametric form (2.4) at times
$t_0$
,
$3 t_0$
,
$5 t_0$
, and
$7t_0$
for
$F_0(z)= z \, {\textrm{e}}^{-z}$
. The same with the shock on the right.
In the long-time limit, one obtains in this way from (2.4) that
\begin{equation} f(t,x) = \begin{cases}\frac{x}{K} & \quad \text{for } 0 < x < x_*(\infty),\\[5pt] 0 & \quad \text{for } x \ge x_*(\infty),\end{cases} \quad \text{with} \ x_*(\infty) \;:\!=\; \sqrt{2 K}. \end{equation}
For
$\varepsilon=0$
, this is in agreement with the description in [Reference Auffinger and Cable4].
For
$\varepsilon \ne 0$
, we did not succeed in solving (2.3) for an arbitrary initial condition. However, if one starts at
$t=0$
with the
$\varepsilon=0$
solution (2.7), one can solve (2.3) explicitly:
\begin{equation} f(t, \, x) = \frac{1}{{\textrm{e}}^{2\varepsilon t}-1} \sqrt{\frac{\varepsilon t}{K}} \sinh \Bigg( 2x \sqrt{\frac{\varepsilon t}{K}} \, \Bigg) . \end{equation}
Since
$\sum_{k=1}^\infty a_n(k)=1$
for all n, the function
$x\mapsto f(t, \, x)$
is a probability density function for all t. By analogy with the case
$\varepsilon=0$
, we assume that (2.8) is valid only for
$x\in [0, \, x_*(t)]$
, where
$x_*(t)$
is such that
$$\int_0^{x_*(t)} \frac{1}{{\textrm{e}}^{2\varepsilon t}-1} \sqrt{\frac{\varepsilon t}{K}} \sinh \Bigg( 2x \sqrt{\frac{\varepsilon t}{K}} \, \Bigg) \, \textrm{d} x = 1$$
and we take
$f(t, \, x) =0$
for
$x> x_*(t)$
. When
$t\to \infty$
, we have
This implies that for
$n\to \infty$
,
Consequently,
which leads to the statement of Theorem 1.
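The end of this heuristic can be probed numerically. By (2.8), $\int_0^x f(t,y)\,\textrm{d}y = \frac{\cosh(2x\sqrt{\varepsilon t/K}) - 1}{2(\textrm{e}^{2\varepsilon t}-1)}$, so $x_*(t)$ solves a one-dimensional equation. The sketch below (helper names ours; t is treated as a continuous stand-in for n) finds $x_*(t)$ by bisection and checks that $x_*(t)/\sqrt{K\varepsilon t} \to 1$, consistent, under our reading of the scaling, with the growth rate $\alpha(\frac12+\varepsilon) \approx \sqrt{K\varepsilon} = \pi\sqrt{\varepsilon/6}$ of Theorem 1:

```python
import math

K = math.pi ** 2 / 6

def cdf(x, t, eps):
    """int_0^x f(t, y) dy for f as in (2.8), in closed form."""
    s = math.sqrt(eps * t / K)
    return (math.cosh(2.0 * x * s) - 1.0) / (2.0 * (math.exp(2.0 * eps * t) - 1.0))

def x_star(t, eps):
    """Bisection for the front x_*(t) defined by cdf(x_*, t, eps) = 1."""
    lo, hi = 0.0, 1.0
    while cdf(hi, t, eps) < 1.0:
        hi *= 2.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if cdf(mid, t, eps) < 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

eps = 0.02
for t in (10.0, 100.0, 1000.0):
    print(t, x_star(t, eps) / math.sqrt(K * eps * t))  # ratio slowly approaches 1
```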
2.2. Outline of the proof of Theorem 1
It does not seem obvious how to turn the heuristics described in Section 2.1 directly into a rigorous argument. Our proof goes along different lines, even though the ideas are guided by the heuristics.
To prove Theorem 1, we study the following dynamical system on the set
$\mathcal{M}_1$
of all probability measures on
$\mathbb{R}_+^* \;:\!=\; (0, \, \infty)$
; we use the notation
$a\wedge b \;:\!=\; \min\{ a, \, b\}$
. Let
$p\in [0, \, 1]$
; for all
$\mu \in \mathcal{M}_1$
, we define the probability measure
$\Psi_{ p}(\mu)$
as follows:
\begin{equation} \Psi_{ p}(\mu) \;:\!=\; \text{the law of } \; \mathscr{E} \, \big( X + \widehat{X} \, \big) + (1- \mathscr{E}) \, \big( X \wedge \widehat{X} \, \big) , \end{equation}
where X,
$\widehat{X}$
, and
$\mathscr E$
are independent, X and
$\widehat{X}$
have law
$\mu$
, and
$\mathscr E$
is a Bernoulli random variable with parameter p (all the random variables mentioned in this article are defined on the same probability space
$(\Omega, \mathscr F, {{\mathbb P}})$
that is assumed to be sufficiently rich to carry as many independent random variables as needed). We observe that if
$\mu_n$
stands for the law of
$D_n(p)$
, then
$\mu_{n+1} = \Psi_{ p} (\mu_n) $
and
$\mu_0= \delta_1$
.
Let us briefly mention two basic properties of
$\Psi_{ p}$
. First,
$\Psi_{ p}$
is homogeneous; namely,
\begin{equation} \Psi_{ p} \big( M_\theta (\mu) \big) = M_\theta \big( \Psi_{ p} (\mu) \big) , \end{equation}
where, for all
$\theta\in \mathbb{R}_+^*$
and for any random variable X with law
$\mu \in \mathcal{M}_1$
,
$M_\theta (\mu)$
denotes the law of
$\theta X$
.
We next observe that
$\Psi_{ p}$
preserves the stochastic order
${\buildrel {\textrm st} \over \le} $
on
$\mathcal{M}_1$
, which is defined as follows:
$\mu {\buildrel {\textrm st} \over \le} \nu$
if
$\mu ((x, \infty)) \leq \nu ((x, \infty))$
for all
$x\in \mathbb{R}_+^*$
(which is equivalent to the existence of two random variables X and Y whose laws are respectively
$\mu$
and
$\nu$
and such that
$X \leq Y$
,
${{\mathbb P}}$
-a.s.). Indeed, for all
$\mu, \nu \in \mathcal{M}_1$
,
\begin{equation} \mu \, {\buildrel {\textrm st} \over \le} \, \nu \ \Longrightarrow \ \Psi_{ p} (\mu) \, {\buildrel {\textrm st} \over \le} \, \Psi_{ p} (\nu) . \end{equation}
In the rest of the article, it will sometimes be convenient to abuse notation slightly by writing
$X{\buildrel {\textrm st} \over \le}Y$
for any pair of random variables X and Y to mean that the law of X is stochastically less than that of Y.
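Both properties can be verified exactly on finite test measures. In the sketch below (helper names ours), a pmf is a dict and `psi` computes the exact push-forward, the law of $\mathscr{E}(X+\widehat{X}) + (1-\mathscr{E})(X\wedge\widehat{X})$, with rational arithmetic:

```python
from fractions import Fraction as F

def psi(pmf, p):
    """Exact push-forward of a finite pmf under Psi_p."""
    out = {}
    for x, qx in pmf.items():
        for y, qy in pmf.items():
            out[x + y] = out.get(x + y, 0) + p * qx * qy          # series branch
            m = min(x, y)
            out[m] = out.get(m, 0) + (1 - p) * qx * qy            # parallel branch
    return out

def tail(pmf, x):
    return sum(q for k, q in pmf.items() if k > x)

p = F(1, 2) + F(1, 10)
mu = {F(1): F(1, 2), F(3): F(1, 2)}
nu = {F(2): F(1, 2), F(4): F(1, 2)}   # mu <=_st nu (couple 1 -> 2, 3 -> 4)

# homogeneity: scaling by theta commutes with Psi_p
theta = F(5)
scaled_then_psi = psi({theta * k: q for k, q in mu.items()}, p)
psi_then_scaled = {theta * k: q for k, q in psi(mu, p).items()}
print(scaled_then_psi == psi_then_scaled)

# monotonicity: the tails stay ordered after applying Psi_p
pts = sorted(set(psi(mu, p)) | set(psi(nu, p)))
print(all(tail(psi(mu, p), x) <= tail(psi(nu, p), x) for x in pts))
```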
2.2.1. Strategy for the lower bound in Theorem 1
We fix
$p=\frac12+\varepsilon$
and denote by
$\mu_n$
the law of
$D_n(p)$
as defined in (1.1). As already mentioned,
$\mu_{n+1}= \Psi_{ p} (\mu_n)$
for all integers
$n\geq 0$
and
$\mu_0 = \delta_1$
. Let
$\theta$
,
$\beta \in \mathbb{R}_+^*$
, and
$n_0\in \mathbb N^*$
. Suppose that we are able to find an
$\mathcal{M}_1$
-valued sequence
$(\nu_n)_{n\geq n_0}$
such that for
$n\ge n_0$
,
\begin{equation} M_\theta (\nu_{n_0}) \, {\buildrel {\textrm st} \over \le} \, \mu_{n_0} \qquad \text{and} \qquad M_\beta (\nu_{n+1}) \, {\buildrel {\textrm st} \over \le} \, \Psi_{ p} (\nu_{n}) . \end{equation}
We call the
$\nu_n$
the lower bound laws. For all integers
$n\geq n_0$
, let
$Y_n$
and
$U_n$
be random variables with respective laws
$\nu_n$
and
$\Psi_{ p}(\nu_{n} )$
. Then (2.12) can be rewritten as
$\theta Y_{n_0} {\buildrel {\textrm st} \over \le} D_{n_0}(p)$
and
$\beta Y_{n+1} {\buildrel {\textrm st} \over \le}U_n$
. By (2.10) and (2.11), for all
$n\ge n_0$
,
in other words,
$\theta \beta^{n-n_0} Y_{n} \, {\buildrel {\textrm st} \over \le} \, D_n(p)$
, which, in turn, implies that
\begin{equation} {{\mathbb{E}}} [D_n(p)] \geq \theta \beta^{n-n_0} \, {{\mathbb{E}}} [Y_n] . \end{equation}
Here,
$\beta$
and
${{\mathbb{E}}} [Y_n]$
turn out to be sufficiently explicit in terms of n and
$\varepsilon$
to provide a good lower bound for
${{\mathbb{E}}} [D_n(p)]$
and, thus, for
$\alpha(\tfrac12 + \varepsilon) \;:\!=\; \lim_{n\to \infty} \tfrac1n \log {{\mathbb{E}}} [D_n(p)]$
. To find appropriate lower bound laws
$\nu_n$
(i.e. laws such that (2.12) holds), we are guided by the heuristics described in Section 2.1.
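Spelled out, the induction behind this chain of comparisons uses only the homogeneity and the monotonicity of $\Psi_p$ recalled above (a sketch; the starting point is $M_\theta(\nu_{n_0}) \,{\buildrel \textrm{st} \over \le}\, \mu_{n_0}$):

```latex
M_{\theta\beta^{\,n+1-n_0}}(\nu_{n+1})
  = M_{\theta\beta^{\,n-n_0}}\bigl(M_{\beta}(\nu_{n+1})\bigr)
  \,{\buildrel \textrm{st} \over \le}\, M_{\theta\beta^{\,n-n_0}}\bigl(\Psi_{p}(\nu_{n})\bigr)
  = \Psi_{p}\bigl(M_{\theta\beta^{\,n-n_0}}(\nu_{n})\bigr)
  \,{\buildrel \textrm{st} \over \le}\, \Psi_{p}(\mu_{n}) = \mu_{n+1} .
```

The first comparison uses $M_\beta(\nu_{n+1}) \,{\buildrel \textrm{st} \over \le}\, \Psi_p(\nu_n)$ together with the fact that scaling by a positive constant preserves the stochastic order; the second uses the monotonicity of $\Psi_p$ and the induction hypothesis $M_{\theta\beta^{n-n_0}}(\nu_n) \,{\buildrel \textrm{st} \over \le}\, \mu_n$. Taking expectations then gives ${{\mathbb{E}}}[D_n(p)] \ge \theta\beta^{n-n_0}\, {{\mathbb{E}}}[Y_n]$.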
Remark 1. In the proof of the lower bound in Theorem 1 in Section 3, we need to fix several parameters in (the law of) the random variable
$Y_n$
introduced in the strategy in the previous paragraph. Even though we are guided by the heuristics in Section 2.1, the choice of these parameters is not obvious, and is in fact rather delicate, resulting from a painful procedure of adjustment. A similar remark applies to the proof of the upper bound in Theorem 1.
2.2.2. Strategy for the upper bound in Theorem 1
The strategy is identical, except that the values of the parameters
$n_0$
,
$\beta$
and
$\theta$
, are different: suppose we are able to find an
$\mathcal{M}_1$
-valued sequence
$(\nu^\prime_n)_{n\geq n_0}$
such that for
$n\ge n_0$
,
\begin{equation} \mu_{n_0} \, {\buildrel {\textrm st} \over \le} \, M_\theta (\nu^\prime_{n_0}) \qquad \text{and} \qquad \Psi_{ p} (\nu^\prime_{n}) \, {\buildrel {\textrm st} \over \le} \, M_\beta (\nu^\prime_{n+1}) ; \end{equation}
we call
$\nu'_n$
the upper bound laws. For all integers
$n\geq n_0$
, let
$Z_n$
and
$V_n$
be random variables with respective laws
$\nu^\prime_n$
and
$\Psi_{ p}(\nu^\prime_{n} )$
. By (2.10) and (2.11), for all
$n\ge n_0$
,
i.e.
$D_n (p) \, {\buildrel {\textrm st} \over \le} \, \theta \beta^{n-n_0} Z_{n}$
, which, in turn, implies that
\begin{equation} {{\mathbb{E}}} [D_n(p)] \leq \theta \beta^{n-n_0} \, {{\mathbb{E}}} [Z_n] . \end{equation}
As in the lower bound,
$\beta$
and
${{\mathbb{E}}} [Z_n]$
will be sufficiently explicit in terms of n and
$\varepsilon$
to provide an upper bound for
${{\mathbb{E}}} [D_n(p)]$
and, thus, for
$\alpha(\frac12 + \varepsilon)$
. To find appropriate upper bound laws
$\nu^\prime_n$
(i.e. laws such that (2.14) holds), again we follow the heuristics described in Section 2.1.
3. Proof of Theorem 1: The Lower Bound
3.1. Definition of the lower bound laws
In this section we construct lower bound laws, i.e. an
$\mathcal{M}_1$
-valued sequence
$(\nu_n)_{n\geq 1}$
satisfying (2.12).
First note that the polynomial function
$P(\eta) = (1 - \eta)(1+\eta)^2$
is such that
$P(0) = 1$
and that
$P^\prime (0) = 1>0$
. Thus, there exists
$\eta_0\in (0, 1)$
such that
\begin{equation} P(\eta) > 1 \qquad \text{for all } \eta \in (0, \, \eta_0) . \end{equation}
We recall that $\zeta(2) \;:\!=\; \sum_{k\geq 1} \frac{1}{k^2} = \frac{\pi^2}{6}$.
Let
$\varepsilon\in (0, \, \tfrac12)$
. We set, for
$n\in \mathbb N$
,
\begin{equation} \delta \;:\!=\; \frac{2 (1+\eta)}{\sqrt{\zeta(2)}} \, \sqrt{\varepsilon} \qquad \text{and} \qquad a_n \;:\!=\; \frac{1}{4} \big( 1 - 2 (1 - \widetilde{\eta}) \varepsilon \big)^{n} , \quad \text{where } \widetilde{\eta} \;:\!=\; \frac{\eta \delta^2}{2\varepsilon} . \end{equation}
The following increasing function
$\varphi_\delta\,: \, [1, \, \infty) \to \mathbb{R}_+$
plays a key role in the rest of the paper:
\begin{equation} \varphi_{\delta} (x) \;:\!=\; \cosh (\delta \log x) - 1 , \qquad x \in [1, \, \infty) . \end{equation}
In the following lemma, we introduce the laws
$\nu_n$
and provide basic properties of these laws. They are shown to satisfy (2.12) in Lemma 2, the proof of which is the main technical step of the proof of the lower bound in Theorem 1. Let us mention that the notation
$\mathcal O_{ \eta, \varepsilon} (a_n)$
denotes an expression which, for fixed
$(\eta, \, \varepsilon)$
, is
$\mathcal O (a_n)$
. The remark also applies to forthcoming expressions such as
$\mathcal O_{ \eta} ({\cdot})$
or
$\mathcal O_{ \delta} ({\cdot})$
.
Lemma 1. Let
$\eta\in (0, \, \eta_0)$
and let
$\varepsilon\in \left(0, \, \frac12\right)$
. Let
$\delta$
and
$(a_n)_{n\in \mathbb N}$
be defined in (3.2). Recall the function
$\varphi_\delta$
from (3.3). For all
$n\in \mathbb N$
, let
$\lambda_n \in (1, \, \infty)$
be the unique real number such that
$2a_n \varphi_{\delta} (\lambda_n)= 1$
and let
$\nu_n = \nu_n(\eta, \, \varepsilon)$
be the unique element of
$\mathcal{M}_1$
such that
\begin{equation} \nu_n \big( (0, \, y] \big) = 2 a_n \varphi_{\delta} (y \wedge \lambda_n) , \qquad y \in [1, \, \infty) . \end{equation}
Then the following hold.
-
(i) We have
$\lambda_n = a_n^{-1/\delta} \big(1+ \mathcal O_{ \eta, \varepsilon} (a_n) \big)$
as
$n\to \infty$
. -
(ii) We have
$\lim_{n\rightarrow \infty} 2a_n \varphi_{\delta} (\eta^{1/\delta} \lambda_n) = \eta$
. -
(iii) There exists
$\epsilon_1(\eta)\in \left(0, \, \frac12\right)$
such that for all
$\varepsilon \in (0, \epsilon_1(\eta))$
there is
$n_1 (\eta, \varepsilon)\in \mathbb N$
satisfying
$\lambda_{n+1} - \lambda_n + 1 \leq \delta \lambda_n$
for all
$n\geq n_1(\eta, \varepsilon)$
. -
(iv) Let
$Y_n$
be a random variable with law
$\nu_n$
. Then (3.5)
\begin{equation} {{\mathbb{E}}} [Y_n]= \delta a_n \left(\frac{\lambda_n^{1+\delta}-1}{1+\delta}-\frac{\lambda_n^{1-\delta}-1}{1-\delta} \right) .\end{equation}
Furthermore,
Proof. Recall that the inverse on
$\mathbb{R}_+$
of
$\cosh$
is the function
$\textrm{arcosh} (y)= \log (y+ \sqrt{y^2 - 1} )$
,
$y\in [1, \infty)$
and that
$\textrm{arcosh} (\frac{y}{2})= \log y -y^{-2} + \mathcal O (y^{-4})$
as
$y\to \infty$
. Thus, we get
\begin{align*} \delta \log \lambda_n &= \textrm{arcosh} \bigg( 1 + \frac{1}{2a_n}\bigg)= \log \bigg(\frac{1+2a_n}{a_n} \bigg) -\frac{a^2_n}{(1+2a_n)^2} + \mathcal O \bigg( \frac{a^4_n}{(1+2a_n)^4}\bigg)\\[5pt]&= \log \frac{1}{a_n} + \mathcal O_{ \eta, \varepsilon} (a_n) ,\end{align*}
which immediately implies (i). Since
$\varphi_{\delta} (x) \sim_{x\to \infty} \frac{1}{2} x^{\delta}$
, we get
$2a_n \varphi_{\delta} (\eta^{1/\delta} \lambda_n) \sim_{n\to \infty} \eta a_n \lambda_n^{\delta} $
which tends to
$\eta$
as
$n\to \infty$
. This proves (ii).
Let us prove (iii). Recall here that
$\eta\in (0, \eta_0)$
is fixed. To simplify notation, we set
$\rho \,:\!=\, (1 - 2(1 - \widetilde{\eta}) \varepsilon)^{-1} > 1$
, so that
$a_n = \frac{1}{4}\rho^{-n}$
. It follows from (i) that
We also set
$c = 2\zeta(2)^{-1/2}$
; thus,
$\delta = c (1+ \eta) \sqrt{\varepsilon}$
. We observe that
Therefore,
and there exists
$\epsilon_1(\eta) \in (0, 1/2)$
such that for all
$\varepsilon \in (0, \, \epsilon_1(\eta))$
,
$\rho^{1/\delta} - 1 < \delta $
, which, combined with (3.7), yields (iii).
Let us prove (iv). Observe that
\begin{eqnarray*}{{\mathbb{E}}} [Y_n] &\,=\,& \int_1^{\lambda_n} 2a_n \varphi^\prime_{\delta} (x) x\, \textrm{d} x= 2a_n \delta \int_1^{\lambda_n} \sinh (\delta \log x) \, \textrm{d} x \\[5pt] &\,=\,& 2a_n \delta \int_0^{\log \lambda_n } \sinh (\delta u) {\textrm{e}}^u \, \textrm{d} u = a_n \delta \int_0^{\log \lambda_n } \big( {\textrm{e}}^{u (1+ \delta) } - {\textrm{e}}^{u(1-\delta)} \big)\, \textrm{d} u ,\end{eqnarray*}
which implies (3.5). By (i), we thus get
${{\mathbb{E}}} [Y_n] \sim_{n\to \infty} \frac{\delta}{1+ \delta} \, a_n^{-1/\delta}$
. Since
$a_n = \frac{1}{4}\rho^{-n}$
with
$\rho = (1 - 2(1 - \widetilde{\eta}) \varepsilon)^{-1}$
, this implies that
$\lim_{n\to \infty} n^{-1}\log {{\mathbb{E}}} [Y_n] = \delta^{-1}\log \rho$
, which readily yields (3.6) by means of (3.8).
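Lemma 1 is easy to probe numerically: $\lambda_n$ has the closed form $\exp\big(\textrm{arcosh}(1 + \frac{1}{2a_n})/\delta\big)$, and both the defining relation and formula (3.5) can be checked against direct evaluation. A sketch with our own helper names and arbitrary illustrative values of $\delta$ and $a_n$:

```python
import math

def phi(x, d):
    """varphi_delta(x) = cosh(delta * log x) - 1."""
    return math.cosh(d * math.log(x)) - 1.0

def lam(a, d):
    """The unique lambda > 1 with 2 a varphi_delta(lambda) = 1."""
    return math.exp(math.acosh(1.0 + 1.0 / (2.0 * a)) / d)

def mean_closed(a, d):
    """Formula (3.5) for E[Y_n]."""
    L = lam(a, d)
    return d * a * ((L ** (1 + d) - 1) / (1 + d) - (L ** (1 - d) - 1) / (1 - d))

def mean_numeric(a, d, steps=200000):
    """Midpoint rule for E[Y_n] = int_1^lambda 2 a delta sinh(delta log y) dy."""
    L = lam(a, d)
    h = (L - 1.0) / steps
    return sum(2.0 * a * d * math.sinh(d * math.log(1.0 + (i + 0.5) * h))
               for i in range(steps)) * h

d = 0.4
for n in (20, 60, 100):
    a = 0.25 * (1.0 - 0.05) ** n          # a_n with 2(1 - eta~)eps = 0.05
    print(n, lam(a, d) * a ** (1.0 / d))  # part (i): the ratio tends to 1
```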
The following lemma asserts that the laws
$\nu_n$
defined in Lemma 1 satisfy the right-hand side of (2.12) with
$\beta= 1$
.
Lemma 2. Let
$\eta\in (0, \, \eta_0)$
and let
$\varepsilon\in (0, \, \frac12)$
. Let
$\delta$
and
$(a_n)_{n\in \mathbb N}$
be as in (3.2) and let
$\nu_n$
be defined by (3.4). For all
$n\in \mathbb N$
, we denote by
$Y_n$
and
$\widehat{Y}_n$
two independent random variables with common law
$\nu_n$
. Then there exists
$\varepsilon_{2}(\eta) \in (0, \, \frac12)$
such that for all
$\varepsilon\in (0, \, \varepsilon_{2}(\eta))$
, there is
$n_{2}(\eta, \, \varepsilon) \in \mathbb N$
such that for all integers
$n\geq n_{2} (\eta, \, \varepsilon)$
and all
$y\in \mathbb{R}_+^*$
,
Proof of the lower bound in Theorem 1. Let us admit Lemma 2 for the time being and prove that it implies the lower bound in Theorem 1.
Recall that
$p = \frac12 + \varepsilon$
, so (3.9) implies for all
$n\geq n_{2} (\eta, \varepsilon)$
that
$\nu_{n+1} ((y, \infty)) \leq \Psi_{ p} (\nu_n) ((y, \infty))$
for all
$y\in \mathbb{R}_+^*$
, i.e.
$\nu_{n+1} {\buildrel {\textrm st} \over \le} \Psi_{ p}(\nu_{n} ) $
. On the other hand, we note that a.s.
$Y_{n_2(\eta, \varepsilon)} \leq \lambda_{n_{2}(\eta, \varepsilon)}$
and
$1\leq D_{n_{2}(\eta, \varepsilon)} (p)$
. Thus, a.s.
$Y_{n_{2}(\eta, \varepsilon)}/\lambda_{n_{2}(\eta, \varepsilon)} \leq D_{n_{2}(\eta, \varepsilon)}(p)$
, which implies
$$M_\theta \big( \nu_{n_{2}(\eta, \varepsilon)} \big) \, {\buildrel {\textrm st} \over \le} \, \mu_{n_{2}(\eta, \varepsilon)} ,$$
where
$\theta \;:\!=\; 1/\lambda_{n_{2}(\eta, \varepsilon)}$
(as before,
$\mu_n$
stands for the law of
$D_n(p)$
). As such, the laws
$\nu_n$
satisfy (2.12) with
$n_0=n_{2}(\eta, \, \varepsilon)$
,
$\theta = 1/\lambda_{n_{2}(\eta, \varepsilon)}$
and
$\beta= 1$
. It follows from (2.13) that for all
$n\geq n_{2}(\eta, \varepsilon)$
,
${{\mathbb{E}}} [D_n(p)] \geq \theta\, {{\mathbb{E}}} [Y_n]$
, thus
$\alpha \big(\tfrac{1}{2} + \varepsilon \big)=\lim_{n\to \infty} (1/n) \log {{\mathbb{E}}} [D_n(p)]\geq \lim_{n\to \infty} (1/n) \log {{\mathbb{E}}} [Y_n]$
and we get
$\liminf_{\varepsilon \to 0^+} \alpha \big(\frac{1}{2} + \varepsilon \big)/ \sqrt{\varepsilon} \ge \sqrt{\zeta(2)}$
by (3.6) in Lemma 1.
3.2. Proof of Lemma 2
We fix
$\eta\in (0, \, \eta_0)$
and
$\varepsilon \in (0, \, \frac12)$
. Writing
$\textrm{LHS}_{(3.9)}$
for the expression on the left-hand side of (3.9), we need to check that
$\textrm{LHS}_{(3.9)} \ge 0$
for all large n, uniformly in
$y\in [1, \, \lambda_{n+1}]$
, because this inequality is obvious if
$y\in [0, 1)$
or
$y\in (\lambda_{n+1} , \infty)$
. Let
$ n_{3} (\eta, \varepsilon)$
be such that
$\eta^{1/\delta} \lambda_n \geq 2$
for all
$n\geq n_{3}(\eta, \varepsilon)$
. Thus, to prove (3.9), we suppose that
$n\geq n_{3} (\eta, \varepsilon)$
and we consider three situations:
For the sake of clarity, these situations are dealt with in three distinct parts.
Proof of Lemma 2: first case
$y\in [1, \, \eta^{1/\delta} \lambda_n]$
. In this case, we first note that
Therefore, writing
$F_n(y) \;:\!=\; {{\mathbb P}}(Y_n\le y)$
, we obtain

For
$y\in [1, \, \eta^{1/\delta} \lambda_n]$
, we have
$y\le \lambda_n < \lambda_{n+1}$
, thus
$F_n(y) = 2a_n \varphi_\delta (y\wedge \lambda_n) = 2a_n \varphi_\delta (y)$
, and
$F_{n+1}(y) = 2a_{n+1} \varphi_\delta (y) = (1 - 2\varepsilon + \eta \delta^2) 2a_n \varphi_\delta (y)$
. Accordingly,
Observe that
$\delta^2/(2\varepsilon) = 12 (1+\eta)^2/ \pi^2 >1$
, thus
$\frac{\eta \delta^2}{2\varepsilon} > \eta$
, which equals
$\lim_{n\rightarrow \infty} 2a_n \varphi_\delta (\eta^{1/\delta} \lambda_n)$
by Lemma 1(ii). Thus, there exists
$n_{4}(\eta, \varepsilon)\geq n_{3} (\eta, \varepsilon)$
such that for all
$n\geq n_{4}(\eta, \varepsilon)$
and all
$y\in [1, \eta^{1/\delta} \lambda_n ]$
, we have
which implies that
as desired.
Proof of Lemma 2: second case
$y\in [\eta^{1/\delta} \lambda_n, \, \lambda_n]$
. In this case and in the next case, the convolution term
$Y_n+ \widehat{Y_n}$
matters more specifically; we use the following lemma to get estimates for the law of
$Y_n+ \widehat{Y_n}$
. This is a key lemma where the constant
$\zeta(2)$
appears.
Lemma 3. We keep the previous notation and we recall, in particular,
$\delta$
from (3.2). Let
$r\in (0, 1]$
We define
$$\mathtt{cvl}_{\delta, r} (x) \;:\!=\; \int_1^{rx-1} \varphi^\prime_{\delta} (t) \big( \varphi_{\delta}(x) - \varphi_{\delta} (x - t) \big) \, \textrm{d} t , \qquad x \in \Big[ \frac{2}{r} , \, \infty \Big) .$$
For
$r \in (0, 1]$
, we also set
$\kappa(r) \;:\!=\; \sum_{k=1}^\infty \frac{r^k}{k^2} $
and for
$ x \in \big[\frac{2}{r} , \infty \big) $
,
$$\kappa_{\delta, r} (x) \;:\!=\; \kappa \Big( r - \frac{1}{x} \Big) - \frac{2}{x} - c_0 \, \delta \coth (\delta \log x) ,$$
with
Then, for
$r \in (0, 1]$
and
$x \in [{2}/{r} , \, \infty )$
,
\begin{equation} \kappa_{\delta, r} (x) \, \big( \delta \sinh (\delta \log x) \big)^2 \; \leq \; \mathtt{cvl}_{\delta, r} (x) \; \leq \; \kappa (r) \, \big( \delta \sinh (\delta \log x) \big)^2 . \end{equation}
Proof. We first prove the upper bound in (3.11). Let
$\widetilde{\varphi}_\delta(u) \;:\!=\; \varphi_\delta({\textrm{e}}^u) = \cosh(\delta u) - 1$
, for
$u \in \mathbb{R}_+$
. We fix
$r\in (0, 1]$
and
$ x \in [{2}/{r} , \infty ) $
. Since
$\widetilde{\varphi}_\delta$
is convex on
$\mathbb{R}_+$
, we get
$\widetilde{\varphi}_\delta( \log x) - \widetilde{\varphi}_\delta (\!\log (x - t)) \le (\!\log x - \log (x - t)) \widetilde{\varphi}_\delta'(\!\log x)$
for
$t\ge 0$
and
$x\ge t+1$
. Thus,
By definition,
$\widetilde{\varphi}{\,} '_{ \delta}(\!\log x) = \delta \sinh(\delta \log x)$
, and
$\varphi_\delta'(t) = \frac{\delta}{t} \sinh(\delta\log t) \le \frac{\delta}{t} \sinh(\delta\log x)$
if
$t \leq rx-1$
. Thus,
\begin{eqnarray*}\mathtt{cvl}_{\delta, r} (x) & \,\leq\, &\big( \delta \sinh(\delta \log x)\big)^2 \int_1^{rx-1} \frac{1 }{t} \log \frac{x}{x-t} \,\textrm{d} t\le \big( \delta \sinh(\delta \log x)\big)^2 \int_0^{rx} \frac{1 }{t} \log \frac{x}{x-t} \, \textrm{d} t \\[5pt] &\,=\,& \big( \delta \sinh(\delta \log x)\big)^2 \int_0^r \frac{\log \frac{1}{1 - u} }{u} \, \textrm{d} u.\end{eqnarray*}
By Fubini–Tonelli,
\begin{equation} \int_0^r \frac{\log \frac{1}{1 - u} }{u} \, \textrm{d} u = \sum_{k\ge 1} \int_0^r \frac1u \frac{u^k}{k} \,\textrm{d} u = \kappa (r) , \end{equation}
which yields the upper bound in (3.11).
We turn to the proof of the lower bound in (3.11). Since on
$\mathbb{R}_+$
, all the derivatives of
$\widetilde{\varphi}_\delta$
are positive,
$\widetilde{\varphi}_\delta$
and
$\widetilde{\varphi}_\delta'$
are convex which implies for all
$b \geq a \geq 0$
that
We suppose that
$1\leq t\leq rx-1$
. Taking
$b = \log x$
and
$a = \log (x - t)$
, we get that

where we have set

We first look at
$\textrm{RHS}^{(2)}_{(3.13)}$
. Since
$\varphi'_\delta(t) = \frac{\delta}{t} \sinh(\delta\log t) \le \frac{\delta}{t} \sinh(\delta\log x)$
, we have
We observe that
Therefore,
We now turn to
and look for a lower bound. We still assume that
$1\leq t\leq rx-1$
. Since the function
$\sinh$
is convex on
$\mathbb{R}_+$
, we have
$ \sinh(\delta\log t) \ge \sinh(\delta\log x) - \delta(\!\log x - \log t) \cosh(\delta \log x)$
. Therefore,
\begin{eqnarray*} &&\int_1^{rx-1} \delta \sinh(\delta \log t) \frac{1}{t} \big(\!\log \frac{x}{x-t} \big) \, \textrm{d} t \\[5pt] &\,\ge\,& \delta\int_1^{rx-1} \big(\!\sinh(\delta\log x) - \delta \left(\!\log \frac{x}{t}\right) \cosh(\delta \log x) \big) \frac{1}{t} \Big(\!\log \frac{x}{x-t} \Big) \, \textrm{d} t \\[5pt] &\,=\,& \delta \sinh(\delta\log x) \int_1^{rx-1} \frac{\log \frac{x}{x-t}}{t} \, \textrm{d} t - \delta^2 \cosh(\delta \log x) \int_1^{rx-1} \frac{ \big(\!\log \frac{x}{t} \big) \log \frac{x}{x-t}}{t} \, \textrm{d} t .\end{eqnarray*}
Let us look at the two integrals on the right-hand side. The second integral is easy to deal with:
The first integral is handled as follows: let
$r_1 \;:\!=\; r-\frac1x \in (0, \, r)$
. Then
$rx-1 = r_1x$
, so that
since
$\int_0^r \frac1u \log (1 - u)^{-1} \, \textrm{d} u = \kappa (r)$
by (3.12). Consequently,
This implies that

Now observe that
$ x\kappa \big(\frac1x \big) \leq \zeta(2) \leq 2$
. Together with (3.13) and (3.14), this yields the lower bound in (3.11) with
$c_0 = c_1+ c_2$
, and completes the proof of Lemma 3.
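As a sanity check, the upper bound of Lemma 3 can be tested numerically: with $\mathtt{cvl}_{\delta,r}$ the integral $\int_1^{rx-1}\varphi'_\delta(t)(\varphi_\delta(x)-\varphi_\delta(x-t))\,\textrm{d}t$ appearing in the proof above, and $\kappa(r) = \sum_{k\ge1} r^k/k^2$, the quantity $\mathtt{cvl}_{\delta,r}(x)$ indeed stays below $\kappa(r)(\delta\sinh(\delta\log x))^2$. A sketch (names and test values ours):

```python
import math

def phi(x, d):
    """varphi_delta(x) = cosh(delta * log x) - 1."""
    return math.cosh(d * math.log(x)) - 1.0

def dphi(t, d):
    """varphi_delta'(t) = (delta / t) sinh(delta * log t)."""
    return d * math.sinh(d * math.log(t)) / t

def cvl(x, d, r, steps=100000):
    """Midpoint rule for int_1^{rx-1} phi'(t) (phi(x) - phi(x - t)) dt."""
    a, b = 1.0, r * x - 1.0
    h = (b - a) / steps
    total = 0.0
    for i in range(steps):
        t = a + (i + 0.5) * h
        total += dphi(t, d) * (phi(x, d) - phi(x - t, d))
    return total * h

def kappa(r, terms=500):
    return sum(r ** k / k ** 2 for k in range(1, terms + 1))

d, r, x = 0.3, 0.8, 50.0
upper = kappa(r) * (d * math.sinh(d * math.log(x))) ** 2
print(cvl(x, d, r), upper)  # the integral stays below the bound
```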
Let us now proceed to the proof of (3.9) (i.e. Lemma 2) in the second case
$y\in (\eta^{1/\delta} \lambda_n, \, \lambda_n]$
. We write

where we have set
\begin{eqnarray*} \textrm{I}_n(y) & \;:\!=\; & F_n(y)^2 - {{\mathbb P}}(Y_n + \widehat{Y}_n \le y), \\[5pt] \textrm{II}_n(y) & \;:\!=\; & F_{n+1}(y) - F_n(y) \; \, \textrm{(which is negative),} \\[5pt] \textrm{III}_n(y) & \;:\!=\; & 2F_n(y) -F_n(y)^2 - {{\mathbb P}}(Y_n + \widehat{Y}_n \le y) .\end{eqnarray*}
We first look for a lower bound for
$\textrm{I}_n(y)$
. Note that
$F_n(y)^2 \ge F_n(y)\, F_n(y-1) = F_n(y)\, \int_1^{y-1} F'_n(t) \, \textrm{d} t$
and that
${{\mathbb P}}(Y_n + \widehat{Y}_n \le y) = \int_1^{y-1} F'_n(t) F_n(y-t) \, \textrm{d} t$
. Therefore,
\begin{align*} \textrm{I}_n(y)& \ge\int_1^{y-1} F'_n(t) \big( F_n(y) - F_n(y - t)\big) \, \textrm{d} t\\& = 4a_n^2 \int_1^{y-1} \varphi_\delta^\prime (t)\big( \varphi_\delta(y) - \varphi_\delta (y - t) \big) \, \textrm{d} t = 4a_n^2 \mathtt{cvl}_{\delta, 1} (y) ,\end{align*}
where
$\mathtt{cvl}_{\delta, 1} $
is defined in Lemma 3. By the first inequality in (3.11) of Lemma 3,
$\mathtt{cvl}_{\delta, 1}(y) \geq \kappa_{\delta, 1} (y) (\delta \sinh (\delta \log y))^2$
for
$y \in [2, \infty)$
. Since
$\sinh (\delta \log y) \geq \cosh (\delta \log y) - 1 = \varphi_{\delta} (y)$ for all $y\geq 1$, we have
$\mathtt{cvl}_{\delta, 1} (y) \geq \kappa_{\delta, 1} (y) \delta^2 \varphi_\delta (y)^2$
. Thus,
$\textrm{I}_n(y) \geq \kappa_{\delta, 1} (y) \delta^2 F_n (y)^2$
. Since the function
$\coth$
decreases to 1 and since
$ \kappa_{\delta, 1}$
is increasing on
$[2, \, \infty)$
, we get, for
$y\in [\eta^{1/\delta} \lambda_n, \, \lambda_n]$
,
\begin{align*} & \kappa_{\delta, 1}(y) \geq \kappa_{\delta, 1} (\eta^{1/\delta} \lambda_n)\\ &\quad = \kappa \bigg( 1 - \frac{1}{\eta^{1/\delta} \lambda_n} \bigg) -\frac{2}{\eta^{1/\delta} \lambda_n} -c_0 \delta \coth (\delta \log (\eta^{1/\delta} \lambda_n)) \underset{n\to \infty}{-\!\!\! -\!\!\! -\!\!\!\longrightarrow} \zeta(2) -c_0\delta ,\end{align*}
because
$\kappa(1)=\zeta(2)$
. By definition,
$\delta = 2\zeta(2)^{-1/2}(1+\eta)\sqrt{\varepsilon}$
, thus
Let
$\varepsilon_{3} (\eta) \in (0,\, \frac12)$
be such that for
$\varepsilon \in (0, \,\varepsilon_{3} (\eta))$
,
Therefore, there exists an integer
$n_{5} (\eta, \varepsilon) \geq n_{4} (\eta, \varepsilon) $
such that
On the other hand, since
$a_{n+1}/a_n = 1-2 {\varepsilon} (1-\widetilde{\eta} )$
, we get
\begin{eqnarray*} \textrm{I I}_n(y) & \,=\, & 2(a_{n+1} - a_n) \varphi_{\delta} (y) = - 2(1 - \widetilde{\eta})\varepsilon F_n(y), \\[5pt] \textrm{I I I}_n(y) & \,=\, & 2(F_n(y) - F_n(y)^2) + \textrm{I}_n(y) \ge 2(F_n(y) - F_n(y)^2) .\end{eqnarray*}
Thus, for all
$\varepsilon \in (0, \varepsilon_{3} (\eta))$
, all integers
$n\geq n_{5}(\eta, \varepsilon) $
and all
$y\in [\eta^{1/\delta} \lambda_n, \, \lambda_n]$
,
This completes the proof of Lemma 2 in the second case
$y\in [\eta^{1/\delta} \lambda_n, \, \lambda_n]$
.
Proof of Lemma 2: third and last case
$y\in (\lambda_n, \, \lambda_{n+1}]$
. Here again the law of
$Y_n+ \widehat{Y}_n$
plays a specific role, and we again use Lemma 3 to handle it. Let us fix
$\varepsilon \in (0, \varepsilon_{3} (\eta))$
and
$n \geq n_{5}(\eta, \varepsilon) $
. We first observe that
Since
$F_{n+1}(\lambda_n) =\big(1 - 2(1 - \widetilde{\eta})\varepsilon \big) F_n(\lambda_n) = 1 - 2(1 - \widetilde{\eta})\varepsilon$
, we get for all
$y\in (\lambda_n, \, \lambda_{n+1}]$
that
We now look for a lower bound for
${{\mathbb P}}(Y_n + \widehat{Y}_n > \lambda_{n+1})$
. By Lemma 1(iii), there is
$\varepsilon_{4} (\eta) \in (0, \varepsilon_{3} (\eta))$
such that for
$\varepsilon \in (0, \varepsilon_{4} (\eta))$
, there exists an integer
$n_{6} (\eta, \varepsilon) \geq n_{5} (\eta, \varepsilon)$
which satisfies
$\lambda_{n+1} - \lambda_n+1\le \delta \lambda_n$
, for all
$n\geq n_{6}(\eta, \varepsilon)$
. We have
\begin{eqnarray*} {{\mathbb P}} \big( Y_n + \widehat{Y}_n > \lambda_{n+1} \big) &\ge & {{\mathbb P}} \big( \lambda_n - 1 > Y_n > \lambda_{n+1} - \lambda_n ; \, Y_n + \widehat{Y}_n > \lambda_{n+1} \big) \\[5pt] &\,=\,& \int_{\lambda_{n+1}-\lambda_n}^{\lambda_n-1} F'_n(t) \big( 1 - F_n(\lambda_{n+1} - t)\big) \, \textrm{d} t .\end{eqnarray*}
Writing
$1-F_n(\lambda_{n+1}-t) = F_n(\lambda_n) -F_n(\lambda_{n+1}-t)$
, this leads to

where:
We apply Lemma 3 to
to get that

Note that
$\lim_{n\to \infty} \kappa_{\delta, 1} (\lambda_n)\delta^2 = 4(1+\eta)^2\varepsilon - 8c_0\zeta(2)^{-3/2} (1+\eta)^3 \varepsilon^{3/2}$
. Therefore there exists
$\varepsilon_{5} (\eta) \in (0, \varepsilon_{4} (\eta))$
such that for all
$\varepsilon \in (0, \varepsilon_{5} (\eta))$
there is an integer
$n_{7} (\eta, \varepsilon) \geq n_{6} (\eta, \varepsilon)$
satisfying
Let us next provide an upper bound for
. For
$n\geq n_{7} (\eta, \varepsilon)$
, we have
$\lambda_{n+1} - \lambda_n+1\le \delta \lambda_n$
; thus

Therefore, there exists
$\varepsilon_{6} (\eta) \in (0, \varepsilon_{5} (\eta))$
such that for all
$\varepsilon \in (0, \varepsilon_{6} (\eta))$
, there is an integer
$n_{8} (\eta, \varepsilon)\geq n_{7} (\eta, \varepsilon)$
satisfying
We finally look for an upper bound for
. Since
$\varphi'_\delta(t)=\frac{\delta}{t}\sinh(\delta \log t)$
, we have

Since
$\lambda_{n+1} - \lambda_n +1 \leq \delta \lambda_n$
, we get
$\lambda_{n+1} \leq (1+ \delta) \lambda_n $
, and thus

Thus, there exists
$\varepsilon_{7} (\eta) \in (0, \varepsilon_{6} (\eta))$
such that for all
$\varepsilon \in (0, \varepsilon_{7} (\eta))$
, there is an integer
$n_{9} (\eta, \varepsilon)\geq n_{8} (\eta, \varepsilon)$
such that
Combined with (3.20) and (3.21), it entails that

Returning to (3.18), we obtain
$\textrm{LHS}_{(3.9)}\ge2\varepsilon - 2(1 - \widetilde{\eta})\varepsilon= 2\widetilde{\eta}\varepsilon> 0 ,$
which proves Lemma 2 in the third and last case
$y\in (\lambda_n, \, \lambda_{n+1}]$
.
4. Proof of Theorem 1: The Upper Bound
Compared with the previous section, the numbering of the constants
$n_i(\eta, \, \varepsilon)$
and
$\varepsilon_i (\eta)$
in the following proof starts again from
$i=1$
.
4.1. Definition of the upper bound laws
Recall from (3.3) that for all
${q}\in (0, \, 1)$
and all
$x\in [1, \infty)$
, we have set
$\varphi_{q}(x) \;:\!=\; \cosh({q} \log x) - 1$
. The following lemma gives a list of properties of the function
$\varphi_{q}$
that are used to define the forthcoming laws
$\nu_n^\prime$
. These laws are proved to satisfy (2.14); see Lemma 6, which is the key technical step in the proof of the upper bound in Theorem 1.
Lemma 4. Let
${q} \in (0,\, 1)$
.
-
(i) We have
$\varphi_{{q}}^\prime ([1, \infty))= [0, \, M_{q}]$
with
$M_{q} \;:\!=\; \sup_{y \in [1, \, \infty)} \varphi^\prime_{q} (y)$
, and
$$ x_{{q}} \;:\!=\; \bigg(\frac{1+{q}}{1-{q}} \bigg)^{1/(2{q})} $$
is the unique
$x\in [1, \, \infty)$
such that
$\varphi_{{q}}^\prime (x)= M_{q}$
. Moreover,
$x_{q} \to {\textrm{e}}$
and
$M_{q}\sim {\textrm{e}}^{-1}{q}^2$
as
${q} \to 0^+$
.
-
(ii) The function
$\varphi_{{q}}^\prime\,:\, [1, x_{q}] \to [0, \, M_{q}]$
is a
$C^1$
increasing bijection whose inverse is denoted by
$\ell_{q}\,:\, [0, \, M_{q}] \to [1, \, x_{q}]$
and
$\varphi_{{q}}^\prime\,:\, [ x_{q}, \, \infty) \to (0, \, M_{q}]$
is a
$C^1$
decreasing bijection whose inverse is denoted by
$r_{q}\,:\, (0, \, M_{q}] \to [x_{q}, \, \infty)$
. As
$y \to 0^+$
, we get
$$\ell_{q} (y) = 1+ {q}^{-2} y (1+ \mathcal O_{q} (y))\quad \textrm{and} \quad r_{q} (y)\sim (2y/{q})^{-\frac{1}{1-{q}}}.$$
-
(iii) For all
$y\in (0, M_{q}]$
, we set
$\Phi_{q} (y) =\varphi_{q} (r_{q} (y)) - \varphi_{q} (\ell_{q} (y))$
. Then
$\Phi_{q}\,:\, (0, \, M_{q}] \to \mathbb{R}_+$
is a
$C^1$
decreasing bijection whose inverse is denoted by
$\Phi_{q}^{-1}\,:\, \mathbb{R}_+ \to (0, \, M_{q}]$
. As
$x \to \infty$
, we get
$ \Phi_{q}^{-1} (x) \sim ({q}/2) (2x)^{-\frac{1-{q}}{{q}}}$
,
$$r_{q} \big( \Phi_{q}^{-1} (x) \big) \sim (2x)^{\frac{1}{{q}}} \quad \textrm{and} \quad \ell_{q} \big( \Phi_{q}^{-1} (x)\big) = 1+ \frac{1}{2{q}} (2x)^{-\frac{1-{q}}{{q}}}\bigg(1+ \mathcal O_{q} \bigg( x^{-\frac{1-{q}}{{q}}} \bigg) \bigg). $$
-
(iv) Let
$a\in \mathbb{R}_+^*$
and for all
$x\in [1, \, \infty)$
set
$g(x)= \varphi_{{q}} (x+a) - \varphi_{{q}} (x)$
. Then,
$\lim_{x\to \infty} g(x)= 0$
. Moreover, suppose that there is
$x^*\in [1, \, \infty)$
such that
$g^\prime (x^*) = 0$
. Then $g$ is strictly decreasing on
$[x^*, \, \infty)$
.
Proof. See Appendix B.
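The asymptotics in Lemma 4(i) lend themselves to a quick numerical check. The Python sketch below (illustrative only, not part of the proof) uses $\varphi_{{q}}^\prime(x) = \frac{{q}}{x}\sinh({q}\log x)$: on a sample grid, $x_{{q}}$ is indeed the maximiser, $x_{q} \to {\textrm{e}}$, and $M_{q}/({q}^2/{\textrm{e}}) \to 1$ as ${q} \to 0^+$.

```python
import math

def phi_prime(q, x):
    # derivative of phi_q(x) = cosh(q * log(x)) - 1
    return (q / x) * math.sinh(q * math.log(x))

def x_star(q):
    # claimed maximiser x_q = ((1+q)/(1-q))^(1/(2q))
    return ((1 + q) / (1 - q)) ** (1.0 / (2 * q))

for q in (0.3, 0.1, 0.01):
    xq = x_star(q)
    Mq = phi_prime(q, xq)
    # x_q is a global maximum: compare against a grid of sample points in (1, 31)
    grid = [1.0 + 0.01 * k for k in range(1, 3000)]
    assert all(phi_prime(q, x) <= Mq + 1e-15 for x in grid)

# asymptotics as q -> 0+: x_q -> e and M_q ~ q^2 / e
q = 1e-3
assert abs(x_star(q) - math.e) < 1e-2
assert abs(phi_prime(q, x_star(q)) / (q ** 2 / math.e) - 1) < 1e-2
```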
We fix
$\eta\in (0, \, 1)$
and
${\varepsilon} \in \big(0, \, \tfrac{1}{36}\big)$
, and write
We keep the notation in Lemma 4, and set
Then
and these two equations characterize
$\sigma_n$
and
$\tau_n$
. By Lemma 4(ii) and (iii),
$\sigma_n$
decreases to 1,
$\tau_n$
increases to
$\infty$
, and
By Lemma 4(iv),
$x\in [\sigma_n ,\, \infty) \mapsto2b_n (\varphi_\gamma(x+ \tau_n) - \varphi_\gamma (x))$
decreases to 0. Therefore, there is a unique measure
$\nu^\prime_n\in \mathcal{M}_1$
such that
In the rest of this section,
$Z_n$
denotes a random variable with law
$\nu^\prime_n$
, and
Lemma 5. We keep the previous notation. There exists
$\varepsilon_{1} (\eta) \in \Big(0, \, \frac{1}{36}\Big)$
such that for all
$\varepsilon \in (0, \, \varepsilon_{1} (\eta))$
, there exists
$n_{1}(\eta, \, \varepsilon) \in \mathbb N$
satisfying
${{\mathbb{E}}} [Z_n \wedge \widehat{Z}_n] \le 2\tau_n$
for all
$n\ge n_{1}(\eta, \varepsilon)$
, where
$\widehat{Z}_n$
is an independent copy of
$Z_n$
.
Proof. Observe that
For all sufficiently large $n$,
$\tau_n\geq \sigma_n$
, so that for
$x\in [\tau_n , \, \infty)$
,
\begin{align*}{{\mathbb P}} (Z_n > x) &= 2b_n \big(\varphi_\gamma (x+ \tau_n) - \varphi_\gamma (x) \big) \leq b_n \big( (x + \tau_n)^{\gamma} - x^{\gamma}\big) \\[5pt]&= \gamma b_n \int_{x}^{x+\tau_n} y^{-(1- \gamma) } \, \textrm{d} y \leq \gamma b_n \tau_n x^{-(1- \gamma)}.\end{align*}
Since
$\gamma \to 0$
as
$\varepsilon \to 0$
, we choose
$\varepsilon_{1} (\eta) \in \Big(0, \, \tfrac{1}{36} \Big)$
such that for all
${\varepsilon} \in (0, \varepsilon_{1} (\eta))$
, we have
$2 (1 - \gamma) > 1$
(i.e.
$1 - 2\gamma > 0$
) and that
$\gamma^2 / (1 - 2\gamma) < 1$
. Therefore,
by (4.3). Thus, for all
$\varepsilon \in (0, \varepsilon_{1} (\eta))$
, there is
$n_{1} (\eta, \varepsilon)\in \mathbb N$
such that
$ \int_{\tau_n}^\infty{{\mathbb P}}(Z_n >x)^2 \, \textrm{d} x < \tau_n$
for all
$n\geq n_{1} (\eta, \varepsilon)$
, which entails the desired inequality.
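The computation behind Lemma 5 can be illustrated numerically. The tail bound ${{\mathbb P}}(Z_n>x)\le \gamma b_n\tau_n x^{-(1-\gamma)}$ gives $\int_{\tau_n}^\infty {{\mathbb P}}(Z_n >x)^2 \, \textrm{d} x \le \frac{\gamma^2}{1-2\gamma}\,(b_n\tau_n^\gamma)^2\,\tau_n$, which is smaller than $\tau_n$ once $b_n\tau_n^\gamma$ is close to $1$ and $\gamma^2/(1-2\gamma)<1$. The following Python sketch (illustrative, with arbitrary sample values of $\gamma$, $b_n$, $\tau_n$) cross-checks this closed form against numerical integration.

```python
import math

def closed_form(gamma, b, tau):
    # \int_tau^infty (gamma*b*tau*x^{-(1-gamma)})^2 dx
    #   = (gamma^2/(1-2*gamma)) * (b*tau^gamma)^2 * tau,   valid for gamma < 1/2
    assert 0 < gamma < 0.5
    return (gamma ** 2 / (1 - 2 * gamma)) * (b * tau ** gamma) ** 2 * tau

def numeric(gamma, b, tau, T=40.0, steps=200_000):
    # same integral after the substitution x = tau * e^t, t in [0, T]
    # (the integrand in t is c * exp((2*gamma-1)*t), negligible beyond T)
    h = T / steps
    c = (gamma * b * tau) ** 2 * tau ** (2 * gamma - 1)
    return h * sum(c * math.exp((2 * gamma - 1) * (i + 0.5) * h)
                   for i in range(steps))

gamma, b, tau = 0.1, 0.5, 100.0
exact, approx = closed_form(gamma, b, tau), numeric(gamma, b, tau)
assert abs(exact - approx) / exact < 1e-3
assert exact < tau  # the key inequality in Lemma 5
```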
The following lemma, which is the key point in the proof of the upper bound in Theorem 1, tells us that the laws
$\nu^\prime_n$
satisfy (2.14). Its proof is postponed to Section 4.2. The proof of the upper bound in Theorem 1 relies on this lemma, as well as on the arguments in Section 2.2 and on Lemma 5.
Lemma 6. We keep the previous notation. There exists
$\varepsilon_{2} (\eta) \in (0, \, \varepsilon_{1} (\eta))$
such that for all
$\varepsilon \in (0, \, \varepsilon_{2} (\eta))$
, there is an integer
$n_{2} (\eta, \varepsilon) \geq n_{1} (\eta, \varepsilon) $
satisfying, for
$n\geq n_{2} (\eta, \varepsilon)$
and
$z \in \big[(1 + \eta \gamma) \sigma_{n+1}, \, \infty \big)$
,
where
$\widehat{Z}_n$
is an independent copy of
$Z_n$
.
Proof. See Section 4.2.
Proof of the upper bound in Theorem 1. We recall that
$p=\tfrac{1}{2}+ {\varepsilon}$
. Taking Lemma 6 for granted, we prove that it implies the upper bound in Theorem 1. We keep the previous notation. Fix
$\eta \in (0,\, 1)$
and
$\varepsilon \in (0, \, \varepsilon_{2} (\eta) )$
. Note that a.s.
$D_{n_{2} (\eta, \varepsilon) } (p)\leq 2^{n_{2} (\eta, \varepsilon)} \leq 2^{n_{2} (\eta, \varepsilon)} Z_{n_{2} (\eta, \varepsilon)}$
because
$Z_n \geq \sigma_n \geq 1$
. Then
$\mu_{n_0} \, {\buildrel {\textrm st} \over \le} \, M_\theta (\nu^\prime_{n_0}) $
with
$n_0= n_{2} (\eta, \varepsilon)$
and
$\theta = 2^{n_{2} (\eta, \varepsilon)}$
.
For
$n\geq n_{2} (\eta, \, \varepsilon)$
and
$z \in [0, \, (1 + \eta \gamma)\sigma_{n+1} )$
, we have
Combined with (4.5), which holds for
$z \ge (1 + \eta \gamma) \sigma_{n+1}$
, this implies that
$\Psi_{p} (\nu^\prime_{n}) \, {\buildrel {\textrm st} \over \le}\, M_{1+ \eta \gamma} (\nu_{n+1}^\prime)$
. Thus, the laws
$\nu_n^\prime$
satisfy (2.14) with
$\beta = 1+ \eta \gamma$
. By (2.15),
We denote by
$\widehat{D}_n (p)$
(respectively,
$\widehat{Z}_n$
) an independent copy of
$D_n(p)$
(respectively, of
$Z_n$
). Then for
$n\ge n_{2} (\eta, \varepsilon)$
,

Iterating the inequality and using the fact that
$j\mapsto \tau_j$
is non-decreasing, we get, for
$n\ge n_0 \;:\!=\; n_{2} (\eta, \, \varepsilon)$
,
\begin{eqnarray*} {{\mathbb{E}}}[D_n(p)] &\,\le\,& (1+2\varepsilon)^{n-n_0} {{\mathbb{E}}}[D_{n_0} (p)] + \sum_{i=0}^{n-n_0-1} (1+2\varepsilon)^i 2^{n_0+1} (1+ \eta \gamma)^{n-i-1-n_0} \tau_{n-i-1} \\[5pt] &\,\le\,& (1+2\varepsilon)^{n-n_0} {{\mathbb{E}}} [D_{n_0} (p)] + n (1+2\varepsilon)^n + 2^{n_0+1} (1+ \eta \gamma)^n \tau_n .\end{eqnarray*}
By (4.3), for
$\varepsilon \in (0, \, \varepsilon_{2} (\eta))$
,
\begin{eqnarray*} \alpha \bigg(\frac{1}{2} + \varepsilon \bigg) &\,=\, & \lim_{n\rightarrow \infty} n^{-1} \log {{\mathbb{E}}} [D_n(p)] \\[5pt] &\,\leq\, & \log (1 + 2\varepsilon)+ \log (1 + \eta \gamma)- \frac{1}{\gamma} \log \big(1 - (2 + \eta) \varepsilon \big) \\[5pt] &\,=\,& \Bigg( \frac{2 \eta (1-\eta)}{\sqrt{\zeta(2)}} +\frac{1 + \tfrac12 \eta}{1 - \eta} \sqrt{\zeta(2)} \, \Bigg) \sqrt{\varepsilon}+ {\mathcal O}_{ \eta} (\varepsilon),\end{eqnarray*}
which implies the upper bound in Theorem 1, since
$\eta>0$
can be taken arbitrarily small. Once Lemma 6 is proved, this also completes the proof of Theorem 1.
4.2. Proof of Lemma 6
This section is devoted to the proof of Lemma 6. We fix
$\eta \in (0, \, 1)$
and
$\varepsilon \in (0, \, \varepsilon_{1} (\eta))$
. Recall from (4.1) the definition of
$\gamma$
and
$b_n$
and from (4.2) the definition of
$\sigma_n$
and
$\tau_n$
. By (4.3),
$\lim_{n\to \infty} \sigma_n = 1 < 1+ \eta \gamma = \lim_{n\to \infty} (1 + \eta \gamma ) \sigma_{n+1} < \lim_{n\to \infty} (\eta \varepsilon)^{1/\gamma} \tau_n = \infty$
. Therefore, there is an integer
$n_{3} (\eta, \, \varepsilon) \geq n_{1} (\eta,\, \varepsilon) $
such that
Writing
$\textrm{LHS}_{(4.5)}$
for the expression on the left-hand side of (4.5), we need to check that
$\textrm{LHS}_{(4.5)} \le 0$
for all small enough
$\varepsilon >0$
, all sufficiently large integers $n$, and all
$z \ge (1+\eta \gamma) \sigma_{n+1}$
. This is done by distinguishing four cases:
\begin{eqnarray*}\textbf{Case 1:} \ (1+\eta \gamma) \sigma_{n+1} \leq z <(\eta \varepsilon)^{1/\gamma} \tau_n , & \quad & \textbf{Case 2:} \ (\eta \varepsilon)^{1/\gamma} \tau_n \leq z < \gamma^2 \tau_n, \\[5pt] \textbf{Case 3:} \ \gamma^2 \tau_n \le z < {\textrm{e}}^{1/\sqrt{\gamma}} \tau_n, & \quad\textrm{and}\quad & \textbf{Case 4:} \ z \geq {\textrm{e}}^{1/\sqrt{\gamma}} \tau_n. \end{eqnarray*}
Proof of Lemma 6: first case
$(1+\eta \gamma) \sigma_{n+1} \le z < (\eta \varepsilon)^{1/\gamma} \tau_n$
. Here
$\varepsilon \in (0, \varepsilon_{1} (\eta))$
and
$n\geq n_{3} (\eta, \varepsilon)$
. We use the trivial fact
${{\mathbb P}} \big( Z_n+\widehat{Z}_n > z\big) \le 1$
, so that

Recall the notation
$G_n(z) \;:\!=\; {{\mathbb P}} (Z_n \leq z)$
. Then

In the last inequality, we have used the fact (from (4.7)) that
$z\ge \sigma_n$
. Since
$\varphi_\gamma(z) \le \cosh (\gamma \log z)$
and
$z < (\eta \varepsilon)^{1/\gamma} \tau_n$
, we get that
Thus, there exists
$n_{4} (\eta, \, \varepsilon) \geq n_{3} (\eta, \, \varepsilon)$
such that
$G_n (z) < 2\eta\varepsilon$
for all
$n\geq n_{4} (\eta, \, \varepsilon)$
and for all
$z\in [(1+\eta \gamma) \sigma_{n+1}, (\eta \varepsilon)^{1/\gamma} \tau_n]$
. Thus, we get
By (4.8), this leads to

We claim that, for all sufficiently small
$\varepsilon $
, all sufficiently large $n$, and all
$z\in [(1+\eta \gamma) \sigma_{n+1}, (\eta \varepsilon)^{1/\gamma} \tau_n]$
,
(4.10)
\begin{align} \varphi_\gamma(z+\tau_n) - \varphi_\gamma(\sigma_n+\tau_n) &\leq \frac{1}{2}\bigg(\varphi_\gamma(z) - \varphi_\gamma\bigg( \frac{z}{1+\eta \gamma}\bigg) \bigg) \quad \textrm{and}\nonumber \\\varphi_\gamma(\sigma_n) & \leq \frac{1}{2}\bigg(\varphi_\gamma(z) - \varphi_\gamma\bigg( \frac{z}{1+\eta \gamma}\bigg) \bigg) ,\end{align}
which will readily imply Lemma 6 in the first case.
To check the second inequality in (4.10), we look for a suitable upper bound for
$\varphi_\gamma (\sigma_n) \;:\!=\; \cosh(\gamma \log \sigma_n)-1$
. We note that
$\cosh (\lambda) -1 \leq 2 \lambda^2$
for
$\lambda \in [0, \, 2]$
(indeed, since
$\cosh (2) < 4$
, the second derivative of
$\lambda \in [0, \, 2] \mapsto \cosh (\lambda )-1 - 2 \lambda^2 $
is negative; since the first derivative vanishes at $0$, it is non-positive as well, and the function, which also vanishes at $0$, is non-increasing, hence non-positive). Since
$\sigma_n \to 1^+$
as
$n \to \infty$
, there exists
$n_{5} (\eta, \, \varepsilon) \geq n_{4}(\eta, \, \varepsilon)$
such that
$\gamma \log \sigma_n \in [0, \, 2]$
for
$n\geq n_{5} (\eta, \, \varepsilon)$
; accordingly,
since
$\log x\leq x-1$
for
$x\ge 1$
.
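The two elementary inequalities just used, $\cosh(\lambda)-1\le 2\lambda^2$ on $[0, \, 2]$ and $\log x\le x-1$ on $[1, \, \infty)$, can be confirmed by a quick numerical sweep (a Python sketch, illustrative only):

```python
import math

# cosh(2) < 4, so the second derivative of cosh(l) - 1 - 2*l^2 is negative on [0, 2]
assert math.cosh(2.0) < 4.0

# cosh(lambda) - 1 <= 2*lambda^2 on [0, 2] (sampled)
for k in range(2001):
    lam = 2.0 * k / 2000
    assert math.cosh(lam) - 1 <= 2 * lam ** 2 + 1e-12

# log(x) <= x - 1 on [1, 10] (sampled)
for k in range(2001):
    x = 1.0 + 9.0 * k / 2000
    assert math.log(x) <= x - 1 + 1e-12
```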
We now look for lower bounds for
$\varphi_\gamma(z) - \varphi_\gamma\left(\frac{z}{1+\eta \gamma}\right)$
. Using the formula
$\cosh (a) - \cosh (b) = 2 \sinh\left( \frac{a + b}{2}\right) \sinh \left(\frac{a - b}{2}\right)$
, we get that
Since
$z\in [(1+\eta \gamma) \sigma_{n+1}, (\eta \varepsilon)^{1/\gamma} \tau_n]$
, observe that
Since
$\sinh (x) \geq x$
(for
$x\in \mathbb{R}_+$
) and
$\log (1+ x)\geq \frac{1}{2} x$
(for
$x\in [0, \, 1]$
), we have
\begin{eqnarray} \varphi_\gamma(z) - \varphi_\gamma \bigg( \frac{z}{1+\eta \gamma} \bigg) &\,\ge\,& \gamma \, [\!\log (1 + \eta \gamma)] \sinh \bigg( \gamma \log z - \frac{\gamma}{2}\log (1 + \eta \gamma) \bigg) \nonumber \\[5pt] &\,\ge\,& \frac{1}{2}\eta \gamma^2 \sinh \bigg( \gamma \log z - \frac{\gamma}{2}\log (1 + \eta \gamma) \bigg) \end{eqnarray}
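The identity $\cosh (a) - \cosh (b) = 2 \sinh\big( \frac{a + b}{2}\big) \sinh \big(\frac{a - b}{2}\big)$ and the two bounds $\sinh (x) \geq x$ (for $x\in \mathbb{R}_+$) and $\log (1+ x)\geq \frac{1}{2} x$ (for $x\in [0, \, 1]$) used in this step can all be checked numerically; here is a small Python sketch (illustrative only):

```python
import math

# identity: cosh(a) - cosh(b) = 2 * sinh((a+b)/2) * sinh((a-b)/2)
for a, b in [(0.3, 0.1), (2.0, 1.5), (5.0, 0.7)]:
    lhs = math.cosh(a) - math.cosh(b)
    rhs = 2 * math.sinh((a + b) / 2) * math.sinh((a - b) / 2)
    assert abs(lhs - rhs) < 1e-12 * max(1.0, abs(lhs))

# elementary bounds: sinh(x) >= x for x >= 0, and log(1+x) >= x/2 on [0, 1]
for k in range(1001):
    x = 3.0 * k / 1000
    assert math.sinh(x) >= x
for k in range(1001):
    x = k / 1000
    assert math.log1p(x) >= x / 2
```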
Since
$2\gamma^2 (\sigma_n - 1)^2 \to 0$
as
$n \to \infty$
, there exists
$n_{6} (\eta, \, \varepsilon) \ge n_{5} (\eta,\, \varepsilon)$
such that
$2\gamma^2 (\sigma_n - 1)^2 < \frac{1}{8}\eta \gamma^3\log (1 + \eta \gamma)$
for
$n \geq n_{6} (\eta, \, \varepsilon)$
; in view of (4.11), this implies the desired second inequality in (4.10):
We now turn to the proof of the first inequality in (4.10). Observe that
\begin{eqnarray}\varphi_\gamma(z+\tau_n) - \varphi_\gamma(\sigma_n+\tau_n) &\,=\,& \gamma \int_{\sigma_n+ \tau_n}^{z+ \tau_n} \sinh (\gamma \log x) \frac{\textrm{d} x}{x} \nonumber \\[5pt] &\,\leq\, &\frac{\gamma}{2} \int_{\sigma_n+ \tau_n}^{z+ \tau_n} \frac{\textrm{d} x}{x^{1-\gamma}} \leq \frac{\gamma (z- \sigma_n)}{2(\sigma_n+ \tau_n)^{1-\gamma}} \leq\frac{\gamma (z- 1)}{2\tau_n^{1-\gamma}}, \end{eqnarray}
since
$\sigma_n \geq 1$
. Consequently, for
$z\in \big[ (1 + \eta \gamma)\sigma_{n+1}, \, {\textrm{e}}^{2/\gamma} \big)$
,
Thus, by (4.14), there is
$n_{7} (\eta,\, \varepsilon) \geq n_{6} (\eta,\, \varepsilon)$
such that for
$n\geq n_{7} (\eta,\, \varepsilon)$
and
$z\in \big[ (1 + \eta \gamma)\sigma_{n+1}, \, {\textrm{e}}^{2/\gamma} \big)$
,
To complete the proof of the first inequality in (4.10), we still need to consider
$z\in [{\textrm{e}}^{2/\gamma}, \, (\eta\varepsilon)^{1/\gamma} \tau_n)$
(with
$n\geq n_{7}(\eta,\varepsilon )$
). Noting that
$\frac{\gamma}{2}\log (1 + \eta \gamma) < 1$
, we have
$\gamma \log z - \frac{\gamma}{2}\log (1 + \eta \gamma) > \gamma \log z - 1 \geq 1$
. By (4.13), and writing
$c_0 \;:\!=\; \inf_{x\in [2, \, \infty) } [{\textrm{e}}^{-x} \sinh(x-1)]$
,
Since
${\textrm{e}}^{\gamma \log z} = \frac{z}{z^{1-\gamma}} \ge \frac{z-1}{((\eta\varepsilon)^{1/\gamma}\tau_n)^{1-\gamma}} = \frac{1}{\gamma (\eta\varepsilon)^{(1-\gamma)/\gamma}} \, \frac{\gamma (z-1)}{\tau_n^{1-\gamma}}$
, this implies that
with
$c_1 \;:\!=\; \frac{2}{\sqrt{\zeta(2)}} c_0$
. Since
$\frac{c_1 (1 - \eta)}{\eta^{\frac{1}{\gamma}-2} \, \varepsilon^{\frac{1}{\gamma} - \frac{3}{2}}} \to \infty$
as
$\varepsilon \to 0^+$
, there exists
$\varepsilon_{3} (\eta) \in (0, \varepsilon_{1} (\eta))$
such that
$\frac{c_1 (1 - \eta)}{\eta^{(1/\gamma)-2} \, \varepsilon^{1/\gamma - 3/2}} >2$
for
$\varepsilon \in (0, \, \varepsilon_{3} (\eta))$
and there exists
$n_{8} (\eta,\, \varepsilon)\geq n_{7} (\eta, \, \varepsilon)$
such that for
$n \geq n_{8} (\eta, \, \varepsilon)$
and
$z\in [{\textrm{e}}^{2/\gamma} , \, (\eta\varepsilon)^{1/\gamma} \tau_n)$
,
This yields (4.10) and, thus, implies Lemma 6 in the first case
$(1+\eta \gamma) \sigma_{n+1} \le z < (\eta \varepsilon)^{1/\gamma} \tau_n$
.
Proof of Lemma 6: second case
$(\eta \varepsilon)^{1/\gamma} \tau_n \le z < \gamma^2 \tau_n$
. We first prove the following lemma.
Lemma 7. We keep the previous notation. For
$0 < a < b < \infty$
, we have
Proof. We remind the reader that
$\lim_{n\to \infty} b_n\tau_n^\gamma=1$
and that
$\lim_{n\to \infty} b_n\tau_n^{-\gamma}=0$
. Then, for
$\theta \in [a, \, b]$
, we have
\begin{align*} G_n (\theta \tau_n) &= 2b_n \big( \varphi_\gamma(\tau_n \theta) - \varphi_\gamma(\sigma_n) - \varphi_\gamma(\tau_n (1+\theta)) + \varphi_\gamma(\sigma_n+\tau_n) \big) \\[5pt] & = b_n\tau_n^{\gamma} \theta^{\gamma} + b_n\tau_n^{-\gamma} \theta^{-\gamma} -2b_n -2b_n \varphi_{\gamma} (\sigma_n) + b_n\tau_n^{\gamma} \big( 1 - (1+ \theta)^{\gamma} \big)- b_n\tau_n^{-\gamma} (1+ \theta)^{-\gamma} \\[5pt] & \quad + b_n\tau_n^{\gamma} \big((1+ \sigma_n \tau_n^{-1})^{\gamma}-1 \big) + b_n\tau_n^{-\gamma} (1+ \sigma_n\tau^{-1}_n)^{-\gamma} \\[5pt] & = b_n\tau_n^{\gamma} \theta^{\gamma}+ b_n\tau_n^{\gamma} \big( 1 - (1+ \theta)^{\gamma} \big) + b_n\tau_n^{-\gamma} (\theta^{-\gamma} -(1+ \theta)^{-\gamma} ) + u_n (\eta, \varepsilon),\end{align*}
where
$u_n (\eta, \, \varepsilon)$
does not depend on
$\theta$
and satisfies
$\lim_{n\to \infty} u_n (\eta, \varepsilon)= 0$
. Thus,
where
Since
$\lim_{n\to \infty} b_n\tau_n^{\gamma} =1$
by (4.3), we have
$\lim_{n\to \infty}\sup_{\theta \in [a, \, b]}|R_n( \eta, \gamma, \theta) | = 0$
, which implies that
Note that
$\frac{(1+ \theta)^{\gamma} -1}{\theta^\gamma} = \gamma \int_0^1 \frac{\textrm{d} v}{(\theta^{-1} +v)^{1-\gamma}}$
, which is increasing in
$\theta$
. This entails (4.16).
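The monotonicity used at the end of the proof, namely that $\theta \mapsto \frac{(1+ \theta)^{\gamma} -1}{\theta^\gamma} = \gamma \int_0^1 \frac{\textrm{d} v}{(\theta^{-1} +v)^{1-\gamma}}$ is increasing in $\theta$, can be checked on a sample grid (a Python sketch, illustrative only):

```python
def ratio(gamma, theta):
    # ((1 + theta)^gamma - 1) / theta^gamma
    return ((1 + theta) ** gamma - 1) / theta ** gamma

for gamma in (0.05, 0.2, 0.5):
    # theta sampled in (0, 50]; the sequence of values should be non-decreasing
    values = [ratio(gamma, 0.01 * k) for k in range(1, 5001)]
    assert all(u <= v + 1e-12 for u, v in zip(values, values[1:]))
```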
We now turn to the proof of Lemma 6 in the second case. Applying Lemma 7 to
$a=\frac12(\eta \varepsilon)^{1/\gamma}$
and
$b=\gamma^2$
, and since
$\lim_{\varepsilon \to 0^+}\gamma^{-3} \gamma^{-2\gamma} ((1+ \gamma^2)^{\gamma} - 1)= 1$
, it follows that there exists
$\varepsilon_{4}(\eta) \in (0, \, \varepsilon_{3} (\eta))$
such that for
$\varepsilon \in (0,\, \varepsilon_{4} (\eta))$
there is an integer
$n_{9} (\eta, \, \varepsilon) \geq n_{8} (\eta, \, \varepsilon)$
such that for
$n\geq n_{9} (\eta, \, \varepsilon)$
and
$z\in \big( \frac12 (\eta \varepsilon)^{1/\gamma} \tau_n, \, \gamma^2 \tau_n \big)$
,
We recall that
$\textrm{LHS}_{(4.5)}$
stands for the expression on the left-hand side of (4.5). Then
where:
-
•
$\widetilde{\textrm{I}}_n(z) \;:\!=\; G_n(z)^2 - {{\mathbb P}}(Z_n + \widehat{Z}_n \le z)$
; -
•
$\widetilde{\textrm{I I}}_n(z) \;:\!=\; G_{n+1}\left(\frac{z}{1+\eta \gamma}\right) - G_n(z)$
; -
•
$\widetilde{\textrm{I I I}}_n(z) \;:\!=\; 2G_n(z) - G_n(z)^2 - {{\mathbb P}}(Z_n + \widehat{Z}_n \le z)=2G_n(z) - 2G_n(z)^2 + \widetilde{\textrm{I}}_n(z)$
.
We claim that there exists
$\varepsilon_{5} (\eta) \in (0, \, \varepsilon_{4} (\eta))$
such that for
$\varepsilon \in (0, \, \varepsilon_{5} (\eta))$
there is an integer
$n_{10} (\eta, \, \varepsilon)$
such that for
$n\geq n_{10} (\eta, \, \varepsilon)$
,
To see why (4.18) is true, we recall that
$G_n(z) = \int_{\sigma_n}^z G_n'(t) \, \textrm{d} t$
and that
${{\mathbb P}}(Z_n + \widehat{Z}_n \le z) = \int_{\sigma_n}^{z-\sigma_n} G_n'(t) G_n(z-t) \, \textrm{d} t$
. Since
$G_n(\sigma_n)= 0$
, by definition of
$G_n$
, we thus get
To get an upper bound for the first term on the right-hand side, we observe that
For
$t\in (z-\sigma_n, \, z)$
, we have
so that for
$z\in [(\eta \varepsilon)^{1/\gamma} \tau_n, \, \gamma^2 \tau_n)$
,
$$G_n(z) - G_n(z - \sigma_n)\le\frac{b_n \sigma_n \gamma z^\gamma}{z - \sigma_n}\le\frac{b_n \sigma_n \gamma z^\gamma}{(\eta \varepsilon)^{1/\gamma} \tau_n - \sigma_n}=\frac{\gamma}{\varepsilon \tau_n} \frac{b_n \tau_n^\gamma \sigma_n}{(\eta \varepsilon)^{1/\gamma} - \frac{\sigma_n}{\tau_n}} \varepsilon \Bigg( \frac{z}{\tau_n}\Bigg)^{ \gamma} .$$
By (4.17), this yields that
Let
$\varepsilon_{5} (\eta) \in (0, \, \varepsilon_{4} (\eta))$
be such that
$6 \gamma^3 < \eta$
for
$\varepsilon \in (0, \, \varepsilon_{5} (\eta))$
. Note that
Let
$\varepsilon \in (0, \, \varepsilon_{5} (\eta))$
. Since
by (4.3), whereas
$4 (1 - \eta) (\eta - 6 \gamma^3) >0$
, there exists
$n_{11} (\eta, \, \varepsilon) \geq n_{9} (\eta, \, \varepsilon)$
such that for
$n\ge n_{11} (\eta, \, \varepsilon)$
,
The specific choice of the factor in front of
$ \varepsilon G_n(z)^2$
on the right-hand side of the last inequality will be justified below.
To deal with
$\int_{\sigma_n}^{z-\sigma_n} G_n'(t) (G_n(z) - G_n(z - t)) \, \textrm{d} t$
, we recall that
$G'_n(t) \leq 2b_n \varphi_\gamma'(t)$
and we note that
$G_n(z) - G_n(z-t) \le \int_{z-t}^z 2b_n \varphi_\gamma'(s)\, \textrm{d} s = 2b_n (\varphi_\gamma(z) - \varphi_\gamma (z - t))$
, so
\begin{eqnarray*} \int_{\sigma_n}^{z-\sigma_n} G_n'(t) \big( G_n(z) - G_n(z - t) \big) \, \textrm{d} t &\,\le\,& 4b_n^2 \int_{\sigma_n}^{z-\sigma_n} \varphi_\gamma'(t) \big( \varphi_\gamma(z) - \varphi_\gamma (z - t) \big) \, \textrm{d} t \\[5pt] &\,\le\,& 4b_n^2 \int_{1}^{z-1} \varphi_\gamma'(t) \big( \varphi_\gamma(z) - \varphi_\gamma (z - t) \big) \, \textrm{d} t .\end{eqnarray*}
(Indeed, (4.7) implies that if
$z\geq (\eta \varepsilon)^{1/\gamma} \tau_n $
, then
$z> 2\sigma_n >2$
.) The integral on the right-hand side is
$\mathtt{cvl}_{\gamma, 1} (z)$
by our definition in (3.10). By Lemma 3, we get that
Observe that
which is bounded by
$(\tau_n^{2\gamma}/(1-2\gamma^3)^2) G_n(z)^2$
(see (4.17)). Therefore,
Recall that
$\zeta(2) \gamma^2 = 4(1 - \eta)^2 \varepsilon$
, and that
$\lim_{n\to \infty} b_n\tau_n^{\gamma}= 1$
(see (4.3)). Since
${1}/{(1-2\gamma^3)^2} < 1+ 6 \gamma^3$
(because
$\gamma < \tfrac13$
), there exists
$n_{10} (\eta, \, \varepsilon) \geq n_{11} (\eta, \, \varepsilon) $
such that for
$n\ge n_{10} (\eta, \, \varepsilon)$
, we have
We obtain (4.18) by (4.20), (4.21), and (4.19).
Let us consider
$\widetilde{\textrm{I I}}_n(z)$
. Let
$\varepsilon \in (0, \, \varepsilon_{5} (\eta))$
,
$n\geq n_{10} (\eta, \, \varepsilon)$
, and
$z\in [(\eta \varepsilon)^{1/\gamma} \tau_n, \, \gamma^2 \tau_n)$
. We start with the trivial inequality
$\widetilde{\textrm{I I}}_n(z) \le G_{n+1}(z) - G_n(z)$
, and look for an upper bound for
$G_{n+1}(z) - G_n(z)$
. Since
$\lim_{\varepsilon \to 0^+} (1 - (2+\eta)\varepsilon)^{1/\gamma} = 1$
and
$\lim_{\varepsilon \to 0^+} (\gamma^3/\varepsilon) = 0$
, there exists
$\varepsilon_{6} (\eta)\in (0, \, \min\{ \varepsilon_{5} (\eta), \, \eta/2\} )$
such that
We fix
$\varepsilon \in (0, \, \varepsilon_{6} (\eta))$
. Since
$\lim_{n\to \infty} \frac{\tau_n}{\tau_{n+1}} = (1 - (2+\eta)\varepsilon)^{1/\gamma}$
(by (4.3)), there exists an integer
$n_{12} (\eta, \, \varepsilon) \geq n_{11} (\eta, \,\varepsilon) $
such that
for
$n\geq n_{12} (\eta, \, \varepsilon)$
, which implies that
$z\in \left(\frac{1}{2} (\eta \varepsilon)^{1/\gamma} \tau_{n+1}, \, \gamma^2 \tau_{n+1}\right)$
; so applying (4.17) twice leads to
Note that
(because
$\gamma < \frac13$
), and that
(see (4.3)). Thus, there exists an integer
$n_{13} (\eta, \, \varepsilon) \geq n_{12} (\eta, \, \varepsilon)$
such that
$G_{n+1} (z) \leq (1 + 5 \gamma^3 ) \big( 1 - (2 + \eta) \varepsilon\big) G_n (z)$
for
$n\ge n_{13} (\eta, \, \varepsilon)$
and
$z\in [(\eta \varepsilon)^{1/\gamma} \tau_n, \, \gamma^2 \tau_n)$
; consequently,
Since
$\big(1 + 5 \gamma^3 \big) ( 1 - (2 + \eta) \varepsilon) -1 = 5\gamma^3 - (2 + \eta) \varepsilon - 5\gamma^3 (2 + \eta)\varepsilon < 5\gamma^3 - (2 + \eta) \varepsilon$
, which is smaller than
$-(2 + \tfrac12 \eta) \varepsilon$
(by (4.22)), we deduce that for
$n\ge n_{13} (\eta, \, \varepsilon)$
and
$z\in [(\eta \varepsilon)^{1/\gamma} \tau_n, \, \gamma^2 \tau_n)$
,
Finally, we turn to
$\widetilde{\textrm{I I I}}_n(z)$
, which is easy to estimate:
\begin{eqnarray} \widetilde{\textrm{I I I}}_n(z) &\,=\,& 2G_n(z) - 2G_n(z)^2 + \widetilde{\textrm{I}}_n(z) \nonumber \\[5pt] &\,\le\,& 2G_n(z) - 2G_n(z)^2 + 4 (1 - \eta)\varepsilon G_n(z)^2 \nonumber \\[5pt] &\,\le\,& 2G_n(z) - 2G_n(z)^2 + 4 \varepsilon G_n(z)^2 .\end{eqnarray}
Assembling (4.18), (4.23), and (4.24) yields that for
$\varepsilon \in (0, \, \varepsilon_{6} (\eta))$
,
$n\geq n_{13} (\eta,\, \varepsilon) $
, and
$z\in [(\eta \varepsilon)^{1/\gamma} \tau_n, \, \gamma^2 \tau_n)$
,

which is non-positive since
$\varepsilon< \varepsilon_{6} (\eta) < \frac{\eta}{2}$
. This completes the proof of Lemma 6 when
$z\in [(\eta \varepsilon)^{1/\gamma} \tau_n, \, \gamma^2 \tau_n)$
(i.e. in the second case).
Proof of Lemma 6: third case
$\gamma^2 \tau_n \le z < {\textrm{e}}^{1/\sqrt{\gamma}} \tau_n$
. We easily check that there exists
${q}_0 \in (0, \, \frac19)$
such that for all
${q} \in (0, q_0)$
the following holds true.
For any
${q} \in (0, \, {q}_0]$
, we define the function
Since
$\lim_{\theta\to 0^+} g_{q} (\theta) = 1$
,
$\lim_{\theta\to \infty} g_{q} (\theta) = 0$
, and
$g^\prime_{q} (\theta) = {q} ((\theta+1)^{{q}-1} - \theta^{{q} -1} ) < 0$
for
$\theta\in \mathbb{R}_+^*$
, it follows that
$g_{q}$
is a decreasing bijection from
$\mathbb{R}_+^*$
onto
$(0, \, 1)$
.
We collect some elementary properties of
$g_{q}$
.
Lemma 8. Let
${q} \in (0, \, {q}_0]$
. Then the following holds true.
-
(i) For
$a\in (0, \, {\textrm{e}}^{-1})$
and
$\theta \in [a, \, {\textrm{e}}^{1/\sqrt{{q}}}]$
, (4.29)
\begin{equation} g_{q} (\theta) \le \frac{6{q} \log \frac{1}{a}}{\theta + 1} .\end{equation}
-
(ii) For
$\theta \in [{q}^3, \, {\textrm{e}}^{1/\sqrt{{q}}})$
and
$r\in [1, \, 2]$
, (4.30)
\begin{equation} g_{q} \Big( \frac{\theta}{r} \Big) - g_{q} (\theta) \ge \frac{{q}(1 - \sqrt{{q}})(r -1) - {q} (r -1)^2}{\theta + 1}. \end{equation}
-
(iii) For
$\theta \in [0, \, {\textrm{e}}^{1/\sqrt{{q}}} ]$
, (4.31)
\begin{equation} \int_0^\theta |g_{q}' (t)| \, \big( g_{q} (\theta - t) - g_{q} (\theta) \big) \, \textrm{d} t \le (1 + 3\sqrt{{q}}) {q}^2 \frac{\zeta(2)}{\theta + 1} .\end{equation}
Proof. See Appendix B.
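Inequality (4.29) of Lemma 8(i) lends itself to a quick numerical check with $g_{q}(\theta)=(\theta+1)^{q}-\theta^{q}$ (consistent with the derivative $g^\prime_{q} (\theta) = {q} ((\theta+1)^{{q}-1} - \theta^{{q} -1} )$ recorded above). The sample values of ${q}$ and $a$ below are arbitrary, subject to ${q}<\frac19$ and $a<{\textrm{e}}^{-1}$ (a Python sketch, illustrative only):

```python
import math

def g(q, theta):
    # g_q(theta) = (theta + 1)^q - theta^q
    return (theta + 1) ** q - theta ** q

# Lemma 8(i): g_q(theta) <= 6*q*log(1/a)/(theta + 1)
# for a in (0, 1/e) and theta in [a, e^{1/sqrt(q)}]  (sampled check)
for q, a in ((0.05, 0.1), (0.1, 0.05)):
    upper = math.exp(1.0 / math.sqrt(q))
    for k in range(10001):
        theta = a + (upper - a) * k / 10000
        assert g(q, theta) <= 6 * q * math.log(1 / a) / (theta + 1) + 1e-12
```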
We proceed to the proof of Lemma 6 in the third case. Let
${q}_0\in (0, \, \frac19)$
be the small constant ensuring (4.25)–(4.27) hold. Fix
$\eta \in (0, \, 1)$
. We set
Let
$\varepsilon\in (0, \, \varepsilon_{7}(\eta))$
, where
$\varepsilon_{7}(\eta) \in (0, \, \min\{\varepsilon_{6}(\eta), \, \frac12\})$
is such that
The seemingly complicated form of the right-hand side of (4.35) is explained by the fact that it arises as the sum of several explicit inequalities. We could have used slightly simpler terms, but at the price of an extra layer of numbered constants, which we preferred to avoid for the sake of clarity.
We want to check (4.5) in Lemma 6 when
$z= \tau_n x$
, with
$x\in [\gamma^2, \, {\textrm{e}}^{1/\sqrt{\gamma}})$
. To this end we first need to find a lower bound for
${{\mathbb P}} (Z_{n+1}> x\tau_n /(1+ \eta \gamma) )$
. More precisely, in the third case, proving (4.5) amounts to showing that there exists
${\varepsilon}'\in (0, \infty)$
(depending on
$\eta$
) such that for all
${\varepsilon}\in (0, {\varepsilon}')$
there is
$n({\varepsilon}')$
such that for all
$n\geq n({\varepsilon}')$
and all
$ x\in [\gamma^2, \, {\textrm{e}}^{1/\sqrt{\gamma}})$
:
where
$\Psi_{ p}$
is the transformation in (2.9).
To this end, we first study the tail probability of
$\frac{Z_n}{\tau_n}$
. Let
$a>0$
. By (4.4), for all sufficiently large $n$ such that
$a \tau_n \ge \sigma_n$
, and all
$\theta\ge a$
,
\begin{eqnarray*} {{\mathbb P}} (Z_n > \theta \tau_n) &\,=\, & b_n\tau_n^{\gamma} \big( (\theta + 1)^\gamma - \theta^\gamma \big) - b_n\tau_n^{-\gamma} \big( \theta^{-\gamma} - (\theta + 1)^{-\gamma}\big) \\[5pt] &\,=\,& g_\gamma (\theta) \big( b_n\tau_n^{\gamma} - b_n\tau_n^{-\gamma} \theta^{-\gamma} (\theta+1)^{-\gamma} \big) ,\end{eqnarray*}
where
$g_\gamma$
is the function in (4.28). Hence,
Since
$\lim_{n\to \infty} b_n\tau_n^{\gamma} = 1$
and
$\lim_{n\to \infty} b_n\tau_n^{-\gamma} = 0$
(by (4.3)), this implies that
Note that
$\sup_{\theta \in [a, \, \infty )} g_\gamma (\theta) = g_\gamma (a) < \infty$
; thus, we also have
$\lim_{n\to \infty} \sup_{\theta \in [a, \, \infty )} \big| {{\mathbb P}} (Z_n > \theta \tau_n) - g_\gamma (\theta)\big| = 0$
. Taking
$a= \frac12 {\textrm{e}}^{-4/\sqrt{\gamma}}$
yields the existence of a positive integer
$n_{14} (\eta, \, \varepsilon)$
such that for
$n\ge n_{14}( \eta, \, \varepsilon)$
and
$\theta \in [\frac12 {\textrm{e}}^{-4/\sqrt{\gamma}}, \, \infty)$
,
In order to deal with
${{\mathbb P}} ( Z_{n+1} > x \tau_n/({1+\eta \gamma}))$
on the right-hand side of (4.36), we write
By (4.3),
$\lim_{n\to \infty} b_n \tau^\gamma_n =1$
, whereas by definition,
$\frac{b_{n+1}}{b_n} = 1-(2 + \eta) \varepsilon$
. By using the inequality
$(1- y)^{-z} \ge 1+ yz$
for
$y \in (0, \, 1)$
and
$z \ge 0$
, we therefore get
In addition, by (4.33),
$\lim_{n\to \infty} \rho_n (\gamma) < 2$
. Therefore, there exists an integer
$n_{15} (\eta, \, \varepsilon) \geq n_{14} (\eta, \, \varepsilon)$
such that for
$n\ge n_{15} (\eta, \, \varepsilon)$
,
Let
$x\in [\gamma^2, \, {\textrm{e}}^{1/\sqrt{\gamma}})$
. We have
and
(by the elementary inequality
$z^2 \le {\textrm{e}}^{4\sqrt{z}}$
, for
$z \ge 1$
), so we are entitled to apply (4.39) to
$\theta \;:\!=\; x/{\rho_n (\gamma)}$
to see that
To prove (4.36), we need to find an appropriate upper bound for
$[\Psi_{ \frac12 + \varepsilon}(\nu_n')] ((x \tau_n, \, \infty))$
. Let
$Z_n^* \;:\!=\; \frac{Z_n}{\tau_n}$
, and let
$\widehat{Z}_n^*$
denote an independent copy of
$Z_n^*$
. For
$\theta \in \big(\gamma^3, \, {\textrm{e}}^{1/\sqrt{\gamma}}\big)$
, we define the event
$B_n(\theta) \;:\!=\; \{ Z^*_n + \widehat{Z}_n^* \ge \theta \}$
, and note that
\begin{eqnarray*} {{\mathbb P}} (B_n(\theta)) &\,\le\,& {{\mathbb P}} \big(Z^*_n \geq \theta \big) + {{\mathbb P}} \big( \widehat{Z}_n^* \geq \theta ; \, Z^*_n < \theta \big ) + {{\mathbb P}} \big( \widehat{Z}_n^* < \theta ; \, Z^*_n < \theta; \, B_n(\theta)\big) \\[5pt] & \,=\, & 2{{\mathbb P}} \big(Z^*_n \geq \theta \big) - [{{\mathbb P}} \big(Z^*_n \geq \theta\big)]^2 + {{\mathbb P}} \big( \widehat{Z}_n^* < \theta ; \, Z^*_n < \theta; \, B_n(\theta)\big) .\end{eqnarray*}
Therefore,
\begin{eqnarray} \big[\Psi_{ \frac12 + \varepsilon}(\nu_n')\big] ((x \tau_n, \, \infty)) & \,=\, & \bigg(\frac{1}{2} + \varepsilon \bigg) {{\mathbb P}} (B_n(x)) + \bigg(\frac{1}{2} - \varepsilon \bigg)\, [{{\mathbb P}}( Z^*_n \ge x)]^2 \nonumber \\[5pt] &\,\le\,& (1 + 2\varepsilon) {{\mathbb P}} (Z^*_n \geq x) + \bigg(\frac{1}{2} + \varepsilon \bigg) {{\mathbb P}} ( \widehat{Z}_n^* < x ; \, Z^*_n < x; \, B_n(x)) .\end{eqnarray}
We first bound the second term on the right-hand side of (4.41). To this end, we write
$x_1 \;:\!=\; x - {\textrm{e}}^{-4/\sqrt{\gamma}}$
, so by (4.26),
$x_1 \ge \gamma^2 - {\textrm{e}}^{-4/\sqrt{\gamma}} > \gamma^3 > {\textrm{e}}^{-1/\sqrt{\gamma}}$
. We consider
$\{ \widehat{Z}_n^* < x\}$
as the union of
$\{ \widehat{Z}_n^* < {\textrm{e}}^{-4/\sqrt{\gamma}} \}$
,
$\{ \widehat{Z}_n^* \in [{\textrm{e}}^{-4/\sqrt{\gamma}}, \, x_1]\}$
, and
$\{ \widehat{Z}_n^* \in (x_1, \, x)\}$
. On
$\{ \widehat{Z}_n^* < {\textrm{e}}^{-4/\sqrt{\gamma}} \} \cap B_n(x)$
, we have
$Z_n > x_1$
. This implies that
\begin{eqnarray} &&\big[\Psi_{ \frac12 + \varepsilon}(\nu_n')\big] ((x \tau_n, \, \infty)) \nonumber \\[5pt] & \,\le\, & (1 + 2\varepsilon) {{\mathbb P}} (Z^*_n \geq x) + (1 + 2\varepsilon) {{\mathbb P}} (Z^*_n \in (x_1, \, x) ) \nonumber \\[5pt] && +\, \bigg(\frac{1}{2} + \varepsilon \bigg) {{\mathbb P}} \big( \widehat{Z}_n^* \in \big[{\textrm{e}}^{-4/\sqrt{\gamma}}, \, x_1\big] ; \, Z^*_n \in \big(x - \widehat{Z}_n^*, \, x\big)\big) \nonumber \\[5pt] & \,=\, & (1 + 2\varepsilon) {{\mathbb P}} (Z^*_n > x_1) + \bigg(\frac{1}{2} + \varepsilon \bigg) {{\mathbb P}} \big( \widehat{Z}_n^* \in \big[{\textrm{e}}^{-4/\sqrt{\gamma}}, \, x_1\big] ; \, Z^*_n \in \big(x - \widehat{Z}_n^*, \, x\big)\big) . \end{eqnarray}
We then estimate the two probability expressions on the right-hand side. Since we have proved that
$x_1 > {\textrm{e}}^{-1/\sqrt{\gamma}}$
, (4.39) applies with
$\theta= x_1$
and we get
${{\mathbb P}} (Z^*_n \ge x_1) \le g_\gamma (x_1) + {\textrm{e}}^{-2/\sqrt{\gamma}}$
. By the mean-value theorem,
$g_\gamma (x_1) - g_\gamma (x) = -(x - x_1)g_\gamma'(y) = - {\textrm{e}}^{-4/\sqrt{\gamma}}g_\gamma'(y)$
for some
$y\in [x_1, \, x]$
. Since
$-g_\gamma'(y) = \gamma [y^{-(1-\gamma)} - (y + 1)^{-(1-\gamma)}] \le \gamma y^{-(1-\gamma)} \le \gamma (x_1)^{-(1-\gamma)} < \gamma^{1-3(1-\gamma)} \le \gamma^{-2}$
(we have used the fact that
$x_1 > \gamma^3$
), we obtain that
(here we have also used the elementary inequality
$z^2 \le {\textrm{e}}^{2\sqrt{z}}$
for
$z\ge 1$
). Therefore,
By (4.29) (applied to
$a= {\textrm{e}}^{-1/\sqrt{\gamma}}$
and
${q} = \gamma$
, which is possible since
$\gamma < {q}_0$
),
$g_\gamma (x) \le \frac{6 \gamma^{1/2}}{x+1}$
. Since
$2(1 + 2\varepsilon) \le 3$
, this implies that
\begin{eqnarray} (1 + 2\varepsilon) {{\mathbb P}} (Z^*_n > x_1) &\,\le\, & g_\gamma (x)+ 2{\varepsilon} g_\gamma (x) + 2(1+2{\varepsilon}) {\textrm{e}}^{-2/\sqrt{\gamma}} \nonumber\\[5pt] & \,\leq\, & g_\gamma (x) + \frac{12 \varepsilon \gamma^{1/2}}{x + 1} + 3{\textrm{e}}^{-2/\sqrt{\gamma}} . \end{eqnarray}
We next prove an upper bound for
${{\mathbb P}} (\widehat{Z}_n^* \in [{\textrm{e}}^{-4/\sqrt{\gamma}}, \, x_1] ; \, Z^*_n \in (x - \widehat{Z}_n^*, \, x))$
thanks to (4.39). We fix
$t\in (0, \, x_1)$
and we first observe that
$x\geq x-t \geq x-x_1= {\textrm{e}}^{-4/\sqrt{\gamma}}$
. Then (4.39) applies with
$\theta$
equal to $x$ and to
$x-t$
and we get
To simplify, we next denote by
$f_{Z^*_n}({\cdot})$
the density of
$Z^*_n$
and we set
$\gamma_1 \;:\!=\; {\textrm{e}}^{-4/\sqrt{\gamma}} = x - x_1$
. Then
\begin{align} {{\mathbb P}} ( \widehat{Z}_n^* \in [\gamma_1, \, x_1] ; \, Z^*_n \in (x - \widehat{Z}_n^*, \, x)) &= \int_{\mathbb R} f_{Z^*_n} (t) \textbf 1_{[\gamma_1 , x_1]} (t) {{\mathbb P}} \big( Z^*_n \in (x-t, x] \big) \, \textrm{d} t \nonumber \\[5pt] &\le \int_{\mathbb R}\textbf 1_{[\gamma_1 , x_1]} (t) f_{Z^*_n}(t) \big( g_\gamma(x - t) - g_\gamma(x) \big) \, \textrm{d} t + 2{\textrm{e}}^{-2/\sqrt{\gamma}}. \end{align}
Then, by Fubini, observe that
\begin{align*} \int_{\mathbb R} \textbf 1_{[\gamma_1 , x_1]} (t) f_{Z^*_n}(t) \big( g_\gamma(x - t) - g_\gamma(x) \big) \, \textrm{d} t &= \int_{\mathbb R} \textrm{d} t \int_{\mathbb R} \textrm{d} s \, f_{Z^*_n} (t) \textbf 1_{[\gamma_1 , x_1]} (t) \textbf 1_{[0, t]} (x-s) \big( - g'_\gamma(s) \big)\\[5pt] & = \int_{0}^{x_1} \textrm{d} u \int_{\mathbb R} \textrm{d} t \, f_{Z^*_n} (t) \textbf 1_{[\gamma_1 , x_1]} (t) \textbf 1_{[u, \infty)} (t) \big|g'_\gamma(x-u) \big| \\[5pt] &= \int_{0}^{x_1} \big|g'_\gamma(x-u) \big| {{\mathbb P}} \big( Z^*_n \in [\gamma_1 \vee u, x_1]\big) \, \textrm{d} u. \end{align*}
Note that if
$u\in [0, x_1]$
, then
$x_1 \geq u\vee \gamma_1 \geq \gamma_1 > \frac12 {\textrm{e}}^{-4/\sqrt{\gamma}}$
. Thus, (4.39) applies and we get
\begin{align}& \int_{0}^{x_1} \big|g'_\gamma(x-u) \big| {{\mathbb P}} ( Z^*_n \in [\gamma_1 \vee u, x_1]) \textrm{d} u \leq \int_{0}^{x_1} \big|g'_\gamma(x - u) \big| ( g_\gamma (u \vee \gamma_1) - g_\gamma (x_1) + 2e^{-2/\sqrt{\gamma}}) \textrm{d} u\nonumber \\[5pt] &\leq \int_{0}^{x} \big|g'_\gamma(x- u) \big| \big( g_\gamma (u) - g_\gamma (x) \big) \, \textrm{d} u + 2e^{-2/\sqrt{\gamma}} \int_{0}^{x_1} \big|g'_\gamma(x- u) \big| \, \textrm{d} u ,\end{align}
since
$g_\gamma$
is decreasing. Then
\begin{eqnarray} \int_{0}^{x_1} \big|g'_\gamma(x-u) \big| {{\mathbb P}} \big( Z^*_n & \,\in\, & [\gamma_1 \vee u, x_1]\big) \textrm{d} u \nonumber \\[5pt] &\,\leq \, & \int_{0}^{x} \big|g'_\gamma(t) \big| \big( g_\gamma (x - t) - g_\gamma (x) \big) \textrm{d} t + 2e^{-2/\sqrt{\gamma}} \big( g_{\gamma} (\gamma_1) - g_\gamma (x_1) \big)\nonumber \\[5pt] &\,\leq \, & (1 + 3\sqrt{\gamma}\,) \gamma^2 \, \frac{\zeta(2)}{x + 1} + 2e^{-2/\sqrt{\gamma}}\end{eqnarray}
by (4.31), Lemma 8(iii) (with
${q}=\gamma< {q}_0$
and
$ \theta= x < {\textrm{e}}^{1/\sqrt{\gamma}}$
), and since
$x - x_1= \gamma_1 $
and
$ g_{\gamma} (\gamma_1) - g_\gamma (x_1)\leq g_{\gamma} (\gamma_1) < 1$
. By (4.45) and (4.47) we then get
Since
$(\frac{1}{2} + \varepsilon)(1 + 3\sqrt{\gamma}\,) \le \frac{1}{2} + 2 \sqrt{\gamma}$
by (4.34) and
$\frac{1}{2} + \varepsilon \le 1$
, it follows that
Combined with (4.42) and (4.44), we obtain that
\begin{align*} [\Psi_{ \frac12 + \varepsilon}(\nu_n')] ((x \tau_n, \, \infty)) & \le g_\gamma (x) + 3{\textrm{e}}^{-2/\sqrt{\gamma}} + \frac{12 \varepsilon \gamma^{1/2}}{x+1} + \bigg(\frac{1}{2} + 2\sqrt{\gamma}\bigg) \gamma^2 \frac{\zeta(2)}{x + 1} +4{\textrm{e}}^{-2/\sqrt{\gamma}} \\[5pt] & = g_\gamma (x) + \frac{12\varepsilon \gamma^{1/2} + \bigg(\frac{1}{2} + 2\sqrt{\gamma}\bigg) \gamma^2 \zeta(2)}{x + 1} + 7{\textrm{e}}^{-2/\sqrt{\gamma}}.\end{align*}
In view of (4.40), this yields that
\begin{eqnarray*} && {{\mathbb P}} \bigg( Z_{n+1} > \frac{x \tau_n}{1+ \eta \gamma} \bigg) - [\Psi_{ \frac12 + \varepsilon}(\nu_n')] ((x \tau_n, \, \infty)) \\[5pt] &\,\ge\,& g_\gamma \bigg( \frac{x}{1 + \eta \gamma + \frac{(2+ \eta) \varepsilon}{\gamma}} \bigg) - g_\gamma (x) - {\textrm{e}}^{-2/\sqrt{\gamma}} - \frac{12\varepsilon \gamma^{1/2} + (\frac{1}{2} + 2\sqrt{\gamma}\,) \gamma^2 \zeta(2)}{x + 1} - 7{\textrm{e}}^{-2/\sqrt{\gamma}} \\[5pt] &\,=\,& g_\gamma \bigg( \frac{x}{1 + \eta \gamma + \frac{(2+ \eta) \varepsilon}{\gamma}} \bigg) - g_\gamma (x) - \frac{\frac{\zeta(2)}{2} \gamma^2 + (2 \gamma^{5/2} \zeta(2) + 12\varepsilon \gamma^{1/2})}{x + 1} - 8{\textrm{e}}^{-2/\sqrt{\gamma}} .\end{eqnarray*}
We easily check that we can apply (4.30) with
${q}= \gamma <{q}_0$
,
$\theta = x \in [\gamma^3, {\textrm{e}}^{1/\sqrt{\gamma}} ]$
, and
$r= 1 + \eta \gamma + \frac{(2 + \eta) \varepsilon}{\gamma} \in [1,2]$
. Then we get
\begin{eqnarray*} g_\gamma \bigg( \frac{x}{1 + \eta \gamma + \frac{(2 + \eta) \varepsilon}{\gamma}} \bigg) - g_\gamma (x) &\,\ge\,& \frac{\gamma(1 - \gamma^{1/2})\left(\eta \gamma + \frac{(2 + \eta) \varepsilon}{\gamma}\right) - \gamma \left(\eta \gamma + \frac{(2 + \eta) \varepsilon}{\gamma}\right)^2}{x + 1} \\[5pt] &\,=\,& \frac{\eta \gamma^2 + (2 + \eta) \varepsilon- \left(\eta \gamma + \frac{(2 + \eta) \varepsilon}{\gamma}\right) (\gamma^{3/2} + \eta \gamma^2 + (2+ \eta) \varepsilon)}{x + 1} .\end{eqnarray*}
As a consequence,
where
Recall that
$\eta\in (0, \, 1)$
is a constant and recall from (4.32), (4.33), (4.34), and (4.35) that
$\gamma = ({2}/{\sqrt{\zeta(2)}}) (1 - \eta)\sqrt{\varepsilon}$
. Thus,
where
Since
$8 (x +1) {\textrm{e}}^{-2/\sqrt{\gamma}} \le 8 ({\textrm{e}}^{1/\sqrt{\gamma}} + 1) {\textrm{e}}^{-2/\sqrt{\gamma}} \le 16 {\textrm{e}}^{-1/\sqrt{\gamma}}$
, it follows from (4.35) that
Therefore, we have found
$\varepsilon_{7}(\eta) \in (0, 1)$
such that for all
$\varepsilon\in (0, \, \varepsilon_{7}(\eta))$
, there is
$ n_{15} (\eta, \, \varepsilon)$
such that (4.36) holds true for all
$n\ge n_{15} (\eta, \, \varepsilon)$
and all
$x\in [\gamma^2 , {\textrm{e}}^{1/\sqrt{\gamma}} ]$
, which completes the proof of Lemma 6 in the third case
$\gamma^2 \tau_n \leq z < {\textrm{e}}^{1/\sqrt{\gamma}} \tau_n$
.
Proof of Lemma 6: fourth (and last) case
$z \ge {\textrm{e}}^{1/\sqrt{\gamma}} \tau_n$
. Let
$\eta, {\varepsilon} \in (0, \, 1)$
and recall the definition of
$\gamma$
from (4.1): namely,
$\gamma = \gamma(\varepsilon, \, \eta) \;:\!=\; 2\zeta(2)^{-1/2} (1 - \eta)\sqrt{\varepsilon}$
. We fix
$\eta$
and we easily check that we can find
$\varepsilon_{8}(\eta) \in (0, \, \varepsilon_{7}(\eta))$
such that for all
${\varepsilon} \in (0, \varepsilon_{8}(\eta))$
, the following inequalities hold true:
The following lemma gives an estimate of
${{\mathbb P}} (Z_n \ge \tau_n \theta)$
when
$\theta$
is sufficiently large.
Lemma 9. Let
$\eta \in (0, \, 1)$
. Let
${\varepsilon} \in (0, \, \varepsilon_{8} (\eta))$
and
$n \geq n_{15} (\eta, \, {\varepsilon})$
. Then
Furthermore,
Proof. For
$\theta \in \mathbb{R}^*_+$
, write as before
Thus,
On the other hand,
Thus,
By (4.38), for
$n\ge n_{15}( \eta, \, \varepsilon)$
and
$\theta \in [\frac12 {\textrm{e}}^{-4/\sqrt{\gamma}}, \, \infty)$
, we get
If
$\theta > {\textrm{e}}^{1/(2\sqrt{\gamma})}$
, then
$\frac{1}{1+\theta} \le \frac{1}{\theta} \le {\textrm{e}}^{-1/(2\sqrt{\gamma})}$
, so
$(1 - {\textrm{e}}^{-2/\sqrt{\gamma}})\frac{\theta}{1+\theta} \ge 1 - \frac{1}{1+\theta} - {\textrm{e}}^{-2/\sqrt{\gamma}} \ge 1 - 2{\textrm{e}}^{-1/(2\sqrt{\gamma})}$
, and (4.49) follows.
To get (4.50), it suffices to take
$\theta \;:\!=\; {\textrm{e}}^{3/(4\sqrt{\gamma})}$
, and note that in this case,
whereas
$1 + {\textrm{e}}^{-2/\sqrt{\gamma}} \le 2$
, so (4.50) follows from the second inequality in (4.51).
We proceed to the proof of Lemma 6 in the fourth case:
$z \ge {\textrm{e}}^{1/\sqrt{\gamma}} \tau_n$
. Let
${\varepsilon} \in (0, \, \varepsilon_{8} (\eta))$
,
$n \geq n_{15} (\eta, \, {\varepsilon})$
, and
$x \in [{\textrm{e}}^{1/\sqrt{\gamma}}, \, \infty)$
. We write as before
$Z_n^* \;:\!=\; \frac{Z_n}{\tau_n}$
and
$\widehat{Z}_n^*$
denotes an independent copy of
$Z_n^*$
. We have, for
$x' \in (0, \, x)$
,
\begin{eqnarray*} {{\mathbb P}}\big(Z^*_n + \widehat{Z}^*_n > x\big) &\,\le\,& {{\mathbb P}}(Z^*_n > x') + {{\mathbb P}}(\widehat{Z}^*_n > x') + {{\mathbb P}}(\widehat{Z}^*_n > x - x', \, Z^*_n > x-x') \\[5pt] & \,=\, &2 {{\mathbb P}}(Z^*_n > x') + \big( {{\mathbb P}}(Z^*_n > x - x') \big)^2 .\end{eqnarray*}
We now take
$x' \;:\!=\; (1 - {\textrm{e}}^{-1/(4\sqrt{\gamma}\,)})x$
, so
$x' \in [{\textrm{e}}^{1/(2\sqrt{\gamma})}, \, \infty)$
(because
$1 - {\textrm{e}}^{-1/(4\sqrt{\gamma}\,)} > {\textrm{e}}^{-1/(2\sqrt{\gamma})}$
as
$\sqrt{\gamma}\le \tfrac{1}{20}$
according to (4.48)) and
$x - x'\in [{\textrm{e}}^{3/(4\sqrt{\gamma})}, \, \infty)$
. By (4.50),
Hence,
We easily check that we can apply (4.49) to
$\theta =x'$
and to
$\theta= x - x'$
, which yields
$${{\mathbb P}}\big(Z^*_n + \widehat{Z}^*_n > x\big)\le\left( 2\left(\frac{x}{x'}\right)^{1-\gamma} +{\textrm{e}}^{-3/(4\sqrt{\gamma})}\, \left(\frac{x}{x - x'}\right)^{1-\gamma} \right) \frac{1 + {\textrm{e}}^{-2/\sqrt{\gamma}}}{1 - 2{\textrm{e}}^{-1/(2\sqrt{\gamma})}} \, {{\mathbb P}}(Z^*_n > x) .$$
We have
(noting that
$(1 + {\textrm{e}}^{-1/(5\sqrt{\gamma})})(1 - {\textrm{e}}^{-1/(4\sqrt{\gamma})}) > 1$
because
$\sqrt{\gamma} \le \tfrac{1}{20}$
by (4.48)),
Thus,
which implies that
\begin{eqnarray*} {{\mathbb P}}\big(Z^*_n + \widehat{Z}^*_n > x\big) & \,\le\, & 2\big(1 + 2{\textrm{e}}^{-1/(5\sqrt{\gamma})}\big)\, \frac{1 + {\textrm{e}}^{-2/\sqrt{\gamma}}}{1 - 2{\textrm{e}}^{-1/(2\sqrt{\gamma})}} \, {{\mathbb P}}(Z^*_n > x) \\[5pt] & \,\le\, & \frac{2\big(1 + 2{\textrm{e}}^{-1/(5\sqrt{\gamma})}\big)^2}{1 - 2{\textrm{e}}^{-1/(2\sqrt{\gamma})}}\, {{\mathbb P}}(Z^*_n > x) .\end{eqnarray*}
Let
$V_n \;:\!=\; (Z_n + \widehat{Z}_n) \mathscr{E}_n + (Z_n \wedge \widehat{Z}_n) (1 - \mathscr{E}_n)$
, where
$\mathscr{E}_n$
denotes a Bernoulli random variable that is independent of
$(Z_n, \widehat{Z}_n)$
and such that
${{\mathbb P}}(\mathscr{E}_n = 1) = \frac{1}{2} + \varepsilon$
. Then
We use the trivial inequality
$\frac12 - \varepsilon \le 1$
. By (4.52)
${{\mathbb P}}(Z^*_n > x) \le {{\mathbb P}}(Z^*_n > x - x') \le {\textrm{e}}^{-3/(4\sqrt{\gamma})}$
. Therefore,
\begin{eqnarray*} {{\mathbb P}} (V_n \geq x \tau_n) & \,\le\, & (1 + 2\varepsilon) \, \frac{\big(1 + 2{\textrm{e}}^{-1/(5\sqrt{\gamma})}\big)^2}{1 - 2{\textrm{e}}^{-1/(2\sqrt{\gamma})}} \, {{\mathbb P}}(Z^*_n > x) +{\textrm{e}}^{-3/(4\sqrt{\gamma})} \, {{\mathbb P}}(Z^*_n > x) \\[5pt] & \,\le\, & (1 + 2\varepsilon) \, \frac{\big(1 + 2{\textrm{e}}^{-1/(5\sqrt{\gamma})}\big)^3}{1 - 2{\textrm{e}}^{-1/(2\sqrt{\gamma})}} \, {{\mathbb P}}(Z^*_n > x) .\end{eqnarray*}
To complete the proof of the lemma, we observe that

We can apply (4.49) to
$x \geq {\textrm{e}}^{1/\sqrt{\gamma}} > {\textrm{e}}^{1/(2\sqrt{\gamma})}$
and we get
${{\mathbb P}}(Z^*_n > x) \le ({\gamma}/{x^{1-\gamma}}) (1 + {\textrm{e}}^{-2/\sqrt{\gamma}})$
, which is bounded by
$({\gamma}/{x^{1-\gamma}}) \big(1 + 2{\textrm{e}}^{-1/(5\sqrt{\gamma})}\big)$
. Thus,
We look for a lower bound for
${{\mathbb P}} ( Z^*_{n+1} > ({\tau_n}/{\tau_{n+1}}) x )$
. By definition,
$\lim_{n\to \infty} \tau_{n}/\tau_{n+1} = (1 - (2 + \eta)\varepsilon)^{1/\gamma}$
and
$\lim_{{\varepsilon} \to 0^+} (1 - (2 + \eta)\varepsilon)^{1/\gamma} = 1$
. Therefore, there exists
$\varepsilon_{9} (\eta) \in (0,\, \varepsilon_{8}(\eta))$
such that for
${\varepsilon} \in (0, \,\varepsilon_{9} (\eta))$
, there is an integer
$n_{16} (\eta, {\varepsilon}) \ge n_{15} (\eta, {\varepsilon})$
such that for all
$n\geq n_{16} (\eta, \, {\varepsilon})$
,
Since
$\frac{\tau_{n}}{\tau_{n+1}} x \ge {\textrm{e}}^{1/(2 \sqrt{\gamma}) }$
, (4.49) applies to
$\theta \;:\!=\; \frac{\tau_{n}}{\tau_{n+1}} x$
and for all
${\varepsilon} \in (0, \, \varepsilon_{9} (\eta))$
and all
$n\ge n_{16} (\eta, \, {\varepsilon})$
, we get
\begin{eqnarray*} {{\mathbb P}} \bigg( Z^*_{n+1} > \frac{\tau_n}{\tau_{n+1}} x \bigg) & \,\ge\, & \big( 1 - 2{\textrm{e}}^{-1/(2\sqrt{\gamma})}\big) \Big( \frac{ \tau_{n+1}}{\tau_{n}} \Big)^{1-\gamma} \frac{\gamma}{x^{1-\gamma}} \\[5pt] & \,\ge\, & \big( 1 - 2{\textrm{e}}^{-1/(2\sqrt{\gamma})}\big)^2 \, (1 - (2 + \eta)\varepsilon )^{-(1-\gamma) /\gamma} \frac{\gamma}{x^{1-\gamma}} .\end{eqnarray*}
Going back to (4.53), for all
${\varepsilon} \in (0, \, \varepsilon_{9} (\eta))$
, all
$n\ge n_{16} (\eta, {\varepsilon})$
, and all
$x \in [{\textrm{e}}^{1/\sqrt{\gamma}} , \, \infty)$
, we obtain

which is negative according to (4.48). This implies Lemma 6 in the fourth case, and thus completes the proof of the lemma.
5. Proof of Theorem 2
Let
$({\mathtt{Graph}}_n(p))_{n\in \mathbb N}$
be the sequence of graphs that are constructed as explained in the introduction, i.e.
${\mathtt{Graph}}_{n+1}(p)$
is obtained by replacing each edge of
${\mathtt{Graph}}_n(p)$
, either by two edges in series with probability
$p = \tfrac12 + {\varepsilon}$
, or by two parallel edges with probability
$1-p= \tfrac12 - {\varepsilon}$
, whereas
${\mathtt{Graph}}_0(p)$
is the graph of two vertices connected by an edge.
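This top-down description yields the standard distributional recursion for the two-point distance: a pair of independent copies of $D_n(p)$ is summed (series replacement) with probability $p$ and minimized (parallel replacement) with probability $1-p$, exactly the mechanism of the variable $V_n$ used in Section 4. As an illustrative aside (not part of the proof), here is a minimal pool-based Monte Carlo sketch of this recursion; all names and parameter values are hypothetical:

```python
import random

def evolve(pool, p, rng):
    """One renormalization step for the law of D_n(p): each new sample
    combines two independent copies, in series (sum) with probability p
    and in parallel (min) with probability 1 - p."""
    return [
        (a + b) if rng.random() < p else min(a, b)
        for a, b in ((rng.choice(pool), rng.choice(pool)) for _ in range(len(pool)))
    ]

rng = random.Random(0)
pool = [1] * 5000            # D_0(p) = 1: a single edge between a and z
for _ in range(8):           # eight renormalization steps, p = 1/2 + epsilon
    pool = evolve(pool, 0.5 + 0.01, rng)
mean_distance = sum(pool) / len(pool)
```

In the degenerate cases the recursion is deterministic: $p=1$ doubles every distance at each step, while $p=0$ keeps all distances equal to $1$.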
Let m be a non-negative integer. We construct a Galton–Watson branching process
$(Z^{_{(m)}}_k)_{k\in \mathbb N}$
whose offspring distribution is the law of
$D_m(p)$
, such that
$Z^{_{(m)}}_1 = D_m(p)$
and such that
Indeed, suppose that
$D_{km}(p) \leq Z^{_{(m)}}_k$
. Then
${\mathtt{Graph}}_{(k+1)m}(p)$
is obtained by replacing each edge of
${\mathtt{Graph}}_{km}(p)$
by an independent copy of
${\mathtt{Graph}}_m(p)$
. Choose a geodesic path in
${\mathtt{Graph}}_{km}(p)$
and denote by
$D_{m,j}(p)$
the distance joining the extreme vertices of the graph
${\mathtt{Graph}}_{m, j}(p)$
which replaces the jth edge of the specified geodesic path of
${\mathtt{Graph}}_{km}(p)$
in the recursive construction of
${\mathtt{Graph}}_{(k+1)m}(p)$
from
${\mathtt{Graph}}_{km}(p)$
. Conditionally given
${\mathtt{Graph}}_{km}(p)$
, the
${\mathtt{Graph}}_{m,j}(p)$
are independent and identically distributed (i.i.d.) with the same law as
${\mathtt{Graph}}_m(p)$
, as mentioned previously. This entails
$D_{(k+1)m} (p) \leq \sum_{1\leq j \leq D_{km}(p) } D_{m,j}(p)$
. Let
$(\Delta (k,j))_{k,j\in \mathbb N}$
be an array of i.i.d. random variables with the same law as
$D_m(p)$
. Assume, furthermore, that
$(\Delta (k,j))_{k,j\in \mathbb N}$
is independent of the graphs
$({\mathtt{Graph}}_n(p))_{n\in \mathbb N}$
. Then, we set
$D_{m,j}(p)= \Delta (k,j - D_{km} (p))$
for all integers
$j > D_{km}(p)$
and we set
$Z^{_{(m)}}_{k+1} = \sum_{1\leq j \leq Z^{(m)}_k} D_{m, j}(p)$
. Therefore,
$ D_{(k+1)m} (p) \leq Z^{_{(m)}}_{k+1}$
and conditionally given
$Z^{_{(m)}}_k$
,
$Z^{_{(m)}}_{k+1} $
is distributed as the sum of
$Z^{_{(m)}}_k$
i.i.d. random variables having the same law as
$D_m(p)$
. This completes the proof of (5.1).
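As an illustrative sketch (not the coupling used above), the dominating branching process can be simulated by drawing offspring from the series/parallel recursion for $D_m(p)$ and iterating $Z^{(m)}_{k+1} = \sum_{1\le j \le Z^{(m)}_k} D_{m,j}(p)$; the function names and parameter values below are hypothetical:

```python
import random

def sample_D(m, p, rng):
    """Sample the two-point distance of an independent copy of Graph_m(p)
    via the series/parallel distributional recursion (D_0 = 1)."""
    if m == 0:
        return 1
    a, b = sample_D(m - 1, p, rng), sample_D(m - 1, p, rng)
    return (a + b) if rng.random() < p else min(a, b)

def gw_step(z, m, p, rng):
    """One generation of the dominating branching process:
    Z_{k+1} is a sum of Z_k i.i.d. copies of D_m(p)."""
    return sum(sample_D(m, p, rng) for _ in range(z))

rng = random.Random(1)
z = 1
for _ in range(4):
    z = gw_step(z, m=2, p=0.75, rng=rng)
```

For $p=1$ one has $D_m(p) = 2^m$ deterministically, so the process grows exactly like $(2^m)^k$, consistent with the martingale normalization by ${\mathbb{E}}[D_m(p)]^k$.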
For all
$k\in \mathbb N$
, we set
$W^{_{(m)}}_k= Z^{_{(m)}}_k / {{\mathbb{E}}} [D_m(p)]^k$
. Then
$(W^{_{(m)}}_k)_{k\in \mathbb N}$
is a martingale which is bounded in
$L^2$
(because the support of the law of
$D_m(p)$
is finite). Therefore,
${{\mathbb P}}$
-a.s.
$\lim_{k\to \infty}W^{_{(m)}}_k = W^{_{(m)}}_\infty >0$
(because there is no extinction since
${{\mathbb P}} (D_m(p) = 0) = 0$
). Since
$(D_n(p))_{n\in \mathbb N}$
is a non-decreasing sequence of random variables, we
${{\mathbb P}}$
-a.s. get for all
$n\in \mathbb N$
that
Therefore, for all
$m\geq 2$
,
which implies the last inequality in (1.4) in Theorem 2.
Let us prove the first inequality in (1.4). To this end, let
$Y_n$
be a random variable with law
$\nu_n$
as defined in Lemma 1. As already explained in the proof of the lower bound of Theorem 1, for all
$\eta \in (0, \eta_0)$
, there exists
$ {\varepsilon}_\eta \in (0, 1/2)$
such that for all
$\varepsilon\in (0, \, {\varepsilon}_\eta)$
, there is
$n_{\eta, {\varepsilon}} \in \mathbb N$
and
$\theta_{\eta, {\varepsilon}}$
that satisfy
$\theta_{\eta, {\varepsilon}} Y_n {\buildrel {\textrm st} \over \le} D_n $
for all
$n \geq n_{\eta, {\varepsilon}}$
and, thus,
where we recall from (3.2) that
$\varphi_\delta (x)= \tfrac{1}{2}(x^\delta + x^{-\delta})-1$
, that
and that
$\lambda_n$
is such that
$2a_n \varphi_{\delta} (\lambda_n)= 1$
. Since
$\lim_{n\to \infty} a_n \lambda_n^{\delta} = 1$
(by Lemma 1(i)) we get
$2a_n \varphi_\delta (n^{-2/\delta} \lambda_n) \sim_{n\to \infty} n^{-2}$
and
$\sum_{n\geq 1} {{\mathbb P}} ( D_n \leq \theta_{\eta, {\varepsilon}} n^{-2/\delta} \lambda_n) <\infty$
, which implies by Borel–Cantelli that
Since
$\lim_{n\to \infty} a_n \lambda_n^{\delta} = 1$
, it implies for all
$\eta \in (0, \eta_0)$
and for all
$\varepsilon\in (0, \, {\varepsilon}_\eta)$
that
Then observe that
This easily entails the existence of a function
$\widetilde{\alpha} ({\cdot})$
as in the statement of Theorem 2.
Then we derive (1.4) from (1.5) by noticing that for all
$p,p'\in [0, 1]$
such that
$p<p'$
, we have
which is an easy consequence of equation (1.1); we leave the details to the reader.
Appendix A. Heuristic derivation of (2.3) from (2.1) using the scaling form (2.2)
For
$p=\frac{1}{2} + \varepsilon$
, we can rewrite (2.1) as
where
\begin{eqnarray*} S_1 &\;:\!=\;& \frac{1}{2} \sum_{1 \le i <k} a_n(i)\big(a_n(k-i) -2a_n(k)\big) , \\[5pt] S_2 &\;:\!=\; &-2 a_n(k) + \sum_{1 \le i <k} a_n(i)\big(a_n(k-i) +2a_n(k)\big) \quad \textrm{and} \quad S_3 \;:\!=\; -(1-p) a_n(k)^2 .\end{eqnarray*}
Using the scaling form (2.2)
with f regular and bounded, and the fact that
we have
and, thus,
$$S_1 = \frac{1}{kn} \sum_{1 \le i <k} \frac{f \Big( n, \, \frac{\log i}{\sqrt{n}}\Big)}{i} \left[ f\left( n, \, \frac{\log (k-i)}{\sqrt{n}} \right) - f\left( n, \, \frac{\log k}{\sqrt{n}} \right)\right] .$$
Writing
$\log k= x \sqrt{n}$
, we get that
\begin{eqnarray*} &&f\left( n, \, \frac{\log i}{\sqrt{n}} \right) \left[ f\left( n, \, \frac{\log (k-i)}{\sqrt{n}} \right) -f\left( n, \, \frac{\log k}{\sqrt{n}} \right)\right] \\[5pt] &\,\approx\,& f\left( n, \, x+ \frac{\log \frac{i}{k}}{\sqrt{n}} \right) \left[ f\left( n, \, x+\frac{\log (1-\frac{i}{k})}{\sqrt{n}} \right) -f\big( n, \,x \big)\right] \\[5pt] &\,\approx\,& f\big( n, \, x ) \, \partial_x f\big( n, \, x ) \,\frac{\log (1-\frac{i}{k})}{\sqrt{n}} .\end{eqnarray*}
By taking
$i=k u$
, we get that, for large n,
\begin{eqnarray*} S_1 &\,\approx\,& \frac{1}{k \, n \, \sqrt{n}} \, f\big( n, \, x ) \,\partial_x f\big( n, \, x ) \, \frac1k \sum_{1 \le i <k} \frac{\log (1-\frac{i}{k})}{i/k} \\[5pt] &\,\approx\,& \frac{1}{k \, n \, \sqrt{n}} \, f\big( n, \, x \big)\, \partial_x f\big( n, \, x \big) \, \int_0^1 \frac{\log(1-u)}{u} \, \textrm{d} u .\end{eqnarray*}
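As a numerical aside, the integral appearing here is $\int_0^1 \frac{\log(1-u)}{u}\,\textrm{d} u = -\zeta(2) = -\pi^2/6$, which a crude midpoint rule confirms (values of $N$ arbitrary):

```python
import math

# Midpoint-rule check of the dilogarithm identity used for S_1:
# the integral of log(1-u)/u over (0,1) equals -zeta(2) = -pi^2/6;
# the log singularity at u = 1 is integrable.
N = 100_000
s = sum(math.log(1 - (i + 0.5) / N) / ((i + 0.5) / N) for i in range(N)) / N
exact = -math.pi ** 2 / 6
```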
Similarly, we can show that
On the other hand,
$$a_n(k)^2 = \frac1n \mathcal{O} \Bigg( \frac{1}{k^2}\Bigg) ,$$
and
By neglecting terms of order
$\mathcal{O}(k^{-2})$
and of order
$k^{-1} o\big(n^{- 3/2}\big)$
, we can rewrite (2.1) as
\begin{eqnarray*} &&\frac{1}{k \, n \, \sqrt{n}} \Bigg[ n \partial_n f(n, \, x) -\frac12 f(n,\, x) - \frac12 \partial_x f(n, \, x) \Bigg] \\[5pt] &\,=\,& \frac{1}{k \, n \, \sqrt{n}} \, f\big( n, \, x\big) \, \partial_x f\big( n, \, x\big) \, \int_0^1 \frac{\log(1-u)}{u} \, \textrm{d} u \\[5pt] && + \frac{\varepsilon}{k \, \sqrt{n}} \Bigg[ -2f\big( n, \, x \big) + 4 f\big( n, \, x \big) \int_0^x f\big( n, \, y \big) \, \textrm{d} y \Bigg] ,\end{eqnarray*}
and this leads to (2.3).
Appendix B. Proofs of Lemmas 4 and 8
Proof of Lemma 4
-
(i) Observe that
$\varphi^\prime_{q} (x)= ({{q}}/{x}) \sinh ({q} \log x)$
, which is non-negative on
$[1, \infty)$
. Since
$\varphi^\prime_{q} (1)= 0= \lim_{x\to \infty} \varphi^\prime_{q}(x)$
, there exists
$x_{{q}}\in (1, \infty)$
such that
$\varphi_{q}^\prime (x_{q}) = \sup_{x\in [1, \infty)} \varphi^\prime_{q}(x) \;=\!:\; M_{q}$
Then
$$\varphi^{\prime \prime}_{q}(x)= \frac{{q}}{x^2} \cosh ({q} \log x) \big({q} - \tanh ({q} \log x) \big)= \frac{{q}}{x^2} \cosh ({q} \log x) \Bigg(\, \frac{2}{x^{2{q}} + 1} - (1 - {q}) \Bigg) .$$
Thus,
$x_{q} = \Big(\frac{1+{q}}{1-{q}}\Big)^{1/(2{q})}$
, and (i) follows immediately.
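As a quick numerical sanity check of this closed form for the maximizer (the value of ${q}$ below is arbitrary and purely illustrative):

```python
import math

q = 0.3                                                      # illustrative value
phi_prime = lambda x: (q / x) * math.sinh(q * math.log(x))   # phi'_q(x)
x_q = ((1 + q) / (1 - q)) ** (1 / (2 * q))                   # claimed maximizer
```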
-
(ii) Note that
$\varphi^{\prime \prime}_{q}$
is positive on
$[1, x_{q})$
and negative on
$(x_{q}, \, \infty)$
, which implies the existence of the inverse functions
$\ell_{q}$
and
$r_{q}$
as in (ii). As
$y \to 0^+$
,
$\ell_{{q}} (y) \to 1$
and
$r_{q} (y) \to \infty$
. Set
$\lambda = \ell_{q} (y) -1$
and observe that
$$y = \varphi^\prime_{q} (1+ \lambda)= \frac{{q} \big((1+\lambda)^{{q}} - (1 + \lambda)^{-{q}} \big)}{2(1 + \lambda) }= {q}^2 \lambda \big(1+ \mathcal O_{q} (\lambda) \big),$$
which implies the estimate in (ii) for
$\ell_{q} (y)$
as
$y \to 0^+$
. Similarly, observe that
$$y= \varphi^\prime_{q} \big( r_{q} (y)\big) \sim_{y\to 0^+} \frac{{q}}{r_{q} (y)} \frac{1}{2}r_{q} (y)^{{q}} \sim_{y\to 0^+} \frac{{q}}{2} r_{q} (y)^{-(1-{q})} ,$$
which implies the estimate in (ii) for
$r_{q} (y)$
as
$y \to 0^+$
.
-
(iii) Observe that
$\ell_{{q}} (M_{q})= r_{q} (M_{q})= x_{q}$
and, thus,
$\Phi_{q} (M_{q})= 0$
. Note for all
$y\in (0, \, M_{q})$
that
$\Phi^\prime_{q} (y)= y \big(r^\prime_{q} (y) - \ell^\prime_{q} (y) \big) < 0$
. Since
$\Phi_{q} (y) \sim \tfrac{1}{2} r_{q} (y)^{{q}} \sim \frac{1}{2} (2y/{q})^{-\frac{{q}}{1-{q}}}$
as
$y \to 0^+$
, the function
$\Phi_{q}\,:\, (0, \, M_{q}] \to \mathbb{R}_+$
is a
$C^1$
decreasing bijection and the estimates for
$\Phi_{q}^{-1} (x)$
,
$\ell_{q} (\Phi_{q}^{-1} (x))$
, and
$r_{q} (\Phi_{q}^{-1} (x))$
as
$x \to \infty$
are immediate consequences of the previous equivalence and of (ii).
-
(iv) First note that
$$2g(x) = x^{{q}}\big( (1 + ax^{-1})^{{q}} - 1\big) + x^{-{q}}\big((1 + ax^{-1})^{-{q}} - 1\big) \sim_{x\to \infty} {q} a x^{-(1-{q}) } .$$
Thus,
$g > 0$
and
$g(x) \to 0$
as
$x \to \infty$
. Suppose there exists
$x^*\in [1, \, \infty)$
such that
$g^\prime (x^*)= 0$
. Then with
$y^* \;:\!=\; \varphi^\prime_{q} (x^*)$
, we have
$x^*= \ell_{q} (y^*)$
and
$x^*+a =r_{q} (y^*)$
. Let us check that
$g' <0$
on
$(x^*, \, \infty)$
. Assume there exists
$x^\prime >x^*$
such that
$g^\prime (x^\prime) \geq 0$
. Since
\begin{eqnarray*} \frac{2}{{q}}g^\prime (x) &\,=\,& x^{-(1-{q})}\big( (1 + ax^{-1})^{-(1-{q})} - 1\big) - x^{-(1+{q})}\big((1 + ax^{-1})^{-(1+{q})} - 1\big) \\[5pt] & & \sim_{x\to \infty} -(1 - {q}) a x^{-(2-{q}) },\end{eqnarray*}
we would have
$g^\prime (x)< 0$
for all sufficiently large $x$. Therefore, there would exist
$x^{\prime \prime} \in [x^\prime , \, \infty)$
such that
$g^\prime (x^{\prime \prime})= 0$
. This would imply that
$x^{\prime \prime}= \ell_{q} (y^{\prime \prime})$
and
$x^{\prime \prime}+a =r_{q} (y^{\prime \prime})$
, with
$y^{\prime \prime} \;:\!=\; \varphi^\prime_{q} (x^{\prime \prime})$
. But
$y\mapsto r_{q} (y) - \ell_{q} (y)$
being increasing, we would have
$y^{\prime \prime}= y^*$
and, thus,
$x^*=x^{\prime \prime}$
, which would be absurd. Consequently,
$g^\prime (x) < 0$
for all
$x\in (x^*, \, \infty)$
, which proves (iv).
Proof of Lemma 8
Fix
${q}\in (0, \, {q}_0]$
. (i) Let
$a\in (0, \, {\textrm{e}}^{-1})$
and
$\theta \in [a, \, {\textrm{e}}^{1/\sqrt{{q}}}]$
. By definition,
$g_{q} (\theta) = (\theta + 1)^{q}\, [ 1 - {\textrm{e}}^{-{q} \log (\frac{1}{\theta} +1)}]$
. Since
$1 - {\textrm{e}}^{-x} \leq x$
(for
$x\in \mathbb{R}_+^*$
), this yields that
$g_{q} (\theta) \le (\theta + 1)^{q} {q} \log (1/a + 1)$
. By observing that
$\log (\frac{1}{a}+ 1) = \log \frac{1}{a} + \log (1+a) \leq \log (1/a) + 1 < 2\log (1/a)$
(for
$a\in (0, \, {\textrm{e}}^{-1})$
), we get that
On the other hand, let us write
Since
for
$u\in [0, \, 1]$
, we get that
Therefore,
By (4.25),
$(\theta + 1)^{q} \le ({\textrm{e}}^{1/\sqrt{{q}}} + 1)^{q} < \sqrt{2}$
. We claim that
$(\theta + 1) \min\{ \log (1/a), \,1/\theta \} \le 2 \log (1/a)$
. This is obvious if
$\theta < 1$
because in this case,
$(\theta + 1) \log (1/a) < 2 \log (1/a)$
; this is also obvious if
$\theta \ge 1$
, in which case
$(\theta + 1)(1/\theta) \le 2 < 2 \log (1/a)$
(recalling that
$a < {\textrm{e}}^{-1}$
). The inequality (4.29) is proved because
$4\sqrt{2} < 6$
.
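The bound (4.29) can also be confirmed numerically in the regime where it is applied, namely $a = {\textrm{e}}^{-1/\sqrt{{q}}}$ and $\theta \in [a, {\textrm{e}}^{1/\sqrt{{q}}}]$ (the value of ${q}$ below is an arbitrary illustration, not part of the proof):

```python
import math

q = 0.04                                   # a small q, as in the regime q <= q_0
g = lambda t: (t + 1) ** q - t ** q        # g_q(theta) = (theta+1)^q - theta^q
# log grid covering [e^{-1/sqrt(q)}, e^{1/sqrt(q)}], the range used for (4.29)
grid = [math.exp((-1 + 2 * i / 1000) / math.sqrt(q)) for i in range(1001)]
```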
(ii) Let
$r\in [1, \, 2]$
and
$\theta \in [{q}^3, \, {\textrm{e}}^{1/\sqrt{{q}}} ]$
. Then
\begin{align*} g_{q} \left(\frac{\theta}{r}\right) - g_{q} (\theta) & = {q}(1 -{q}) \int_0^1 \textrm{d} u \int_{r^{-1}\theta}^{\theta}\frac{\textrm{d} t }{(t+ u)^{2-{q}}} \\[5pt] &\ge {q}(1 - {q}) (r^{-1}\theta)^{{q}} \int_0^1 \textrm{d} u \int_{r^{-1}\theta}^{\theta}\frac{\textrm{d} t }{(t + u)^{2}} \\[5pt] &= {q}(1 - {q}) (r^{-1}\theta)^{{q}} \log \left( \frac{\theta + r}{\theta + 1} \right).\end{align*}
Note that
$(1 - {q}) (r^{-1}\theta)^{{q}} \ge (1 - {q}) ({{q}^3}/{2})^{{q}} > 1- \sqrt{{q}}$
by (4.27). We next apply the inequality
$\log (1+x) \ge x - {x^2}/{2}$
, which holds true for all
$x\in [0, \, 1]$
, to
$x\;:\!=\; (r - 1)/(\theta + 1)$
and we get
This yields (4.30) because
(iii) We set
$h_{q} (\theta) \;:\!=\; -g_{q}^\prime (\theta) = {q} \big(\theta^{{q}-1} - (\theta+1)^{{q} -1}\big) ={q} (1 - {q}) \int_\theta^{\theta+1} ({\textrm{d} w}/{w^{2-{q}}})$
. Then we get
\begin{eqnarray*} \int_0^\theta h_{q} (t) \big( g_{q} (\theta - t) - g_{q} (\theta) \big)\textrm{d} t &\,=\, & {q} \int_0^\theta \textrm{d} t \, h_{q} (t) \int_0^1 \textrm{d} u\, \big( (\theta + 1 - t - u)^{{q} -1} - (\theta + 1 - u)^{{q} -1}\big) \\[5pt] &\,=\, & {q} (1 - {q}) \int_0^\theta \textrm{d} t \, h_{q} (t) \int_0^1 \textrm{d} u \int_0^t \textrm{d} v\, \frac{(\theta + 1 - v - u)^{{q} }}{(\theta + 1 - v - u)^{2}} \\[5pt] & \,\le\, & {q} (\theta + 1)^{{q}} \int_0^\theta \textrm{d} t \, h_{q} (t) \int_0^1 \textrm{d} u \int_0^t \frac{\textrm{d} v}{(\theta + 1 - v - u)^{2}} .\end{eqnarray*}
For
$t\in (0, \, \theta]$
, we have
This leads to
where
By (4.25),
$(\theta + 1)^{2{q}} \le ({\textrm{e}}^{1/\sqrt{{q}}}+ 1)^{2{q}} \le 1 + 3\sqrt{{q}}$
.
It remains to check that
$J(\theta) \leq {\zeta(2)}/({\theta + 1})$
for all
$\theta \in \mathbb{R}_+^*$
. By definition,
$$J (\theta) = \int_0^\theta \frac{1}{t(t + 1)}\left( \log \left( 1 - \frac{t}{\theta + 1}\right) - \log \left(1 - \frac{t}{\theta}\right) \right) \, \textrm{d} t = \int_0^\theta \frac{-\log \left( 1 - \frac{\lambda^2t}{(1-\lambda)(1 - \lambda t )} \right)}{t(t + 1)} \, \textrm{d} t ,$$
where we have set
$\lambda \;:\!=\; {1}/({\theta + 1}) \in (0,\, 1]$
. By means of the change of variables
we get that
Since
this implies that
\begin{eqnarray*} \lambda \, \zeta (2) - J(\theta) & \,=\, & \int_0^1 \frac{\log \frac{1}{v}}{1 - v} \left( \lambda - \frac{\lambda^2}{1 - (1 - \lambda^2)v} \right) \, \textrm{d} v \\[5pt] & \,=\, & (1 - \lambda^2) \int_0^1 \frac{ \log \frac{1}{v}}{1 - (1 - \lambda^2)v} \textrm{d} v\; - \, (1 - \lambda) \int_0^1 \frac{\log \frac{1}{v}}{1 - v} \, \textrm{d} v.\end{eqnarray*}
For all
$r\in (0,\, 1)$
,
$$\int_{0}^1 \frac{\log \frac{1}{v} }{1 - rv} \, \textrm{d} v = \sum_{n\geq 0} r^n \int_0^1 v^n \log \frac{1}{v} \, \textrm{d} v = \frac{1}{r}\sum_{n\geq 0}\frac{r^{n+1}}{(n+1)^2}= \frac{1}{r} \int_0^r \frac{\log \frac{1}{1-v}}{v} \, \textrm{d} v .$$
Therefore,
$\lambda \, \zeta (2) - J(\theta) = K(\lambda)$
, where
We want to prove that
$K (x)\ge 0$
for all
$x\in (0, \, 1]$
. Since
$K(0) = K( 1) = 0$
, it suffices to show that K is concave:
where
$y \;:\!=\; -\log x \in \mathbb{R}_+$
. This proves that
for all
$\theta\in \mathbb{R}_+$
, which yields (4.31).
Acknowledgements
We wish to thank an anonymous referee for their comments that helped improve the first version of the article.
Funding information
This work was partially supported by NSFC grant No. 12271351.
Competing interests
There were no competing interests to declare during the preparation or publication process of this article.