1 Introduction
In free probability theory, the Bercovici–Pata bijection, which connects infinitely divisible laws with free counterparts, is often discussed in relation to its correspondence with classical probability [Reference Barndorff-Nielsen and ThorbjørnsenBNT06]. A recurring issue in this context is the inadequate correspondence of the gamma distribution, which has hindered a comprehensive understanding of its free counterpart. The image of the class of classical gamma distributions under the Bercovici–Pata bijection was introduced by Pérez-Abreu and Sakuma [Reference Pérez-Abreu and SakumaPAS08], but its distributional properties are not well understood (see, e.g., [Reference Haagerup and ThorbjørnsenHT14]).
Meanwhile, an alternative approach was developed independently, focusing on orthogonal polynomials to investigate the properties of the “free” gamma distribution [Reference AnshelevichAns03, Reference Bożejko and BrycBB06]. However, the “free” gamma distribution remains relatively underexplored, particularly in terms of its interpretation and characterization, leaving significant scope for further investigation. This gap may stem from inadequate parameter consideration and the lack of a proper examination of the distribution family as a whole. In this article, we introduce a new class of probability distributions that extends the framework of the “free” gamma distribution. By refining the parameterization and exploring novel correspondences, we aim to overcome the limitations of prior research and provide deeper insights into the structure and potential applications of these distributions.
Before providing explanations, we introduce the notation used in this article. We denote by
${\mathcal {P}}(K)$
the set of all Borel probability measures on
$\mathbb {R}$
whose support is contained in
$K\subset \mathbb {R}$
. In this article, we frequently take K to be the real line
$\mathbb {R}$
, the nonnegative real line
$\mathbb {R}_{\ge 0}:=[0,\infty )$
, or the positive real line
$\mathbb {R}_{>0}:=(0,\infty )$
. Moreover, we adopt the following notational conventions.
-
• (Dilation) For
$\mu \in \mathcal {P}(\mathbb {R})$
and
$c \neq 0$
, we define
$D_c(\mu )$
as the measure given by
$$\begin{align*}D_c(\mu)(B) := \mu(\{x/c : x \in B\}), \qquad \text{for all Borel sets } B \subset \mathbb{R}. \end{align*}$$
-
• (Power of measure) For
$\mu \in \mathcal {P}(\mathbb {R}_{\ge 0})$
and
$c> 0$
, we define
$\mu ^{\langle c \rangle }$
by
$$\begin{align*}\mu^{\langle c \rangle}(B) := \mu(\{x^{1/c} : x \in B\}), \qquad \text{for all Borel sets } B \subset \mathbb{R}_{\ge 0}. \end{align*}$$
-
• (Reversed measure) For
$\mu \in \mathcal {P}(\mathbb {R}_{\ge 0})$
with
$\mu (\{0\}) = 0$
, we define
$\mu ^{\langle -1 \rangle }$
by
$$\begin{align*}\mu^{\langle -1 \rangle}(B) := \mu(\{x^{-1} : x \in B\}), \qquad \text{for all Borel sets } B \subset \mathbb{R}_{>0}. \end{align*}$$
Moreover, for
$w \in \mathbb {C}\setminus \{0\}$
, we define
$\sqrt {w} := |w|^{\frac {1}{2}} e^{i\frac {\arg (w)}{2}}$
with
$\arg (w) \in (0,2\pi )$
.
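For ease of reference, we record how these operations act on densities; this is a standard change-of-variables computation. If $\mu(\mathrm{d}x)=f(x)\,\mathrm{d}x$, then $D_c(\mu)$ is the law of $cX$, $\mu^{\langle c \rangle}$ is the law of $X^{c}$, and $\mu^{\langle -1 \rangle}$ is the law of $X^{-1}$ for $X\sim\mu$, and
$$ \begin{align*} D_c(\mu)(\mathrm{d}x) = \frac{1}{|c|}\, f\!\left(\frac{x}{c}\right)\mathrm{d}x, \qquad \mu^{\langle c \rangle}(\mathrm{d}x) = \frac{1}{c}\, x^{\frac{1}{c}-1} f\!\left(x^{\frac{1}{c}}\right)\mathrm{d}x, \qquad \mu^{\langle -1 \rangle}(\mathrm{d}x) = \frac{1}{x^{2}}\, f\!\left(\frac{1}{x}\right)\mathrm{d}x. \end{align*} $$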
First, we briefly recall the definition of the classical gamma distribution, which is widely used in classical probability theory. The classical gamma distribution is the two-parameter family of probability distributions
$$ \begin{align*}\gamma_{t,\theta}(\mathrm{d}x) := \frac{1}{\Gamma(t)\,\theta^{t}}\, x^{t-1} e^{-\frac{x}{\theta}}\, \mathbf{1}_{(0,\infty)}(x)\, \mathrm{d}x, \qquad t,\theta>0, \end{align*} $$
where t is the shape parameter and θ is the mean (or scale) parameter. The gamma distribution is known to be infinitely divisible, and its characteristic function can be expressed in terms of the Lévy–Khintchine representation as
$$ \begin{align*}\int_{\mathbb{R}} e^{izx} \gamma_{t, \theta}(\mathrm{d}x) = \exp \left[ \int_{(0,\infty)} \left(e^{izx} - 1\right) \frac{t e^{-\frac{x}{\theta}}}{x} \, \mathrm{d} x \right], \qquad z \in \mathbb{R,} \end{align*} $$
where the Lévy measure is given by
$$ \begin{align*}\frac{t e^{-\frac{x}{\theta}}}{x}\, \mathbf{1}_{(0,\infty)}(x)\, \mathrm{d}x. \end{align*} $$
The gamma family includes many distributions, such as the exponential distribution, the Erlang distribution, and the chi-squared distribution. In addition, the following properties hold:
-
•
$\gamma _{1, \theta }^{\ast t} =\gamma _{t, \theta }$
for all
$t,\theta>0$
; -
•
$\gamma _{t_{1}, \theta }\ast \gamma _{t_{2}, \theta } = \gamma _{t_{1}+t_{2},\theta }$
for all
$t_1,t_2,\theta>0$
; -
•
$D_{\theta }(\gamma _{t,1})=\gamma _{t,\theta }$
for all
$t,\theta>0$
.
The first property motivates the consideration of the gamma Lévy process, where the shape parameter can also be interpreted as a “time parameter” in the context of stochastic processes. The second property is known as the reproductive property of probability distributions. The third property shows that
$\theta $
is a scale parameter for gamma distributions. Moreover, the gamma distribution has been extensively characterized in the literature (see, for example, [Reference BondessonBon92, Reference MoschopoulosMos85, Reference SatoSat13]). Thus, the classical gamma distribution has attracted significant interest from many fields, such as probability, mathematical statistics, Bayesian statistics, econometrics, queueing theory, and so on.
Given the richness of its structure, it was a natural step to investigate its analog in free probability theory. In 2003, Anshelevich introduced the free gamma distribution as a subfamily of the free Meixner class (see [Reference AnshelevichAns03, p. 238]). More precisely, the Meixner-type free gamma distribution,Footnote
1
denoted by
$\eta _{t,\theta }$
, is defined as the probability measure whose R-transformFootnote
2
is given by
where
$\lambda>0$
is the time (or shape) parameter,
$\theta>0$
is the mean (or scale) parameter, and the function
$k_{\lambda ,\theta }$
is defined as
$$ \begin{align*}k_{\lambda,\theta}(x):= \frac{\sqrt{(a^+-x)(x-a^-)}}{2\pi \theta x} \mathbf{1}_{(a^-,a^+)}(x), \qquad \lambda \ge1, \end{align*} $$
with
$a^\pm :=\theta (\sqrt {\lambda }\pm 1)^2$
. For all
$t,\theta>0$
, the measure
$\eta _{t,\theta }$
is freely infinitely divisible, which serves as the analog of infinite divisibility in free probability. Moreover, the following properties hold:
-
•
$\eta _{1,\theta }^{\boxplus t}=\eta _{t,\theta } $
for all
$t,\theta>0$
; -
•
$\eta _{t_{1},\theta } \boxplus \eta _{t_{2},\theta } = \eta _{t_1+t_2,\theta }$
for all
$t_1,t_2,\theta>0$
; -
•
$D_\theta (\eta _{t,1})=\eta _{t,\theta }$
for all
$t,\theta>0$
,
where
$\boxplus $
denotes the free additive convolution, and
$\mu ^{\boxplus t}$
is the free convolution power of
$\mu \in {\mathcal {P}}(\mathbb {R})$
. See Section 2.1 or [Reference Bercovici and VoiculescuBV93] for the above concepts in free probability.
In this article, we introduce a generalized family of probability measures that includes the Meixner-type free gamma distributions
$\eta _{t,\theta }$
. Specifically, we consider the family
$\{\mu _{t, \theta , \lambda } : t, \theta> 0,\ \lambda \ge 1\} \subset {\mathcal {P}}(\mathbb {R}_{\ge 0})$
, where each measure is defined via its R-transform as
$$ \begin{align*} R_{\mu_{t,\theta,\lambda}}(z) = \int_{\mathbb{R}} \left( \frac{1}{1-zx} - 1 \right) \frac{t\, k_{\lambda,\theta}(x)}{x}\, \mathrm{d}x. \end{align*} $$
We call the measure
$\mu _{t, \theta , \lambda }$
the generalized Meixner-type free gamma distribution. It is straightforward to verify that
$$ \begin{align*} \eta_{t,\theta} = \mu_{t\theta,\theta,1}, \qquad t,\theta>0, \end{align*} $$
and therefore the family indeed extends the class of Meixner-type free gamma distributions. Moreover, the parameter t admits a natural interpretation as a free convolution power:
$$ \begin{align*} \mu_{t,\theta,\lambda} = (\mu_{1,\theta,\lambda})^{\boxplus t}, \qquad \text{equivalently} \qquad \mu_{t_1,\theta,\lambda} \boxplus \mu_{t_2,\theta,\lambda} = \mu_{t_1+t_2,\theta,\lambda}. \end{align*} $$
Note that the measure
$\mu _{t,\theta ,\lambda }$
coincides with the centered free Meixner distribution, up to a shift (see Section 2.2 and Proposition 3.2). Below, we outline the structure of the article and summarize our main results.
In Section 3, we study various distributional properties of the generalized Meixner-type free gamma distributions, including their density, existence of atoms, and moments (see Section 3.2), as well as free self-decomposability and unimodality (see Section 3.3). Furthermore, we study the free Lévy processes related to the measures
$\mu _{t,\theta ,\lambda }$
in Sections 3.4 and 3.5.
In Section 4, we derive formulas involving the free multiplicative convolution
$\boxtimes $
.
Theorem 1.1 (Free convolution formula for the measure
$\mu _{t,\theta ,\lambda }$
, see Theorem 4.3)
Let
$t,\theta>0$
and
$\lambda \ge 1$
. Then, the following properties hold:
-
(1) For
$\lambda> 1$
, the measure
$\mu _{t,\theta ,\lambda }$
can be expressed in two equivalent forms:
$$ \begin{align*} \mu_{t,\theta,\lambda} = D_{t(\lambda-1)} \left(\pi_{q,1} \boxtimes (\pi_{1+\frac{t}{\theta},1})^{\langle -1 \rangle}\right) = \mu_{t,\theta,1} \boxtimes \pi_{q,q^{-1}}, \end{align*} $$
where $q = \frac {t}{\theta (\lambda -1)}$
and
$\pi _{\lambda ,\theta }$
is the probability measure defined by
$$\begin{align*}\pi_{\lambda,\theta}(\mathrm{d}x) = \max\{0, 1-\lambda\}\delta_0 + k_{\lambda,\theta}(x)\,\mathrm{d}x. \end{align*}$$
-
(2) In particular,
$\mu _{t,\theta ,1+t/\theta }= \mu _{t,\theta ,1} \boxtimes \pi _{1,1}$
. Hence, the measure
$\mu _{t,\theta ,1+t/\theta }$
belongs to the class of free compound Poisson distributions.
According to Theorem 1.1(1), for
$\lambda> 1$
, the measure
$\mu _{t, \theta , \lambda }$
coincides with a suitably scaled free beta prime distribution
$f\beta '(a, b)$
introduced in [Reference YoshidaYos20]. As a consequence, various properties of the free beta prime distribution can be derived from the results established for
$\mu _{t, \theta , \lambda }$
(see Section 4.3 for further details).
In Section 5, for
$t,\theta>0$
and
$\lambda \ge 1$
, we consider the potential function
$V_{t,\theta ,\lambda }$
defined by
$$ \begin{align*} V_{t,\theta,\lambda} (x):= \begin{cases} \left(2+\dfrac{t}{\theta}\right) \log x + \dfrac{t^2}{\theta x}, & \lambda=1,\\ \left(1-\dfrac{t}{\theta(\lambda-1)} \right) \log x + \left( 1 + \dfrac{t\lambda}{\theta(\lambda-1)}\right) \log (x+t(\lambda-1)), & \lambda>1. \end{cases} \end{align*} $$
We first derive the explicit form of the Gibbs measure
$$ \begin{align*} \rho_{t,\theta,\lambda}(\mathrm{d}x) := \frac{1}{\mathcal{Z}_{t,\theta,\lambda}} \exp\{-V_{t,\theta,\lambda}(x)\}\, \mathbf{1}_{(0,\infty)}(x)\, \mathrm{d}x \end{align*} $$
associated with the potential
$V_{t,\theta ,\lambda }$
, where the normalization constant is given by
$\mathcal {Z}_{t,\theta ,\lambda }= \int \exp \{- V_{t,\theta ,\lambda }(x) \}\mathrm {d}x$
.
-
• For
$\lambda =1$
, we obtain
$$ \begin{align*}\rho_{t,\theta,1}(\mathrm{d}x) = \frac{(\frac{t^2}{\theta})^{1+\frac{t}{\theta}}}{\Gamma(1+\frac{t}{\theta})} x^{-(2+ \frac{t}{\theta})} e^{-\frac{t^2}{\theta x}} \mathrm{d}x.\end{align*} $$
-
• For
$\lambda>1$
, we get
$$ \begin{align*}\rho_{t,\theta,\lambda}(\mathrm{d}x)= \frac{(t(\lambda-1))^{1+\frac{t}{\theta}}}{B(\frac{t}{\theta(\lambda-1)},1+\frac{t}{\theta})} x^{-1 + \frac{t}{\theta(\lambda-1)}} \left(x+t(\lambda-1)\right)^{-1-\frac{t\lambda}{\theta(\lambda-1)}}\mathrm{d}x, \end{align*} $$
where $B(a,b)$
denotes the beta function with parameter
$a,b>0$
.
The measure
$\rho _{t,\theta ,\lambda }$
coincides with a suitably scaled beta prime distribution when
$\lambda> 1$
. This close resemblance between classical and free analogs is striking and highlights deep structural parallels.
Theorem 1.2 (Convolution formula for the Gibbs measure
$\rho _{t,\theta ,\lambda }$
, see Theorem 5.2)
Let us consider
$t,\theta>0$
and
$\lambda \ge 1$
. Then, the following properties hold:
-
(1) For
$\lambda>1$
, we have
$$ \begin{align*} \rho_{t,\theta, \lambda} &= D_{t(\lambda-1)} \left( \gamma_{q,1} \circledast (\gamma_{1+\frac{t}{\theta},1})^{\langle -1\rangle}\right) =\rho_{t,\theta,1} \circledast \gamma_{q,q^{-1}}, \end{align*} $$
where $q=\frac {t}{\theta (\lambda -1)}$
and
$\circledast $
is the classical multiplicative convolution.
-
(2) In particular,
$\rho _{t,\theta , 1+t/\theta }= \rho _{t,\theta ,1} \circledast \gamma _{1,1}$
. Hence, the measure
$\rho _{t,\theta , 1+t/\theta }$
belongs to the class of mixtures of exponential distributions.
Next, we show that for all
$t, \theta> 0$
and
$1 \le \lambda < 1 + t/\theta $
, the measure
$\mu _{t,\theta ,\lambda }$
uniquely maximizes the free entropy associated with the potential
$V_{t,\theta ,\lambda }$
. In other words,
$\mu _{t,\theta ,\lambda }$
serves as the equilibrium measure in this variational framework.
Theorem 1.3 (Free entropy associated with
$V_{t,\theta ,\lambda }$
, see Theorem 5.4)
For
$t,\theta>0$
and
$1\le \lambda <1+ t/\theta $
, we have
$$ \begin{align*} \mu_{t,\theta,\lambda} = \operatorname*{arg\,max}_{\nu \in \mathcal{P}(\mathbb{R}_{>0})} \Sigma_{V_{t,\theta,\lambda}}(\nu), \end{align*} $$
where $\Sigma _{V_{t,\theta ,\lambda }}(\mu )$ is the (Voiculescu) free entropy with potential:
$$ \begin{align*} \Sigma_{V_{t,\theta,\lambda}}(\mu) = \iint_{\mathbb{R}\times\mathbb{R}} \log|x-y|\, \mu(\mathrm{d}x)\mu(\mathrm{d}y) - \int_{\mathbb{R}} V_{t,\theta,\lambda}(x)\, \mu(\mathrm{d}x). \end{align*} $$
In particular, for
$a,b>1$
, the measure
$f\beta '(a,b)$
is the unique maximizer of the free entropy
$\Sigma _{V_{a,b}}$
where
$$ \begin{align*} V_{a,b}(x) := (1-a)\log x + (a+b)\log(x+1), \qquad x>0; \end{align*} $$
see also Corollary 5.6.
In Section 6, we investigate algebraic properties of noncommutative random variables
$G^{(p)} \sim \eta _{p,1}$
, which we refer to as Meixner-type free beta–gamma algebras. Consider freely independentFootnote
3
noncommutative random variables
$G_1^{(p)} \sim \eta _{p,1}$
and
$G_2^{(q)} \sim \eta _{q,1}$
. Since the Meixner-type free gamma distributions satisfy the convolution identity
$\eta _{p,1}\boxplus \eta _{q,1}= \eta _{p+q,1}$
, we obtain the following distributional identity:
$$ \begin{align*} G_1^{(p)} + G_2^{(q)} \overset{\mathrm{d}}{=} G^{(p+q)}, \end{align*} $$
where
$X \overset {\mathrm {d}}{=} Y $
denotes equality in distribution. This identity highlights the additive stability of the Meixner-type free gamma distributions under free convolution. It is worth noting that the three parameters
$(t, \theta , \lambda )$
of the generalized Meixner-type free gamma distributions play a crucial role in understanding the algebraic structure of the random variables
$G^{(p)}$
.
Theorem 1.4 (Meixner-type free beta–gamma algebras, see Section 6)
We denote by
$G^{(p)}\sim \eta _{p,1}$
a noncommutative random variable, and let
$\{G_1^{(p)},G_2^{(p)},\dots \}$
be freely independent copies of
$G^{(p)}$
for each
$p>0$
. Then,
-
•
$(G_2^{(q)})^{-\frac {1}{2}} G_1^{(p)} (G_2^{(q)})^{-\frac {1}{2}} \sim \eta _{p,1}\boxtimes (\eta _{q,1})^{\langle -1 \rangle } = D_{\frac {1+q}{q^2}}\left ( \mu _{p,1,1+\frac {p}{1+q}}\right )$
for
$p,q>0$
. -
• For any
$p>0$
and
$n\in \mathbb {N}$
,
$$ \begin{align*}\left(\frac{1}{G_1^{(p)}}+\frac{1}{G_2^{(p)}}+\cdots+ \frac{1}{G_{2^n}^{(p)}}\right)^{-1} \overset{\mathrm{d}}{=}\left(2^n+\frac{2^n-1}{p}\right)^{-2} G^{(2^np+2^n-1)}. \end{align*} $$
-
• In the case of
$p=\frac {1}{2(m-1)}$
for some natural number
$m\ge 2$
, we have
$$ \begin{align*}\left(\frac{1}{G_1^{(p)}}+\frac{1}{G_2^{(p)}}\right)^{-1} \overset{\mathrm{d}}{=} \frac{1}{4m^2} (G_1^{(2p)} + \cdots + G_m^{(2p)}). \end{align*} $$
-
• The law
$\mu _p$
of the Meixner-type free beta random variable
$$ \begin{align*}B^{(p)}:=\{(G_1^{(p)})^{-1} +(G_2^{(p)})^{-1}\}^{-\frac{1}{2}} (G_1^{(p)})^{-1} \{(G_1^{(p)})^{-1} +(G_2^{(p)})^{-1}\}^{-\frac{1}{2}} \end{align*} $$
has the following R-transform:
$$ \begin{align*}R_{\mu_p}(z)= \frac{p(z-p^3)- \sqrt{(3p+2)^2 z^2 -2p^5 z + p^8}}{2z}, \qquad z \in \left(-\frac{p^3}{2(p+1)},0\right). \end{align*} $$
Moreover,
$\mu _p$
is not freely infinitely divisible for any
$p>0$
.
In Section 7, using the method of finite free probability, which is an approximation theory in free probability that has attracted attention in recent years (see [Reference MarcusMar21, Reference Marcus, Spielman and SrivastavaMSS22]), we demonstrate that the asymptotic behavior of the roots of some Jacobi and Bessel polynomials, as their degree becomes sufficiently large, can be understood through the generalized Meixner-type free gamma distributions. The key point that aids understanding is the use of the “finite S-transform” recently introduced by the second author [Reference Arizmendi, Fujie, Perales and UedaAFPU24]. Specifically, it involves investigating the relationship between the finite S-transform of Jacobi polynomials or Bessel polynomials and the S-transform of the generalized Meixner-type free gamma distribution derived in Section 4.
2 Preliminaries
2.1 Harmonic analysis in free probability
In this article, we employ the framework of free harmonic analysis as introduced by [Reference Bercovici and VoiculescuBV93] (see also Chapter 3 in [Reference Mingo and SpeicherMS17]). A standard way to compare with classical probability theory is through infinite divisibility, via the Bercovici–Pata bijection, a mapping that relates classical infinitely divisible distributions to their free counterparts (see [Reference Barndorff-Nielsen and ThorbjørnsenBNT02, Reference Barndorff-Nielsen and ThorbjørnsenBNT06, Reference Bercovici and PataBP99] for details). From this perspective, we introduce the necessary tools for our analysis in the following sections.
A probability measure
$\mu $
on
$\mathbb {R}$
is called freely infinitely divisible if for any
$n\in \mathbb {N}$
there exists a probability measure
$\mu _{n}\in {\mathcal {P}}(\mathbb {R})$
such that
$$ \begin{align*} \mu = \underbrace{\mu_n \boxplus \cdots \boxplus \mu_n}_{n\ \text{times}}, \end{align*} $$
where
$\boxplus $
denotes the free additive convolution, which can be defined as the distribution of the sum of freely independent self-adjoint operators. In this case,
$\mu _n\in {\mathcal {P}}(\mathbb {R})$
is uniquely determined for each
$n\in \mathbb {N}$
. The freely infinitely divisible distributions can be characterized as those admitting a Lévy–Khintchine representation in terms of the R-transform, which is the free analog of the cumulant transform
$C_{\mu }(z) := \log (\widehat {\mu }(z))$
, where
$\widehat {\mu }$
is the characteristic function of
$\mu \in {\mathcal {P}}(\mathbb {R})$
. This was originally established by Bercovici and Voiculescu in [Reference Bercovici and VoiculescuBV93] for all Borel probability measures. To explain it, we gather analytic tools for free additive convolution
$\boxplus $
. In order to define the R-transform (or free cumulant transform)
$R_\mu $
of
$\mu \in {\mathcal {P}}(\mathbb {R})$
, we need to define its Cauchy–Stieltjes transform
$G_\mu $
:
$$ \begin{align*} G_\mu(z) := \int_{\mathbb{R}} \frac{1}{z-x}\, \mu(\mathrm{d}x), \qquad z\in \mathbb{C}^+. \end{align*} $$
Note, in particular, that
$\Im (G_\mu (z))<0$
for any z in
$\mathbb {C}^+$
, and hence we may consider the reciprocal Cauchy transform
$F_\mu \colon \mathbb {C}^{+}\to \mathbb {C}^{+}$
given by
$F_{\mu }(z)=1/G_{\mu }(z)$
. For any
$\mu \in {\mathcal {P}}(\mathbb {R})$
and any
$\lambda>0,$
there exist positive numbers
$\alpha ,\beta ,$
and M such that
$F_{\mu }$
is univalent on the set
$\Gamma _{\alpha ,\beta }:=\{z \in \mathbb {C}^{+} \,|\, \Im (z)>\beta , |\Re (z)|<\alpha \Im (z)\}$
and such that
$F_{\mu }(\Gamma _{\alpha ,\beta })\supset \Gamma _{\lambda ,M}$
. Therefore, the right inverse
$F^{\langle -1 \rangle }_{\mu }$
of
$F_{\mu }$
exists on
$\Gamma _{\lambda ,M}$
, and the R-transform (or free cumulant transform)
$ R_\mu $
is defined by
$$ \begin{align*} R_\mu(z) := z\, F^{\langle -1 \rangle}_{\mu}(1/z) - 1, \qquad 1/z \in \Gamma_{\lambda,M}. \end{align*} $$
The free version of the Lévy–Khintchine representation now amounts to the statement that
$\mu \in {\mathcal {P}}(\mathbb {R})$
is freely infinitely divisible if and only if there exist
$a\ge 0$
,
$\gamma \in \mathbb {R}$
and a Lévy measureFootnote
4
$\nu $
such that
$$ \begin{align*} R_\mu(z) = \gamma z + a z^2 + \int_{\mathbb{R}} \left( \frac{1}{1-xz} - 1 - xz\,\mathbf{1}_{[-1,1]}(x) \right) \nu(\mathrm{d}x). \end{align*} $$
The triplet
$(a,\nu ,\gamma )$
is uniquely determined and referred to as the free characteristic triplet for
$\mu $
, and the measure
$\nu $
is referred to as the free Lévy measure for
$\mu $
. Recently, free infinite divisibility has been proved for normal distributions [Reference Belinschi, Bożejko, Lehner and SpeicherBBLS11], some of the Boolean-stable distributions [Reference Arizmendi and HasebeAH14], some of the beta distributions, and some of the gamma distributions, including the chi-square distribution and powers of random variables distributed as these distributions [Reference HasebeHas14, Reference HasebeHas16] and generalized power distributions with free Poisson term [Reference Morishita and UedaMU20].
As one of the most important subclasses of freely infinitely divisible distributions, we introduce the concept of freely self-decomposable distributions. A probability measure
$\mu $
on
$\mathbb {R}$
is said to be freely self-decomposable if for any
$c\in (0,1),$
there exists
${\mu _c \in \mathcal {P}(\mathbb {R})}$
such that
$\mu = \mu _c \boxplus D_c(\mu )$
. It is easy to see that every freely self-decomposable distribution is freely infinitely divisible. Moreover, it is known that
$\mu \in \mathcal {P}(\mathbb {R})$
is freely self-decomposable if and only if its free Lévy measure
$\nu $
is of the form
$$ \begin{align*}\nu(\mathrm{d}x) = \frac{k(x)}{|x|} \mathbf{1}_{\mathbb{R}\setminus \{0\}}(x) \mathrm{d}x, \end{align*} $$
where the function k is nondecreasing on
$(-\infty ,0)$
and nonincreasing on
$(0,\infty )$
(see [Reference Barndorff-Nielsen and ThorbjørnsenBNT02] for details). Examples and properties of freely self-decomposable distributions were investigated by [Reference Hasebe, Sakuma and ThorbjørnsenHST19, Reference Hasebe and ThorbjørnsenHT16, Reference Hasebe and UedaHU23, Reference Maejima and SakumaMS23].
Let us consider
$\mu \in \mathcal {P}(\mathbb {R}_{\ge 0}) \setminus \{\delta _0\}$
. We define the S-transform of
$\mu $
by
$$ \begin{align*} S_\mu(z) := \frac{1+z}{z}\, \Psi_\mu^{\langle -1 \rangle}(z), \qquad z \in \Psi_\mu(i\mathbb{C}^+), \end{align*} $$
where
$\Psi _\mu $
is the moment generating function, that is,
$$ \begin{align*} \Psi_\mu(z) := \int_{[0,\infty)} \frac{zx}{1-zx}\, \mu(\mathrm{d}x), \end{align*} $$
and
$\Psi _\mu (i\mathbb {C}^+)$
is a region contained in the circle with diameter
$(\mu (\{0\})-1,0)$
. One can see that
For
$\mu ,\nu \in \mathcal {P}(\mathbb {R}_{\ge 0})\setminus \{\delta _0\}$
, we obtain
$S_{\mu \boxtimes \nu }=S_\mu S_\nu $
on the common domain in which three S-transforms are defined, where
$\mu \boxtimes \nu $
is called the free multiplicative convolution, which is the distribution of the product
$\sqrt {X}Y\sqrt {X}$
of freely independent positive random variables
$X\sim \mu $
and
$Y\sim \nu $
It is known that
$$ \begin{align} S_\mu(z) = \frac{R_\mu^{\langle -1 \rangle}(z)}{z} \end{align} $$
for small enough z in a neighborhood of
$(\mu (\{0\})-1,0)$
(see [Reference Bercovici and VoiculescuBV92, Reference Bercovici and VoiculescuBV93] for details).
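As a simple illustration of these definitions, for a point mass $\delta_a$ with $a>0$, one has $\Psi_{\delta_a}(z) = \frac{za}{1-za}$ and hence $\Psi_{\delta_a}^{\langle -1 \rangle}(z) = \frac{z}{a(1+z)}$, so that
$$ \begin{align*} S_{\delta_a}(z) = \frac{1+z}{z}\cdot\frac{z}{a(1+z)} = \frac{1}{a}. \end{align*} $$
Since $D_a(\mu)$ is the distribution of $\sqrt{a}X\sqrt{a}=aX$ for $X\sim\mu$, we have $D_a(\mu)=\delta_a \boxtimes \mu$ and therefore $S_{D_a(\mu)}(z) = \frac{1}{a}\, S_\mu(z)$ for $a>0$, a fact used implicitly in Section 4.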
Example 2.1 (Marchenko–Pastur distribution)
We define the function
$k_{\lambda ,\theta }$
as
$$ \begin{align*}k_{\lambda,\theta}(x):= \frac{\sqrt{(a^+-x)(x-a^-)}}{2\pi \theta x} \mathbf{1}_{(a^-,a^+)}(x), \end{align*} $$
where
$\lambda ,\theta>0$
and
$a^\pm :=\theta (\sqrt {\lambda }\pm 1)^2$
, respectively. The Marchenko–Pastur law
$\pi _{\lambda ,\theta }$
with the shape parameter
$\lambda $
and the mean parameter
$\theta $
is defined as the probability measure given by
$$ \begin{align*} \pi_{\lambda,\theta}(\mathrm{d}x) = \max\{0, 1-\lambda\}\,\delta_0(\mathrm{d}x) + k_{\lambda,\theta}(x)\, \mathrm{d}x. \end{align*} $$
It is known that
$$ \begin{align*} R_{\pi_{\lambda,\theta}}(z) = \frac{\lambda\theta z}{1-\theta z} \end{align*} $$
and
$$ \begin{align*} S_{\pi_{\lambda,\theta}}(z) = \frac{1}{\theta(\lambda+z)}. \end{align*} $$
From the form of R-transform, we notice that, for any
$\theta ,\lambda>0$
,
$$ \begin{align*} (\pi_{\lambda,\theta})^{\boxplus t} = \pi_{t\lambda,\theta} \qquad \text{for all } t>0. \end{align*} $$
According to [Reference Haagerup and SchultzHS07, Proposition 3.13], we have
$$ \begin{align} S_{\mu^{\langle -1 \rangle}}(z) = \frac{1}{S_\mu(-1-z)} \qquad \text{for } \mu\in\mathcal{P}(\mathbb{R}_{>0}). \end{align} $$
Example 2.2 (Free positive stable law with index
$1/2$
)
By (2.4), for
$\lambda \ge 1$
,
$$ \begin{align*} S_{(\pi_{\lambda,\theta})^{\langle -1 \rangle}}(z) = \frac{1}{S_{\pi_{\lambda,\theta}}(-1-z)} = \theta(\lambda-1-z). \end{align*} $$
In particular, the measure
$(\pi _{1,1})^{\langle -1\rangle }$
has a probability density
$\frac {\sqrt {4x-1}}{2\pi x^2} \mathbf {1}_{(1/4,\infty )}(x) \mathrm {d} x$
and is well known as the free positive stable law with index
$1/2$
, introduced by [Reference Bercovici and PataBP99].
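Indeed, the stated density can be checked by the change of variables $x\mapsto 1/x$: since $\pi_{1,1}(\mathrm{d}x) = \frac{\sqrt{x(4-x)}}{2\pi x}\mathbf{1}_{(0,4)}(x)\,\mathrm{d}x$, the reversed measure $(\pi_{1,1})^{\langle -1\rangle}$ has density
$$ \begin{align*} \frac{1}{x^{2}}\cdot \frac{\sqrt{\frac{1}{x}\left(4-\frac{1}{x}\right)}}{2\pi\cdot\frac{1}{x}} = \frac{\sqrt{4x-1}}{2\pi x^{2}}, \qquad x>\frac{1}{4}. \end{align*} $$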
2.2 Centered free Meixner distributions
In this section, we introduce the three-parameter family
$\{\nu _{s,a,b}: s\ge 0, a \in \mathbb {R}, b\ge -1\} \subset {\mathcal {P}}(\mathbb {R})$
in which their Cauchy transform is given by
$$ \begin{align} G_{\nu_{s,a,b}}(z) &= \cfrac{1}{z-\cfrac{s}{z-a-\cfrac{s+b}{z-a-\cfrac{s+b}{\ddots}}}}\end{align} $$
$$ \begin{align} &= \frac{(s+2b)z+sa-s\sqrt{(z-a)^2-4(s+b)}}{2(bz^2+saz+s^2)}. \end{align} $$
The measure
$\nu _{s,a,b}$
is called the centered free Meixner distribution. According to [Reference AnshelevichAns03, Reference Bożejko and BrycBB06, Reference Saitoh and YoshidaSY01],
$\nu _{s,a,b}$
is freely infinitely divisible whenever
$b\ge 0$
, and an integral representation for the R-transform of
$\nu _{s, a,b}$
is given by
$$ \begin{align*} R_{\nu_{s,a,b}}(z) = \int_{\mathbb{R}} \frac{z^2}{1-zx}\, s\, w_{a,b}(x)\, \mathrm{d}x, \end{align*} $$
where
$$ \begin{align*} w_{a,b}(x) = \frac{\sqrt{4b-(x-a)^2}}{2\pi b}\, \mathbf{1}_{[a-2\sqrt{b},\, a+2\sqrt{b}]}(x) \end{align*} $$
is the density of Wigner’s semicircle law with mean
$a\in \mathbb {R}$
and variance
$b\ge 0$
. Note that the above R-transform differs from [Reference AnshelevichAns03, Reference Bożejko and BrycBB06, Reference Saitoh and YoshidaSY01] by a factor of z. In this case, one can see that
$\nu _{s,a,b}= \nu _{1,a,b}^{\boxplus s}$
. By [Reference Bożejko and BrycBB06, Equation (4)], the R-transform of
$\nu _{s,a,b}$
admits the explicit form
$$ \begin{align} R_{\nu_{s,a,b}}(z)=\frac{2sz^2}{1-az+\sqrt{(1-az)^2-4bz^2}}, \qquad b\neq 0 \end{align} $$
and in the case
$b=0$
, it reduces to
$$ \begin{align*} R_{\nu_{s,a,0}}(z) = \frac{sz^2}{1-az}. \end{align*} $$
According to [Reference Bożejko and BrycBB06, Theorem 3.2], the centered free Meixner law
$\nu _{1,a,b}$
coincides with one of the following measures:
-
• the Wigner’s semicircle law if
$a=b=0$
; -
• the Marchenko–Pastur distribution if
$b=0$
and
$a\neq 0$
; -
• the free Pascal (negative binomial) distribution if
$b>0$
and
$a^2>4b$
; -
• the free gamma distribution if
$b>0$
and
$a^2=4b$
; -
• the pure free Meixner distribution if
$b>0$
and
$a^2<4b$
; -
• the free binomial distribution if
$-\min \{\alpha , 1-\alpha \} \le b <0$
, where
$\alpha =\int _{\mathbb {R}} x^2\ \nu _{1,a,b}(\mathrm { d} x)$
.
By (1.1) and (2.7), the Meixner-type free gamma distribution
$\eta _{t,\theta }$
can be expressed in terms of the centered free Meixner law (the free gamma distribution in the sense described above) as
$$ \begin{align*} \eta_{t,\theta} = \nu_{t\theta^2,\, 2\theta,\, \theta^2} \boxplus \delta_{t\theta}. \end{align*} $$
More generally, we show that the generalized Meixner-type free gamma distribution can be represented as a centered free Meixner law under a shift (see Proposition 3.2).
2.3 Entropy functionals with potentials
Assume that V is a
$C^1$
-potential function satisfying
and
$\mathcal {Z} := \int e^{-V(x)}\mathrm {d}x<\infty $
.
By the Lagrangian multiplier method, it is known that the Gibbs distribution
$\frac {1}{\mathcal {Z}} \exp \{-V(x)\}$
is the unique probability density that maximizes the Shannon entropy associated with the potential function V,
$$ \begin{align*} p \longmapsto -\int_{\mathbb{R}} p(x)\log p(x)\, \mathrm{d}x - \int_{\mathbb{R}} V(x)\, p(x)\, \mathrm{d}x, \end{align*} $$
among all probability density functions p on
$\mathbb {R}$
.
According to [Reference JohanssonJoh98], for the above potential function V, the supremum of the free entropy functional (see [Reference VoiculescuVoi93])
$$ \begin{align*} \Sigma_V(\mu) := \iint_{\mathbb{R}\times\mathbb{R}} \log|x-y|\, \mu(\mathrm{d}x)\mu(\mathrm{d}y) - \int_{\mathbb{R}} V(x)\, \mu(\mathrm{d}x) \end{align*} $$
among all probability measures
$\mu $
on
$\mathbb {R}$
, is finite and is attained by a unique maximizer
$\mu _V$
(namely, the equilibrium measure of
$\Sigma _V$
). The support of
$\mu _V$
is compact. Moreover,
$\mu _V$
satisfies the following equation:
$$ \begin{align*} V'(x) = 2\,(\mathcal{H}\mu_V)(x), \qquad x \in \operatorname{supp}(\mu_V), \end{align*} $$
where
$\mathcal {H}\mu $
is the Hilbert transform of a probability measure
$\mu $
on
$\mathbb {R}$
, that is,
$$ \begin{align*} (\mathcal{H}\mu)(x) := \mathrm{p.v.}\!\int_{\mathbb{R}} \frac{1}{x-y}\, \mu(\mathrm{d}y); \end{align*} $$
see also [Reference Saff and TotikST97, p. 27, Theorem 1.3] and [Reference BianeBia03, Equation (3.4)].
In connection with the above discussion, Hasebe and Szpojankowski [Reference Hasebe and SzpojankowskiHS19] pointed out a correspondence between the measure that maximizes the Shannon entropy and the equilibrium measure of the free entropy, from the perspective of maximizing entropy functionals with a potential. We call it the potential correspondence
Footnote
5
in this article. In [Reference Hasebe and SzpojankowskiHS19], it was observed that the potential correspondence maps the classical generalized inverse Gaussian (GIG) distributions to the free GIG distributions introduced in [Reference FèralFer06]. Moreover, this correspondence maps the normal distributions
$N(\mu ,\sigma ^2)$
to Wigner’s semicircle laws
$w_{\mu ,\sigma ^2}(x)\mathrm {d} x$
for
$\mu \in \mathbb {R}$
and
$\sigma>0$
, and the gamma distribution
$\gamma _{\lambda ,\theta }$
to the Marchenko–Pastur distributions
$\pi _{\lambda ,\theta }$
for
$\theta>0$
and
$\lambda \ge 1$
.
3 Generalized Meixner-type free gamma distributions
Recall the definition of generalized Meixner-type free gamma distributions.
Definition 3.1 Consider
$t,\theta>0$
and
$\lambda \ge 1$
. The generalized Meixner-type free gamma distribution
$\mu _{t,\theta ,\lambda }$
is the probability measure whose R-transform is given by
$$ \begin{align*} R_{\mu_{t,\theta,\lambda}}(z) = \int_{\mathbb{R}} \left( \frac{1}{1-zx} - 1 \right) \frac{t\, k_{\lambda,\theta}(x)}{x}\, \mathrm{d}x. \end{align*} $$
Here,
$k_{\lambda ,\theta }(x)$
denotes the density of the Marchenko–Pastur distribution, explicitly given by
$$ \begin{align*}k_{\lambda,\theta}(x):= \frac{\sqrt{(a^+-x)(x-a^-)}}{2\pi \theta x} \mathbf{1}_{(a^-,a^+)}(x), \end{align*} $$
with
$a^\pm :=\theta (\sqrt {\lambda }\pm 1)^2$
.
3.1 Relation with centered free Meixner distributions
We establish an important connection between the centered free Meixner distributions
$\nu _{s,a,b}$
(defined in Section 2.2) and the generalized Meixner-type free gamma distributions
$\mu _{t,\theta ,\lambda }$
.
Proposition 3.2 For
$t,\theta>0$
and
$\lambda \ge 1$
, we have
$$ \begin{align*} \mu_{t,\theta,\lambda} = \nu_{t\theta\lambda,\, \theta(\lambda+1),\, \theta^2\lambda} \boxplus \delta_t. \end{align*} $$
Proof A direct computation shows that
$$ \begin{align*} R_{\nu_{t\theta\lambda, \theta(\lambda+1), \theta^2\lambda} \boxplus \delta_t}(z) &= R_{{\nu_{t\theta\lambda, \theta(\lambda+1), \theta^2\lambda}} } (z) + tz\\ &=\int_{\mathbb{R}} \frac{z^2}{1-zx} \cdot t\theta\lambda \cdot w_{\theta(\lambda+1),\theta^2\lambda}(x)\mathrm{d}x + tz\\ &=\int_{\mathbb{R}} \frac{xz^2}{1-zx} \cdot t k_{\lambda,\theta}(x)\mathrm{d}x+tz\\ &=tz \int_{\mathbb{R}} \frac{1}{1-zx} k_{\lambda,\theta}(x)\mathrm{d}x\\ &=\int_{\mathbb{R}} \left(\frac{1}{1-zx}-1\right) \frac{tk_{\lambda,\theta}(x)}{x} \mathrm{d}x = R_{\mu_{t,\theta,\lambda}}(z), \end{align*} $$
as desired.
Remark 3.3 According to Proposition 3.2, the generalized Meixner-type free gamma distribution
$\mu _{t,\theta ,\lambda }$
coincides, up to a shift, with the centered free Meixner distribution
$\nu _{s,a,b}$
when the parameters satisfy
$$ \begin{align*} s = t\theta\lambda, \qquad a = \theta(\lambda+1), \qquad b = \theta^2\lambda. \end{align*} $$
In this setting, since
$a^2\ge 4b$
, the measure
$\nu _{s,a,b}$
is either the free Pascal distribution (when
$a^2>4b$
) or the free gamma distribution (when
$a^2=4b$
) (see [Reference Bożejko and BrycBB06, p. 65]). Consequently, the measure
$\mu _{t,\theta ,\lambda }$
can be interpreted as a suitably shifted version of either the free Pascal or the free gamma distribution.
It follows from (2.7) and Proposition 3.2 that
$$ \begin{align} R_{\mu_{t,\theta,\lambda}}(z) =t \cdot \frac{1+\theta(1-\lambda)z -\sqrt{(1+\theta(1-\lambda)z)^2-4\theta z}}{2\theta}. \end{align} $$
Due to the above representation of R-transform, we can understand the second parameter
$\theta $
for the measure
$\mu _{t,\theta ,\lambda }$
Since $R_{D_c(\mu)}(z) = R_{\mu}(cz)$ for any $c>0$ and $\mu\in\mathcal{P}(\mathbb{R})$, we get
$$ \begin{align*} D_c(\mu_{t,\theta,\lambda}) = \mu_{ct,\, c\theta,\, \lambda}, \qquad c>0. \end{align*} $$
We will explain the meaning of the third parameter
$\lambda $
in Theorem 4.3.
3.2 Density, atom, and moments
In this section, we investigate the density, atom, and moments of
$\mu _{t,\theta ,\lambda }$
. Thanks to (2.6) and Proposition 3.2, it is straightforward to see that
$$ \begin{align*}G_{\mu_{t,\theta,\lambda}}(z)=\frac{(t+2\theta)z-t(t-\theta(\lambda-1)) -t \sqrt{(z-\alpha^-)(z-\alpha^+)}}{2\theta z (z+t(\lambda-1))}, \qquad z\in \mathbb{C}^+, \end{align*} $$
where
$$ \begin{align} \alpha^{\pm} := t + \theta(\lambda+1) \pm 2\sqrt{\theta\lambda(t+\theta)}. \end{align} $$
The Stieltjes-inversion formula (see [Reference SchmüdgenSch12, Theorem F.6]) implies that
$$ \begin{align} \frac{\mathrm{d}\mu_{t,\theta,\lambda}}{\mathrm{d}x}(x) = \frac{t\sqrt{(x-\alpha^-)(\alpha^+-x)}}{2\pi \theta x (x +t(\lambda-1))} \mathbf{1}_{[\alpha^-,\alpha^+]}(x). \end{align} $$
Since
$$ \begin{align*} \lim_{z\to 0} z G_{\mu_{t,\theta,\lambda}}(z) = \frac{-t(t-\theta(\lambda-1)) + t|t-\theta(\lambda-1)|}{2t\theta(\lambda-1)}, \end{align*} $$
we have
$$ \begin{align} \mu_{t,\theta,\lambda}(\{0\}) = \begin{cases} 0, & 1 \le \lambda \le 1+t/\theta\\ \\ 1-\dfrac{t}{\theta(\lambda-1)}, & \lambda> 1+t/\theta. \end{cases} \end{align} $$
In particular,
$\mu _{t,\theta ,\lambda }$
has no singular continuous part since it is freely infinitely divisible (see [Reference Belinschi and BercoviciBB04, Theorem 3.4]). We summarize the above result as follows.
Proposition 3.4 (Density and atom)
For
$t,\theta>0$
and
$\lambda \ge 1$
, we get
$$ \begin{align*}\mu_{t,\theta,\lambda}(\mathrm{d} x) = \max\left\{0, 1-\dfrac{t}{\theta(\lambda-1)}\right\} \delta_0 (\mathrm{d} x) + \frac{t\sqrt{(x-\alpha^-)(\alpha^+-x)}}{2\pi \theta x (x +t(\lambda-1))} \mathbf{1}_{[\alpha^-,\alpha^+]}(x)\mathrm{d} x. \end{align*} $$
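As a consistency check on the edge points (with $\alpha^{\pm}$ as in (3.2)), note that
$$ \begin{align*} \alpha^+ + \alpha^- = 2\big(t+\theta(\lambda+1)\big), \qquad \alpha^+\alpha^- = \big(t+\theta(\lambda+1)\big)^2 - 4\theta\lambda(t+\theta) = \big(t-\theta(\lambda-1)\big)^2. \end{align*} $$
In particular, $\alpha^-\ge 0$, with equality exactly when $\lambda = 1+t/\theta$, which matches the behavior of the density at the origin described in Remark 3.9 below.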
Next, we compute the moments of
$\mu _{t,\theta ,\lambda }$
:
$$ \begin{align*} m_n(\mu_{t,\theta,\lambda}) := \int_{\mathbb{R}} x^n\, \mu_{t,\theta,\lambda}(\mathrm{d}x), \qquad n\in\mathbb{N}. \end{align*} $$
To obtain
$m_n(\mu _{t,\theta ,\lambda })$
, we first compute its n-th free cumulant
$\kappa _n(\mu _{t,\theta ,\lambda })$
, which is defined as the coefficient of
$z^n$
in the power series expansion of the R-transform
$R_{\mu _{t,\theta ,\lambda }}(z)$
. By the computation in the proof of Proposition 3.2, we have
$$ \begin{align*} R_{\mu_{t,\theta,\lambda}}(z) = tz \int_{\mathbb{R}} \frac{1}{1-zx}\, k_{\lambda,\theta}(x)\, \mathrm{d}x = \sum_{n\ge 0} t\, m_n(\pi_{\lambda,\theta})\, z^{n+1}, \end{align*} $$
where
$m_0(\pi _{\lambda ,\theta })=1$
. Comparing the coefficients of
$z^n$
then yields the following result:
$$ \begin{align} \kappa_{n+1}(\mu_{t,\theta,\lambda}) &= t m_n(\pi_{\lambda,\theta}) = \frac{t\theta^{n}}{n} \sum_{k=0}^{n-1} \binom{n}{k}\binom{n}{k+1} \lambda^k, \qquad n\ge1. \end{align} $$
Proposition 3.5 (Moments)
Consider
$t,\theta>0$
and
$\lambda \ge 1$
. Then,
$m_1(\mu _{t,\theta ,\lambda })=t$
and for
$n\ge 2$
,
$$ \begin{align*} m_n(\mu_{t,\theta,\lambda})=\sum_{m=1}^n \sum_{\substack{r_1,\dots, r_n\ge 0\\ r_1+\cdots+ r_n=m \\ r_1+ 2 r_2 + \cdots + nr_n=n}}P_m^{(n)}(r_1,\dots, r_n)t^m \theta^{n-m} \prod_{s=1}^{n-1} \left( \frac{1}{s}\sum_{k=0}^{s-1} \binom{s}{k}\binom{s}{k+1}\lambda^{k}\right)^{r_{s+1}}, \end{align*} $$
where
$$ \begin{align*} P_m^{(n)}(r_1,\dots,r_n) := \frac{n!}{r_1!\, r_2! \cdots r_n!\, (n-m+1)!}. \end{align*} $$
Proof One can observe that
$m_1(\mu _{t,\theta ,\lambda })=\kappa _1(\mu _{t,\theta ,\lambda })=t$
by (3.5). Let us consider
${n\ge 2}$
. It is known that the number of non-crossing partitions with
$r_1$
blocks of size
$1$
,
$r_2$
blocks of size
$2$
,
$\dots $
,
$r_n$
blocks of size n equals
$P_m^{(n)}(r_1,\dots , r_n)$
, where
$r_1+r_2+\cdots + r_n=m$
. By the moment-cumulant formula (cf. [Reference Nica and SpeicherNS06, Proposition 11.4]) and (3.6), for
$n\ge 2$
, we obtain
$$ \begin{align*} &m_n(\mu_{t,\theta,\lambda}) = \sum_{\pi \in \mathcal{NC}(n)} \prod_{V\in \pi} \kappa_{|V|}(\mu_{t,\theta,\lambda})\\ &\quad=\sum_{m=1}^n\sum_{\substack{r_1,\dots, r_n\ge 0\\ r_1+\cdots+ r_n=m \\ r_1+ 2 r_2 + \cdots + nr_n=n}}P_m^{(n)}(r_1,\dots, r_n)\kappa_1(\mu_{t,\theta,\lambda})^{r_1}\kappa_2(\mu_{t,\theta,\lambda})^{r_2}\dots \kappa_n(\mu_{t,\theta,\lambda})^{r_n}\\ &\quad=\sum_{m=1}^n \sum_{\substack{r_1,\dots, r_n\ge 0\\ r_1+\cdots+ r_n=m \\ r_1+ 2 r_2 + \cdots + nr_n=n}}P_m^{(n)}(r_1,\dots, r_n) t^m \theta^{n-m} \prod_{s=1}^{n-1} \left( \frac{1}{s}\sum_{k=0}^{s-1} \binom{s}{k}\binom{s}{k+1}\lambda^{k}\right)^{r_{s+1}}. \end{align*} $$
Example 3.6 Consider
$t, \theta>0$
and
$\lambda \ge 1$
. Let us set
$m_n:=m_n(\mu _{t,\theta ,\lambda })$
for short. By Proposition 3.5, we list the first four moments (a short check via the moment–cumulant formula follows the list):
-
•
$m_1=t$
; -
•
$m_2=t^2+\theta t$
; -
•
$m_3 =t^3+3\theta t^2 + \theta ^2(1+\lambda ) t$
; -
•
$m_4=t^4 + 6\theta t^3 + 2\theta ^2(3+2\lambda ) t^2 + \theta ^3(1+3\lambda +\lambda ^2) t$
.
In Section 4.2, we will notice that the measure
$\mu _{t,\theta ,\lambda }$
coincides with a certain scaled free beta prime distribution introduced by [Reference YoshidaYos20] for
$t,\theta>0$
and
$\lambda>1$
. According to [Reference YoshidaYos20, Theorem 6.1], another combinatorial representation of
$m_n(\mu _{t,\theta ,\lambda })$
will be obtained (see Corollary 4.4 later).
3.3 Free self-decomposability and unimodality
One can easily see free self-decomposability for the measure
$\mu _{t,\theta ,\lambda }$
.
Proposition 3.7 (Free self-decomposability)
Let us consider
$t,\theta>0$
and
$\lambda \ge 1$
. The measure
$\mu _{t,\theta ,\lambda }$
is freely self-decomposable if and only if
$\lambda =1$
.
Proof If
$\lambda =1$
, then the function
$tk_{\lambda ,\theta } (x)$
is non-increasing on
$(0,\infty )$
, and therefore the measure
$\mu _{t,\theta ,1}$
is freely self-decomposable for all
$t,\theta>0$
. For
$\lambda>1$
, the function
$k_{\lambda ,\theta }(x)$
is supported on
$(a^-,a^+)$
and
$a^->0$
. Hence,
$\mu _{t,\theta ,\lambda }$
is not freely self-decomposable for any
$t>0$
and
$\theta>0$
.
A probability measure
$\mu $
on
$\mathbb {R}$
is said to be unimodal if there exist
$a\in \mathbb {R}$
and a density function f, which is nondecreasing on
$(-\infty , a)$
and nonincreasing on
$(a,\infty )$
, such that
$$ \begin{align*} \mu(\mathrm{d}x) = \mu(\{a\})\,\delta_a(\mathrm{d}x) + f(x)\,\mathrm{d}x. \end{align*} $$
According to [Reference Hasebe and ThorbjørnsenHT16, Theorem 1], every freely self-decomposable distribution is unimodal. Hence,
$\mu _{t,\theta ,1}$
is unimodal for any
$t,\theta>0$
by Proposition 3.7. For given
$t,\theta>0$
, we investigate the values of
$\lambda $
for which
$\mu _{t,\theta ,\lambda }$
remains unimodal.
Proposition 3.8 (Unimodality)
For given
$t,\theta>0$
, the measure
$\mu _{t,\theta ,\lambda }$
is unimodal if and only if
$1\le \lambda \le 1+t/\theta $
.
Proof If
$1\le \lambda < 1+t/\theta $
, we get
$\mu _{t,\theta ,\lambda }(\mathrm {d}x) = \frac {t}{2\pi \theta } f(x)dx$
by (3.3), where
$$ \begin{align*}f(x)=\frac{\sqrt{(x-\alpha^-)(\alpha^+-x)}}{x(x+t(\lambda-1))}, \qquad x\in (\alpha^-,\alpha^+), \end{align*} $$
and
$\alpha ^\pm $
is defined by (3.2). By elementary calculus, we obtain
$$ \begin{align*}f'(x)=\frac{k(x)}{2x^2(x+t(\lambda-1))^2\sqrt{(x-\alpha^-)(\alpha^+-x)}}, \end{align*} $$
where
$$ \begin{align*} k(x) = 2x^3 - 3(\alpha^++\alpha^-)x^2 + \{4\alpha^+\alpha^- - t(\lambda-1)(\alpha^++\alpha^-)\}x + 2t(\lambda-1)\alpha^+\alpha^-. \end{align*} $$
We show that there exists a unique solution
$x \in (\alpha ^-,\alpha ^+)$
of the equation
$k(x)=0$
. Since
$$ \begin{align*} k(\alpha^-)&=\{\alpha^- +t (\lambda-1) \}\alpha^-(\alpha^+-\alpha^-)>0,\\ k(\alpha^+)&=\{\alpha^+ +t (\lambda-1) \}\alpha^+(\alpha^--\alpha^+)<0, \end{align*} $$
it follows from the intermediate value theorem that there exists at least one solution
$x\in (\alpha ^-,\alpha ^+)$
to the equation
$k(x)=0$
. Next, we establish the uniqueness of solutions to
$k(x)=0$
. To this end, we show that the function
$k(x)$
is monotone on the interval
$(\alpha ^-, \alpha ^+)$
. Since
$$ \begin{align*} k'(x) &= 6x^2-6(\alpha^++\alpha^-) x + \{4\alpha^+\alpha^- - t(\lambda-1)(\alpha^++\alpha^-)\}\\ &= 6\left(x-\frac{\alpha^++\alpha^-}{2}\right)^2 -\frac{3}{2}(\alpha^++\alpha^-)^2 + 4\alpha^+\alpha^- -t(\lambda-1)(\alpha^++\alpha^-)\\ &\le 6\left(\alpha^+-\frac{\alpha^++\alpha^-}{2}\right)^2 -\frac{3}{2}(\alpha^++\alpha^-)^2 + 4\alpha^+\alpha^- -t(\lambda-1)(\alpha^++\alpha^-)\\ &=-2\alpha^+\alpha^- - t(\lambda-1)(\alpha^++\alpha^-)<0, \end{align*} $$
the function
$k(x)$
is strictly decreasing on
$(\alpha ^-,\alpha ^+)$
. Hence, the equation
$k(x)=0$
has a unique solution in
$(\alpha ^-,\alpha ^+)$
, denoted by
$x_0\in (\alpha ^-,\alpha ^+)$
. Consequently, the function
$f(x)$
is strictly increasing on
$(\alpha ^-,x_0)$
and strictly decreasing on
$(x_0,\alpha ^+)$
, implying that
$\mu _{t,\theta ,\lambda }$
is unimodal with mode
$x_0$
.
If
$\lambda = 1+t/\theta $
, then
$\alpha ^-=0$
and
$\alpha ^+=4(\theta +t)$
. In this case, one can verify that
$f'(x)<0$
for all
$x\in (0,\alpha ^+)$
. Hence,
$\mu _{t,\theta ,1+t/\theta }$
is unimodal with mode
$0$
.
If
$\lambda> 1+t/\theta $
, then
$\mu _{t,\theta ,\lambda }$
has an atom at
$0$
by (3.4). However, the absolutely continuous part of
$\mu _{t,\theta ,\lambda }$
possesses a mode
$x_0$
, as shown above. Therefore, in this case,
$\mu _{t,\theta ,\lambda }$
is not unimodal.
Remark 3.9 According to the proof of Proposition 3.8, if
$1\le \lambda <1+t/\theta $
, then the density function of
$\mu _{t,\theta ,\lambda }$
is bounded by
$f(x_0)$
. In contrast, the density function of
$\mu _{t,\theta ,1+t/\theta }$
is unbounded since
$f(x)\to \infty $
as
$x\to 0^+$
.
3.4 Background driving free Lévy process
Let
$\mu $
be a freely self-decomposable distribution on
$\mathbb {R}$
. By [Reference Barndorff-Nielsen and ThorbjørnsenBNT06, Theorem 6.5], there exists a free Lévy processFootnote
6
$\{Z_t\}_{t\ge 0}$
affiliated with some
$W^\ast $
-probability space such that
$$ \begin{align*} \mu = \mathcal{L}\!\left( \int_0^\infty e^{-t}\, \mathrm{d}Z_t \right) \end{align*} $$
and the free Lévy measure
$\nu $
of the law
$\mathcal {L}(Z_1)$
satisfies
$$ \begin{align*} \int_{\mathbb{R}} \log(1+|x|)\, \nu(\mathrm{d}x) < \infty, \end{align*} $$
where
$\int _0^\infty e^{-t} \mathrm {d}Z_t$
is the free stochastic integral with respect to
$\{Z_t\}_{t\ge 0}$
(see [Reference Barndorff-Nielsen and ThorbjørnsenBNT06, Section 6] and [Reference Maejima and SakumaMS23] for details). The free Lévy process
$\{Z_t\}_{t\ge 0}$
is called the background driving free Lévy process of
$\mu $
.
By Proposition 3.7, the measure
$\mu _{1,\theta ,1}$
is freely self-decomposable. From the construction above, we can then consider the background driving free Lévy process
$\{Z_t\}_{t\ge 0}$
of
$\mu _{1,\theta ,1}$
.
Lemma 3.10 Let
$\{Z_t\}_{t\ge 0}$
be the free Lévy process above. Then,
-
(1) For
$t\ge 0$
, we have
$$ \begin{align*}R_{Z_t}(z) = \frac{tz}{\sqrt{1-4\theta z}}, \qquad z\in \mathbb{C}^-. \end{align*} $$
-
(2) The free Lévy measure of the law of
$Z_t$
is given by
$$ \begin{align*}\frac{t}{\pi x\sqrt{x(4\theta -x)}}\mathbf{1}_{(0,4\theta)}(x)\mathrm{d}x. \end{align*} $$
Proof (1) Recall from (3.1) that
$$ \begin{align*} R_{\mu_{1,\theta,1}}(z) = \frac{1-\sqrt{1-4\theta z}}{2\theta}. \end{align*} $$
By [Reference Maejima and SakumaMS23, Theorem 6.7], we have
$$ \begin{align*} R_{Z_t}(z) = tz\, R_{\mu_{1,\theta,1}}'(z) = \frac{tz}{\sqrt{1-4\theta z}}. \end{align*} $$
(2) We can further compute the R-transform of
$Z_t$
as follows:
$$ \begin{align*} R_{Z_t}(z) &= \frac{tz}{\sqrt{1-4\theta z}}=-\frac{t}{2\sqrt{\theta}} \frac{-z}{\sqrt{-z+\frac{1}{4\theta}}}\\ &=-\frac{t}{2\sqrt{\theta}} \int_{\frac{1}{4\theta}}^\infty \frac{-z}{-z+x} \cdot \frac{1}{\pi \sqrt{x-\frac{1}{4\theta}}}\mathrm{d}x = \int_0^{4\theta} \kern-1.5pt\left( \frac{1}{1-zx}-1\right) \frac{t}{\pi x \sqrt{x(4\theta -x)}}\mathrm{d}x, \end{align*} $$
where the third equality follows from [Reference Schilling, Song and VondracekSSV12, p. 304] or [Reference Maejima and SakumaMS23, Example 7.2]. Consequently, the Lévy measure of
$Z_t$
is
$\frac {t}{\pi x \sqrt {x(4\theta -x)}} \mathbf {1}_{(0,4\theta )}(x)\mathrm {d}x$
.
Remark 3.11 In [Reference Maejima and SakumaMS23, Example 7.2], the R-transform and the free Lévy measure of the Meixner-type free gamma distribution
$\eta _{t,\theta }=\mu _{t\theta ,\theta ,1}$
were already investigated. In fact, the above lemma can be regarded as a generalization of [Reference Maejima and SakumaMS23, Example 7.2].
The regularity properties of the law of
$Z_t$
can be analyzed as follows.
Corollary 3.12 For any
$t>0$
, the law
$\mathcal {L}(Z_t)$
is absolutely continuous with respect to Lebesgue measure with continuous density on
$\mathbb {R}$
.
Proof Let
$\nu _t$
be the free Lévy measure of
$\mathcal {L}(Z_t)$
. Due to Lemma 3.10(2), we have
$$ \begin{align*} \nu_t(\mathbb{R}) = \int_0^{4\theta} \frac{t}{\pi x\sqrt{x(4\theta -x)}}\, \mathrm{d}x = \infty. \end{align*} $$
According to [Reference Hasebe and SakumaHS17, Theorem 3.4], the measure
$\mathcal {L}(Z_t)$
is absolutely continuous with respect to Lebesgue measure with continuous density on
$\mathbb {R}$
.
Further, we can obtain the n-th free cumulant of the law of
$Z_t$
.
Proposition 3.13 For
$n\ge 1$
, we have
$$ \begin{align*}\kappa_1(Z_t)= t \qquad \text{and} \qquad \kappa_n(Z_t) = t(2\theta)^{n-1} \frac{(2n-3)!!}{(n-1)!} \quad \text{for} \quad n\ge 2. \end{align*} $$
Proof By Lemma 3.10(2), for z small enough (more strictly,
$|z|<1/4\theta $
), we have
$$ \begin{align*} R_{Z_t}(z) &= z \int_0^{4\theta} \frac{1}{1-zx} \frac{t}{\pi\sqrt{x(4\theta-x)}}\mathrm{d}x\\ &=z \int_0^1 \frac{1}{1-4\theta z u} \cdot \frac{t}{\pi\sqrt{u(1-u)}}\mathrm{d}u \qquad {(x=4\theta u)}\\ &=\frac{tz}{\pi} \int_0^1 u^{-\frac{1}{2}} (1-u)^{-\frac{1}{2}}(1-4\theta zu)^{-1}\mathrm{d}u\\ &=\frac{tz}{\pi} \frac{\Gamma(\frac{1}{2})^2}{\Gamma(1)} {}_2 F_1\left(1,\frac{1}{2}; 1; 4\theta z \right) \qquad \text{(by Euler integral representation)}\\ &=tz \sum_{n=0}^\infty \frac{(1)^{(n)}(\frac{1}{2})^{(n)}}{(1)^{(n)} n!} (4\theta z)^n \qquad {((x)^{(n)}:=x(x+1)\dots (x+n-1))}\\ &=tz + \sum_{n=2}^\infty t (2\theta)^{n-1}\frac{(2n-3)!!}{(n-1)!} z^n. \end{align*} $$
Comparing the coefficients of
$z^n$
then yields the desired result.
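We record an equivalent closed form for convenience: since $2^{n-1}(2n-3)!!\,(n-1)! = (2n-2)!$, the formula in Proposition 3.13 can be rewritten as $\kappa_n(Z_t) = t\,\theta^{n-1}\binom{2n-2}{n-1}$ for $n\ge 2$, which is also immediate from the binomial series
$$ \begin{align*} R_{Z_t}(z) = \frac{tz}{\sqrt{1-4\theta z}} = tz\sum_{n\ge 0}\binom{2n}{n}(\theta z)^n = tz + \sum_{n\ge 2} t\,\theta^{n-1}\binom{2n-2}{n-1} z^{n}. \end{align*} $$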
3.5 Correlation of a free gamma process
In the noncommutative setting, we can consider covariance and correlation as follows. Let
$(\mathcal {A},\varphi )$
be a
$C^\ast $
-probability space and
$x,y\in \mathcal {A}$
. Then, their covariance is defined by
$$ \begin{align*} \mathrm{Cov}(x,y) := \varphi(xy) - \varphi(x)\varphi(y). \end{align*} $$
It is easy to see that
$\text {Cov}(x,y)=0$
if
$x,y$
are free. Next, their correlation can be defined by
$$ \begin{align*}\text{Corr}(x,y) := \frac{\text{Cov}(x,y)}{\sqrt{\kappa_2(x)}\sqrt{\kappa_2(y)}}, \end{align*} $$
when
$x,y$
have nonzero second free cumulant (variance). In general, we note that
$\text {Corr}(x,y) \neq \text {Corr}(y,x)$
since
$xy\neq yx$
.
Let
$\{X_t\}_{t\ge 0}$
be a stochastic process in a
$C^\ast $
-probability space. The process
$\{X_t\}_{t\ge 0}$
is called a Meixner-type free gamma process if it is a free Lévy process whose marginal distribution at time
$1$
is
$\mu _{1,\theta ,\lambda }$
for some
$\theta>0$
and
$\lambda \ge 1$
. By the definition of free Lévy processes, it follows that
$X_t\sim \mu _{t,\theta ,\lambda }$
for
$t>0$
. Below, we compute the correlation of a free gamma process.
Proposition 3.14 (Correlation)
Let
$\{X_t\}_{t\ge 0}$
be a Meixner-type free gamma process in a
$C^\ast $
-probability space
$(\mathcal {A},\varphi )$
. For any
$s,t>0$
, we have
$$ \begin{align*} \mathrm{Corr}(X_s,X_t) = \sqrt{\frac{\min\{s,t\}}{\max\{s,t\}}}. \end{align*} $$
Proof For
$s<t$
, we have
$$ \begin{align*} \text{Cov}(X_s,X_t) &= \varphi(X_s X_t) - \varphi(X_s)\varphi(X_t)\\ &= \varphi(X_s (X_t-X_s) + X_s^2) -\varphi(X_s)\varphi(X_t)\\ &= \varphi(X_s (X_t-X_s)) + \varphi(X_s^2)- \varphi(X_s)\varphi(X_t). \end{align*} $$
By Proposition 3.5 (or Example 3.6), we have
$\varphi (X_s)=s$
and
$\varphi (X_s^2)=s^2+\theta s$
. Since
$\{X_t\}_{t\ge 0}$
is a free Lévy process,
$X_s$
and
$X_t-X_s$
are free, and hence
$\varphi (X_s(X_t-X_s))=\varphi (X_s) \varphi (X_t-X_s)$
. Moreover, by the definition of a free Lévy process,
$X_t-X_s \overset {\mathrm {d}}{=} X_{t-s}$
. Finally, we get
$$ \begin{align*} \mathrm{Cov}(X_s,X_t) = s(t-s) + (s^2+\theta s) - st = \theta s, \qquad s<t. \end{align*} $$
By (3.5) (or Example 3.6 again), we obtain
$\kappa _2(X_s)=\kappa _2(\mu _{s,\theta ,\lambda })=\theta s$
. Hence,
$$ \begin{align*} \mathrm{Corr}(X_s,X_t) = \frac{\theta s}{\sqrt{\theta s}\sqrt{\theta t}} = \sqrt{\frac{s}{t}}, \qquad s<t. \end{align*} $$
Recalling the computation of their covariance, we observe that
$\text {Cov}(X_t,X_s)=\text {Cov}(X_s,X_t)$
even if
$s<t$
. Consequently, it also follows that
$\text {Corr}(X_t,X_s)=\text {Corr}(X_s,X_t)$
.
If
$s=t$
, then
$$ \begin{align*} \mathrm{Cov}(X_s,X_s) = \varphi(X_s^2) - \varphi(X_s)^2 = \theta s = \kappa_2(X_s), \end{align*} $$
and therefore
$\text {Corr}(X_s,X_s)=1$
.
In classical probability, it is known that the correlation of the gamma process
$\{G_t\}_{t\ge 0}$
is
$$ \begin{align*} \mathrm{Corr}(G_s,G_t) = \sqrt{\frac{\min\{s,t\}}{\max\{s,t\}}}, \qquad s,t>0. \end{align*} $$
For this reason, Proposition 3.14 is entirely analogous to the above classical result.
4 Free convolution formula and free beta prime distributions
4.1 S-transform
In this section, we compute the S-transform of
$\mu _{t,\theta ,\lambda }$
.
Lemma 4.1 For
$t,\theta>0$
and
$\lambda \ge 1$
, we have
$$ \begin{align*} S_{\mu_{t,\theta,\lambda}}(z) = \frac{t-\theta z}{t\,(t+\theta(\lambda-1)z)}. \end{align*} $$
Proof A straightforward computation together with (2.3) and (3.1) shows that the compositional inverse of
$R_{\mu _{t,\theta ,\lambda }} $
is
$$ \begin{align*}R_{\mu_{t,\theta,\lambda}}^{\langle -1 \rangle} (z) = \frac{z(\theta z-t)}{\theta t (1-\lambda)z -t^2}, \qquad z\in (-1+ \mu_{t,\theta,\lambda}(\{0\}),0), \end{align*} $$
which in turn implies that
$$ \begin{align*} S_{\mu_{t,\theta,\lambda}} (z) = \frac{R_{\mu_{t,\theta,\lambda}}^{\langle -1 \rangle} (z)}{z}= \frac{t-\theta z}{t(t+\theta (\lambda-1)z)}.\\[-42pt] \end{align*} $$
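As a quick sanity check, $S_{\mu_{t,\theta,\lambda}}(0) = 1/t = 1/m_1(\mu_{t,\theta,\lambda})$, as it should be, and for $\lambda=1$ the formula specializes to
$$ \begin{align*} S_{\mu_{t,\theta,1}}(z) = \frac{t-\theta z}{t^{2}}, \end{align*} $$
which is the form used in the proof of Lemma 4.2 below.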
In particular, we show that the measure
$\mu _{t,\theta ,1}$
is the reversed measure of a Marchenko–Pastur distribution.
Lemma 4.2 For
$t,\theta>0$
, we get
$\mu _{t,\theta ,1} = \left (\pi _{1+\frac {t}{\theta },\frac {\theta }{t^2}} \right )^{\langle -1\rangle }$
.
Proof The desired formula follows from the S-transform of
$(\pi _{1+t/\theta , \theta /t^2})^{\langle -1\rangle }$
. Actually, from Example 2.2 and Lemma 4.1, we have
$$ \begin{align*} S_{(\pi_{1+\frac{t}{\theta},\frac{\theta}{t^2}})^{\langle -1\rangle}}(z) = \frac{\theta}{t^2}\left(\frac{t}{\theta}-z\right) = \frac{t-\theta z}{t^2} = S_{\mu_{t,\theta,1}}(z). \end{align*} $$
4.2 Free convolution formula
Using the S-transform, we can analyze the effect of the third parameter
$\lambda $
on the measure
$\mu _{t,\theta ,\lambda }$
as follows.
Theorem 4.3 (Free convolution formula for
$\mu _{t,\theta ,\lambda }$
)
Consider
$t,\theta>0$
. If
$\lambda>1$
, then
$$ \begin{align} \mu_{t,\theta,\lambda} = D_{t(\lambda-1)} \left(\pi_{q,1} \boxtimes (\pi_{1+\frac{t}{\theta},1})^{\langle -1 \rangle}\right) = \mu_{t,\theta,1} \boxtimes \pi_{q,q^{-1}}, \end{align} $$
where
$q = \frac {t}{\theta (\lambda -1)}$
. In particular,
$\mu _{t,\theta ,1+t/\theta }= \mu _{t,\theta ,1} \boxtimes \pi _{1,1}$
. Hence, the measure
$\mu _{t,\theta ,1+t/\theta }$
belongs to the class of free compound Poisson distributions.
Proof By Examples 2.1 and 2.2 and Lemma 4.1, we have
$$ \begin{align*} S_{ D_{t(\lambda-1)}\left(\pi_{q,1} \boxtimes (\pi_{1+\frac{t}{\theta},1})^{\langle -1\rangle} \right)}(z) &=\frac{1}{t(\lambda-1)} \frac{1}{\frac{t}{\theta(\lambda-1)}+z} \left(\frac{t}{\theta}-z\right)\\ &=\frac{t-\theta z}{t(t+\theta(\lambda-1)z)}\\ &=S_{\mu_{t,\theta,\lambda}}(z). \end{align*} $$
Thus, equation (4.1) holds. Since
$\pi _{\lambda ,\theta }=D_\theta (\pi _{\lambda ,1})$
, we obtain
$$ \begin{align*} \mu_{t,\theta,\lambda} &=D_{t(\lambda-1)}\left(\pi_{\frac{t}{\theta(\lambda-1)},1} \boxtimes (\pi_{1+\frac{t}{\theta},1})^{\langle -1\rangle} \right) \\ &=D_{t(\lambda-1)} \circ D_{\frac{t}{\theta(\lambda-1)}} \left( \pi_{\frac{t}{\theta(\lambda-1)},\frac{\theta(\lambda-1)}{t}} \boxtimes (\pi_{1+\frac{t}{\theta},1})^{\langle -1\rangle} \right)\\ &=\pi_{q,q^{-1}}\boxtimes (\pi_{1+\frac{t}{\theta},\frac{\theta}{t^2}})^{\langle-1\rangle}\\ &=\pi_{q,q^{-1}}\boxtimes \mu_{t,\theta,1}, \end{align*} $$
where the last equality follows from Lemma 4.2.
According to Theorem 4.3, for any
$t,\theta>0$
and
$\lambda>1$
, the measure
$\mu _{t,\theta ,\lambda }$
coincides with
$$ \begin{align} D_{t(\lambda-1)}\left(f\beta' \left(\frac{t}{\theta(\lambda-1)}, 1+\frac{t}{\theta} \right)\right), \end{align} $$
where
$$ \begin{align*} f\beta'(a, b) := \pi_{a,1} \boxtimes (\pi_{b,1})^{\langle -1 \rangle} \end{align*} $$
is the free beta prime distribution, introduced by Yoshida [Reference YoshidaYos20, Section 3.4]. Thus, by using [Reference YoshidaYos20, Theorem 6.1], we obtain a combinatorial formula for the moments of
$\mu _{t,\theta ,\lambda }$
.
Corollary 4.4 For
$t,\theta>0$
and
$\lambda>1$
, the n-th moment of
$\mu _{t,\theta ,\lambda }$
is given by
$$ \begin{align*}m_n(\mu_{t,\theta,\lambda}) = \theta^n \sum_{\pi \in \mathcal{NCL}(n)}\lambda^{|\pi|-\text{sg}(\pi)} \left(\frac{t}{\theta}\right)^{|\pi|-\text{dc}(\pi)}, \end{align*} $$
where
$\mathcal {NCL}(n)$
is the set of all non-crossing linked partitions of
$\{1,\dots ,n\}$
(see [Reference DykemaDyk07]),
$\mathrm {sg}(\pi )$
is the number of singletons in
$\pi $
, and
$\mathrm {dc}(\pi )$
is the number of doubly covered elements by
$\pi $
(see [Reference YoshidaYos20, Definition 5.7]).
4.3 Free beta prime distributions
In the previous section, we observed that
$\mu _{t,\theta ,\lambda }$
is a suitably scaled free beta prime distribution when
$\lambda>1$
. Using this fact, we now investigate the free beta prime distribution
$f\beta '(a,b)$
for any
$a>0$
and
$b>1$
. A straightforward computation of the S-transform yields the following formula.
Proposition 4.5 For any
$a>0$
and
$b>1$
, we have
$$ \begin{align*} f\beta'(a,b) = \mu_{\frac{a}{b-1},\, \frac{a}{(b-1)^2},\, \frac{a+b-1}{a}}. \end{align*} $$
Proof It is easy to see that
$$ \begin{align*} S_{f\beta'(a,b)}(z) = S_{\pi_{a,1}}(z)\, S_{(\pi_{b,1})^{\langle -1 \rangle}}(z) = \frac{b-1-z}{a+z}. \end{align*} $$
To determine
$t,\theta>0$
and
$\lambda>1$
from
$a>0$
and
$b>1$
, we compare the following equation:
$$ \begin{align*} \frac{b-1-z}{a+z} = \frac{t-\theta z}{t\,(t+\theta(\lambda-1)z)}, \end{align*} $$
where the RHS is the S-transform of some
$\mu _{t,\theta ,\lambda }$
. Equivalently,
Thus, one can see that
$t=\frac {a}{b-1}$
,
$\theta = \frac {a}{(b-1)^2}$
, and
$\lambda =\frac {a+b-1}{a}$
, as desired.
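Indeed, with these parameters one checks directly that
$$ \begin{align*} \theta(\lambda-1) = \frac{a}{(b-1)^{2}}\cdot\frac{b-1}{a} = \frac{1}{b-1}, \qquad t(\lambda-1) = 1, \qquad \frac{t}{\theta(\lambda-1)} = a, \qquad 1+\frac{t}{\theta} = b, \end{align*} $$
so that the scaling in (4.2) is trivial and the right-hand side of (4.2) is exactly $f\beta'(a,b)$.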
From Proposition 4.5, we can identify analytic properties of the free beta prime distribution that were not discussed in [Reference YoshidaYos20].
Corollary 4.6 Let us consider
$a>0$
and
$b>1$
. Then,
-
(1) The free Lévy measure of
$f\beta '(a,b)$
is given by
$$ \begin{align*}\frac{a}{b-1} \frac{k_{A,B}(x)}{x}\mathrm{d} x, \end{align*} $$
where $A=\frac {a+b-1}{a}$
and
$B=\frac {a}{(b-1)^2}$
. Recall that
$k_{A,B}(x)$
is the density function of the Marchenko–Pastur distribution
$\pi _{A,B}$
.
-
(2)
$f\beta '(a,b)$
is not freely self-decomposable. -
(3)
$f\beta '(a,b)$
is unimodal if and only if
$a\ge 1$
.
Proof The free Lévy measure of
$f\beta '(a,b)$
follows directly from the definition of
$\mu _{t,\theta ,\lambda }$
and Proposition 4.5. Since
$\frac {a+b-1}{a}>1$
, Proposition 3.7 together with Proposition 4.5 implies that
$f\beta '(a,b)= \mu _{\frac {a}{b-1}, \frac {a}{(b-1)^2}, \frac {a+b-1}{a}}$
is not freely self-decomposable. Moreover, by Propositions 3.8 and 4.5, the measure
$f\beta '(a,b)$
is unimodal if and only if
$$ \begin{align*}1 < \frac{a+b-1}{a} \le 1+ \frac{\frac{a}{b-1}}{\frac{a}{(b-1)^2}} = b, \end{align*} $$
which is equivalent to
$a\ge 1$
.
5 Potential correspondence
Let
$t,\theta>0$
and
$\lambda \ge 1$
. We consider the following potential function on
$\mathbb {R}_{>0}$
:
$$ \begin{align*} V_{t,\theta,\lambda} (x):= \begin{cases} \left(2+\dfrac{t}{\theta}\right) \log x + \dfrac{t^2}{\theta x}, & \lambda=1,\\ \left(1-\dfrac{t}{\theta(\lambda-1)} \right) \log x + \left( 1 + \dfrac{t\lambda}{\theta(\lambda-1)}\right) \log (x+t(\lambda-1)), & \lambda>1. \end{cases} \end{align*} $$
One can verify that
where
$\alpha _{\pm }$
was defined in (3.2). First, we analyze the Gibbs measure associated with
$V_{t,\theta ,\lambda }$
in order to investigate analogous properties for
$\mu _{t,\theta ,\lambda }$
. The ultimate goal of this section is to determine whether
$\mu _{t,\theta ,\lambda }$
is the equilibrium measure of the free entropy associated with
$V_{t,\theta ,\lambda }$
.
5.1 Gibbs measure associated with potential
We study the Gibbs measure associated with potential
$V_{t,\theta ,\lambda }$
:
$$ \begin{align*} \rho_{t,\theta,\lambda}(\mathrm{d}x) := \frac{1}{\mathcal{Z}_{t,\theta,\lambda}} \exp\{-V_{t,\theta,\lambda}(x)\}\, \mathbf{1}_{(0,\infty)}(x)\, \mathrm{d}x, \end{align*} $$
where
$\mathcal {Z}_{t,\theta ,\lambda }$
is the normalizing constant (i.e., the partition function). We first present an explicit formula for the partition function
$\mathcal {Z}_{t,\theta ,\lambda }$
as follows.
Lemma 5.1 Let us consider
$t,\theta>0$
and
$\lambda \ge 1$
. Then,
$$ \begin{align*} \mathcal{Z}_{t,\theta,\lambda}= \begin{cases} \left(\dfrac{t^2}{\theta}\right)^{-1-\frac{t}{\theta}} \Gamma\left(1+\dfrac{t}{\theta}\right), & \lambda=1\\ (t(\lambda-1))^{-\frac{t}{\theta}-1} B\left(\dfrac{t}{\theta(\lambda-1)}, 1+\dfrac{t}{\theta}\right), & \lambda>1. \end{cases} \end{align*} $$
Proof A simple computation leads to the desired results. Actually, if
$\lambda =1$
, then
$$ \begin{align*} {\mathcal{Z}}_{t,\theta,\lambda} &= \int_0^\infty x^{-(2+\frac{t}{\theta})} e^{-\frac{t^2}{\theta x}}\mathrm{d}x\\ &= \left(\frac{t^2}{\theta}\right)^{-1-\frac{t}{\theta}} \int_0^\infty u^{\frac{t}{\theta}}e^{-u}\mathrm{d}u \qquad \text{(by putting } u= t^2/(\theta x))\\ &=\left(\frac{t^2}{\theta}\right)^{-1-\frac{t}{\theta}} \Gamma \left(1+\frac{t}{\theta}\right). \end{align*} $$
If
$\lambda>1$
, then
$$ \begin{align*} {\mathcal{Z}}_{t,\theta,\lambda} &= \int_0^\infty x^{-1 +\frac{t}{\theta(\lambda-1)}} \left(x +t(\lambda-1)\right)^{-1-\frac{t\lambda}{\theta(\lambda-1)}}\mathrm{d}x\\ &=\int_0^\infty u^{\frac{t}{\theta}} \left(1+t(\lambda-1)u\right)^{-1- \frac{t\lambda}{\theta(\lambda-1)}} \mathrm{d}u \qquad {(x=1/u)}\\ &=\left(t(\lambda-1)\right)^{-1-\frac{t}{\theta}} \int_0^\infty v^{\frac{t}{\theta}} (1+v)^{-1- \frac{t\lambda}{\theta(\lambda-1)}} \mathrm{d}v \qquad {(v=t(\lambda-1)u)}\\ &=\left(t(\lambda-1)\right)^{-1-\frac{t}{\theta}} B\left(1+\frac{t}{\theta}, \frac{t}{\theta(\lambda-1)}\right). \end{align*} $$
By symmetry of beta functions, we obtain the desired result.
By Lemma 5.1, the Gibbs measure
$\rho _{t,\theta ,\lambda }$
has the following form:
$$ \begin{align} \rho_{t,\theta,1} (\mathrm{d} x) &=\frac{(\frac{t^2}{\theta})^{1+\frac{t}{\theta}}}{\Gamma(1+\frac{t}{\theta})} x^{-(2+ \frac{t}{\theta})} e^{-\frac{t^2}{\theta x}} \mathrm{d}x, \end{align} $$
$$ \begin{align} \rho_{t,\theta,\lambda}(\mathrm{d} x) &=\frac{(t(\lambda-1))^{\frac{t}{\theta}+1}}{B(\frac{t}{\theta(\lambda-1)},1+\frac{t}{\theta})} x^{-1 + \frac{t}{\theta(\lambda-1)}} \left(x+t(\lambda-1)\right)^{-1-\frac{t\lambda}{\theta(\lambda-1)}}\mathrm{d}x, \quad \lambda>1. \end{align} $$
From the above representation (5.1), the measure
$\rho _{t,\theta ,1}$
is the inverse gamma distribution. More precisely,
$$ \begin{align*} \rho_{t,\theta,1} = \left(\gamma_{1+\frac{t}{\theta},\, \frac{\theta}{t^2}}\right)^{\langle -1 \rangle}, \end{align*} $$
where
$(\gamma _{a,b})^{\langle -1\rangle }$
is defined by
$$ \begin{align*} (\gamma_{a,b})^{\langle -1 \rangle}(\mathrm{d}x) = \frac{1}{\Gamma(a)\, b^{a}}\, x^{-a-1} e^{-\frac{1}{bx}}\, \mathbf{1}_{(0,\infty)}(x)\, \mathrm{d}x. \end{align*} $$
It is known that the class
$\{(\gamma _{a,b})^{\langle -1\rangle }:a,b>0\}$
includes the positive
$1/2$
-classical stable law
$\sqrt {\frac {c}{2\pi }} x^{-\frac {3}{2}} e^{-\frac {c}{2x}}\mathrm {d}x$
,
$c>0$
(it is also called Lévy distribution), but the measure
$\rho _{t,\theta ,1}$
cannot be the positive
$1/2$
-classical stable law for any
$t,\theta>0$
.
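Indeed, in the inverse-gamma parametrization above, the Lévy distribution is
$$ \begin{align*} \sqrt{\frac{c}{2\pi}}\, x^{-\frac{3}{2}} e^{-\frac{c}{2x}}\, \mathrm{d}x = \left(\gamma_{\frac{1}{2},\, \frac{2}{c}}\right)^{\langle -1 \rangle}(\mathrm{d}x), \end{align*} $$
whereas $\rho_{t,\theta,1} = (\gamma_{1+\frac{t}{\theta},\frac{\theta}{t^2}})^{\langle -1\rangle}$ has shape parameter $1+\frac{t}{\theta}>1\neq \frac{1}{2}$.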
For
$\lambda>1$
, the formula (5.2) implies that
$$ \begin{align} \rho_{t,\theta,\lambda} &= D_{t(\lambda-1)}\left( \beta' \left(\frac{t}{\theta(\lambda-1)}, 1+\frac{t}{\theta} \right)\right) \end{align} $$
where
$\beta '(a,b)$
is the beta prime distribution, that is,
$$ \begin{align*} \beta'(a,b)(\mathrm{d}x) = \frac{1}{B(a,b)}\, x^{a-1} (1+x)^{-a-b}\, \mathbf{1}_{(0,\infty)}(x)\, \mathrm{d}x. \end{align*} $$
Equation (5.5) is obtained by replacing
$\boxtimes $
,
$\pi _{a,1,}$
and
$\mu _{t,\theta ,\lambda }$
in equation (4.1) of Theorem 4.3 with
$\circledast $
,
$\gamma _{a,1,}$
and
$\rho _{t,\theta ,\lambda }$
, respectively. Using
$\gamma _{a,b} = D_b(\gamma _{a,1})$
for
$a,b>0$
, and equations (5.3) and (5.5), we get
This formula can be interpreted as the classical counterpart of equation (4.2). In particular, if we put
$\lambda =1+t/\theta $
, then
$$ \begin{align*} \rho_{t,\theta,1+t/\theta} = \rho_{t,\theta,1} \circledast \gamma_{1,1}. \end{align*} $$
The measure
$\rho _{t,\theta , 1+t/\theta }$
belongs to the ME (mixture of exponential distributions), that is, the distribution of the form
$EZ$
, where
$E,Z$
are independent random variables such that E is distributed as some exponential distribution and
$Z\ge 0$
(see [Reference GoldieGol67, Reference SteutelSte67] for details). According to [Reference BondessonBon92, Reference Ismail and KelkerIK79], the inverse gamma distributions and the beta prime distributions are classically self-decomposable. Thus, for any
$t,\theta>0$
and
$\lambda \ge 1$
, the measure
$\rho _{t,\theta ,\lambda }$
is self-decomposable.
Summarizing the discussion so far, we arrive at the following theorem.
Theorem 5.2 (Convolution formula for
$\rho _{t,\theta ,\lambda }$
)
Let us consider
$t,\theta>0$
,
$\lambda> 1$
and
${q=\frac {t}{\theta (\lambda -1)}}$
. Then, we obtain
$$ \begin{align*} \rho_{t,\theta, \lambda} = D_{t(\lambda-1)} \left( \gamma_{q,1} \circledast (\gamma_{1+\frac{t}{\theta},1})^{\langle -1\rangle}\right) = \rho_{t,\theta,1} \circledast \gamma_{q,q^{-1}}, \end{align*} $$
and
$\rho _{t,\theta ,\lambda }$
is self-decomposable. In particular,
$\rho _{t,\theta , 1+t/\theta }= \rho _{t,\theta ,1} \circledast \gamma _{1,1}$
, and hence it belongs to the ME.
Remark 5.3 Let
$p_{t,\theta , \lambda }$
be the density function of the measure
$\rho _{t,\theta ,\lambda }$
. We obtain the following ordinary differential equations:
$$ \begin{align*} \frac{p_{t,\theta,1}'(x)}{p_{t,\theta,1}(x)}+ \frac{x-\frac{t^2}{2\theta+t}}{\frac{\theta}{2\theta+t}x^2}=0 \end{align*} $$
and
$$ \begin{align*} \frac{p_{t,\theta,\lambda}'(x)}{p_{t,\theta,\lambda}(x)} + \frac{(x+ \frac{t(\lambda-1)}{2}) - \frac{t^2(\lambda+1)}{2(2\theta+t)}}{\frac{\theta}{2\theta+t} (x+ \frac{t(\lambda-1)}{2})^2 - \frac{t^2\theta(\lambda-1)^2}{4(2\theta+t)}} = 0, \qquad \lambda>1. \end{align*} $$
Thus, we see that the family
$\{\rho _{t,\theta ,\lambda }: t,\theta>0, \lambda \ge 1\} \subset {\mathcal {P}}(\mathbb {R}_{\ge 0})$
coincides with a subfamily of Pearson distributions (see [Reference PearsonP1895, p. 381]).
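These equations follow by logarithmic differentiation of the Gibbs density: since $p_{t,\theta,\lambda} \propto e^{-V_{t,\theta,\lambda}}$, we have $p_{t,\theta,\lambda}'/p_{t,\theta,\lambda} = -V_{t,\theta,\lambda}'$. For $\lambda=1$, for instance,
$$ \begin{align*} -V_{t,\theta,1}'(x) = -\left(2+\frac{t}{\theta}\right)\frac{1}{x} + \frac{t^{2}}{\theta x^{2}} = -\,\frac{(2\theta+t)x - t^{2}}{\theta x^{2}} = -\,\frac{x-\frac{t^{2}}{2\theta+t}}{\frac{\theta}{2\theta+t}\,x^{2}}, \end{align*} $$
which is the first equation above; the case $\lambda>1$ is analogous.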
5.2 Equilibrium measure of free entropy
In this section, we investigate the maximizer of free entropy associated with the potential
$V_{t,\theta ,\lambda }$
.
Theorem 5.4 (Maximizer of free entropy)
For each
$t,\theta>0$
and
$1 \le \lambda <1+t/\theta $
, the measure
$\mu _{t,\theta ,\lambda }$
is the unique maximizer of
among all
$\mu \in {\mathcal {P}}(\mathbb {R}_{>0})$
.
Proof Thanks to the theory of the energy problem in [Reference JohanssonJoh98], the existence and uniqueness of the maximizer
$\mu _V$
of
$\Sigma _{V_{t,\theta ,\lambda }}$
are guaranteed. It remains to show that
$\mu _V=\mu _{t,\theta ,\lambda }$
. Since
$V_{t,\theta ,\lambda }$
is regular enough (see [Reference Féral, Donati-Martin, Émery, Rouault and StrickerFer08, Section 4.2] and [Reference FèralFer06, Section 2]),
$\mu _V$
has the density
$\Phi _V=\frac {\mathrm {d}\mu _V}{\mathrm {d} x}$
and is compactly supported in
$\mathbb {R}_{>0}$
, with support
$[a,b]$
for some
$0<a<b$
. We divide the argument into two cases for
$\lambda $
as follows.
Case of
$\boldsymbol {\lambda =1}$
: By definition of
$V_{t,\theta ,1}$
, we can apply [Reference FèralFer06, Theorem 1], with its parameters chosen as $\lambda =-1-t/\theta $, $\alpha =0$, and $\beta =t^2/\theta $,
to the points
$a,b$
and the density
$\Phi _V$
. Then,
$0<a<b$
satisfy
Thus, we have
where
$\alpha ^\pm $
are defined in (3.2) for
$\lambda =1$
. Moreover,
$$ \begin{align*} \Phi_V(x) &=\frac{1}{2\pi} \sqrt{(x-a)(b-x)} \cdot \frac{\beta}{\sqrt{ab}x^2} \mathbf{1}_{[a,b]}(x)\\ &= \frac{t\sqrt{(x-\alpha^-)(\alpha^+-x)} }{2\pi \theta x^2} \mathbf{1}_{[\alpha^-,\alpha^+]}(x) = \frac{\mathrm{d}\mu_{t,\theta,1}}{\mathrm{d} x}(x) , \end{align*} $$
as desired.
Case of
$\boldsymbol {1<\lambda < 1+t/\theta }$
: By [Reference Saff and TotikST97, Theorem IV. 1.11], the points a and b satisfy the following singular integral equations:
$$ \begin{align} \frac{1}{\pi} \int_a^b \frac{V_{t,\theta,\lambda}'(x)}{\sqrt{(b-x)(x-a)}} \mathrm{d} x=0 \quad \text{and} \quad \frac{1}{\pi} \int_a^b \frac{xV_{t,\theta,\lambda}'(x)}{\sqrt{(b-x)(x-a)}} \mathrm{d} x=2. \end{align} $$
By using the formulas
$$ \begin{align*}\int_a^b \frac{1}{x \sqrt{(b-x)(x-a)} }\mathrm{d} x = \frac{\pi}{\sqrt{ab}}\quad \text{and} \quad \int_a^b \frac{1}{\sqrt{(b-x)(x-a)} }\mathrm{d} x =\pi, \end{align*} $$
we obtain
$$ \begin{align*} \frac{1}{\pi} &\int_a^b \frac{V_{t,\theta,\lambda}'(x)}{\sqrt{(b-x)(x-a)}} \mathrm{d} x \\ &= \frac{1}{\pi} \int_a^b \frac{1-\frac{t}{\theta(\lambda-1)}}{x \sqrt{(b-x)(x-a)}}\mathrm{d} x + \frac{1}{\pi} \int_a^b \frac{1+\frac{t\lambda}{\theta(\lambda-1)}}{(x+t(\lambda-1) )\sqrt{(b-x)(x-a)}}\mathrm{d} x \\ &= \left(1-\frac{t}{\theta(\lambda-1)} \right) \frac{1}{\sqrt{ab}} + \left(1+\frac{t\lambda}{\theta(\lambda-1)}\right) \frac{1}{\sqrt{(a+t(\lambda-1))(b+t(\lambda-1))}}. \end{align*} $$
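As a quick numerical check of the two integral formulas used above (not part of the proof), SciPy's algebraic-weight quadrature can be applied with arbitrary endpoints $0<a<b$.

```python
import numpy as np
from scipy.integrate import quad

a, b = 0.4, 3.7   # arbitrary endpoints with 0 < a < b (assumption)

# quad with weight='alg' integrates f(x) * (x-a)^(-1/2) * (b-x)^(-1/2) over [a, b]
I1, _ = quad(lambda x: 1.0 / x, a, b, weight='alg', wvar=(-0.5, -0.5))
I2, _ = quad(lambda x: 1.0, a, b, weight='alg', wvar=(-0.5, -0.5))
print(I1 - np.pi / np.sqrt(a * b))   # expect ~0
print(I2 - np.pi)                    # expect ~0
```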
Moreover, we have
$$ \begin{align*} \frac{1}{\pi} & \int_a^b \frac{xV_{t,\theta,\lambda}'(x)}{\sqrt{(b-x)(x-a)}} \mathrm{d} x\\ &= \frac{1}{\pi} \int_a^b \frac{1-\frac{t}{\theta(\lambda-1)}}{\sqrt{(b-x)(x-a)}}\mathrm{d} x + \frac{1}{\pi} \int_a^b \frac{\left(1+\frac{t\lambda}{\theta(\lambda-1)}\right)x}{(x+t(\lambda-1) )\sqrt{(b-x)(x-a)}}\mathrm{d} x \\ &= \left(1-\frac{t}{\theta(\lambda-1)}\right) +\left(1+\frac{t\lambda}{\theta(\lambda-1)}\right)\left\{1- \frac{t(\lambda-1)}{\sqrt{(a+t(\lambda-1))(b+t(\lambda-1))}}\right\}\\ &=2+\frac{t}{\theta} -\left(1+\frac{t\lambda}{\theta(\lambda-1)}\right)\frac{t(\lambda-1)}{\sqrt{(a+t(\lambda-1))(b+t(\lambda-1))}}. \end{align*} $$
Therefore, the equations (5.6) imply that
$$ \begin{align} \begin{cases} \left(1+\dfrac{t\lambda}{\theta(\lambda-1)}\right) \dfrac{1}{\sqrt{(a+t(\lambda-1))(b+t(\lambda-1))}} = \left(\dfrac{t}{\theta(\lambda-1)} -1\right) \dfrac{1}{\sqrt{ab}} \\ \left(1+\dfrac{t\lambda}{\theta(\lambda-1)}\right)\dfrac{1}{\sqrt{(a+t(\lambda-1))(b+t(\lambda-1))}} = \dfrac{1}{\theta(\lambda-1)}. \end{cases} \end{align} $$
By solving the above equations, we have
This implies that
$a,b$
are the solutions of
$(z-\alpha ^+)(z-\alpha ^-)=0$
, and hence
$a= \alpha ^-$
and
$b=\alpha ^+$
.
By using the formula
$$ \begin{align*}\text{p.v.}\left(\frac{1}{\pi} \int_a^b \frac{1}{u\sqrt{(u-a)(b-u)}} \frac{\mathrm{d} u}{u-x} \right) =-\frac{1}{\sqrt{ab}x}, \end{align*} $$
we get
$$ \begin{align*} \text{p.v.}& \left(\frac{1}{\pi}\int_{\alpha^-}^{\alpha^+} \frac{V_{t,\theta,\lambda}'(u)}{\sqrt{(u-\alpha^-)(\alpha^+-u)}} \frac{\mathrm{ d} u}{u-x}\right)\\ &=\left(1-\frac{t}{\theta(\lambda-1)}\right)\text{p.v.} \left(\frac{1}{\pi}\int_{\alpha^-}^{\alpha^+} \frac{1}{u\sqrt{(u-\alpha^-)(\alpha^+-u)}} \frac{\mathrm{d} u}{u-x}\right)\\ &\hspace{6mm}+\left(1+\frac{t\lambda}{\theta(\lambda-1)}\right) \text{p.v.} \left(\frac{1}{\pi}\int_{\alpha^-}^{\alpha^+} \frac{1}{(u+t(\lambda-1))\sqrt{(u-\alpha^-)(\alpha^+-u)}} \frac{\mathrm{d} u}{u-x}\right)\\ &=-\left(1-\frac{t}{\theta(\lambda-1)}\right)\frac{1}{\sqrt{\alpha^+\alpha^-} x}\\ &\hspace{6mm} -\left(1+\frac{t\lambda}{\theta(\lambda-1)}\right) \frac{1}{\sqrt{(\alpha^-+t(\lambda-1))(\alpha^++t(\lambda-1))}}\cdot \frac{1}{x+t(\lambda-1)}\\ &=\frac{1}{\theta(\lambda-1)x} -\frac{1}{\theta(\lambda-1)}\cdot\frac{1}{x+t(\lambda-1)} \qquad \text{(by }({5.7})\text{ and }({5.8}))\\ &=\frac{t}{\theta x (x+t(\lambda-1))}. \end{align*} $$
From [Reference Saff and TotikST97, Theorem IV. 3.1], we finally obtain
$$ \begin{align*} \Phi_V(x)&=\frac{1}{2\pi} \sqrt{(x-\alpha^-)(\alpha^+ -x)} \times \text{p.v.} \left( \frac{1}{\pi} \int_{\alpha^-}^{\alpha^+} \frac{V_{t,\theta,\lambda}'(u)}{\sqrt{(u-\alpha^-)(\alpha^+-u)}} \frac{\mathrm{d} u}{u-x}\right)\\ &= \frac{t\sqrt{(x-\alpha^-)(\alpha^+ -x)} }{2\pi \theta x (x+t(\lambda-1))} = \frac{\mathrm{d} \mu_{t,\theta,\lambda}}{\mathrm{d} x}(x), \end{align*} $$
as desired.
Remark 5.5 From the proof of the above theorem, we observe that the class
$\{\mu _{t,\theta ,1}:t,\theta>0\}$
forms a special subclass of the free GIG distributions (see [Reference FèralFer06, Reference Hasebe and SzpojankowskiHS19]).
Due to Theorem 5.4, the potential correspondence maps the measure
$\rho _{t,\theta ,\lambda }$
to the measure
$\mu _{t,\theta ,\lambda }$
for all
$t,\theta>0$
and
$1\le \lambda <1+t/\theta $
. In particular, the potential correspondence maps the beta prime distributions
$\beta '(a,b)$
to the free ones
$f\beta '(a,b)$
for all
$a,b>1$
, due to Proposition 4.5 and (5.4). Consequently, we obtain the following result for free beta prime distributions.
Corollary 5.6 Let us consider
$a,b>1$
. The measure
$f\beta '(a,b)$
is the unique maximizer of the free entropy
$\Sigma _{V_{a,b}}(\mu )$
among probability measures
$\mu $
on
$\mathbb {R}_{>0}$
, where
6 Meixner-type free beta–gamma algebra
In classical probability, there are many algebraic relations between gamma and beta random variables, the so-called beta–gamma algebra (see, e.g., [Reference Ferreira and SimonFS23]). The purpose of this section is to study algebraic relations between free beta and free gamma random variables in the sense of Meixner-type.
According to Section 1, if
$G_1^{(p)} \sim \eta _{p,1}$
and
$G_2^{(q)}\sim \eta _{q,1}$
are free, then
for some
$G_3^{(p+q)}\sim \eta _{p+q,1}$
. We call a positive operator
$G\sim \eta _{p,1}$ $(p>0)$
a Meixner-type free gamma random variable. Recall that
$\eta _{p,1}=\mu _{p,1,1}$
for all
$p>0$
.
First, we investigate the reversed measure of
$\mu _{t,\theta ,\lambda }$
as follows. Since
$\mu _{t,\theta ,\lambda }(\{0\})=0$
for
$1\le \lambda \le 1+t/\theta $
, we can define the reversed measure
$(\mu _{t,\theta ,\lambda })^{\langle -1\rangle }$
in this case. We have already obtained the measure
$(\mu _{t,\theta ,1})^{\langle -1\rangle }$
by Lemma 4.2.
Lemma 6.1 For
$t,\theta>0$
and
$1< \lambda \le 1+\frac {t}{\theta }$
, we have
where
$$ \begin{align*} t' = \frac{(\theta+t)(\lambda-1)}{t-\theta(\lambda-1)}, \quad \theta' = \frac{\theta(\theta+t)(\lambda-1)^2}{ (t-\theta(\lambda-1))^2} \quad \text{and} \quad \lambda'=\frac{t\lambda}{(\theta+t)(\lambda-1)}\ge1. \end{align*} $$
Proof By Theorem 4.3 and Proposition 4.5, we obtain
$$ \begin{align*} (\mu_{t,\theta,\lambda})^{\langle-1\rangle} &= D_{(t(\lambda-1))^{-1}} \left( (\pi_{\frac{t}{\theta(\lambda-1)},1})^{\langle-1\rangle} \boxtimes \pi_{1+\frac{t}{\theta},1}\right) \\ &=D_{(t(\lambda-1))^{-1}}\left(f\beta' \left( 1+\frac{t}{\theta}, \frac{t}{\theta(\lambda-1)} \right)\right)=D_{(t(\lambda-1))^{-1}} (\mu_{t',\theta',\lambda'}). \end{align*} $$
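Independently of the proof, the statement of Lemma 6.1 can be sanity-checked on the level of S-transforms with SymPy. The sketch below uses the formula for $S_{\mu_{t,\theta,\lambda}}$ appearing in Section 7.1 together with two standard identities that we assume here, namely $S_{\mu^{\langle -1\rangle}}(z)=1/S_{\mu}(-1-z)$ for $\mu\in\mathcal{P}(\mathbb{R}_{>0})$ and $S_{D_c(\mu)}(z)=S_{\mu}(z)/c$.

```python
import sympy as sp

t, th, lam = sp.symbols("t theta lambda", positive=True)
z = sp.symbols("z")

# S-transform of mu_{t,theta,lam} (lambda > 1), cf. the limit computation in Section 7.1
S_mu = lambda t_, th_, lam_, w: (t_ / th_ - w) / (t_ * (lam_ - 1) * (t_ / (th_ * (lam_ - 1)) + w))

# primed parameters from Lemma 6.1
tp   = (th + t) * (lam - 1) / (t - th * (lam - 1))
thp  = th * (th + t) * (lam - 1)**2 / (t - th * (lam - 1))**2
lamp = t * lam / ((th + t) * (lam - 1))

lhs = 1 / S_mu(t, th, lam, -1 - z)            # S-transform of (mu_{t,theta,lam})^{<-1>} (assumed identity)
rhs = t * (lam - 1) * S_mu(tp, thp, lamp, z)  # S-transform of D_{(t(lam-1))^{-1}}(mu_{t',theta',lam'})
print(sp.simplify(lhs - rhs))                 # expect 0
```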
The free infinite divisibility of the reversed measure of
$\mu _{t,\theta ,\lambda }$
follows from Section 3 and Lemmas 4.2 and 6.1.
Corollary 6.2 Let us consider
$t,\theta>0$
and
$1\le \lambda \le 1+t/\theta $
. Then, the following properties hold:
-
(1) $(\mu _{t,\theta ,\lambda })^{\langle -1\rangle }$ is freely infinitely divisible.
-
(2) $(\mu _{t,\theta , \lambda })^{\langle -1 \rangle }$ is freely self-decomposable if and only if $\lambda =1+t/\theta $.
-
(3) $(\mu _{t,\theta ,\lambda })^{\langle -1\rangle }$ is unimodal.
We recall that, if
$\Gamma _p \sim \gamma _{p,1}$
and
$\Gamma _q \sim \gamma _{q,1}$
are classically independent, then
$\frac {\Gamma _p}{\Gamma _q}$
is distributed as the beta prime distribution
$\beta '(p,q)= \frac {x^{p-1} (1+x)^{-p-q}}{B(p,q)} \mathrm {d} x$
for
$p,q>0$
(see [Reference Balakrishnan, Johnson and KotzBJK95, Chapter 27]). By analogy, we investigate free beta-prime random variables in the sense of Meixner-type, that is, the product of
$G_1^{(p)}$
and
$(G_2^{(q)})^{-1}$
, where
$G_1^{(p)}\sim \eta _{p,1}$
and
$G_2^{(q)}\sim \eta _{q,1}$
are free.
Proposition 6.3 Given
$p,q>0$
, we assume that
$G_1^{(p)}\sim \eta _{p,1}$
and
$G_2^{(q)}\sim \eta _{q,1}$
are free in a
$C^\ast $
-probability space
$(\mathcal {A},\varphi )$
. Then,
Proof Since
$\sigma (G_2^{(q)}) = [2+q-2\sqrt {q+1}, 2+q+2\sqrt {q+1}]$
and
$f(x)=1/x$
is a bounded continuous function on
$\sigma (G_2^{(q)})$
, the two random variables
$G_1^{(p)}$
and
$(G_2^{(q)})^{-1}=f(G_2^{(q)}) \in \mathcal {A}$
are also free by the Stone–Weierstrass theorem. We observe
$$ \begin{align*} \eta_{p,1}\boxtimes (\eta_{q,1})^{\langle-1 \rangle} &= \mu_{p,1,1} \boxtimes \pi_{1+q,q^{-2}} \qquad \text{(by Lemma }{4.2})\\ &=D_{\frac{1+q}{q^2}} \left(\mu_{p,1,1} \boxtimes \pi_{1+q,\frac{1}{1+q}} \right)\\ &=D_{\frac{1+q}{q^2}} \left(\mu_{p,1,1+\frac{p}{1+q} }\right) \qquad \text{(by Theorem }{4.3}).\\[-40pt] \end{align*} $$
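The same kind of S-transform bookkeeping confirms the chain of equalities above; the SymPy sketch below assumes the multiplicativity of the S-transform under $\boxtimes$ and the reversed-measure identity $S_{\mu^{\langle -1\rangle}}(z)=1/S_{\mu}(-1-z)$, and uses the S-transform formulas from Sections 7.1 and 7.2.

```python
import sympy as sp

p, q = sp.symbols("p q", positive=True)
z = sp.symbols("z")

# S-transforms, cf. Sections 7.1 and 7.2 of this article
S_gen = lambda t, th, lam, w: (t / th - w) / (t * (lam - 1) * (t / (th * (lam - 1)) + w))  # mu_{t,theta,lam}, lam > 1
S_eta = lambda t, w: (t - w) / t**2                                                        # eta_{t,1} = mu_{t,1,1}

# left: eta_{p,1} boxtimes (eta_{q,1})^{<-1>}  (multiplicativity + reversed-measure identity, assumed)
lhs = S_eta(p, z) / S_eta(q, -1 - z)
# right: D_{(1+q)/q^2}( mu_{p,1,1+p/(1+q)} ), using S_{D_c(mu)}(z) = S_mu(z)/c
rhs = S_gen(p, 1, 1 + p / (1 + q), z) * q**2 / (1 + q)
print(sp.simplify(lhs - rhs))   # expect 0
```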
Finally, we investigate the sum of freely independent and identically distributed reciprocal Meixner-type free gamma random variables.
Theorem 6.4 Given
$p>0$
and
$n\in \mathbb {N}$
, let us consider freely independent random variables
$G_1^{(p)},G_2^{(p)}, \dots , G_{2^n}^{(p)} \sim \eta _{p,1}$
in some
$C^*$
-probability space
$(\mathcal {A},\varphi )$
. Then,
$$ \begin{align*}\left(\frac{1}{G_1^{(p)}} + \frac{1}{G_2^{(p)}}+\cdots + \frac{1}{G_{2^n}^{(p)}} \right)^{-1} \overset{\mathrm{d}}{=} \left(2^n+\frac{2^n-1}{p}\right)^{-2}G^{(2^np+2^n-1)}, \end{align*} $$
for some
$G^{(2^np+2^n-1)} \sim \eta _{2^np+2^n-1,1}$
.
Proof By an argument similar to that in Proposition 6.3,
$(G_1^{(p)})^{-1}, (G_2^{(p)})^{-1}, \dots , (G_{2^n}^{(p)})^{-1}\in \mathcal {A}$
are also free. Write
$\tau _p:=(\eta _{p,1})^{\langle -1\rangle } = \pi _{1+p,p^{-2}}$
for
$p>0$
. Then, the distribution of the LHS coincides with
$(\tau _p^{\boxplus 2^n})^{\langle -1\rangle }$
. Thus, it suffices to prove that
For
$n=1$
, we get
We assume that (6.1) holds for
$n=k$
. Then,
$$ \begin{align*} \tau_p^{\boxplus 2^{k+1}} &= (\tau_p^{\boxplus 2^k})^{\boxplus 2} = D_{(2^k + \frac{2^k-1}{p})^2} (\tau_{2^kp+2^k-1})^{\boxplus 2}\\ &=D_{(2^k + \frac{2^k-1}{p})^2} D_{(2+ \frac{1}{2^k p +2^k-1})^2} (\tau_{1+2(2^kp +2^k-1)})\\ &=D_{(2^{k+1} + \frac{2^{k+1}-1}{p})^2} (\tau_{2^{k+1}p + 2^{k+1}-1}). \end{align*} $$
By induction, we obtain the desired formula (6.1).
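A short SymPy computation confirms the base case $n=1$ of formula (6.1). It assumes the R-transform of the Marchenko–Pastur law, $R_{\pi_{\lambda,\theta}}(z)=\lambda\theta/(1-\theta z)$, additivity of the R-transform under $\boxplus$, and the dilation rule $R_{D_c(\mu)}(z)=cR_{\mu}(cz)$.

```python
import sympy as sp

p = sp.symbols("p", positive=True)
z = sp.symbols("z")

R_mp = lambda lam, th, w: lam * th / (1 - th * w)    # R-transform of pi_{lam, theta} (assumed formula)

# tau_p = pi_{1+p, p^{-2}}; the n = 1 case of (6.1) reads  tau_p boxplus tau_p = D_{(2+1/p)^2}(tau_{2p+1})
lhs = 2 * R_mp(1 + p, p**-2, z)                      # R-transforms add under free additive convolution
c = (2 + 1 / p)**2
rhs = c * R_mp(2 * (p + 1), (2 * p + 1)**-2, c * z)  # dilation rule applied to tau_{2p+1} = pi_{2p+2,(2p+1)^{-2}}
print(sp.simplify(lhs - rhs))                        # expect 0
```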
Corollary 6.5 Given
$m \ge 2$
, we consider free copies
$\{ G_1^{((2(m-1))^{-1})}, G_2^{((2(m-1))^{-1})}\}$
from
$\eta _{(2(m-1))^{-1},1}$
and free copies
$\{G_1^{((m-1)^{-1})}, \dots , G_m^{((m-1)^{-1})}\}$
from
$\eta _{(m-1)^{-1},1}$
. Then,
$$ \begin{align*}\left( \frac{1}{G_1^{((2(m-1))^{-1})}} + \frac{1}{G_2^{((2(m-1))^{-1})}} \right)^{-1} \overset{\mathrm{d}}{=} \frac{1}{4m^2} (G_1^{((m-1)^{-1})}+ \cdots + G_m^{((m-1)^{-1})}). \end{align*} $$
Proof By putting
$p=\frac {1}{2(m-1)}$
and
$n=1$
in Theorem 6.4, we get
$$ \begin{align*} \left( \frac{1}{G_1^{(p)}} + \frac{1}{G_2^{(p)}} \right)^{-1} &\sim D_{\frac{p^2}{(2p+1)^2}} \eta_{2p+1,1}\\ &= D_{\frac{1}{4}\cdot \frac{(2p)^2}{(2p+1)^2}} \eta_{2p,1}^{\boxplus \frac{2p+1}{2p}} =D_{\frac{1}{4m^2}} \eta_{\frac{1}{m-1}, 1}^{\boxplus m}, \end{align*} $$
as desired.
In classical probability theory, it is known that, if
$\Gamma _p \sim \gamma _{p,1}$
and
$\Gamma _q \sim \gamma _{q,1}$
are independent, then the random variable
$$ \begin{align*}\left(1+\frac{\Gamma_p}{\Gamma_q} \right)^{-1} = \frac{\Gamma_p^{-1}}{\Gamma_p^{-1}+\Gamma_q^{-1}} \end{align*} $$
is distributed as a beta distribution
$\beta (p,q)$
for
$p,q>0$
. Below, we study free beta random variables in the sense of Meixner-type.
Theorem 6.6 Given
$p>0$
, let us consider free random variables
$G_1^{(p)}, G_2^{(p)} \sim \eta _{p,1}$
in a
$C^\ast $
-probability space
$(\mathcal {A},\varphi )$
. Define
and
$\mu _p := \mathcal {L}(B^{(p)})$
. Then, the following assertions hold:
-
(1) Its S-transform is given by
$$ \begin{align*}S_{\mu_p}(z)=\frac{p^4}{(1+p+z)(2p+1-z)} \end{align*} $$
for z in a neighborhood of $(-1,0)$.
-
(2) Its R-transform is given by
$$ \begin{align*}R_{\mu_p}(z)= \frac{p(z-p^3)-\sqrt{(3p+2)^2z^2-2p^5z+p^8}}{2z}, \qquad z\in \left(-\frac{p^3}{2(p+1)}, 0\right). \end{align*} $$
-
(3) The measure $\mu _p$ is not freely infinitely divisible for any $p>0$.
Proof The existence of the measure
$\mu _p$
follows from the Riesz–Markov–Kakutani theorem. Note that
$(G_1^{(p)})^{-1},(G_2^{(p)})^{-1} \sim \pi _{1+p,p^{-2}}$
are free in
$(\mathcal {A},\varphi )$
. Hence,
$(G_1^{(p)})^{-1}+(G_2^{(p)})^{-1}$
and
$B^{(p)}$
are also free in
$(\mathcal {A},\varphi )$
by the free Lukacs property (see [Reference SzpojankowskiSzp15]). Since
we have
by Theorem 6.4. Therefore, we get
$$ \begin{align*} S_{\mu_p}(z) &= \frac{S_{\pi_{1+p,p^{-2}}}(z)}{S_{D_{p^2(2p+1)^{-2}} (\eta_{2p+1,1})}(z)}\\ &=\frac{p^2}{1+p+z} \cdot \frac{p^2}{2p+1-z}=\frac{p^4}{(1+p+z)(2p+1-z)}. \end{align*} $$
Next, since
$z\mapsto z S_{\mu _p}(z)$
is strictly increasing on
$(-1,0)$
for any
$p>0$
, we have
$R_{\mu _p}^{\langle -1\rangle }(z)= z S_{\mu _p}(z)$
. Hence, we can compute the R-transform of
$\mu _p$
by using this relation.
The quadratic equation
$(3p+2)^2z^2 -2p^5 z+p^8=0$
has distinct roots
$$ \begin{align*}p^4 \cdot \frac{p-2 \sqrt{(p+1)(2p+1)} \ i}{(3p+2)^2} \quad \text{and} \quad p^4 \cdot \frac{p+2 \sqrt{(p+1)(2p+1)}\ i}{(3p+2)^2}, \end{align*} $$
where
$i=\sqrt {-1}$
. This means that
$\sqrt {(3p+2)^2z^2 -2p^5 z+p^8}$
has a branch point at some
$z\in \mathbb {C}^-$
. Hence,
$R_{\mu _p}$
does not have an analytic continuation to
$\mathbb {C}^-$
. Therefore, the measure
$\mu _p$
is not freely infinitely divisible.
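The pair of complex roots displayed in the proof can be double-checked symbolically by substituting them back into the quadratic; a minimal SymPy sketch:

```python
import sympy as sp

p = sp.symbols("p", positive=True)
z = sp.symbols("z")

f = (3 * p + 2)**2 * z**2 - 2 * p**5 * z + p**8
roots = [p**4 * (p - 2 * sp.sqrt((p + 1) * (2 * p + 1)) * sp.I) / (3 * p + 2)**2,
         p**4 * (p + 2 * sp.sqrt((p + 1) * (2 * p + 1)) * sp.I) / (3 * p + 2)**2]
print([sp.simplify(sp.expand(f.subs(z, r))) for r in roots])   # expect [0, 0]
```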
7 Asymptotic roots of polynomials related to free gamma distributions
In this section, we briefly discuss the connection between orthogonal (Jacobi/Bessel) polynomials and the measure
$\mu _{t,\theta ,\lambda }$
via finite free probability, as developed in [Reference MarcusMar21, Reference Marcus, Spielman and SrivastavaMSS22]. We begin by introducing the notation that will be used throughout this section.
-
• For a polynomial p of degree d, we denote by
$\widetilde {e}_k^{(d)}(p)$
the normalized k-th elementary symmetric polynomial in the d roots
$\lambda _1(p),\dots , \lambda _d(p)$
of p. Explicitly,
$$ \begin{align*}\widetilde{e}_k^{(d)}(p) : = \binom{d}{k}^{-1} \sum_{1\le i_1< \cdots < i_k \le d} \lambda_{i_1}(p)\dots \lambda_{i_k}(p), \quad k=1,\dots, d. \end{align*} $$
Then, we can represent any monic polynomial p of degree d as
$$ \begin{align*}p(x) = \prod_{i=1}^d (x-\lambda_i(p))=\sum_{k=0}^d (-1)^k \binom{d}{k} \widetilde{e}_k^{(d)}(p) x^{d-k}. \end{align*} $$
-
• (Dilation) For
$c\neq 0$
and a polynomial p of degree d, we define
$$ \begin{align*}(D_c(p))(x):= c^d p\left(\frac{x}{c}\right). \end{align*} $$
Then, one can see that
$\widetilde {e}_k^{(d)}(D_c(p))= c^k\ \widetilde {e}_k^{(d)}(p)$
for
$k=1,\dots , d$
.
-
• (Empirical root measure) For a polynomial p of degree d, we define the probability measure
$$ \begin{align*}\mathfrak{m}[[ p]] :=\frac{1}{d}\sum_{p(x)=0} \delta_x. \end{align*} $$
The measure is called the empirical root measure of p. If p is real-rooted (resp., positive real-rooted), then
$\mathfrak {m}[[p]] \in {\mathcal {P}}(\mathbb {R})$
(resp.,
$\mathfrak {m}[[p]] \in {\mathcal {P}}(\mathbb {R}_{>0})$
).
7.1 Jacobi polynomial
For
$a \in \mathbb {R}\setminus \{i/d: i=1,\dots , d-1\}$
and
$b\in \mathbb {R}$
, we denote by
$J^{(a,b)}_d$
the monic polynomial of degree d whose coefficients are given by
$$ \begin{align*}\widetilde{e}_k^{(d)}\left(J^{(a,b)}_d \right) := \frac{(bd)_k}{(ad)_k} \qquad \text{for} \ k=0, 1,\dots,d, \end{align*} $$
where
$(x)_n:=x(x-1)\dots (x-n+1)$
and
$(x)_0:=1$
. The polynomials
$J^{(a,b)}_d$
are well-known as the Jacobi polynomials. According to [Reference Martínez-Finkelshtein, Morales and PeralesMFMP24b, (80)], the polynomial
$J^{(a,b)}_d$
can be represented by a hypergeometric function as follows:
$$ \begin{align*}J_d^{(a,b)}(x)=(-1)^d\frac{ (bd)_d}{(ad)_d} {}_2F_1 (-d, ad-d+1; bd-d+1; x). \end{align*} $$
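For a concrete degree and sample parameters (our choice), the coefficient definition of $J_d^{(a,b)}$ and the quoted hypergeometric representation can be compared numerically, for instance with mpmath:

```python
from math import comb, prod
from mpmath import hyp2f1

def ff(x, k):
    # falling factorial x(x-1)...(x-k+1), the convention (x)_k used in this section
    return prod(x - j for j in range(k))

d, a, b = 5, -0.7, 1.5     # sample degree and parameters with a < 0, b > 1 (assumption)
x = 0.37                   # arbitrary evaluation point

# J_d^{(a,b)}(x) built from the normalized coefficients e_k^{(d)} = (bd)_k / (ad)_k ...
lhs = sum((-1)**k * comb(d, k) * ff(b * d, k) / ff(a * d, k) * x**(d - k) for k in range(d + 1))
# ... and from the hypergeometric representation quoted above
rhs = (-1)**d * ff(b * d, d) / ff(a * d, d) * float(hyp2f1(-d, a * d - d + 1, b * d - d + 1, x))
print(lhs - rhs)           # expect ~0 up to rounding
```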
It is known that the polynomial is orthogonal with respect to the weight function
when
$-bd+d-1 \notin \mathbb {Z}_{\ge 0}$
(see, e.g., [Reference Dominici, Johnston and JordaanDJJ13, Theorem 1]). Recently, the class of hypergeometric polynomials (including
$J^{(a,b)}_d$
) was studied in the framework of finite free probability theory (see, e.g., [Reference Martínez-Finkelshtein, Morales and PeralesMFMP24a, Reference Martínez-Finkelshtein, Morales and PeralesMFMP24b, Reference Arizmendi, Fujie, Perales and UedaAFPU24]).
In particular, we define
By [Reference Martínez-Finkelshtein, Morales and PeralesMFMP24b, Section 5.2] and [Reference Dominici, Johnston and JordaanDJJ13, Proposition 4 and Theorem 5], the polynomial
$\widehat {J}^{(a,b)}_d$
has d distinct roots, all of which are nonnegative, when
$a<0$
and
$b>1$
. Furthermore, we define the monic real-rooted polynomial of degree d by
for
$t,\theta>0$
and
$1<\lambda \le 1+t/\theta $
. In this case, it is easy to check that
$A<0$
and
$B>1$
, and therefore
$p_d^{(t,\theta ,\lambda )}$
also has d distinct roots, all of which are nonnegative. Moreover,
$p_d^{(t,\theta ,\lambda )}$
is orthogonal with respect to the weight function
$$ \begin{align} W_d^{(A,B)}\left(-\frac{x}{t(\lambda-1)}\right) \propto x^{d(B-1)}(x+t(\lambda-1))^{d(A-B-1)}. \end{align} $$
According to recent work [Reference Arizmendi, Fujie, Perales and UedaAFPU24], the ratio of consecutive coefficients of a polynomial plays the role of the S-transform in the framework of finite free probability. In what follows, we apply the results of [Reference Arizmendi, Fujie, Perales and UedaAFPU24] to the sequence of polynomials
$(p_d^{(t,\theta ,\lambda )})_{d\in \mathbb {N}}$
. A direct computation shows that
$$ \begin{align*}\frac{\widetilde{e}_{k-1}^{(d)}(p_d^{(t,\theta,\lambda)})}{\widetilde{e}_{k}^{(d)}(p_d^{(t,\theta,\lambda)})} = \frac{1}{t(\lambda-1)} \frac{\frac{t}{\theta}+\frac{k-1}{d}}{\frac{t}{\theta(\lambda-1)}-\frac{k-2}{d}}. \end{align*} $$
As
$d\to \infty $
with
$k/d\to z\in (0, 1)$
, the limiting value of the above ratio of consecutive coefficients can be computed as follows:
$$ \begin{align*} \frac{\widetilde{e}_{k-1}^{(d)}(p_d^{(t,\theta,\lambda)})}{\widetilde{e}_{k}^{(d)}(p_d^{(t,\theta,\lambda)})} &\to \frac{1}{t(\lambda-1)} \frac{\frac{t}{\theta}+z}{\frac{t}{\theta(\lambda-1)}-z} \\ &= \frac{1}{t(\lambda-1)} \frac{(1+\frac{t}{\theta}) -1 - (-z)}{\frac{t}{\theta(\lambda-1)} +(-z)} \\ &= S_{D_{t(\lambda-1)} (f\beta'(\frac{t}{\theta(\lambda-1)}, 1+\frac{t}{\theta}))}(-z) \qquad \text{(by }({2.2})\text{ and }({4.4}))\\ &= S_{\mu_{t,\theta,\lambda}}(-z) \qquad \text{(by }{4.3}). \end{align*} $$
By [Reference Arizmendi, Fujie, Perales and UedaAFPU24, Theorem 1.1], the limiting behavior of the empirical root measure of
$ p_d^{(t,\theta ,\lambda )}$
can be described as follows.
Proposition 7.1 For
$t,\theta>0$
and
$1<\lambda \le 1+t/\theta $
, we have
7.2 Bessel polynomial
For
$a\in \mathbb {R} \setminus \{i/d: i=1, \dots , d-1\}$
, we define
$B^{(a)}_d$
as the monic polynomial of degree d, whose coefficients are given by
$$ \begin{align*}\widetilde{e}_k^{(d)}\left(B^{(a)}_d \right) = \frac{d^k}{(a d)_k} \qquad \text{for} \ k=0,1,\dots, d. \end{align*} $$
The polynomials
$B^{(a)}_d$
are well-known as the Bessel polynomials. According to [Reference Martínez-Finkelshtein, Morales and PeralesMFMP24b, Equation (80)], the polynomial
$B^{(a)}_d$
can be represented by a hypergeometric polynomial as follows:
$$ \begin{align*}B^{(a)}_d(x) = \frac{(-1)^d}{(ad)_d} {}_2 F_0 (-d, a d-d+1; - \; x). \end{align*} $$
We define
According to [Reference Martínez-Finkelshtein, Morales and PeralesMFMP24b],
$\widehat {B}_d^{(a)}$
has d distinct roots, all of which are nonnegative, whenever
$a<0$
. Furthermore, we define the monic real-rooted polynomial of degree d by
Since
$-t/\theta <0$
, the polynomial
$p_d^{(t,\theta ,1)}$
also has d distinct roots, all of which are nonnegative. Then, we obtain
$$ \begin{align*} \frac{\widetilde{e}_{k-1}^{(d)}(p_d^{(t,\theta,1)})}{\widetilde{e}_{k}^{(d)}(p_d^{(t,\theta,1)})} = \frac{1}{t^2}\left(t +\theta\cdot \frac{k-1}{d}\right) \to \frac{1}{t^2} (t+\theta z) = S_{\mu_{t,\theta,1}}(-z) \end{align*} $$
as
$k/d\to z \in (0,1)$
. According to [Reference Arizmendi, Fujie, Perales and UedaAFPU24], we obtain the following result.
Proposition 7.2 For
$t,\theta>0$
, we get
7.3 Finite free version of convolution formula
In this section, we construct the finite analog of Theorem 4.3. To begin, we introduce a finite version of the free multiplicative convolution. For monic polynomials
$p,q$
of degree d, their finite free multiplicative convolution
$p\boxtimes _d q$
is defined as the monic polynomial whose coefficients satisfy
$$ \begin{align*}\widetilde{e}_k^{(d)}(p\boxtimes_d q) := \widetilde{e}_k^{(d)}(p)\, \widetilde{e}_k^{(d)}(q), \qquad k=0,1,\dots, d. \end{align*} $$
It is known that if
$p,q$
are real-rooted with nonnegative roots, then so is
$p\boxtimes _d q$
. For the relationship between
$\boxtimes $
and
$\boxtimes _d$
, see [Reference Arizmendi, Garza-Vargas and PeralesAGP23, Reference Marcus, Spielman and SrivastavaMSS22].
Given
$\lambda>0$
and
$\theta>0$
, define the Laguerre polynomial
$L_d^{(\lambda ,\theta )}$
to be the monic polynomial whose coefficients are
$$ \begin{align*}\widetilde{e}_k^{(d)}\left(L_d^{(\lambda,\theta)}\right) = \theta^k\, \frac{(\lambda d)_k}{d^k} \qquad \text{for} \ k=0,1,\dots, d. \end{align*} $$
The polynomial
$L_d^{(\lambda ,\theta )}$
has d distinct, strictly positive roots whenever
$\lambda>0$
. Moreover, it is known that
$\mathfrak {m}[[L_d^{(\lambda ,\theta )}]]\xrightarrow {w} \pi _{\lambda ,\theta }$
as
$d\to \infty $
.
Combining the above observations, we arrive at the following formula.
Proposition 7.3 For
$t, \theta>0$
,
$\lambda>1$,
and
$d\in \mathbb {N}$
, we have
where
$q= \frac {t}{\theta (\lambda -1)}$
.
Proof We compare the k-th coefficients of
$p_d^{(t,\theta ,\lambda )}$
and
$ p_d^{(t,\theta ,1)} \boxtimes _d L_d^{(q+ 1/d,\ q^{-1})}$
. First, we have
$$ \begin{align*} \widetilde{e}_k^{(d)}(p_d^{(t,\theta,\lambda)}) &= t^k (\lambda-1)^k (-1)^{d-k}\ \widetilde{e}_k^{(d)} (J_d^{(-t/\theta,\ q+ 1/d)})\\ &= t^k (\lambda-1)^k (-1)^{d-k}\ \frac{(qd+1)_k}{(-\frac{t}{\theta}d)_k}. \end{align*} $$
On the other hand, we get
$$ \begin{align*} \widetilde{e}_k^{(d)}( p_d^{(t,\theta,1)} \boxtimes_d L_d^{(q+ 1/d,\ q^{-1})}) &=\widetilde{e}_k^{(d)}(p_d^{(t,\theta,1)})\widetilde{e}_k^{(d)}(L_d^{(q+ 1/d,\ q^{-1})})\\ &= \frac{t^{2k}}{\theta^k} (-1)^{d-k} \frac{d^k}{(-\frac{t}{\theta}d)_k} \times \frac{\theta^k(\lambda-1)^k}{t^k} \frac{(qd+1)_k}{d^k}\\ &= t^k (\lambda-1)^k (-1)^{d-k}\ \frac{(qd+1)_k}{(-\frac{t}{\theta}d)_k}, \end{align*} $$
as desired.
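The coefficient identity underlying this proof can also be checked mechanically; the SymPy sketch below loops over $k$ for one sample choice of $(t,\theta,\lambda)$ and $d$ (our choice), writing the falling factorial $(x)_k$ via sympy.ff.

```python
import sympy as sp

t, th, lam, d = sp.Integer(2), sp.Rational(3, 4), sp.Rational(7, 5), 6   # sample values (assumption)
q = t / (th * (lam - 1))

for k in range(d + 1):
    # normalized coefficients as computed in the proof of Proposition 7.3
    e_jacobi   = t**k * (lam - 1)**k * (-1)**(d - k) * sp.ff(q * d + 1, k) / sp.ff(-t / th * d, k)
    e_bessel   = (t**2 / th)**k * (-1)**(d - k) * d**k / sp.ff(-t / th * d, k)
    e_laguerre = (th * (lam - 1) / t)**k * sp.ff(q * d + 1, k) / d**k
    assert sp.simplify(e_jacobi - e_bessel * e_laguerre) == 0
print("coefficient identity verified for d =", d)
```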
7.4 Comments
These results (Propositions 7.1 and 7.2) are not essentially new since Martínez-Finkelshtein et al. [Reference Martínez-Finkelshtein, Morales and PeralesMFMP24a, Reference Martínez-Finkelshtein, Morales and PeralesMFMP24b] and Arizmendi et al. [Reference Arizmendi, Fujie, Perales and UedaAFPU24] have already studied the relationship between the asymptotic behavior of the empirical root measures of Jacobi or Bessel polynomials and free probability. Nevertheless, we would like to emphasize that our analysis provides deeper insights into the behavior of the roots of Jacobi or Bessel polynomials, thanks to the understanding gained from the distributional properties of the Meixner-type free gamma distribution and its connection with free entropy. For instance, the weight function (7.1) of
$p_d^{(t,\theta ,\lambda )}$
belongs to the Pearson class, which may be connected to the Gibbs measure
$\rho _{t,\theta ,\lambda }$
associated with the potential
$V_{t,\theta ,\lambda }$
discussed in Section 5.
Acknowledgements
The authors would like to thank Takahiro Hasebe (Hokkaido University) for fruitful discussions in relation to this project. The authors wish to express their sincere gratitude to the anonymous referee for carefully reading the manuscript and providing numerous valuable comments and suggestions, which have substantially improved the article. In particular, we are grateful for the referee’s observations regarding the relation between the measures
$\mu _{t,\theta ,\lambda }$
and the centered free Meixner distributions (Proposition 3.2), its connection with Pearson distributions (Remark 5.3), and the orthogonality of
$p_d^{(t,\theta ,\lambda )}$
, which have given us deeper insights than originally anticipated. We would like to express our heartfelt thanks once again.