
Generalized Meixner-type free gamma distributions: Convolution formulas and potential correspondence

Published online by Cambridge University Press:  17 November 2025

Noriyoshi Sakuma*
Affiliation:
Department of Mathematics, Graduate School of Science, the University of Osaka, Japan
Yuki Ueda
Affiliation:
Department of Mathematics, Hokkaido University of Education, Japan e-mail: ueda.yuki@a.hokkyodai.ac.jp

Abstract

We introduce and study a class of generalized Meixner-type free gamma distributions $\mu _{t,\theta ,\lambda }$ ($t,\theta>0$ and $\lambda \ge 1$), which includes both the free gamma distributions introduced by Anshelevich and certain scaled free beta prime distributions introduced by Yoshida. We investigate fundamental properties and mixture structures of these distributions. In particular, we consider the Gibbs distribution $\frac {1}{\mathcal {Z}_{t,\theta ,\lambda }} \exp \{-V_{t,\theta ,\lambda }(x)\}$ associated with a family of potentials $V_{t,\theta ,\lambda }$, and show that $\mu _{t,\theta ,\lambda }$ maximizes Voiculescu’s free entropy with potential $V_{t,\theta ,\lambda }$ for parameters $t,\theta>0$ and $1\le \lambda <1+t/\theta $. This result substantially extends the range of classical-free correspondences obtained via the potential function, differing from those arising from the Bercovici–Pata bijection. Moreover, we identify algebraic relations involving noncommutative random variables distributed as free gamma distributions.

This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Canadian Mathematical Society

1 Introduction

In free probability theory, the Bercovici–Pata bijection, which connects infinitely divisible laws with free counterparts, is often discussed in relation to its correspondence with classical probability [Reference Barndorff-Nielsen and ThorbjørnsenBNT06]. A recurring issue in this context is the inadequate correspondence of the gamma distribution, which has hindered a comprehensive understanding of its free counterpart. The image of the class of classical gamma distributions under the Bercovici–Pata bijection was introduced by Pérez-Abreu and Sakuma [Reference Pérez-Abreu and SakumaPAS08], but its distributional properties are not well understood (see, e.g., [Reference Haagerup and ThorbjørnsenHT14]).

Meanwhile, an alternative approach was developed independently, focusing on orthogonal polynomials to investigate the properties of the “free” gamma distribution [Reference AnshelevichAns03, Reference Bożejko and BrycBB06]. However, the “free” gamma distribution remains relatively underexplored, particularly in terms of its interpretation and characterization, leaving significant scope for further investigation. This gap may stem from inadequate parameter consideration and the lack of a proper examination of the distribution family as a whole. In this article, we introduce a new class of probability distributions that extends the framework of the “free” gamma distribution. By refining the parameterization and exploring novel correspondences, we aim to overcome the limitations of prior research and provide deeper insights into the structure and potential applications of these distributions.

Before providing explanations, we introduce the notation used in this article. We denote by ${\mathcal {P}}(K)$ the set of all Borel probability measures on $\mathbb {R}$ whose support is contained in $K\subset \mathbb {R}$ . In this article, we frequently take K to be the real line $\mathbb {R}$ , the nonnegative real line $\mathbb {R}_{\ge 0}:=[0,\infty )$ , or the positive real line $\mathbb {R}_{>0}:=(0,\infty )$ . Moreover, we adopt the following notational conventions.

  • (Dilation) For $\mu \in \mathcal {P}(\mathbb {R})$ and $c \neq 0$ , we define $D_c(\mu )$ as the measure given by

    $$\begin{align*}D_c(\mu)(B) := \mu(\{x/c : x \in B\}), \qquad \text{for all Borel sets } B \subset \mathbb{R}. \end{align*}$$
  • (Power of measure) For $\mu \in \mathcal {P}(\mathbb {R}_{\ge 0})$ and $c> 0$ , we define $\mu ^{\langle c \rangle }$ by

    $$\begin{align*}\mu^{\langle c \rangle}(B) := \mu(\{x^{1/c} : x \in B\}), \qquad \text{for all Borel sets } B \subset \mathbb{R}_{\ge 0}. \end{align*}$$
  • (Reversed measure) For $\mu \in \mathcal {P}(\mathbb {R}_{\ge 0})$ with $\mu (\{0\}) = 0$ , we define $\mu ^{\langle -1 \rangle }$ by

    $$\begin{align*}\mu^{\langle -1 \rangle}(B) := \mu(\{x^{-1} : x \in B\}), \qquad \text{for all Borel sets } B \subset \mathbb{R}_{>0}. \end{align*}$$

Moreover, for $w \in \mathbb {C}\setminus \{0\}$ , we define $\sqrt {w} := |w|^{\frac {1}{2}} e^{i\frac {\arg (w)}{2}}$ with $\arg (w) \in (0,2\pi )$ .

First, we briefly recall the definition of the classical gamma distribution, which is widely used in classical probability theory. The classical gamma distribution is a two-parameter family of probability distributions, defined as

$$ \begin{align*}\gamma_{t, \theta} := \frac{1}{\theta^t \Gamma(t)} x^{t-1} e^{-\frac{x}{\theta}} \mathbf{1}_{(0,\infty)}(x) \, \mathrm{d}x, \qquad t, \theta> 0, \end{align*} $$

where t is the shape parameter and θ is the mean (or scale) parameter. The gamma distribution is known to be infinitely divisible, and its characteristic function can be expressed in terms of the Lévy–Khintchine representation as

$$ \begin{align*}\int_{\mathbb{R}} e^{izx} \gamma_{t, \theta}(\mathrm{d}x) = \exp \left[ \int_{(0,\infty)} \left(e^{izx} - 1\right) \frac{t e^{-\frac{x}{\theta}}}{x} \, \mathrm{d} x \right], \qquad z \in \mathbb{R,} \end{align*} $$

where the Lévy measure is given by

$$ \begin{align*}\nu(\mathrm{d} x) = \frac{t e^{-\frac{x}{\theta}}}{x} \mathbf{1}_{(0,\infty)}(x) \mathrm{d} x. \end{align*} $$

It includes many distributions, such as the exponential distribution, Erlang distribution, and chi-squared distribution. In addition, the following properties hold:

  • $\gamma _{1, \theta }^{\ast t} =\gamma _{t, \theta }$ for all $t,\theta>0$ ;

  • $\gamma _{t_{1}, \theta }\ast \gamma _{t_{2}, \theta } = \gamma _{t_{1}+t_{2},\theta }$ for all $t_1,t_2,\theta>0$ ;

  • $D_{\theta }(\gamma _{t,1})=\gamma _{t,\theta }$ for all $t,\theta>0$ .

The first property motivates the consideration of the gamma Lévy process, where the shape parameter can also be interpreted as a “time parameter” in the context of stochastic processes. The second property is known as the reproductive property of probability distributions. The third property shows that $\theta $ is a scale parameter for gamma distributions. Moreover, the gamma distribution has been extensively characterized in the literature (see, for example, [Reference BondessonBon92, Reference MoschopoulosMos85, Reference SatoSat13]). Thus, the classical gamma distribution has attracted significant interest from many fields, such as probability, mathematical statistics, Bayesian statistics, econometrics, queueing theory, and so on.
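These classical properties are easy to confirm by simulation. The following sketch (not part of the paper; it only assumes NumPy/SciPy's standard gamma parametrization with shape and scale parameters) checks the reproductive and scaling properties empirically.

```python
# Quick simulation check (a sketch, not from the paper) of the reproductive and
# scaling properties of the classical gamma distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
t1, t2, theta = 1.5, 2.5, 0.7
n = 200_000

x = rng.gamma(shape=t1, scale=theta, size=n)   # X ~ gamma_{t1,theta}
y = rng.gamma(shape=t2, scale=theta, size=n)   # Y ~ gamma_{t2,theta}, independent of X

# Reproductive property: X + Y should follow gamma_{t1+t2,theta}.
ks_sum = stats.kstest(x + y, "gamma", args=(t1 + t2, 0, theta))
print(ks_sum.pvalue)   # typically not small, since the null hypothesis holds exactly

# Scale property: theta * gamma_{t,1} samples should follow gamma_{t,theta}.
z = theta * rng.gamma(shape=t1, scale=1.0, size=n)
ks_scale = stats.kstest(z, "gamma", args=(t1, 0, theta))
print(ks_scale.pvalue)  # typically not small
```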

Given the richness of its structure, it was a natural step to investigate its analog in free probability theory. In 2003, Anshelevich introduced the free gamma distribution as a subfamily of the free Meixner class (see [Reference AnshelevichAns03, p. 238]). More precisely, the Meixner-type free gamma distribution,Footnote 1 denoted by $\eta _{t,\theta }$ , is defined as the probability measure whose R-transformFootnote 2 is given by

(1.1) $$ \begin{align} R_{\eta_{t,\theta}}(z) = \int_{\mathbb{R}} \left(\frac{1}{1-zx}-1\right) \frac{t\theta k_{1,\theta}(x)}{x}\mathrm{d}x = \frac{t}{2}(1-\sqrt{1-4\theta z}), \quad z\in \mathbb{C}^-, \end{align} $$

where $t>0$ is the time (or shape) parameter, $\theta>0$ is the mean (or scale) parameter, and the function $k_{\lambda ,\theta }$ is defined as

$$ \begin{align*}k_{\lambda,\theta}(x):= \frac{\sqrt{(a^+-x)(x-a^-)}}{2\pi \theta x} \mathbf{1}_{(a^-,a^+)}(x), \qquad \lambda \ge1, \end{align*} $$

with $a^\pm :=\theta (\sqrt {\lambda }\pm 1)^2$ . For all $t,\theta>0$ , the measure $\eta _{t,\theta }$ is freely infinitely divisible, which serves as the analog of infinite divisibility in free probability. Moreover, the following properties hold:

  • $\eta _{1,\theta }^{\boxplus t}=\eta _{t,\theta } $ for all $t,\theta>0$ ;

  • $\eta _{t_{1},\theta } \boxplus \eta _{t_{2},\theta } = \eta _{t_1+t_2,\theta }$ for all $t_1,t_2,\theta>0$ ;

  • $D_\theta (\eta _{t,1})=\eta _{t,\theta }$ for all $t,\theta>0$ ,

where $\boxplus $ denotes the free additive convolution, and $\mu ^{\boxplus t}$ is the free convolution power of $\mu \in {\mathcal {P}}(\mathbb {R})$ . See Section 2.1 or [Reference Bercovici and VoiculescuBV93] for the above concepts in free probability.
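As a quick illustration (a numerical sketch, not part of the paper), the integral in (1.1) can be checked against the closed form $\frac{t}{2}(1-\sqrt{1-4\theta z})$ at a test point $z<0$ using SciPy.

```python
# Numerical check (a sketch) that the integral form of the R-transform in (1.1)
# agrees with the closed form (t/2)*(1 - sqrt(1 - 4*theta*z)).
import numpy as np
from scipy.integrate import quad

t, theta = 2.0, 0.5
z = -0.3  # a test point on the negative real axis

def k_1_theta(x):
    # density k_{1,theta} of the Marchenko-Pastur law with lambda = 1
    a_minus, a_plus = 0.0, 4.0 * theta
    return np.sqrt((a_plus - x) * (x - a_minus)) / (2.0 * np.pi * theta * x)

def integrand(x):
    return (1.0 / (1.0 - z * x) - 1.0) * t * theta * k_1_theta(x) / x

lhs, _ = quad(integrand, 0.0, 4.0 * theta)   # integrable 1/sqrt(x) singularity at 0
rhs = (t / 2.0) * (1.0 - np.sqrt(1.0 - 4.0 * theta * z))
print(lhs, rhs)   # both ~ -0.26
```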

In this article, we introduce a generalized family of probability measures that includes the Meixner-type free gamma distributions $\eta _{t,\theta }$ . Specifically, we consider the family $\{\mu _{t, \theta , \lambda } : t, \theta> 0,\ \lambda \ge 1\} \subset {\mathcal {P}}(\mathbb {R}_{\ge 0})$ , where each measure is defined via its R-transform as

$$ \begin{align*}R_{\mu_{t,\theta,\lambda}}(z) = \int_{\mathbb{R}}\left(\frac{1}{1-zx}-1\right) \frac{t k_{\lambda,\theta}(x)}{x}\mathrm{d}x, \quad z\in \mathbb{C}^-. \end{align*} $$

We call the measure $\mu _{t, \theta , \lambda }$ the generalized Meixner-type free gamma distribution. It is straightforward to verify that

$$ \begin{align*}\mu_{t\theta, \theta, 1} = \eta_{t,\theta}, \qquad t,\theta>0, \end{align*} $$

and therefore the family indeed extends the class of Meixner-type free gamma distributions. Moreover, the parameter t admits a natural interpretation as a free convolution power:

$$ \begin{align*}\mu_{t,\theta,\lambda} = \mu_{1,\theta,\lambda}^{\boxplus t}, \qquad t>0. \end{align*} $$

Note that the measure $\mu _{t,\theta ,\lambda }$ coincides with the centered free Meixner distribution, up to a shift (see Section 2.2 and Proposition 3.2). Below, we outline the structure of the article and summarize our main results.

In Section 3, we study various distributional properties of the generalized Meixner-type free gamma distributions, including their density, existence of atoms, and moments (see Section 3.2), as well as free self-decomposability and unimodality (see Section 3.3). Furthermore, we study the free Lévy processes related to the measures $\mu _{t,\theta ,\lambda }$ in Sections 3.4 and 3.5.

In Section 4, we derive formulas involving the free multiplicative convolution $\boxtimes $ .

Theorem 1.1 (Free convolution formula for the measure $\mu _{t,\theta ,\lambda }$ , see Theorem 4.3)

Let $t,\theta>0$ and $\lambda \ge 1$ . Then, the following properties hold:

  (1) For $\lambda> 1$ , the measure $\mu _{t,\theta ,\lambda }$ can be expressed in two equivalent forms:

    $$ \begin{align*} \mu_{t,\theta,\lambda} = D_{t(\lambda-1)} \left(\pi_{q,1} \boxtimes (\pi_{1+\frac{t}{\theta},1})^{\langle -1 \rangle}\right) = \mu_{t,\theta,1} \boxtimes \pi_{q,q^{-1}}, \end{align*} $$
    where $q = \frac {t}{\theta (\lambda -1)}$ and $\pi _{\lambda ,\theta }$ is the probability measure defined by
    $$\begin{align*}\pi_{\lambda,\theta}(\mathrm{d}x) = \max\{0, 1-\lambda\}\delta_0 + k_{\lambda,\theta}(x)\,\mathrm{d}x. \end{align*}$$
  (2) In particular, $\mu _{t,\theta ,1+t/\theta }= \mu _{t,\theta ,1} \boxtimes \pi _{1,1}$ . Hence, the measure $\mu _{t,\theta ,1+t/\theta }$ belongs to the class of free compound Poisson distributions.

According to Theorem 1.1(1), for $\lambda> 1$ , the measure $\mu _{t, \theta , \lambda }$ coincides with a suitably scaled free beta prime distribution $f\beta '(a, b)$ introduced in [Reference YoshidaYos20]. As a consequence, various properties of the free beta prime distribution can be derived from the results established for $\mu _{t, \theta , \lambda }$ (see Section 4.3 for further details).

In Section 5, for $t,\theta>0$ and $\lambda \ge 1$ , we consider the potential function $V_{t,\theta ,\lambda }$ defined by

$$ \begin{align*} V_{t,\theta,\lambda} (x):= \begin{cases} \left(2+\dfrac{t}{\theta}\right) \log x + \dfrac{t^2}{\theta x}, & \lambda=1,\\ \left(1-\dfrac{t}{\theta(\lambda-1)} \right) \log x + \left( 1 + \dfrac{t\lambda}{\theta(\lambda-1)}\right) \log (x+t(\lambda-1)), & \lambda>1. \end{cases} \end{align*} $$

We first derive the explicit form of the Gibbs measure

$$ \begin{align*}\rho_{t,\theta,\lambda}(\mathrm{d}x) = \frac{1}{\mathcal{Z}_{t,\theta,\lambda}}\exp\{-V_{t,\theta,\lambda}(x)\}\mathrm{ d}x,\end{align*} $$

associated with the potential $V_{t,\theta ,\lambda }$ , where the normalization constant is given by $\mathcal {Z}_{t,\theta ,\lambda }= \int \exp \{- V_{t,\theta ,\lambda }(x) \}\mathrm {d}x$ .

  • For $\lambda =1$ , we obtain

    $$ \begin{align*}\rho_{t,\theta,1}(\mathrm{d}x) = \frac{(\frac{t^2}{\theta})^{1+\frac{t}{\theta}}}{\Gamma(1+\frac{t}{\theta})} x^{-(2+ \frac{t}{\theta})} e^{-\frac{t^2}{\theta x}} \mathrm{d}x.\end{align*} $$
  • For $\lambda>1$ , we get

    $$ \begin{align*}\rho_{t,\theta,\lambda}(\mathrm{d}x)= \frac{(t(\lambda-1))^{1+\frac{t}{\theta}}}{B(\frac{t}{\theta(\lambda-1)},1+\frac{t}{\theta})} x^{-1 + \frac{t}{\theta(\lambda-1)}} \left(x+t(\lambda-1)\right)^{-1-\frac{t\lambda}{\theta(\lambda-1)}}\mathrm{d}x, \end{align*} $$
    where $B(a,b)$ denotes the beta function with parameter $a,b>0$ .

The measure $\rho _{t,\theta ,\lambda }$ coincides with the beta prime distribution when $\lambda> 1$ . This close resemblance between classical and free analogs is striking and highlights deep structural parallels.
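Both closed forms above are straightforward to verify numerically (a sketch, not part of the paper): the $\lambda=1$ case is an inverse-gamma density and the $\lambda>1$ case is a scaled beta prime density, and each integrates to one.

```python
# Numerical check (a sketch) that the two closed forms for the Gibbs density
# rho_{t,theta,lambda} given above integrate to one.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma, beta as Beta

t, theta, lam = 1.0, 0.5, 2.0

def rho_1(x):
    a = t / theta
    const = (t**2 / theta) ** (1 + a) / Gamma(1 + a)
    # written via exp/log to avoid overflow of x**(-(2+a)) for tiny x
    return const * np.exp(-(2 + a) * np.log(x) - t**2 / (theta * x))

def rho_lam(x):
    c = t * (lam - 1)
    p = t / (theta * (lam - 1))
    const = c ** (1 + t / theta) / Beta(p, 1 + t / theta)
    return const * x ** (p - 1) * (x + c) ** (-1 - t * lam / (theta * (lam - 1)))

print(quad(rho_1, 0, np.inf)[0])    # ~ 1.0
print(quad(rho_lam, 0, np.inf)[0])  # ~ 1.0
```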

Theorem 1.2 (Convolution formula for the Gibbs measure $\rho _{t,\theta ,\lambda }$ , see Theorem 5.2)

Let us consider $t,\theta>0$ and $\lambda \ge 1$ . Then, the following properties hold:

  (1) For $\lambda>1$ , we have

    $$ \begin{align*} \rho_{t,\theta, \lambda} &= D_{t(\lambda-1)} \left( \gamma_{q,1} \circledast (\gamma_{1+\frac{t}{\theta},1})^{\langle -1\rangle}\right) =\rho_{t,\theta,1} \circledast \gamma_{q,q^{-1}}, \end{align*} $$
    where $q=\frac {t}{\theta (\lambda -1)}$ and $\circledast $ is the classical multiplicative convolution.
  (2) In particular, $\rho _{t,\theta , 1+t/\theta }= \rho _{t,\theta ,1} \circledast \gamma _{1,1}$ . Hence, the measure $\rho _{t,\theta , 1+t/\theta }$ belongs to the class of mixtures of exponential distributions.

Next, we show that for all $t, \theta> 0$ and $1 \le \lambda < 1 + t/\theta $ , the measure $\mu _{t,\theta ,\lambda }$ uniquely maximizes the free entropy associated with the potential $V_{t,\theta ,\lambda }$ . In other words, $\mu _{t,\theta ,\lambda }$ serves as the equilibrium measure in this variational framework.

Theorem 1.3 (Free entropy associated with $V_{t,\theta ,\lambda }$ , see Theorem 5.4)

For $t,\theta>0$ and $1\le \lambda <1+ t/\theta $ , we have

$$ \begin{align*}\mu_{t,\theta,\lambda} = \text{argmax}\{ \Sigma_{V_{t,\theta,\lambda}}(\mu): \mu \in {\mathcal{P}}(\mathbb{R}_{>0})\}, \end{align*} $$

where $\Sigma _{V_{t,\theta ,\lambda }}(\mu )$ is the (Voiculescu’s) free entropy:

$$ \begin{align*}\Sigma_{V_{t,\theta,\lambda}}(\mu)=\iint_{\mathbb{R}_{>0}\times \mathbb{R}_{>0}} \log|x-y|\mu(\mathrm{d} x)\mu(\mathrm{d} y) - \int_{\mathbb{R}_{>0}} V_{t,\theta,\lambda}(x) \mu(\mathrm{d} x). \end{align*} $$

In particular, for $a,b>1$ , the measure $f\beta '(a,b)$ is the unique maximizer of the free entropy $\Sigma _{V_{a,b}}$ , where

$$ \begin{align*}V_{a,b}(x)=(1-a)\log x+ (a+b)\log (1+x), \qquad x>0, \end{align*} $$

see also Corollary 5.6.

In Section 6, we investigate algebraic properties of noncommutative random variables $G^{(p)} \sim \eta _{p,1}$ , which we refer to as Meixner-type free beta–gamma algebras. Consider freely independentFootnote 3 noncommutative random variables $G_1^{(p)} \sim \eta _{p,1}$ and $G_2^{(q)} \sim \eta _{q,1}$ . Since the Meixner-type free gamma distributions satisfy the convolution identity $\eta _{p,1}\boxplus \eta _{q,1}= \eta _{p+q,1}$ , we obtain the following distributional identity:

$$ \begin{align*}G_1^{(p)} + G_2^{(q)} \overset{\mathrm{d}}{=} G_3^{(p+q)} \sim \eta_{p+q,1}, \end{align*} $$

where $X \overset {\mathrm {d}}{=} Y $ denotes equality in distribution. This identity highlights the additive stability of the Meixner-type free gamma distributions under free convolution. It is worth noting that the three parameters $(t, \theta , \lambda )$ of the generalized Meixner-type free gamma distributions play a crucial role in understanding the algebraic structure of the random variables $G^{(p)}$ .

Theorem 1.4 (Meixner-type free beta–gamma algebras, see Section 6)

For each $p>0$ , let $G^{(p)}\sim \eta _{p,1}$ be a noncommutative random variable, and let $\{G_1^{(p)},G_2^{(p)},\dots \}$ be freely independent copies of $G^{(p)}$ . Then,

  • $(G_2^{(q)})^{-\frac {1}{2}} G_1^{(p)} (G_2^{(q)})^{-\frac {1}{2}} \sim \eta _{p,1}\boxtimes (\eta _{q,1})^{\langle -1 \rangle } = D_{\frac {1+q}{q^2}}\left ( \mu _{p,1,1+\frac {p}{1+q}}\right )$ for $p,q>0$ .

  • For any $p>0$ and $n\in \mathbb {N}$ ,

    $$ \begin{align*}\left(\frac{1}{G_1^{(p)}}+\frac{1}{G_2^{(p)}}+\cdots+ \frac{1}{G_{2^n}^{(p)}}\right)^{-1} \overset{\mathrm{d}}{=}\left(2^n+\frac{2^n-1}{p}\right)^{-2} G^{(2^np+2^n-1)}. \end{align*} $$
  • In the case of $p=\frac {1}{2(m-1)}$ for some natural number $m\ge 2$ , we have

    $$ \begin{align*}\left(\frac{1}{G_1^{(p)}}+\frac{1}{G_2^{(p)}}\right)^{-1} \overset{\mathrm{d}}{=} \frac{1}{4m^2} (G_1^{(2p)} + \cdots + G_m^{(2p)}). \end{align*} $$
  • The law $\mu _p$ of the Meixner-type free beta random variable

    $$ \begin{align*}B^{(p)}:=\{(G_1^{(p)})^{-1} +(G_2^{(p)})^{-1}\}^{-\frac{1}{2}} (G_1^{(p)})^{-1} \{(G_1^{(p)})^{-1} +(G_2^{(p)})^{-1}\}^{-\frac{1}{2}} \end{align*} $$
    has the following R-transform:
    $$ \begin{align*}R_{\mu_p}(z)= \frac{p(z-p^3)- \sqrt{(3p+2)^2 z^2 -2p^5 z + p^8}}{2z}, \qquad z \in \left(-\frac{p^3}{2(p+1)},0\right). \end{align*} $$

    Moreover, $\mu _p$ is not freely infinitely divisible for any $p>0$ .

In Section 7, using the method of finite free probability, an approximation theory in free probability that has attracted attention in recent years (see [Reference MarcusMar21, Reference Marcus, Spielman and SrivastavaMSS22]), we demonstrate that the asymptotic behavior of the roots of certain Jacobi and Bessel polynomials, as their degree becomes large, can be understood through the generalized Meixner-type free gamma distributions. The key tool is the “finite S-transform” recently introduced by the second author [Reference Arizmendi, Fujie, Perales and UedaAFPU24]. Specifically, we relate the finite S-transform of the Jacobi or Bessel polynomials to the S-transform of the generalized Meixner-type free gamma distribution derived in Section 4.

2 Preliminaries

2.1 Harmonic analysis in free probability

In this article, we employ the framework of free harmonic analysis introduced in [Reference Bercovici and VoiculescuBV93] (see also Chapter 3 in [Reference Mingo and SpeicherMS17]). Comparisons with classical probability theory often proceed through infinite divisibility, via the Bercovici–Pata bijection, a mapping that relates classical infinitely divisible distributions to their free counterparts (see [Reference Barndorff-Nielsen and ThorbjørnsenBNT02, Reference Barndorff-Nielsen and ThorbjørnsenBNT06, Reference Bercovici and PataBP99] for details). From this perspective, we introduce the necessary tools for our analysis in the following sections.

A probability measure $\mu $ on $\mathbb {R}$ is called freely infinitely divisible if for any $n\in \mathbb {N}$ there exists a probability measure $\mu _{n}\in {\mathcal {P}}(\mathbb {R})$ such that

$$ \begin{align*} \mu = \underbrace{\mu_{n}\boxplus \dots \boxplus \mu_{n}}_{n \text{ times}}, \end{align*} $$

where $\boxplus $ denotes the free additive convolution, which can be defined as the distribution of the sum of freely independent self-adjoint operators. In this case, $\mu _n\in {\mathcal {P}}(\mathbb {R})$ is uniquely determined for each $n\in \mathbb {N}$ . Freely infinitely divisible distributions can be characterized as those admitting a Lévy–Khintchine representation in terms of the R-transform, which is the free analog of the cumulant transform $C_{\mu }(z) := \log (\widehat {\mu }(z))$ , where $\widehat {\mu }$ is the characteristic function of $\mu \in {\mathcal {P}}(\mathbb {R})$ . This was originally established by Bercovici and Voiculescu in [Reference Bercovici and VoiculescuBV93] for all Borel probability measures. To explain it, we gather analytic tools for the free additive convolution $\boxplus $ . In order to define the R-transform (or free cumulant transform) $R_\mu $ of $\mu \in {\mathcal {P}}(\mathbb {R})$ , we first need its Cauchy–Stieltjes transform $G_\mu $ :

$$ \begin{align*} G_\mu(z)=\int_{\mathbb{R}}\frac{1}{z-t}\,\mu(\mathrm{d} t), \qquad(z\in\mathbb{C}^+). \end{align*} $$

Note, in particular, that $\Im (G_\mu (z))<0$ for any z in $\mathbb {C}^+$ , and hence we may consider the reciprocal Cauchy transform $F_\mu \colon \mathbb {C}^{+}\to \mathbb {C}^{+}$ given by $F_{\mu }(z)=1/G_{\mu }(z)$ . For any $\mu \in {\mathcal {P}}(\mathbb {R})$ and any $\lambda>0,$ there exist positive numbers $\alpha ,\beta ,$ and M such that $F_{\mu }$ is univalent on the set $\Gamma _{\alpha ,\beta }:=\{z \in \mathbb {C}^{+} \,|\, \Im (z)>\beta , |\Re (z)|<\alpha \Im (z)\}$ and such that $F_{\mu }(\Gamma _{\alpha ,\beta })\supset \Gamma _{\lambda ,M}$ . Therefore, the right inverse $F^{\langle -1 \rangle }_{\mu }$ of $F_{\mu }$ exists on $\Gamma _{\lambda ,M}$ , and the R-transform (or free cumulant transform) $ R_\mu $ is defined by

$$ \begin{align*} R_{\mu}(w) =wF^{\langle -1 \rangle}_{\mu}\left(\frac{1}{w} \right)-1, \quad\text{for all } w \text{ such that } 1/w \in \Gamma_{\lambda,M}. \end{align*} $$

The free version of the Lévy–Khintchine representation now amounts to the statement that $\mu \in {\mathcal {P}}(\mathbb {R})$ is freely infinitely divisible if and only if there exist $a\ge 0$ , $\gamma \in \mathbb {R}$ and a Lévy measureFootnote 4 $\nu $ such that

(2.1) $$ \begin{align} R_{\mu}(w) = a w^{2}+\gamma w + \int_{\mathbb{R}}\left(\frac{1}{1- w x}-1-w x \mathbf{1}_{[-1,1]}(x)\right)\nu(\mathrm{d} x),\qquad w\in\mathbb{C}^-. \end{align} $$

The triplet $(a,\nu ,\gamma )$ is uniquely determined and referred to as the free characteristic triplet for $\mu $ , and the measure $\nu $ is referred to as the free Lévy measure for $\mu $ . Recently, free infinite divisibility has been proved for normal distributions [Reference Belinschi, Bożejko, Lehner and SpeicherBBLS11], some of the Boolean-stable distributions [Reference Arizmendi and HasebeAH14], some of the beta distributions, and some of the gamma distributions, including the chi-square distribution and powers of random variables distributed as these distributions [Reference HasebeHas14, Reference HasebeHas16] and generalized power distributions with free Poisson term [Reference Morishita and UedaMU20].

As one of the most important subclasses of freely infinitely divisible distributions, we introduce the concept of freely self-decomposable distributions. A probability measure $\mu $ on $\mathbb {R}$ is said to be freely self-decomposable if for any $c\in (0,1),$ there exists ${\mu _c \in \mathcal {P}(\mathbb {R})}$ such that $\mu = \mu _c \boxplus D_c(\mu )$ . It is easy to see that every freely self-decomposable distribution is freely infinitely divisible. Moreover, it is known that $\mu \in \mathcal {P}(\mathbb {R})$ is freely self-decomposable if and only if its free Lévy measure $\nu $ is of the form

$$ \begin{align*}\nu(\mathrm{d}x) = \frac{k(x)}{|x|} \mathbf{1}_{\mathbb{R}\setminus \{0\}}(x) \mathrm{d}x, \end{align*} $$

where the function k is nondecreasing on $(-\infty ,0)$ and nonincreasing on $(0,\infty )$ (see [Reference Barndorff-Nielsen and ThorbjørnsenBNT02] for details). Examples and properties of freely self-decomposable distributions were investigated in [Reference Hasebe, Sakuma and ThorbjørnsenHST19, Reference Hasebe and ThorbjørnsenHT16, Reference Hasebe and UedaHU23, Reference Maejima and SakumaMS23].

Let us consider $\mu \in \mathcal {P}(\mathbb {R}_{\ge 0}) \setminus \{\delta _0\}$ . We define the S-transform of $\mu $ by

$$ \begin{align*}S_\mu(z) := \frac{1+z}{z} \Psi_\mu^{\langle-1\rangle}(z), \qquad z \in \Psi_\mu (i \mathbb{C}^+), \end{align*} $$

where $\Psi _\mu $ is the moment generating function, given by

$$ \begin{align*}\Psi_\mu(z):= \int_{\mathbb{R}_{>0}} \frac{xz}{1-xz} \mu(\mathrm{d}x), \qquad z\in \mathbb{C}\setminus \mathbb{R}_{\ge0}, \end{align*} $$

and $\Psi _\mu (i\mathbb {C}^+)$ is a region contained in the circle with diameter $(\mu (\{0\})-1,0)$ . One can see that

(2.2) $$ \begin{align} S_{D_c(\mu)}(z) = \frac{1}{c} S_\mu(z), \qquad c>0. \end{align} $$

For $\mu ,\nu \in \mathcal {P}(\mathbb {R}_{\ge 0})\setminus \{\delta _0\}$ , we obtain $S_{\mu \boxtimes \nu }=S_\mu S_\nu $ on the common domain on which all three S-transforms are defined, where $\mu \boxtimes \nu $ is called the free multiplicative convolution, which is the distribution of the product $\sqrt {X}Y\sqrt {X}$ of freely independent positive random variables $X\sim \mu $ and $Y\sim \nu $ . It is known that

(2.3) $$ \begin{align} R_\mu(zS_\mu(z)) =z, \end{align} $$

for small enough z in a neighborhood of $(\mu (\{0\})-1,0)$ (see [Reference Bercovici and VoiculescuBV92, Reference Bercovici and VoiculescuBV93] for details).

Example 2.1 (Marchenko–Pastur distribution)

We define the function $k_{\lambda ,\theta }$ as

$$ \begin{align*}k_{\lambda,\theta}(x):= \frac{\sqrt{(a^+-x)(x-a^-)}}{2\pi \theta x} \mathbf{1}_{(a^-,a^+)}(x), \end{align*} $$

where $\lambda ,\theta>0$ and $a^\pm :=\theta (\sqrt {\lambda }\pm 1)^2$ , respectively. The Marchenko–Pastur law $\pi _{\lambda ,\theta }$ with the shape parameter $\lambda $ and the mean parameter $\theta $ is defined as the probability measure given by

$$ \begin{align*}\pi_{\lambda,\theta}(\mathrm{d}x) := \max\{0, 1-\lambda\}\delta_0 + k_{\lambda,\theta}(x)\mathrm{d}x. \end{align*} $$

It is known that

$$ \begin{align*}R_{\pi_{\lambda,\theta}}(z) = \frac{\theta\lambda z}{1-\theta z}, \qquad z\in \mathbb{C}^- \end{align*} $$

and

$$ \begin{align*}S_{\pi_{\lambda,\theta}}(z) = \frac{1}{\theta(\lambda +z)}, \qquad z \text{ in a neighborhood of } (-1+\pi_{\lambda,\theta}(\{0\}),0). \end{align*} $$

From the form of the R-transform, we notice that, for any $\theta ,\lambda>0$ ,

$$ \begin{align*}\pi_{\lambda,\theta} =D_{\theta} (\pi_{1,1}^{\boxplus \lambda}). \end{align*} $$

According to [Reference Haagerup and SchultzHS07, Proposition 3.13], we have

(2.4) $$ \begin{align} S_{\mu^{\langle -1 \rangle}}(z) = \frac{1}{S_\mu(-z-1)}, \qquad z\in (-1,0). \end{align} $$

Example 2.2 (Free positive stable law with index $1/2$ )

By (2.4), for $\lambda \ge 1$ ,

$$ \begin{align*} S_{(\pi_{\lambda,\theta})^{\langle -1\rangle}}(z)= \frac{1}{S_{\pi_{\lambda,\theta}}(-z-1)} = \theta(\lambda-1-z), \qquad z\in (-1,0). \end{align*} $$

In particular, the measure $(\pi _{1,1})^{\langle -1\rangle }$ has a probability density $\frac {\sqrt {4x-1}}{2\pi x^2} \mathbf {1}_{(1/4,\infty )}(x) \mathrm {d} x$ and is well known as the free positive stable law with index $1/2$ , introduced by [Reference Bercovici and PataBP99].
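Relations (2.3) and (2.4) can be verified symbolically for the Marchenko–Pastur law, using the transforms recorded in Examples 2.1 and 2.2 (a SymPy sketch, not part of the paper):

```python
# Symbolic check (a sketch) of (2.3) and (2.4) for the Marchenko-Pastur law.
import sympy as sp

z = sp.symbols("z")
lam, theta = sp.symbols("lambda theta", positive=True)

R = theta * lam * z / (1 - theta * z)      # R_{pi_{lam,theta}}(z), Example 2.1
S = 1 / (theta * (lam + z))                # S_{pi_{lam,theta}}(z), Example 2.1

# Relation (2.3): R(z * S(z)) = z
print(sp.simplify(R.subs(z, z * S) - z))   # 0

# Relation (2.4) applied to pi_{lam,theta} gives Example 2.2: 1/S(-z-1) = theta*(lam-1-z)
print(sp.simplify(1 / S.subs(z, -z - 1) - theta * (lam - 1 - z)))  # 0
```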

2.2 Centered free Meixner distributions

In this section, we introduce the three-parameter family $\{\nu _{s,a,b}: s\ge 0, a \in \mathbb {R}, b\ge -1\} \subset {\mathcal {P}}(\mathbb {R})$ whose Cauchy transforms are given by

(2.5) $$ \begin{align} G_{\nu_{s,a,b}}(z) &= \cfrac{1}{z-\cfrac{s}{z-a-\cfrac{s+b}{z-a-\cfrac{s+b}{\ddots}}}}\end{align} $$
(2.6) $$ \begin{align} &= \frac{(s+2b)z+sa-s\sqrt{(z-a)^2-4(s+b)}}{2(bz^2+saz+s^2)}. \end{align} $$

The measure $\nu _{s,a,b}$ is called the centered free Meixner distribution. According to [Reference AnshelevichAns03, Reference Bożejko and BrycBB06, Reference Saitoh and YoshidaSY01], $\nu _{s,a,b}$ is freely infinitely divisible whenever $b\ge 0$ , and an integral representation for the R-transform of $\nu _{s, a,b}$ is given by

$$ \begin{align*}R_{\nu_{s, a,b}}(z) = \int_{\mathbb{R}} \frac{z^2}{1-zx}s \ w_{a,b}(x)\mathrm{d}x, \end{align*} $$

where

$$ \begin{align*}w_{a,b}(x) = \frac{1}{2\pi b}\sqrt{4b-(x-a)^2}\mathbf{1}_{[a-2\sqrt{b}, a+2\sqrt{b}]}(x) \end{align*} $$

is the density of Wigner’s semicircle law with mean $a\in \mathbb {R}$ and variance $b\ge 0$ . Note that the above R-transform differs from the one used in [Reference AnshelevichAns03, Reference Bożejko and BrycBB06, Reference Saitoh and YoshidaSY01] by a factor of z. In this case, one can see that $\nu _{s,a,b}= \nu _{1,a,b}^{\boxplus s}$ . By [Reference Bożejko and BrycBB06, Equation (4)], the R-transform of $\nu _{s,a,b}$ admits the explicit form

(2.7) $$ \begin{align} R_{\nu_{s,a,b}}(z)=\frac{2sz^2}{1-az+\sqrt{(1-az)^2-4bz^2}}, \qquad b\neq 0 \end{align} $$

and in the case $b=0$ , it reduces to

$$ \begin{align*}R_{\nu_{s,a,0}}(z) = \frac{sz^2}{1-az}. \end{align*} $$
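The closed forms (2.6) and (2.7) can be cross-checked numerically (a sketch, not part of the paper). From the definition $R_\mu(w)=wF_\mu^{\langle -1\rangle}(1/w)-1$ in Section 2.1, one gets $R_\mu(G_\mu(z))=zG_\mu(z)-1$; the sketch below verifies this identity at a test point in $\mathbb{C}^+$, and also compares (2.6) with a deep truncation of the continued fraction (2.5).

```python
# Numerical sketch: compare the truncated continued fraction (2.5) with the
# closed form (2.6), and check R(G(z)) = z*G(z) - 1 using (2.7).
import numpy as np

def csqrt(w):
    # square root with arg(w) in (0, 2*pi), as fixed in the introduction
    ang = np.angle(w) % (2 * np.pi)
    return np.sqrt(np.abs(w)) * np.exp(1j * ang / 2)

s, a, b = 2.0, 1.0, 0.5
z = 1.0 + 2.0j

# closed form (2.6)
G_closed = ((s + 2 * b) * z + s * a - s * csqrt((z - a) ** 2 - 4 * (s + b))) / (
    2 * (b * z**2 + s * a * z + s**2))

# truncated continued fraction (2.5): iterate the periodic tail, then assemble G
tail = 0.0
for _ in range(200):
    tail = (s + b) / (z - a - tail)
G_cf = 1.0 / (z - s / (z - a - tail))

# R-transform (2.7) evaluated at w = G(z)
w = G_closed
R = 2 * s * w**2 / (1 - a * w + csqrt((1 - a * w) ** 2 - 4 * b * w**2))

print(abs(G_closed - G_cf))          # ~ 0
print(abs(R - (z * G_closed - 1)))   # ~ 0
```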

According to [Reference Bożejko and BrycBB06, Theorem 3.2], the centered free Meixner law $\nu _{1,a,b}$ coincides with one of the following measures:

  • Wigner’s semicircle law if $a=b=0$ ;

  • the Marchenko–Pastur distribution if $b=0$ and $a\neq 0$ ;

  • the free Pascal (negative binomial) distribution if $b>0$ and $a^2>4b$ ;

  • the free gamma distribution if $b>0$ and $a^2=4b$ ;

  • the pure free Meixner distribution if $b>0$ and $a^2<4b$ ;

  • the free binomial distribution if $-\min \{\alpha , 1-\alpha \} \le b <0$ , where $\alpha =\int _{\mathbb {R}} x^2\ \nu _{1,a,b}(\mathrm { d} x)$ .

By (1.1) and (2.7), the Meixner-type free gamma distribution $\eta _{t,\theta }$ can be expressed in terms of the centered free Meixner law (the free gamma distribution in the sense described above) as

$$ \begin{align*}\eta_{t,\theta} = \nu_{t\theta^2, 2\theta, \theta^2} \boxplus \delta_{t\theta}, \qquad t,\theta>0. \end{align*} $$

More generally, we show that the generalized Meixner-type free gamma distribution can be represented as a centered free Meixner law under a shift (see Proposition 3.2).

2.3 Entropy functionals with potentials

Assume that V is a $C^1$ potential function satisfying

$$ \begin{align*}V(x) \ge (1+\delta)\log (x^2+1), \qquad x\in \mathbb{R}, \qquad \text{for some } \delta>0, \end{align*} $$

and $\mathcal {Z} := \int e^{-V(x)}\mathrm {d}x<\infty $ .

By the method of Lagrange multipliers, the Gibbs distribution $\frac {1}{\mathcal {Z}} \exp \{-V(x)\}$ is the unique probability density that maximizes the Shannon entropy associated with the potential function V:

$$ \begin{align*}H_V(p):= -\int p(x)\log p(x)\mathrm{d}x - \int V(x) p(x)\mathrm{d}x, \end{align*} $$

among all probability density functions p on $\mathbb {R}$ .

According to [Reference JohanssonJoh98], for the above potential function V, the free entropy functional (see [Reference VoiculescuVoi93])

$$ \begin{align*}\Sigma_V(\mu) := \iint \log |x-y| \mu (\mathrm{d}x) \mu (\mathrm{d}y) - \int V(x) \mu(\mathrm{d}x), \end{align*} $$

has a finite supremum over all probability measures $\mu $ on $\mathbb {R}$ and admits a unique maximizer $\mu _V$ (namely, the equilibrium measure of $\Sigma _V$ ). The support of $\mu _V$ is compact. Moreover, $\mu _V$ satisfies the following equation:

$$ \begin{align*}\mathcal{H} \mu_V(x) = \frac{1}{2} V'(x), \qquad x\in \text{supp}(\mu_V), \end{align*} $$

where $\mathcal {H}\mu $ is the Hilbert transform of a probability measure $\mu $ on $\mathbb {R}$ , that is,

$$ \begin{align*}\mathcal{H}\mu (x) :=\text{p.v.} \int \frac{1}{x-y} \mu(\mathrm{d} y) = \lim_{\varepsilon \to 0} \left(\int_{-\infty}^{x-\varepsilon} + \int_{x+\varepsilon}^\infty\right)\frac{1}{x-y} \mu(\mathrm{d} y), \quad x\in \mathbb{R}, \end{align*} $$

see also [Reference Saff and TotikST97, p. 27, Theorem 1.3] and [Reference BianeBia03, Equation (3.4)].

In connection with the above discussion, Hasebe and Szpojankowski [Reference Hasebe and SzpojankowskiHS19] pointed out a correspondence between the measure that maximizes the Shannon entropy and the equilibrium measure of the free entropy, from the perspective of maximizing entropy functionals with a potential. We call it the potential correspondence Footnote 5 in this article. In [Reference Hasebe and SzpojankowskiHS19], it was observed that the potential correspondence maps the classical generalized inverse Gaussian (GIG) distributions to the free GIG distributions introduced in [Reference FèralFer06]. Moreover, this correspondence maps the normal distributions $N(\mu ,\sigma ^2)$ to Wigner’s semicircle laws $w_{\mu ,\sigma ^2}(x)\mathrm {d} x$ for $\mu \in \mathbb {R}$ and $\sigma>0$ , and the gamma distribution $\gamma _{\lambda ,\theta }$ to the Marchenko–Pastur distributions $\pi _{\lambda ,\theta }$ for $\theta>0$ and $\lambda \ge 1$ .
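As a concrete illustration of the potential correspondence (a numerical sketch, not part of the paper), take the gamma potential $V(x)=x/\theta-(\lambda-1)\log x$, that is, the negative logarithm of the $\gamma_{\lambda,\theta}$ density up to an additive constant; the variational equation $\mathcal{H}\mu_V=\tfrac{1}{2}V'$ can then be checked for $\mu_V=\pi_{\lambda,\theta}$ at a point of its support.

```python
# Numerical sketch: check H pi_{lam,theta}(x0) = V'(x0)/2 for the gamma potential
# V(x) = x/theta - (lam - 1)*log(x), with mu_V = pi_{lam,theta} as in the
# gamma <-> Marchenko-Pastur correspondence described above.
import numpy as np
from scipy.integrate import quad

lam, theta = 2.0, 1.0
a_minus = theta * (np.sqrt(lam) - 1) ** 2
a_plus = theta * (np.sqrt(lam) + 1) ** 2

def mp_density(x):
    return np.sqrt((a_plus - x) * (x - a_minus)) / (2 * np.pi * theta * x)

x0 = 2.0  # a point inside the support (a_minus, a_plus)

# Hilbert transform: H mu(x0) = p.v. int mu(dy)/(x0 - y) = - p.v. int mu(dy)/(y - x0)
pv, _ = quad(mp_density, a_minus, a_plus, weight="cauchy", wvar=x0)
hilbert = -pv

v_prime_half = 0.5 * (1 / theta - (lam - 1) / x0)
print(hilbert, v_prime_half)  # both ~ 0.25
```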

3 Generalized Meixner-type free gamma distributions

Recall the definition of generalized Meixner-type free gamma distributions.

Definition 3.1 Consider $t,\theta>0$ and $\lambda \ge 1$ . The generalized Meixner-type free gamma distribution $\mu _{t,\theta ,\lambda }$ is the probability measure whose R-transform is given by

$$ \begin{align*}R_{\mu_{t,\theta,\lambda}}(z) = \int_{\mathbb{R}}\left(\frac{1}{1-zx}-1\right) \frac{t k_{\lambda,\theta}(x)}{x}\mathrm{d}x, \quad z\in \mathbb{C}^-. \end{align*} $$

Here, $k_{\lambda ,\theta }(x)$ denotes the density of the Marchenko–Pastur distribution, explicitly given by

$$ \begin{align*}k_{\lambda,\theta}(x):= \frac{\sqrt{(a^+-x)(x-a^-)}}{2\pi \theta x} \mathbf{1}_{(a^-,a^+)}(x), \end{align*} $$

with $a^\pm :=\theta (\sqrt {\lambda }\pm 1)^2$ .

3.1 Relation with centered free Meixner distributions

We establish an important connection between the centered free Meixner distributions $\nu _{s,a,b}$ (defined in Section 2.2) and the generalized Meixner-type free gamma distributions $\mu _{t,\theta ,\lambda }$ .

Proposition 3.2 For $t,\theta>0$ and $\lambda \ge 1$ , we have

$$ \begin{align*} \mu_{t,\theta,\lambda} = \nu_{t\theta\lambda, \theta(\lambda+1), \theta^2\lambda} \boxplus \delta_t. \end{align*} $$

Proof A direct computation shows that

$$ \begin{align*} R_{\nu_{t\theta\lambda, \theta(\lambda+1), \theta^2\lambda} \boxplus \delta_t}(z) &= R_{{\nu_{t\theta\lambda, \theta(\lambda+1), \theta^2\lambda}} } (z) + tz\\ &=\int_{\mathbb{R}} \frac{z^2}{1-zx} \cdot t\theta\lambda \cdot w_{\theta(\lambda+1),\theta^2\lambda}(x)\mathrm{d}x + tz\\ &=\int_{\mathbb{R}} \frac{xz^2}{1-zx} \cdot t k_{\lambda,\theta}(x)\mathrm{d}x+tz\\ &=tz \int_{\mathbb{R}} \frac{1}{1-zx} k_{\lambda,\theta}(x)\mathrm{d}x\\ &=\int_{\mathbb{R}} \left(\frac{1}{1-zx}-1\right) \frac{tk_{\lambda,\theta}(x)}{x} \mathrm{d}x = R_{\mu_{t,\theta,\lambda}}(z), \end{align*} $$

as desired.

Remark 3.3 According to Proposition 3.2, the generalized Meixner-type free gamma distribution $\mu _{t,\theta ,\lambda }$ coincides, up to a shift, with the centered free Meixner distribution $\nu _{s,a,b}$ when the parameters satisfy

$$ \begin{align*}s=t\theta\lambda, \quad a=\theta(\lambda+1), \quad \text{and} \quad b=\theta^2\lambda. \end{align*} $$

In this setting, since $a^2\ge 4b$ , the measure $\nu _{s,a,b}$ is either the free Pascal distribution (when $a^2>4b$ ) or the free gamma distribution (when $a^2=4b$ ) (see [Reference Bożejko and BrycBB06, p. 65]). Consequently, the measure $\mu _{t,\theta ,\lambda }$ can be interpreted as a suitably shifted version of either the free Pascal or the free gamma distribution.

It follows from (2.7) and Proposition 3.2 that

(3.1) $$ \begin{align} R_{\mu_{t,\theta,\lambda}}(z) =t \cdot \frac{1+\theta(1-\lambda)z -\sqrt{(1+\theta(1-\lambda)z)^2-4\theta z}}{2\theta}. \end{align} $$

This representation of the R-transform clarifies the role of the second parameter $\theta $ of the measure $\mu _{t,\theta ,\lambda }$ . Since

$$ \begin{align*} R_{\mu_{t,\theta,\lambda}}(z) = \frac{1}{\theta} R_{\mu_{t,1,\lambda}}(\theta z) =\frac{1}{\theta} R_{D_\theta(\mu_{t,1,\lambda})}(z) = R_{D_\theta(\mu_{t,1,\lambda})^{\boxplus \frac{1}{\theta}}}(z), \end{align*} $$

we get

$$ \begin{align*}\mu_{t,\theta,\lambda}= D_\theta(\mu_{t,1,\lambda})^{\boxplus \frac{1}{\theta}}. \end{align*} $$

We will explain the meaning of the third parameter $\lambda $ in Theorem 4.3.
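Before turning to the density, here is a quick numerical check (a sketch, not part of the paper) that the closed form (3.1) agrees with the integral defining the R-transform in Definition 3.1 at a test point on the negative real axis.

```python
# Numerical check (a sketch): Definition 3.1 (integral form) vs. the closed form (3.1).
import numpy as np
from scipy.integrate import quad

t, theta, lam, z = 1.0, 0.5, 2.0, -0.3
a_minus = theta * (np.sqrt(lam) - 1) ** 2
a_plus = theta * (np.sqrt(lam) + 1) ** 2

def k(x):  # Marchenko-Pastur density k_{lam,theta}
    return np.sqrt((a_plus - x) * (x - a_minus)) / (2 * np.pi * theta * x)

integral, _ = quad(lambda x: (1 / (1 - z * x) - 1) * t * k(x) / x, a_minus, a_plus)
closed = t * (1 + theta * (1 - lam) * z
              - np.sqrt((1 + theta * (1 - lam) * z) ** 2 - 4 * theta * z)) / (2 * theta)
print(integral, closed)  # both ~ -0.24
```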

3.2 Density, atom, and moments

In this section, we investigate the density, atom, and moments of $\mu _{t,\theta ,\lambda }$ . Thanks to (2.6) and Proposition 3.2, it is straightforward to see that

$$ \begin{align*}G_{\mu_{t,\theta,\lambda}}(z)=\frac{(t+2\theta)z-t(t-\theta(\lambda-1)) -t \sqrt{(z-\alpha^-)(z-\alpha^+)}}{2\theta z (z+t(\lambda-1))}, \qquad z\in \mathbb{C}^+, \end{align*} $$

where

(3.2) $$ \begin{align} \alpha^\pm:= \theta(\lambda+1)+t \pm 2\sqrt{\theta\lambda(\theta+t)}. \end{align} $$

The Stieltjes-inversion formula (see [Reference SchmüdgenSch12, Theorem F.6]) implies that

(3.3) $$ \begin{align} \frac{\mathrm{d}\mu_{t,\theta,\lambda}}{\mathrm{d}x}(x) = \frac{t\sqrt{(x-\alpha^-)(\alpha^+-x)}}{2\pi \theta x (x +t(\lambda-1))} \mathbf{1}_{[\alpha^-,\alpha^+]}(x). \end{align} $$

Since

$$ \begin{align*} \lim_{z\to 0} z G_{\mu_{t,\theta,\lambda}}(z) = \frac{-t(t-\theta(\lambda-1)) + t|t-\theta(\lambda-1)|}{2t\theta(\lambda-1)}, \end{align*} $$

we have

(3.4) $$ \begin{align} \mu_{t,\theta,\lambda}(\{0\}) = \begin{cases} 0, & 1 \le \lambda \le 1+t/\theta\\ \\ 1-\dfrac{t}{\theta(\lambda-1)}, & \lambda> 1+t/\theta. \end{cases} \end{align} $$

In particular, $\mu _{t,\theta ,\lambda }$ has no singular continuous part since it is freely infinitely divisible (see [Reference Belinschi and BercoviciBB04, Theorem 3.4]). We summarize the above result as follows.

Proposition 3.4 (Density and atom)

For $t,\theta>0$ and $\lambda \ge 1$ , we get

$$ \begin{align*}\mu_{t,\theta,\lambda}(\mathrm{d} x) = \max\left\{0, 1-\dfrac{t}{\theta(\lambda-1)}\right\} \delta_0 (\mathrm{d} x) + \frac{t\sqrt{(x-\alpha^-)(\alpha^+-x)}}{2\pi \theta x (x +t(\lambda-1))} \mathbf{1}_{[\alpha^-,\alpha^+]}(x)\mathrm{d} x. \end{align*} $$
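As a quick numerical sanity check of Proposition 3.4 (a sketch, not part of the paper), one can verify for $t=\theta=1$ and $\lambda=3$ that the atom and the absolutely continuous part together have total mass one, and that the first moment equals $t$ (cf. (3.5) below).

```python
# Numerical check (a sketch) of Proposition 3.4 for t = theta = 1, lambda = 3,
# where the atom at 0 has mass 1 - t/(theta*(lambda-1)) = 1/2.
import numpy as np
from scipy.integrate import quad

t, theta, lam = 1.0, 1.0, 3.0
alpha_m = theta * (lam + 1) + t - 2 * np.sqrt(theta * lam * (theta + t))
alpha_p = theta * (lam + 1) + t + 2 * np.sqrt(theta * lam * (theta + t))
atom = max(0.0, 1.0 - t / (theta * (lam - 1)))

def density(x):
    return t * np.sqrt((x - alpha_m) * (alpha_p - x)) / (
        2 * np.pi * theta * x * (x + t * (lam - 1)))

ac_mass, _ = quad(density, alpha_m, alpha_p)
mean, _ = quad(lambda x: x * density(x), alpha_m, alpha_p)

print(atom + ac_mass)  # ~ 1.0
print(mean)            # ~ 1.0  (= t)
```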

Next, we compute the moments of $\mu _{t,\theta ,\lambda }$ :

$$ \begin{align*}m_n(\mu_{t,\theta,\lambda})= \int_{\mathbb{R}} x^n \mu_{t,\theta,\lambda}(\mathrm{d}x), \qquad n\ge 1. \end{align*} $$

To obtain $m_n(\mu _{t,\theta ,\lambda })$ , we first compute its n-th free cumulant $\kappa _n(\mu _{t,\theta ,\lambda })$ , which is defined as the coefficient of $z^n$ in the power series expansion of the R-transform $R_{\mu _{t,\theta ,\lambda }}(z)$ . By the computation in the proof of Proposition 3.2, we have

$$ \begin{align*} R_{\mu_{t,\theta,\lambda}}(z) &=tz \int_{\mathbb{R}} \frac{1}{1-zx} k_{\lambda,\theta}(x)\mathrm{d} x = \sum_{n=0}^\infty tm_n(\pi_{\lambda,\theta})z^{n+1}, \end{align*} $$

where $m_0(\pi _{\lambda ,\theta })=1$ . Comparing the coefficients of $z^n$ then yields the following result:

(3.5) $$ \begin{align} \kappa_1(\mu_{t,\theta,\lambda}) &= t; \end{align} $$
(3.6) $$ \begin{align} \kappa_{n+1}(\mu_{t,\theta,\lambda}) &= t m_n(\pi_{\lambda,\theta}) = \frac{t\theta^{n}}{n} \sum_{k=0}^{n-1} \binom{n}{k}\binom{n}{k+1} \lambda^{k+1}, \qquad n\ge1. \end{align} $$

Proposition 3.5 (Moments)

Consider $t,\theta>0$ and $\lambda \ge 1$ . Then, $m_1(\mu _{t,\theta ,\lambda })=t$ and for $n\ge 2$ ,

$$ \begin{align*} m_n(\mu_{t,\theta,\lambda})=\sum_{m=1}^n \sum_{\substack{r_1,\dots, r_n\ge 0\\ r_1+\cdots+ r_n=m \\ r_1+ 2 r_2 + \cdots + nr_n=n}}P_m^{(n)}(r_1,\dots, r_n)t^m \theta^{n-m} \prod_{s=1}^{n-1} \left( \frac{1}{s}\sum_{k=0}^{s-1} \binom{s}{k}\binom{s}{k+1}\lambda^{k+1}\right)^{r_{s+1}}, \end{align*} $$

where

$$ \begin{align*}P_m^{(n)}(r_1,\dots, r_n):=\frac{n!}{r_1!r_2!\cdots r_n! ( n - m + 1)!}. \end{align*} $$

Proof One can observe that $m_1(\mu _{t,\theta ,\lambda })=\kappa _1(\mu _{t,\theta ,\lambda })=t$ by (3.5). Let us consider ${n\ge 2}$ . It is known that the number of non-crossing partitions with $r_1$ blocks of size $1$ , $r_2$ blocks of size $2$ , $\dots $ , $r_n$ blocks of size n equals $P_m^{(n)}(r_1,\dots , r_n)$ , where $r_1+r_2+\cdots + r_n=m$ . By the moment-cumulant formula (cf. [Reference Nica and SpeicherNS06, Proposition 11.4]) and (3.6), for $n\ge 2$ , we obtain

$$ \begin{align*} &m_n(\mu_{t,\theta,\lambda}) = \sum_{\pi \in \mathcal{NC}(n)} \prod_{V\in \pi} \kappa_{|V|}(\mu_{t,\theta,\lambda})\\ &\quad=\sum_{m=1}^n\sum_{\substack{r_1,\dots, r_n\ge 0\\ r_1+\cdots+ r_n=m \\ r_1+ 2 r_2 + \cdots + nr_n=n}}P_m^{(n)}(r_1,\dots, r_n)\kappa_1(\mu_{t,\theta,\lambda})^{r_1}\kappa_2(\mu_{t,\theta,\lambda})^{r_2}\dots \kappa_n(\mu_{t,\theta,\lambda})^{r_n}\\ &\quad=\sum_{m=1}^n \sum_{\substack{r_1,\dots, r_n\ge 0\\ r_1+\cdots+ r_n=m \\ r_1+ 2 r_2 + \cdots + nr_n=n}}P_m^{(n)}(r_1,\dots, r_n) t^m \theta^{n-m} \prod_{s=1}^{n-1} \left( \frac{1}{s}\sum_{k=0}^{s-1} \binom{s}{k}\binom{s}{k+1}\lambda^{k+1}\right)^{r_{s+1}}. \end{align*} $$

Example 3.6 Consider $t, \theta>0$ and $\lambda \ge 1$ . Let us set $m_n:=m_n(\mu _{t,\theta ,\lambda })$ for short. By Proposition 3.5, we list the first four moments (a numerical cross-check follows the list):

  • $m_1=t$ ;

  • $m_2=t^2+\theta \lambda t$ ;

  • $m_3 =t^3+3\theta \lambda t^2 + \theta ^2(\lambda +\lambda ^2) t$ ;

  • $m_4=t^4 + 6\theta \lambda t^3 + 2\theta ^2(2\lambda +3\lambda ^2) t^2 + \theta ^3(\lambda +3\lambda ^2+\lambda ^3) t$ .
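The following sketch (using SciPy; not part of the paper) cross-checks the first three listed moments by integrating directly against the density of Proposition 3.4, for $t=\theta=1$ and $\lambda=3/2$ (no atom, since $\lambda<1+t/\theta$).

```python
# Numerical cross-check (a sketch) of the low moments listed in Example 3.6,
# for t = theta = 1 and lambda = 3/2.
import numpy as np
from scipy.integrate import quad

t, theta, lam = 1.0, 1.0, 1.5
alpha_m = theta * (lam + 1) + t - 2 * np.sqrt(theta * lam * (theta + t))
alpha_p = theta * (lam + 1) + t + 2 * np.sqrt(theta * lam * (theta + t))

def density(x):
    return t * np.sqrt((x - alpha_m) * (alpha_p - x)) / (
        2 * np.pi * theta * x * (x + t * (lam - 1)))

moments = [quad(lambda x, n=n: x**n * density(x), alpha_m, alpha_p)[0] for n in (1, 2, 3)]
expected = [t,
            t**2 + theta * lam * t,
            t**3 + 3 * theta * lam * t**2 + theta**2 * (lam + lam**2) * t]
print(moments)   # ~ [1.0, 2.5, 9.25]
print(expected)  # [1.0, 2.5, 9.25]
```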

In Section 4.2, we will notice that the measure $\mu _{t,\theta ,\lambda }$ coincides with a certain scaled free beta prime distribution introduced by [Reference YoshidaYos20] for $t,\theta>0$ and $\lambda>1$ . According to [Reference YoshidaYos20, Theorem 6.1], another combinatorial representation of $m_n(\mu _{t,\theta ,\lambda })$ will be obtained (see Corollary 4.4 later).

3.3 Free self-decomposability and unimodality

One can easily see free self-decomposability for the measure $\mu _{t,\theta ,\lambda }$ .

Proposition 3.7 (Free self-decomposability)

Let us consider $t,\theta>0$ and $\lambda \ge 1$ . The measure $\mu _{t,\theta ,\lambda }$ is freely self-decomposable if and only if $\lambda =1$ .

Proof If $\lambda =1$ , then the function $tk_{\lambda ,\theta } (x)$ is non-increasing on $(0,\infty )$ , and therefore the measure $\mu _{t,\theta ,1}$ is freely self-decomposable for all $t,\theta>0$ . For $\lambda>1$ , the function $k_{\lambda ,\theta }(x)$ is supported on $(a^-,a^+)$ and $a^->0$ . Hence, $\mu _{t,\theta ,\lambda }$ is not freely self-decomposable for any $t>0$ and $\theta>0$ .

A probability measure $\mu $ on $\mathbb {R}$ is said to be unimodal if there exist $a\in \mathbb {R}$ and a density function f, which is nondecreasing on $(-\infty , a)$ and nonincreasing on $(a,\infty )$ , such that

$$ \begin{align*}\mu (\mathrm{d} x) = \mu (\{a\}) \delta_a + f(x) \mathrm{d}x. \end{align*} $$

According to [Reference Hasebe and ThorbjørnsenHT16, Theorem 1], every freely self-decomposable distribution is unimodal. Hence, $\mu _{t,\theta ,1}$ is unimodal for any $t,\theta>0$ by Proposition 3.7. For given $t,\theta>0$ , we investigate the values of $\lambda $ for which $\mu _{t,\theta ,\lambda }$ remains unimodal.

Proposition 3.8 (Unimodality)

For given $t,\theta>0$ , the measure $\mu _{t,\theta ,\lambda }$ is unimodal if and only if $1\le \lambda \le 1+t/\theta $ .

Proof If $1\le \lambda < 1+t/\theta $ , we get $\mu _{t,\theta ,\lambda }(\mathrm {d}x) = \frac {t}{2\pi \theta } f(x)dx$ by (3.3), where

$$ \begin{align*}f(x)=\frac{\sqrt{(x-\alpha^-)(\alpha^+-x)}}{x(x+t(\lambda-1))}, \qquad x\in (\alpha^-,\alpha^+), \end{align*} $$

and $\alpha ^\pm $ is defined by (3.2). By elementary calculus, we obtain

$$ \begin{align*}f'(x)=\frac{k(x)}{2x^2(x+t(\lambda-1))^2\sqrt{(x-\alpha^-)(\alpha^+-x)}}, \end{align*} $$

where

$$ \begin{align*}k(x):=2x^3 -3(\alpha^++\alpha^-)x^2+\{4\alpha^+\alpha^- -t(\lambda-1)(\alpha^++\alpha^-)\}x+2\alpha^+\alpha^-t(\lambda-1). \end{align*} $$

We show that there exists a unique solution $x \in (\alpha ^-,\alpha ^+)$ of the equation $k(x)=0$ . Since

$$ \begin{align*} k(\alpha^-)&=\{\alpha^- +t (\lambda-1) \}\alpha^-(\alpha^+-\alpha^-)>0,\\ k(\alpha^+)&=\{\alpha^+ +t (\lambda-1) \}\alpha^+(\alpha^--\alpha^+)<0, \end{align*} $$

it follows from the intermediate value theorem that there exists at least one solution $x\in (\alpha ^-,\alpha ^+)$ to the equation $k(x)=0$ . Next, we establish the uniqueness of solutions to $k(x)=0$ . To this end, we show that the function $k(x)$ is monotone on the interval $(\alpha ^-, \alpha ^+)$ . Since

$$ \begin{align*} k'(x) &= 6x^2-6(\alpha^++\alpha^-) x + \{4\alpha^+\alpha^- - t(\lambda-1)(\alpha^++\alpha^-)\}\\ &= 6\left(x-\frac{\alpha^++\alpha^-}{2}\right)^2 -\frac{3}{2}(\alpha^++\alpha^-)^2 + 4\alpha^+\alpha^- -t(\lambda-1)(\alpha^++\alpha^-)\\ &\le 6\left(\alpha^+-\frac{\alpha^++\alpha^-}{2}\right)^2 -\frac{3}{2}(\alpha^++\alpha^-)^2 + 4\alpha^+\alpha^- -t(\lambda-1)(\alpha^++\alpha^-)\\ &=-2\alpha^+\alpha^- - t(\lambda-1)(\alpha^++\alpha^-)<0, \end{align*} $$

the function $k(x)$ is strictly decreasing on $(\alpha ^-,\alpha ^+)$ . Hence, the equation $k(x)=0$ has a unique solution in $(\alpha ^-,\alpha ^+)$ , denoted by $x_0\in (\alpha ^-,\alpha ^+)$ . Consequently, the function $f(x)$ is strictly increasing on $(\alpha ^-,x_0)$ and strictly decreasing on $(x_0,\alpha ^+)$ , implying that $\mu _{t,\theta ,\lambda }$ is unimodal with mode $x_0$ .

If $\lambda = 1+t/\theta $ , then $\alpha ^-=0$ and $\alpha ^+=4(\theta +t)$ . In this case, one can verify that $f'(x)<0$ for all $x\in (0,\alpha ^+)$ . Hence, $\mu _{t,\theta ,1+t/\theta }$ is unimodal with mode $0$ .

If $\lambda> 1+t/\theta $ , then $\mu _{t,\theta ,\lambda }$ has an atom at $0$ by (3.4). However, the absolutely continuous part of $\mu _{t,\theta ,\lambda }$ possesses a mode $x_0$ , as shown above. Therefore, in this case, $\mu _{t,\theta ,\lambda }$ is not unimodal.

Remark 3.9 According to the proof of Proposition 3.8, if $1\le \lambda <1+t/\theta $ , then the density function of $\mu _{t,\theta ,\lambda }$ is bounded by $f(x_0)$ . In contrast, the density function of $\mu _{t,\theta ,1+t/\theta }$ is unbounded since $f(x)\to \infty $ as $x\to 0^+$ .

3.4 Background driving free Lévy process

Let $\mu $ be a freely self-decomposable distribution on $\mathbb {R}$ . By [Reference Barndorff-Nielsen and ThorbjørnsenBNT06, Theorem 6.5], there exists a free Lévy processFootnote 6 $\{Z_t\}_{t\ge 0}$ affiliated with some $W^\ast $ -probability space such that

$$ \begin{align*}\mu = \mathcal{L}\left( \int_0^\infty e^{-t} \mathrm{d}Z_t \right) \end{align*} $$

and the free Lévy measure $\nu $ of the law $\mathcal {L}(Z_1)$ satisfies

$$ \begin{align*}\int_{\mathbb{R}\setminus[-1,1]} \log(1+|x|) \nu(\mathrm{d}x) <\infty, \end{align*} $$

where $\int _0^\infty e^{-t} \mathrm {d}Z_t$ is the free stochastic integral with respect to $\{Z_t\}_{t\ge 0}$ (see [Reference Barndorff-Nielsen and ThorbjørnsenBNT06, Section 6] and [Reference Maejima and SakumaMS23] for details). The free Lévy process $\{Z_t\}_{t\ge 0}$ is called the background driving free Lévy process of $\mu $ .

By Proposition 3.7, the measure $\mu _{1,\theta ,1}$ is freely self-decomposable. From the construction above, we can then consider the background driving free Lévy process $\{Z_t\}_{t\ge 0}$ of $\mu _{1,\theta ,1}$ .

Lemma 3.10 Let $\{Z_t\}_{t\ge 0}$ be the free Lévy process above. Then,

  (1) For $t\ge 0$ , we have

    $$ \begin{align*}R_{Z_t}(z) = \frac{tz}{\sqrt{1-4\theta z}}, \qquad z\in \mathbb{C}^-. \end{align*} $$
  (2) The free Lévy measure of the law of $Z_t$ is given by

    $$ \begin{align*}\frac{t}{\pi x\sqrt{x(4\theta -x)}}\mathbf{1}_{(0,4\theta)}(x)\mathrm{d}x. \end{align*} $$

Proof (1) Recall that

$$ \begin{align*}R_{\mu_{1,\theta,1}}(z) = \frac{1-\sqrt{1-4\theta z}}{2\theta}, \qquad z\in \mathbb{C}^-. \end{align*} $$

By [Reference Maejima and SakumaMS23, Theorem 6.7], we have

$$ \begin{align*} R_{Z_t}(z) = tz \frac{d}{dz} R_{\mu_{1,\theta,1}}(z) = \frac{tz}{\sqrt{1-4\theta z}}. \end{align*} $$

(2) We can further compute the R-transform of $Z_t$ as follows:

$$ \begin{align*} R_{Z_t}(z) &= \frac{tz}{\sqrt{1-4\theta z}}=-\frac{t}{2\sqrt{\theta}} \frac{-z}{\sqrt{-z+\frac{1}{4\theta}}}\\ &=-\frac{t}{2\sqrt{\theta}} \int_{\frac{1}{4\theta}}^\infty \frac{-z}{-z+x} \cdot \frac{1}{\pi \sqrt{x-\frac{1}{4\theta}}}\mathrm{d}x = \int_0^{4\theta} \kern-1.5pt\left( \frac{1}{1-zx}-1\right) \frac{t}{\pi x \sqrt{x(4\theta -x)}}\mathrm{d}x, \end{align*} $$

where the third equality follows from [Reference Schilling, Song and VondracekSSV12, p. 304] or [Reference Maejima and SakumaMS23, Example 7.2]. Consequently, the Lévy measure of $Z_t$ is $\frac {t}{\pi x \sqrt {x(4\theta -x)}} \mathbf {1}_{(0,4\theta )}(x)\mathrm {d}x$ .

Remark 3.11 In [Reference Maejima and SakumaMS23, Example 7.2], the R-transform and the free Lévy measure of the Meixner-type free gamma distribution $\eta _{t,\theta }=\mu _{t\theta ,\theta ,1}$ were already investigated. In fact, the above lemma can be regarded as a generalization of [Reference Maejima and SakumaMS23, Example 7.2].
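The integral representation obtained in the proof of Lemma 3.10(2) can also be checked numerically (a sketch, not part of the paper):

```python
# Numerical check (a sketch): the free Levy measure t/(pi*x*sqrt(x*(4*theta-x)))
# on (0, 4*theta) reproduces R_{Z_t}(z) = t*z/sqrt(1 - 4*theta*z) at a point z < 0.
import numpy as np
from scipy.integrate import quad

t, theta, z = 1.5, 0.8, -0.4

def integrand(x):
    levy_density = t / (np.pi * x * np.sqrt(x * (4 * theta - x)))
    return (1.0 / (1.0 - z * x) - 1.0) * levy_density

lhs, _ = quad(integrand, 0.0, 4.0 * theta)
rhs = t * z / np.sqrt(1.0 - 4.0 * theta * z)
print(lhs, rhs)   # both ~ -0.40
```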

The regularity properties of the law of $Z_t$ can be analyzed as follows.

Corollary 3.12 For any $t>0$ , the law $\mathcal {L}(Z_t)$ is absolutely continuous with respect to Lebesgue measure with continuous density on $\mathbb {R}$ .

Proof Let $\nu _t$ be the free Lévy measure of $\mathcal {L}(Z_t)$ . Due to Lemma 3.10(2), we have

$$ \begin{align*} \nu_t(\mathbb{R}) &=\frac{t}{\pi} \int_0^{4\theta} x^{-\frac{3}{2}} (4\theta- x)^{-\frac{1}{2}} \mathrm{d}x = \infty. \end{align*} $$

According to [Reference Hasebe and SakumaHS17, Theorem 3.4], the measure $\mathcal {L}(Z_t)$ is absolutely continuous with respect to Lebesgue measure with continuous density on $\mathbb {R}$ .

Further, we can obtain the n-th free cumulant of the law of $Z_t$ .

Proposition 3.13 For $n\ge 1$ , we have

$$ \begin{align*}\kappa_1(Z_t)= t \qquad \text{and} \qquad \kappa_n(Z_t) = t(2\theta)^{n-1} \frac{(2n-3)!!}{(n-1)!} \quad \text{for} \quad n\ge 2. \end{align*} $$

Proof By Lemma 3.10(2), for z small enough (more strictly, $|z|<1/4\theta $ ), we have

$$ \begin{align*} R_{Z_t}(z) &= z \int_0^{4\theta} \frac{1}{1-zx} \frac{t}{\pi\sqrt{x(4\theta-x)}}\mathrm{d}x\\ &=z \int_0^1 \frac{1}{1-4\theta z u} \cdot \frac{t}{\pi\sqrt{u(1-u)}}\mathrm{d}u \qquad {(x=4\theta u)}\\ &=\frac{tz}{\pi} \int_0^1 u^{-\frac{1}{2}} (1-u)^{-\frac{1}{2}}(1-4\theta zu)^{-1}\mathrm{d}u\\ &=\frac{tz}{\pi} \frac{\Gamma(\frac{1}{2})^2}{\Gamma(1)} {}_2 F_1\left(1,\frac{1}{2}; 1; 4\theta z \right) \qquad \text{(by Euler integral representation)}\\ &=tz \sum_{n=0}^\infty \frac{(1)^{(n)}(\frac{1}{2})^{(n)}}{(1)^{(n)} n!} (4\theta z)^n \qquad {((x)^{(n)}:=x(x+1)\dots (x+n-1))}\\ &=tz + \sum_{n=2}^\infty t (2\theta)^{n-1}\frac{(2n-3)!!}{(n-1)!} z^n. \end{align*} $$

Comparing the coefficients of $z^n$ then yields the desired result.
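The coefficient computation can also be cross-checked symbolically (a SymPy sketch, not part of the paper):

```python
# Symbolic check (a sketch) of Proposition 3.13: the Taylor coefficients of
# R_{Z_t}(z) = t*z/sqrt(1 - 4*theta*z) equal t*(2*theta)^(n-1)*(2n-3)!!/(n-1)! for n >= 2.
import sympy as sp

z = sp.symbols("z")
t, theta = sp.symbols("t theta", positive=True)

R = t * z / sp.sqrt(1 - 4 * theta * z)
series = sp.series(R, z, 0, 6).removeO()

for n in range(1, 6):
    coeff = sp.simplify(series.coeff(z, n))
    if n == 1:
        expected = t
    else:
        expected = t * (2 * theta) ** (n - 1) * sp.factorial2(2 * n - 3) / sp.factorial(n - 1)
    print(n, sp.simplify(coeff - expected))  # 0 for each n
```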

3.5 Correlation of a free gamma process

In the noncommutative setting, we can consider covariance and correlation as follows. Let $(\mathcal {A},\varphi )$ be a $C^\ast $ -probability space and $x,y\in \mathcal {A}$ . Then, their covariance is defined by

$$ \begin{align*}\text{Cov}(x,y) := \varphi(xy)-\varphi(x)\varphi(y). \end{align*} $$

It is easy to see that $\text {Cov}(x,y)=0$ if $x,y$ are free. Next, their correlation can be defined by

$$ \begin{align*}\text{Corr}(x,y) := \frac{\text{Cov}(x,y)}{\sqrt{\kappa_2(x)}\sqrt{\kappa_2(y)}}, \end{align*} $$

when $x,y$ have nonzero second free cumulant (variance). In general, we note that $\text {Corr}(x,y) \neq \text {Corr}(y,x)$ since $xy\neq yx$ .

Let $\{X_t\}_{t\ge 0}$ be a stochastic process in a $C^\ast $ -probability space. The process $\{X_t\}_{t\ge 0}$ is called a Meixner-type free gamma process if it is a free Lévy process whose marginal distribution at time $1$ is $\mu _{1,\theta ,\lambda }$ for some $\theta>0$ and $\lambda \ge 1$ . By the definition of free Lévy processes, it follows that $X_t\sim \mu _{t,\theta ,\lambda }$ for $t>0$ . Below, we compute the correlation of a free gamma process.

Proposition 3.14 (Correlation)

Let $\{X_t\}_{t\ge 0}$ be a Meixner-type free gamma process in a $C^\ast $ -probability space $(\mathcal {A},\varphi )$ . For any $s,t>0$ , we have

$$ \begin{align*} \text{Corr}(X_s, X_t)= \text{Corr}(X_t,X_s)= \sqrt{\frac{s}{t}}. \end{align*} $$

Proof For $s<t$ , we have

$$ \begin{align*} \text{Cov}(X_s,X_t) &= \varphi(X_s X_t) - \varphi(X_s)\varphi(X_t)\\ &= \varphi(X_s (X_t-X_s) + X_s^2) -\varphi(X_s)\varphi(X_t)\\ &= \varphi(X_s (X_t-X_s)) + \varphi(X_s^2)- \varphi(X_s)\varphi(X_t). \end{align*} $$

By Proposition 3.5 (or Example 3.6), we have $\varphi (X_s)=s$ and $\varphi (X_s^2)=s^2+\theta \lambda s$ . Since $\{X_t\}_{t\ge 0}$ is a free Lévy process, $X_s$ and $X_t-X_s$ are free, and hence $\varphi (X_s(X_t-X_s))=\varphi (X_s) \varphi (X_t-X_s)$ . Moreover, by the definition of a free Lévy process, $X_t-X_s \overset {\mathrm {d}}{=} X_{t-s}$ . Finally, we get

$$ \begin{align*} \text{Cov}(X_s,X_t) = s(t-s) + s^2+\theta \lambda s -st =\theta \lambda s. \end{align*} $$

By (3.6) (or Example 3.6 again), we obtain $\kappa _2(X_s)=\kappa _2(\mu _{s,\theta ,\lambda })=\theta \lambda s$ . Hence,

$$ \begin{align*}\text{Corr}(X_s,X_t) = \frac{\theta \lambda s} {\sqrt{\theta \lambda s} \sqrt{\theta \lambda t}} = \sqrt{\frac{s}{t}}. \end{align*} $$

Recalling the computation of their covariance, we observe that $\text {Cov}(X_t,X_s)=\text {Cov}(X_s,X_t)$ even if $s<t$ . Consequently, it also follows that $\text {Corr}(X_t,X_s)=\text {Corr}(X_s,X_t)$ .

If $s=t$ , then

$$ \begin{align*}\text{Cov}(X_s,X_s)=\varphi(X_s^2)-\varphi(X_s)^2=s^2+\theta \lambda s - s^2 = \theta \lambda s, \end{align*} $$

and therefore $\text {Corr}(X_s,X_s)=1$ .

In classical probability, it is known that the correlation of the gamma process $\{G_t\}_{t\ge 0}$ is

$$ \begin{align*}\text{Corr}(G_s,G_t) = \sqrt{\frac{s}{t}} \qquad \text{for} \qquad s,t>0. \end{align*} $$

In this sense, Proposition 3.14 is entirely analogous to the classical result above.
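The classical identity is easy to confirm by simulation (a sketch, not part of the paper, using NumPy's gamma sampler):

```python
# Simulation sketch of the classical fact quoted above:
# for a gamma Levy process, Corr(G_s, G_t) = sqrt(s/t).
import numpy as np

rng = np.random.default_rng(1)
s, t, theta = 1.0, 4.0, 0.7
n = 500_000

g_s = rng.gamma(shape=s, scale=theta, size=n)            # G_s ~ gamma_{s,theta}
increment = rng.gamma(shape=t - s, scale=theta, size=n)  # independent increment
g_t = g_s + increment                                    # G_t ~ gamma_{t,theta}

print(np.corrcoef(g_s, g_t)[0, 1])  # ~ 0.5
print(np.sqrt(s / t))               # 0.5
```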

4 Free convolution formula and free beta prime distributions

4.1 S-transform

In this section, we compute the S-transform of $\mu _{t,\theta ,\lambda }$ .

Lemma 4.1 For $t,\theta>0$ and $\lambda \ge 1$ , we have

$$ \begin{align*}S_{\mu_{t,\theta,\lambda}} (z) = \frac{t-\theta z}{t(t+\theta(\lambda-1)z)}, \qquad z \in \left( -1 + \mu_{t,\theta,\lambda}(\{0\}),0 \right). \end{align*} $$

Proof A straightforward computation together with (2.3) and (3.1) shows that the compositional inverse of $R_{\mu _{t,\theta ,\lambda }} $ is

$$ \begin{align*}R_{\mu_{t,\theta,\lambda}}^{\langle -1 \rangle} (z) = \frac{z(\theta z-t)}{\theta t (1-\lambda)z -t^2}, \qquad z\in (-1+ \mu_{t,\theta,\lambda}(\{0\}),0), \end{align*} $$

which in turn implies that

$$ \begin{align*} S_{\mu_{t,\theta,\lambda}} (z) = \frac{R_{\mu_{t,\theta,\lambda}}^{\langle -1 \rangle} (z)}{z}= \frac{t-\theta z}{t(t+\theta (\lambda-1)z)}. \end{align*} $$

In particular, we show that the measure $\mu _{t,\theta ,1}$ is the reversed measure of a Marchenko–Pastur distribution.

Lemma 4.2 For $t,\theta>0$ , we get $\mu _{t,\theta ,1} = \left (\pi _{1+\frac {t}{\theta },\frac {\theta }{t^2}} \right )^{\langle -1\rangle }$ .

Proof The desired formula follows from the S-transform of $(\pi _{1+t/\theta , \theta /t^2})^{\langle -1\rangle }$ . Indeed, from Example 2.2 and Lemma 4.1, we have

$$ \begin{align*} S_{(\pi_{1+t/\theta, \theta/t^2})^{\langle -1\rangle}}(z) = \frac{\theta}{t^2}\left(\frac{t}{\theta}-z\right) = S_{\mu_{t,\theta,1}}(z). \end{align*} $$

4.2 Free convolution formula

Using the S-transform, we can analyze the effect of the third parameter $\lambda $ on the measure $\mu _{t,\theta ,\lambda }$ as follows.

Theorem 4.3 (Free convolution formula for $\mu _{t,\theta ,\lambda }$ )

Consider $t,\theta>0$ . If $\lambda>1$ , then

(4.1) $$ \begin{align} \mu_{t,\theta,\lambda} &=D_{t(\lambda-1)}\left(\pi_{q,1} \boxtimes (\pi_{1+\frac{t}{\theta},1})^{\langle -1\rangle} \right) \end{align} $$
(4.2) $$ \begin{align} &= \mu_{t,\theta,1} \boxtimes \pi_{q,q^{-1}}, \end{align} $$

where $q = \frac {t}{\theta (\lambda -1)}$ . In particular, $\mu _{t,\theta ,1+t/\theta }= \mu _{t,\theta ,1} \boxtimes \pi _{1,1}$ . Hence, the measure $\mu _{t,\theta ,1+t/\theta }$ belongs to the class of free compound Poisson distributions.

Proof By Examples 2.1 and 2.2 and Lemma 4.1, we have

$$ \begin{align*} S_{ D_{t(\lambda-1)}\left(\pi_{q,1} \boxtimes (\pi_{1+\frac{t}{\theta},1})^{\langle -1\rangle} \right)}(z) &=\frac{1}{t(\lambda-1)} \frac{1}{\frac{t}{\theta(\lambda-1)}+z} \left(\frac{t}{\theta}-z\right)\\ &=\frac{t-\theta z}{t(t+\theta(\lambda-1)z)}\\ &=S_{\mu_{t,\theta,\lambda}}(z). \end{align*} $$

Thus, equation (4.1) holds. Since $\pi _{\lambda ,\theta }=D_\theta (\pi _{\lambda ,1})$ , we obtain

$$ \begin{align*} \mu_{t,\theta,\lambda} &=D_{t(\lambda-1)}\left(\pi_{\frac{t}{\theta(\lambda-1)},1} \boxtimes (\pi_{1+\frac{t}{\theta},1})^{\langle -1\rangle} \right) \\ &=D_{t(\lambda-1)} \circ D_{\frac{t}{\theta(\lambda-1)}} \left( \pi_{\frac{t}{\theta(\lambda-1)},\frac{\theta(\lambda-1)}{t}} \boxtimes (\pi_{1+\frac{t}{\theta},1})^{\langle -1\rangle} \right)\\ &=\pi_{q,q^{-1}}\boxtimes (\pi_{1+\frac{t}{\theta},\frac{\theta}{t^2}})^{\langle-1\rangle}\\ &=\pi_{q,q^{-1}}\boxtimes \mu_{t,\theta,1}, \end{align*} $$

where the last equality follows from Lemma 4.2.
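The S-transform computation in the proof can also be checked symbolically. The following sketch (not from the article) uses the Marchenko–Pastur S-transform $S_{\pi_{a,b}}(z)=\frac{1}{b(a+z)}$, the reversed Marchenko–Pastur S-transform $S_{(\pi_{a,b})^{\langle -1\rangle}}(z)=b(a-1-z)$ (consistent with the formula used in the proof of Lemma 4.2), and the dilation rule $S_{D_c\nu}(z)=c^{-1}S_\nu(z)$:

```python
# Check that the right-hand sides of (4.1) and (4.2) have the S-transform of mu_{t,theta,lambda}.
import sympy as sp

z, t, th, lam = sp.symbols('z t theta lam', positive=True)
q = t / (th*(lam - 1))

S_MP     = lambda a, b: 1/(b*(a + z))                 # S-transform of pi_{a,b}
S_MP_inv = lambda a, b: b*(a - 1 - z)                 # S-transform of (pi_{a,b})^{<-1>}
S_mu     = (t - th*z) / (t*(t + th*(lam - 1)*z))      # Lemma 4.1

S_rhs_41 = S_MP(q, 1) * S_MP_inv(1 + t/th, 1) / (t*(lam - 1))   # dilation divides S by t(lam-1)
S_rhs_42 = ((t - th*z)/t**2) * S_MP(q, 1/q)                      # mu_{t,theta,1} boxtimes pi_{q,1/q}

assert sp.simplify(S_rhs_41 - S_mu) == 0
assert sp.simplify(S_rhs_42 - S_mu) == 0
```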

According to Theorem 4.3, for any $t,\theta>0$ and $\lambda>1$ , the measure $\mu _{t,\theta ,\lambda }$ coincides with

(4.3) $$ \begin{align} D_{t(\lambda-1)}\left(f\beta' \left(\frac{t}{\theta(\lambda-1)}, 1+\frac{t}{\theta} \right)\right), \end{align} $$

where

$$ \begin{align*}f\beta'(a, b): = \pi_{a,1}\boxtimes \pi_{b,1}^{\langle -1 \rangle}, \qquad a>0 \text{ and } b>1, \end{align*} $$

is the free beta prime distribution, introduced by Yoshida [Reference YoshidaYos20, Section 3.4]. Thus, by using [Reference YoshidaYos20, Theorem 6.1], we obtain a combinatorial formula for the moments of $\mu _{t,\theta ,\lambda }$ .

Corollary 4.4 For $t,\theta>0$ and $\lambda>1$ , the n-th moment of $\mu _{t,\theta ,\lambda }$ is given by

$$ \begin{align*}m_n(\mu_{t,\theta,\lambda}) = \theta^n \sum_{\pi \in \mathcal{NCL}(n)}\lambda^{|\pi|-\text{sg}(\pi)} \left(\frac{t}{\theta}\right)^{|\pi|-\text{dc}(\pi)}, \end{align*} $$

where $\mathcal {NCL}(n)$ is the set of all non-crossing linked partitions of $\{1,\dots ,n\}$ (see [Reference DykemaDyk07]), $\mathrm {sg}(\pi )$ is the number of singletons in $\pi $ , and $\mathrm {dc}(\pi )$ is the number of doubly covered elements by $\pi $ (see [Reference YoshidaYos20, Definition 5.7]).

4.3 Free beta prime distributions

In the previous section, we observed that $\mu _{t,\theta ,\lambda }$ is a suitably scaled free beta prime distribution when $\lambda>1$ . Using this fact, we now investigate the free beta prime distribution $f\beta '(a,b)$ for any $a>0$ and $b>1$ . A straightforward computation of the S-transform yields the following formula.

Proposition 4.5 For any $a>0$ and $b>1$ , we have

$$ \begin{align*}f\beta'(a,b)= \mu_{\frac{a}{b-1}, \frac{a}{(b-1)^2}, \frac{a+b-1}{a}}. \end{align*} $$

Proof It is easy to see that

(4.4) $$ \begin{align} S_{f\beta'(a,b)}(z)= \frac{b-1-z}{a+z} \qquad a>0, \ b>1. \end{align} $$

To determine $t,\theta>0$ and $\lambda>1$ from $a>0$ and $b>1$, we compare both sides of the following equation:

$$ \begin{align*}\frac{b-1-z}{a+z} = \frac{t-\theta z}{t(t+\theta(\lambda-1)z)}, \end{align*} $$

where the RHS is the S-transform of some $\mu _{t,\theta ,\lambda }$ . Equivalently,

$$ \begin{align*} t(\lambda-1)=1, \quad -\theta a + t = -t^2 +t \theta(\lambda-1)(b-1) \quad \text{and} \quad ta = t^2(b-1). \end{align*} $$

Thus, one can see that $t=\frac {a}{b-1}$ , $\theta = \frac {a}{(b-1)^2}$ , and $\lambda =\frac {a+b-1}{a}$ , as desired.
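As a sanity check (not part of the article), substituting these parameters back into the S-transform of Lemma 4.1 recovers (4.4) symbolically:

```python
# Verify Proposition 4.5: S_{mu_{t,theta,lambda}} with the stated parameters equals (b-1-z)/(a+z).
import sympy as sp

z, a, b = sp.symbols('z a b', positive=True)
t, th, lam = a/(b - 1), a/(b - 1)**2, (a + b - 1)/a

S_mu = (t - th*z) / (t*(t + th*(lam - 1)*z))          # Lemma 4.1
assert sp.simplify(S_mu - (b - 1 - z)/(a + z)) == 0
```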

From Proposition 4.5, we can identify analytic properties of the free beta prime distribution that were not discussed in [Reference YoshidaYos20].

Corollary 4.6 Let us consider $a>0$ and $b>1$ . Then,

  1. (1) The free Lévy measure of $f\beta '(a,b)$ is given by

    $$ \begin{align*}\frac{a}{b-1} \frac{k_{A,B}(x)}{x}\mathrm{d} x, \end{align*} $$
    where $A=\frac {a+b-1}{a}$ and $B=\frac {a}{(b-1)^2}$ . Recall that $k_{A,B}(x)$ is the density function of the Marchenko–Pastur distribution $\pi _{A,B}$ .
  2. (2) $f\beta '(a,b)$ is not freely self-decomposable.

  3. (3) $f\beta '(a,b)$ is unimodal if and only if $a\ge 1$ .

Proof The free Lévy measure of $f\beta '(a,b)$ follows directly from the definition of $\mu _{t,\theta ,\lambda }$ and Proposition 4.5. Since $\frac {a+b-1}{a}>1$ , Proposition 3.7 together with Proposition 4.5 implies that $f\beta '(a,b)= \mu _{\frac {a}{b-1}, \frac {a}{(b-1)^2}, \frac {a+b-1}{a}}$ is not freely self-decomposable. Moreover, by Propositions 3.8 and 4.5, the measure $f\beta '(a,b)$ is unimodal if and only if

$$ \begin{align*}1 < \frac{a+b-1}{a} \le 1+ \frac{\frac{a}{b-1}}{\frac{a}{(b-1)^2}} = b, \end{align*} $$

which is equivalent to $a\ge 1$ .

5 Potential correspondence

Let $t,\theta>0$ and $\lambda \ge 1$ . We consider the following potential function on $\mathbb {R}_{>0}$ :

$$ \begin{align*} V_{t,\theta,\lambda} (x):= \begin{cases} \left(2+\dfrac{t}{\theta}\right) \log x + \dfrac{t^2}{\theta x}, & \lambda=1,\\ \left(1-\dfrac{t}{\theta(\lambda-1)} \right) \log x + \left( 1 + \dfrac{t\lambda}{\theta(\lambda-1)}\right) \log (x+t(\lambda-1)), & \lambda>1. \end{cases} \end{align*} $$

One can verify that

$$ \begin{align*}{\mathcal{H}} \mu_{t,\theta,\lambda}(x) = \frac{1}{2}V_{t,\theta,\lambda}'(x), \qquad x \in [\alpha_-, \alpha_+], \end{align*} $$

where $\alpha _{\pm }$ was defined in (3.2). First, we analyze the Gibbs measure associated with $V_{t,\theta ,\lambda }$ in order to investigate analogous properties for $\mu _{t,\theta ,\lambda }$ . The ultimate goal of this section is to determine whether $\mu _{t,\theta ,\lambda }$ is the equilibrium measure of the free entropy associated with $V_{t,\theta ,\lambda }$ .

5.1 Gibbs measure associated with potential

We study the Gibbs measure associated with potential $V_{t,\theta ,\lambda }$ :

$$ \begin{align*}\rho_{t,\theta,\lambda}(\mathrm{d}x) = \frac{1}{\mathcal{Z}_{t,\theta,\lambda}} \exp\{ -V_{t,\theta,\lambda}(x)\} \mathrm{ d}x \quad \text{for} \quad t,\theta>0, \ \lambda\ge1, \end{align*} $$

where $\mathcal {Z}_{t,\theta ,\lambda }$ is the normalizing constant (i.e., the partition function). We first present an explicit formula for the partition function $\mathcal {Z}_{t,\theta ,\lambda }$ .

Lemma 5.1 Let us consider $t,\theta>0$ and $\lambda \ge 1$ . Then,

$$ \begin{align*} \mathcal{Z}_{t,\theta,\lambda}= \begin{cases} \left(\dfrac{t^2}{\theta}\right)^{-1-\frac{t}{\theta}} \Gamma\left(1+\dfrac{t}{\theta}\right), & \lambda=1,\\ (t(\lambda-1))^{-\frac{t}{\theta}-1} B\left(\dfrac{t}{\theta(\lambda-1)}, 1+\dfrac{t}{\theta}\right), & \lambda>1. \end{cases} \end{align*} $$

Proof A simple computation leads to the desired result. Indeed, if $\lambda =1$ , then

$$ \begin{align*} {\mathcal{Z}}_{t,\theta,\lambda} &= \int_0^\infty x^{-(2+\frac{t}{\theta})} e^{-\frac{t^2}{\theta x}}\mathrm{d}x\\ &= \left(\frac{t^2}{\theta}\right)^{-1-\frac{t}{\theta}} \int_0^\infty u^{\frac{t}{\theta}}e^{-u}\mathrm{d}u \qquad \text{(by putting } u= t^2/(\theta x))\\ &=\left(\frac{t^2}{\theta}\right)^{-1-\frac{t}{\theta}} \Gamma \left(1+\frac{t}{\theta}\right). \end{align*} $$

If $\lambda>1$ , then

$$ \begin{align*} {\mathcal{Z}}_{t,\theta,\lambda} &= \int_0^\infty x^{-1 +\frac{t}{\theta(\lambda-1)}} \left(x +t(\lambda-1)\right)^{-1-\frac{t\lambda}{\theta(\lambda-1)}}\mathrm{d}x\\ &=\int_0^\infty u^{\frac{t}{\theta}} \left(1+t(\lambda-1)u\right)^{-1- \frac{t\lambda}{\theta(\lambda-1)}} \mathrm{d}u \qquad {(x=1/u)}\\ &=\left(t(\lambda-1)\right)^{-1-\frac{t}{\theta}} \int_0^\infty v^{\frac{t}{\theta}} (1+v)^{-1- \frac{t\lambda}{\theta(\lambda-1)}} \mathrm{d}v \qquad {(v=t(\lambda-1)u)}\\ &=\left(t(\lambda-1)\right)^{-1-\frac{t}{\theta}} B\left(1+\frac{t}{\theta}, \frac{t}{\theta(\lambda-1)}\right). \end{align*} $$

By the symmetry of the beta function, we obtain the desired result.
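The closed forms in Lemma 5.1 can also be checked numerically for sample parameter values. The following sketch is not part of the original article and assumes scipy:

```python
# Compare the partition function integrals with the closed forms of Lemma 5.1.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma, beta as Beta

t, theta = 1.3, 0.7

# lambda = 1
Z_num, _ = quad(lambda x: x**(-(2 + t/theta)) * np.exp(-t**2/(theta*x)), 0, np.inf)
Z_exact = (t**2/theta)**(-1 - t/theta) * Gamma(1 + t/theta)
print(Z_num, Z_exact)        # the two values agree

# lambda > 1
lam = 2.1
q = t/(theta*(lam - 1))
Z_num, _ = quad(lambda x: x**(q - 1) * (x + t*(lam - 1))**(-1 - t*lam/(theta*(lam - 1))), 0, np.inf)
Z_exact = (t*(lam - 1))**(-1 - t/theta) * Beta(q, 1 + t/theta)
print(Z_num, Z_exact)        # the two values agree
```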

By Lemma 5.1, the Gibbs measure $\rho _{t,\theta ,\lambda }$ has the following form:

(5.1) $$ \begin{align} \rho_{t,\theta,1} (\mathrm{d} x) &=\frac{(\frac{t^2}{\theta})^{1+\frac{t}{\theta}}}{\Gamma(1+\frac{t}{\theta})} x^{-(2+ \frac{t}{\theta})} e^{-\frac{t^2}{\theta x}} \mathrm{d}x, \end{align} $$
(5.2) $$ \begin{align} \rho_{t,\theta,\lambda}(\mathrm{d} x) &=\frac{(t(\lambda-1))^{\frac{t}{\theta}+1}}{B(\frac{t}{\theta(\lambda-1)},1+\frac{t}{\theta})} x^{-1 + \frac{t}{\theta(\lambda-1)}} \left(x+t(\lambda-1)\right)^{-1-\frac{t\lambda}{\theta(\lambda-1)}}\mathrm{d}x, \quad \lambda>1. \end{align} $$

From the representation (5.1), the measure $\rho _{t,\theta ,1}$ is an inverse gamma distribution. More precisely,

(5.3) $$ \begin{align} \rho_{t,\theta,1} = \left(\gamma_{1+\frac{t}{\theta},\frac{\theta}{t^2}}\right)^{\langle-1\rangle}, \end{align} $$

where $(\gamma _{a,b})^{\langle -1\rangle }$ is defined by

$$ \begin{align*}(\gamma_{a,b})^{\langle -1\rangle} = \frac{1}{b^a \Gamma(a)} x^{-a-1} e^{-\frac{1}{bx}} \mathrm{d} x, \qquad a,b>0. \end{align*} $$

It is known that the class $\{(\gamma _{a,b})^{\langle -1\rangle }:a,b>0\}$ includes the positive $1/2$ -classical stable law $\sqrt {\frac {c}{2\pi }} x^{-\frac {3}{2}} e^{-\frac {c}{2x}}\mathrm {d}x$ , $c>0$ (also called the Lévy distribution), but the measure $\rho _{t,\theta ,1}$ cannot be a positive $1/2$ -classical stable law for any $t,\theta>0$ , since its shape parameter $1+t/\theta$ is always larger than $1/2$ .

For $\lambda>1$ , the formula (5.2) implies that

(5.4) $$ \begin{align} \rho_{t,\theta,\lambda} &= D_{t(\lambda-1)}\left( \beta' \left(\frac{t}{\theta(\lambda-1)}, 1+\frac{t}{\theta} \right)\right) \end{align} $$
(5.5) $$ \begin{align} &=D_{t(\lambda-1)} \left( \gamma_{\frac{t}{\theta(\lambda-1)},1} \circledast (\gamma_{1+\frac{t}{\theta},1})^{\langle -1\rangle}\right), \end{align} $$

where $\beta '(a,b)$ is the beta prime distribution, that is,

$$ \begin{align*}\beta'(a,b) := \gamma_{a,1}\circledast (\gamma_{b,1})^{\langle-1\rangle} = \frac{1}{B(a,b)} x^{-1+a} (1+x)^{-a-b} \mathrm{d} x, \qquad a,b>0. \end{align*} $$

Equation (5.5) is obtained by replacing $\boxtimes $ , $\pi _{a,1}$ , and $\mu _{t,\theta ,\lambda }$ in equation (4.1) of Theorem 4.3 with $\circledast $ , $\gamma _{a,1}$ , and $\rho _{t,\theta ,\lambda }$ , respectively. Using $\gamma _{a,b} = D_b(\gamma _{a,1})$ for $a,b>0$ , and equations (5.3) and (5.5), we get

$$ \begin{align*} \rho_{t,\theta,\lambda} = \rho_{t,\theta,1} \circledast \gamma_{\frac{t}{\theta(\lambda-1)}, \frac{\theta(\lambda-1)}{t}}. \end{align*} $$

This formula can be interpreted as the classical counterpart of equation (4.2). In particular, if we put $\lambda =1+t/\theta $ , then

$$ \begin{align*}\rho_{t,\theta,1+t/\theta}=\rho_{t,\theta,1} \circledast \gamma_{1,1}. \end{align*} $$

The measure $\rho _{t,\theta , 1+t/\theta }$ belongs to the class ME of mixtures of exponential distributions, that is, distributions of the form $EZ$ , where $E$ and $Z$ are independent random variables such that $E$ is exponentially distributed and $Z\ge 0$ (see [Reference GoldieGol67, Reference SteutelSte67] for details). According to [Reference BondessonBon92, Reference Ismail and KelkerIK79], the inverse gamma distributions and the beta prime distributions are classically self-decomposable. Thus, for any $t,\theta>0$ and $\lambda \ge 1$ , the measure $\rho _{t,\theta ,\lambda }$ is self-decomposable.

Summarizing the discussion so far, we arrive at the following theorem.

Theorem 5.2 (Convolution formula for $\rho _{t,\theta ,\lambda }$ )

Let us consider $t,\theta>0$ , $\lambda> 1$ and ${q=\frac {t}{\theta (\lambda -1)}}$ . Then, we obtain

$$ \begin{align*} \rho_{t,\theta, \lambda} &= D_{t(\lambda-1)} \left( \gamma_{q,1} \circledast (\gamma_{1+\frac{t}{\theta},1})^{\langle -1\rangle}\right) =\rho_{t,\theta,1} \circledast \gamma_{q,q^{-1}}, \end{align*} $$

and $\rho _{t,\theta ,\lambda }$ is self-decomposable. In particular, $\rho _{t,\theta , 1+t/\theta }= \rho _{t,\theta ,1} \circledast \gamma _{1,1}$ , and hence it belongs to the ME.
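Since $\circledast $ is realized here as the law of a product of independent random variables, Theorem 5.2 is easy to illustrate by simulation. The following Monte Carlo sketch (not part of the original article, assuming numpy and scipy, with illustrative parameter values) multiplies an inverse gamma sample by an independent gamma sample and compares, for example, the empirical mean with the mean of the density (5.2):

```python
# Monte Carlo illustration of rho_{t,theta,lambda} = rho_{t,theta,1} \circledast gamma_{q, 1/q}.
import numpy as np
from scipy.integrate import quad
from scipy.special import beta as Beta

rng = np.random.default_rng(0)
t, theta, lam, n = 1.0, 0.5, 2.0, 10**6
q, c = t/(theta*(lam - 1)), t*(lam - 1)

X = 1.0 / rng.gamma(shape=1 + t/theta, scale=theta/t**2, size=n)   # rho_{t,theta,1}, an inverse gamma law
Y = rng.gamma(shape=q, scale=1/q, size=n)                          # gamma_{q, q^{-1}}
Z = X * Y                                                          # candidate sample from rho_{t,theta,lambda}

density = lambda x: c**(1 + t/theta)/Beta(q, 1 + t/theta) * x**(q - 1) * (x + c)**(-1 - t*lam/(theta*(lam - 1)))
mean_exact, _ = quad(lambda x: x*density(x), 0, np.inf)
print(Z.mean(), mean_exact)        # both are close to 1 for these parameters
```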

Remark 5.3 Let $p_{t,\theta , \lambda }$ be the density function of the measure $\rho _{t,\theta ,\lambda }$ . We obtain the following ordinary differential equations:

$$ \begin{align*} \frac{p_{t,\theta,1}'(x)}{p_{t,\theta,1}(x)}+ \frac{x-\frac{t^2}{2\theta+t}}{\frac{\theta}{2\theta+t}x^2}=0 \end{align*} $$

and

$$ \begin{align*} \frac{p_{t,\theta,\lambda}'(x)}{p_{t,\theta,\lambda}(x)} + \frac{(x+ \frac{t(\lambda-1)}{2}) - \frac{t^2(\lambda+1)}{2(2\theta+t)}}{\frac{\theta}{2\theta+t} (x+ \frac{t(\lambda-1)}{2})^2 - \frac{t^2\theta(\lambda-1)^2}{4(2\theta+t)}} = 0, \qquad \lambda>1. \end{align*} $$

Thus, we see that the family $\{\rho _{t,\theta ,\lambda }: t,\theta>0, \lambda \ge 1\} \subset {\mathcal {P}}(\mathbb {R}_{\ge 0})$ coincides with a subfamily of Pearson distributions (see [Reference PearsonP1895, p. 381]).

5.2 Equilibrium measure of free entropy

In this section, we investigate the maximizer of free entropy associated with the potential $V_{t,\theta ,\lambda }$ .

Theorem 5.4 (Maximizer of free entropy)

For each $t,\theta>0$ and $1 \le \lambda <1+t/\theta $ , the measure $\mu _{t,\theta ,\lambda }$ is a unique maximizer of

$$ \begin{align*}\Sigma_{V_{t,\theta,\lambda}}(\mu):= \iint_{\mathbb{R}_{>0}\times \mathbb{R}_{>0}} \log |x-y|\mu(\mathrm{d} x) \mu(\mathrm{d} y) -\int_{\mathbb{R}_{>0}} V_{t,\theta,\lambda}(x)\mu(\mathrm{d} x), \end{align*} $$

among all $\mu \in {\mathcal {P}}(\mathbb {R}_{>0})$ .

Proof Thanks to the theory of the energy problem in [Reference JohanssonJoh98], the existence and uniqueness of the maximizer $\mu _V$ of $\Sigma _{V_{t,\theta ,\lambda }}$ are guaranteed. It remains to show that $\mu _V=\mu _{t,\theta ,\lambda }$. Since $V_{t,\theta ,\lambda }$ is regular enough (see [Reference Féral, Donati-Martin, Émery, Rouault and StrickerFer08, Section 4.2] and [Reference FèralFer06, Section 2]), $\mu _V$ has a density $\Phi _V=\frac {\mathrm {d}\mu _V}{\mathrm {d} x}$ and is compactly supported on $\mathbb {R}_{>0}$; we denote its support by $[a,b]$ for some $0<a<b$. We divide the argument into two cases according to $\lambda$.

Case of $\boldsymbol {\lambda =1}$ : By definition of $V_{t,\theta ,1}$ , we can apply [Reference FèralFer06, Theorem 1] in the case of $\lambda =-1-t/\theta $ , $\alpha =0,$ and $\beta =t^2/\theta $ to the points $a,b$ and the density $\Phi _V$ . Then, $0<a<b$ satisfy

$$ \begin{align*}2+\frac{t}{\theta}-\frac{t^2}{\theta}\cdot \frac{a+b}{2ab}=0 \quad \text{and} \quad \sqrt{ab}=t. \end{align*} $$

Thus, we have

$$ \begin{align*} a= 2\theta+t -2\sqrt{\theta(\theta+t)} = \alpha^- \quad \text{and} \quad b=2\theta+t + 2\sqrt{\theta(\theta+t)} = \alpha^+, \end{align*} $$

where $\alpha ^\pm $ is defined in (3.2) for $\lambda =1$ . Moreover,

$$ \begin{align*} \Phi_V(x) &=\frac{1}{2\pi} \sqrt{(x-a)(b-x)} \cdot \frac{\beta}{\sqrt{ab}x^2} \mathbf{1}_{[a,b]}(x)\\ &= \frac{t\sqrt{(x-\alpha^-)(\alpha^+-x)} }{2\pi \theta x^2} \mathbf{1}_{[\alpha^-,\alpha^+]}(x) = \frac{\mathrm{d}\mu_{t,\theta,1}}{\mathrm{d} x}(x) , \end{align*} $$

as desired.

Case of $\boldsymbol {1<\lambda < 1+t/\theta }$ : By [Reference Saff and TotikST97, Theorem IV. 1.11], the points a and b satisfy the following singular integral equations:

(5.6) $$ \begin{align} \frac{1}{\pi} \int_a^b \frac{V_{t,\theta,\lambda}'(x)}{\sqrt{(b-x)(x-a)}} \mathrm{d} x=0 \quad \text{and} \quad \frac{1}{\pi} \int_a^b \frac{xV_{t,\theta,\lambda}'(x)}{\sqrt{(b-x)(x-a)}} \mathrm{d} x=2. \end{align} $$

By using the formulas

$$ \begin{align*}\int_a^b \frac{1}{x \sqrt{(b-x)(x-a)} }\mathrm{d} x = \frac{\pi}{\sqrt{ab}}\quad \text{and} \quad \int_a^b \frac{1}{\sqrt{(b-x)(x-a)} }\mathrm{d} x =\pi, \end{align*} $$

we obtain

$$ \begin{align*} \frac{1}{\pi} &\int_a^b \frac{V_{t,\theta,\lambda}'(x)}{\sqrt{(b-x)(x-a)}} \mathrm{d} x \\ &= \frac{1}{\pi} \int_a^b \frac{1-\frac{t}{\theta(\lambda-1)}}{x \sqrt{(b-x)(x-a)}}\mathrm{d} x + \frac{1}{\pi} \int_a^b \frac{1+\frac{t\lambda}{\theta(\lambda-1)}}{(x+t(\lambda-1) )\sqrt{(b-x)(x-a)}}\mathrm{d} x \\ &= \left(1-\frac{t}{\theta(\lambda-1)} \right) \frac{1}{\sqrt{ab}} + \left(1+\frac{t\lambda}{\theta(\lambda-1)}\right) \frac{1}{\sqrt{(a+t(\lambda-1))(b+t(\lambda-1))}}. \end{align*} $$

Moreover, we have

$$ \begin{align*} \frac{1}{\pi} & \int_a^b \frac{xV_{t,\theta,\lambda}'(x)}{\sqrt{(b-x)(x-a)}} \mathrm{d} x\\ &= \frac{1}{\pi} \int_a^b \frac{1-\frac{t}{\theta(\lambda-1)}}{\sqrt{(b-x)(x-a)}}\mathrm{d} x + \frac{1}{\pi} \int_a^b \frac{\left(1+\frac{t\lambda}{\theta(\lambda-1)}\right)x}{(x+t(\lambda-1) )\sqrt{(b-x)(x-a)}}\mathrm{d} x \\ &= \left(1-\frac{t}{\theta(\lambda-1)}\right) +\left(1+\frac{t\lambda}{\theta(\lambda-1)}\right)\left\{1- \frac{t(\lambda-1)}{\sqrt{(a+t(\lambda-1))(b+t(\lambda-1))}}\right\}\\ &=2+\frac{t}{\theta} -\left(1+\frac{t\lambda}{\theta(\lambda-1)}\right)\frac{t(\lambda-1)}{\sqrt{(a+t(\lambda-1))(b+t(\lambda-1))}}. \end{align*} $$

Therefore, the equations (5.6) imply that

(5.7) $$ \begin{align} \begin{cases} \left(1+\dfrac{t\lambda}{\theta(\lambda-1)}\right) \dfrac{1}{\sqrt{(a+t(\lambda-1))(b+t(\lambda-1))}} = \left(\dfrac{t}{\theta(\lambda-1)} -1\right) \dfrac{1}{\sqrt{ab}} \\ \left(1+\dfrac{t\lambda}{\theta(\lambda-1)}\right)\dfrac{1}{\sqrt{(a+t(\lambda-1))(b+t(\lambda-1))}} = \dfrac{1}{\theta(\lambda-1)}. \end{cases} \end{align} $$

By solving the above equations, we have

(5.8) $$ \begin{align} a+b = 2(\theta(\lambda+1)+t)\quad \text{and} \quad ab = (\theta(\lambda-1)-t)^2. \end{align} $$

This implies that $a$ and $b$ are the two solutions of $(z-\alpha ^+)(z-\alpha ^-)=0$, and hence $a= \alpha ^-$ and $b=\alpha ^+$.

By using the formula

$$ \begin{align*}\text{p.v}\left(\frac{1}{\pi} \int_a^b \frac{1}{u\sqrt{(u-a)(b-u)}} \frac{{\mathrm{d} u}}{u-x} \right) =-\frac{1}{\sqrt{ab}x}, \end{align*} $$

we get

$$ \begin{align*} \text{p.v.}& \left(\frac{1}{\pi}\int_{\alpha^-}^{\alpha^+} \frac{V_{t,\theta,\lambda}'(u)}{\sqrt{(u-\alpha^-)(\alpha^+-u)}} \frac{\mathrm{ d} u}{u-x}\right)\\ &=\left(1-\frac{t}{\theta(\lambda-1)}\right)\text{p.v.} \left(\frac{1}{\pi}\int_{\alpha^-}^{\alpha^+} \frac{1}{u\sqrt{(u-\alpha^-)(\alpha^+-u)}} \frac{\mathrm{d} u}{u-x}\right)\\ &\hspace{6mm}+\left(1+\frac{t\lambda}{\theta(\lambda-1)}\right) \text{p.v.} \left(\frac{1}{\pi}\int_{\alpha^-}^{\alpha^+} \frac{1}{(u+t(\lambda-1))\sqrt{(u-\alpha^-)(\alpha^+-u)}} \frac{\mathrm{d} u}{u-x}\right)\\ &=-\left(1-\frac{t}{\theta(\lambda-1)}\right)\frac{1}{\sqrt{\alpha^+\alpha^-} x}\\ &\hspace{6mm} -\left(1+\frac{t\lambda}{\theta(\lambda-1)}\right) \frac{1}{\sqrt{(\alpha^-+t(\lambda-1))(\alpha^++t(\lambda-1))}}\cdot \frac{1}{x+t(\lambda-1)}\\ &=\frac{1}{\theta(\lambda-1)x} -\frac{1}{\theta(\lambda-1)}\cdot\frac{1}{x+t(\lambda-1)} \qquad \text{(by }({5.7})\text{ and }({5.8}))\\ &=\frac{t}{\theta x (x+t(\lambda-1))}. \end{align*} $$

From [Reference Saff and TotikST97, Theorem IV. 3.1], we finally obtain

$$ \begin{align*} \Phi_V(x)&=\frac{1}{2\pi} \sqrt{(x-\alpha^-)(\alpha^+ -x)} \times \text{p.v.} \left( \frac{1}{\pi} \int_{\alpha^-}^{\alpha^+} \frac{V_{t,\theta,\lambda}'(u)}{\sqrt{(u-\alpha^-)(\alpha^+-u)}} \frac{\mathrm{d} u}{u-x}\right)\\ &= \frac{t\sqrt{(x-\alpha^-)(\alpha^+ -x)} }{2\pi \theta x (x+t(\lambda-1))} = \frac{\mathrm{d} \mu_{t,\theta,\lambda}}{\mathrm{d} x}(x), \end{align*} $$

as desired.
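As a numerical sanity check (not part of the article), one can verify that the endpoints determined by (5.8) satisfy the system (5.7) for sample parameters with $1<\lambda<1+t/\theta$:

```python
# Solve (5.8) for a and b, then evaluate both sides of (5.7).
import numpy as np

t, theta, lam = 2.0, 0.5, 3.0                              # note lam < 1 + t/theta = 5
c = t*(lam - 1)
s, p = 2*(theta*(lam + 1) + t), (theta*(lam - 1) - t)**2   # a + b and a*b from (5.8)
a, b = (s - np.sqrt(s**2 - 4*p))/2, (s + np.sqrt(s**2 - 4*p))/2

lhs = (1 + t*lam/(theta*(lam - 1))) / np.sqrt((a + c)*(b + c))
print(lhs, (t/(theta*(lam - 1)) - 1)/np.sqrt(a*b), 1/(theta*(lam - 1)))   # all three coincide
```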

Remark 5.5 From the proof of the above theorem, we observe that the class $\{\mu _{t,\theta ,1}:t,\theta>0\}$ forms a special subclass of the free GIG distributions (see [Reference FèralFer06, Reference Hasebe and SzpojankowskiHS19]).

Due to Theorem 5.4, the potential correspondence maps the measure $\rho _{t,\theta ,\lambda }$ to the measure $\mu _{t,\theta ,\lambda }$ for all $t,\theta>0$ and $1\le \lambda <1+t/\theta $ . In particular, the potential correspondence maps the beta prime distributions $\beta '(a,b)$ to the free ones $f\beta '(a,b)$ for all $a,b>1$ , due to Proposition 4.5 and (5.4). Consequently, we obtain the following result for free beta prime distributions.

Corollary 5.6 Let us consider $a,b>1$ . The measure $f\beta '(a,b)$ is the unique maximizer of the free entropy $\Sigma _{V_{a,b}}(\mu )$ among probability measures $\mu $ on $\mathbb {R}_{>0}$ , where

$$ \begin{align*}V_{a,b}(x) = (1-a) \log x + (a+b) \log(1+x), \qquad x>0. \end{align*} $$

6 Meixner-type free beta–gamma algebra

In classical probability, there are many algebraic relations between gamma and beta random variables, the so-called beta–gamma algebra (see, e.g., [Reference Ferreira and SimonFS23]). The purpose of this section is to study algebraic relations between free beta and free gamma random variables in the sense of Meixner-type.

According to Section 1, if $G_1^{(p)} \sim \eta _{p,1}$ and $G_2^{(q)}\sim \eta _{q,1}$ are free, then

$$ \begin{align*}G_1^{(p)} + G_2^{(q)} \overset{\mathrm{d}}{=} G_3^{(p+q)}, \qquad p,q>0 \end{align*} $$

for some $G_3^{(p+q)}\sim \eta _{p+q,1}$ . We call a positive operator $G\sim \eta _{p,1} (p>0)$ a Meixner-type free gamma random variable. Recall that $\eta _{p,1}=\mu _{p,1,1}$ for all $p>0$ .

First, we investigate the reversed measure of $\mu _{t,\theta ,\lambda }$ as follows. Since $\mu _{t,\theta ,\lambda }(\{0\})=0$ for $1\le \lambda \le 1+t/\theta $ , we can define the reversed measure $(\mu _{t,\theta ,\lambda })^{\langle -1\rangle }$ in this case. We have already obtained the measure $(\mu _{t,\theta ,1})^{\langle -1\rangle }$ by Lemma 4.2.

Lemma 6.1 For $t,\theta>0$ and $1< \lambda \le 1+\frac {t}{\theta }$ , we have

$$ \begin{align*}(\mu_{t,\theta,\lambda})^{\langle -1\rangle} =D_{(t(\lambda-1))^{-1}}( \mu_{t', \theta', \lambda'} ), \end{align*} $$

where

$$ \begin{align*} t' = \frac{(\theta+t)(\lambda-1)}{t-\theta(\lambda-1)}, \quad \theta' = \frac{\theta(\theta+t)(\lambda-1)^2}{ (t-\theta(\lambda-1))^2} \quad \text{and} \quad \lambda'=\frac{t\lambda}{(\theta+t)(\lambda-1)}\ge1. \end{align*} $$

Proof By Theorem 4.3 and Proposition 4.5, we obtain

$$ \begin{align*} (\mu_{t,\theta,\lambda})^{\langle-1\rangle} &= D_{(t(\lambda-1))^{-1}} \left( (\pi_{\frac{t}{\theta(\lambda-1)},1})^{\langle-1\rangle} \boxtimes \pi_{1+\frac{t}{\theta},1}\right) \\ &=D_{(t(\lambda-1))^{-1}}\left(f\beta' \left( 1+\frac{t}{\theta}, \frac{t}{\theta(\lambda-1)} \right)\right)=D_{(t(\lambda-1))^{-1}} (\mu_{t',\theta',\lambda'}). \end{align*} $$
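The parameter identification in the last step is Proposition 4.5 applied with $(a,b)=\left(1+\frac{t}{\theta},\, \frac{t}{\theta(\lambda-1)}\right)$; the following symbolic sketch (not from the article) confirms it:

```python
# Apply Proposition 4.5 to f beta'(1 + t/theta, q), q = t/(theta*(lam-1)), and compare
# with the parameters (t', theta', lambda') stated in Lemma 6.1.
import sympy as sp

t, th, lam = sp.symbols('t theta lam', positive=True)
a, b = 1 + t/th, t/(th*(lam - 1))

t_p, th_p, lam_p = a/(b - 1), a/(b - 1)**2, (a + b - 1)/a          # Proposition 4.5

assert sp.simplify(t_p   - (th + t)*(lam - 1)/(t - th*(lam - 1))) == 0
assert sp.simplify(th_p  - th*(th + t)*(lam - 1)**2/(t - th*(lam - 1))**2) == 0
assert sp.simplify(lam_p - t*lam/((th + t)*(lam - 1))) == 0
```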

The free infinite divisibility for the reversed measure of $\mu _{t,\theta ,\lambda }$ follows from Section 3 and Lemmas 4.2 and 6.1.

Corollary 6.2 Let us consider $t,\theta>0$ and $1\le \lambda \le 1+t/\theta $ . Then, the following properties hold:

  1. (1) $(\mu _{t,\theta ,\lambda })^{\langle -1\rangle }$ is freely infinitely divisible.

  2. (2) $(\mu _{t,\theta , \lambda })^{\langle -1 \rangle }$ is freely self-decomposable if and only if $\lambda =1+t/\theta $ .

  3. (3) $(\mu _{t,\theta ,\lambda })^{\langle -1\rangle }$ is unimodal.

We recall that, if $\Gamma _p \sim \gamma _{p,1}$ and $\Gamma _q \sim \gamma _{q,1}$ are classically independent, then $\frac {\Gamma _p}{\Gamma _q}$ is distributed as beta prime distribution $\beta '(p,q)= \frac {x^{p-1} (1+x)^{-p-q}}{B(p,q)} \mathrm {d} x$ for $p,q>0$ (see [Reference Balakrishnan, Johnson and KotzBJK95, Chapter 27]). By analogy, we investigate free beta-prime random variables in the sense of Meixner-type, that is, the product of $G_1^{(p)}$ and $(G_2^{(q)})^{-1}$ , where $G_1^{(p)}\sim \eta _{p,1}$ and $G_2^{(q)}\sim \eta _{q,1}$ are free.

Proposition 6.3 Given $p,q>0$ , we assume that $G_1^{(p)}\sim \eta _{p,1}$ and $G_2^{(q)}\sim \eta _{q,1}$ are free in a $C^\ast $ -probability space $(\mathcal {A},\varphi )$ . Then,

$$ \begin{align*}(G_2^{(q)})^{-\frac{1}{2}} G_1^{(p)} (G_2^{(q)})^{-\frac{1}{2}} \sim \eta_{p,1}\boxtimes (\eta_{q,1})^{\langle-1 \rangle} = D_{\frac{1+q}{q^2}}\left( \mu_{p,1,1+\frac{p}{1+q}}\right). \end{align*} $$

Proof Since $\sigma (G_2^{(q)}) = [2+q-2\sqrt {q+1}, 2+q+2\sqrt {q+1}]$ and $f(x)=1/x$ is a bounded continuous function on $\sigma (G_2^{(q)})$ , two random variables $G_1^{(p)}$ and $(G_2^{(q)})^{-1}=f(G_2^{(q)}) \in \mathcal {A}$ are also free by the Stone–Weierstrass theorem. We observe

$$ \begin{align*} \eta_{p,1}\boxtimes (\eta_{q,1})^{\langle-1 \rangle} &= \mu_{p,1,1} \boxtimes \pi_{1+q,q^{-2}} \qquad \text{(by Lemma }{4.2})\\ &=D_{\frac{1+q}{q^2}} \left(\mu_{p,1,1} \boxtimes \pi_{1+q,\frac{1}{1+q}} \right)\\ &=D_{\frac{1+q}{q^2}} \left(\mu_{p,1,1+\frac{p}{1+q} }\right) \qquad \text{(by Theorem }{4.3}).\\[-40pt] \end{align*} $$
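The distributional identity can also be confirmed on the level of S-transforms (a sketch, not from the article), using Lemma 4.1 for $\eta_{p,1}=\mu_{p,1,1}$, Lemma 4.2 for its reversed measure, and the dilation rule $S_{D_c\nu}(z)=c^{-1}S_\nu(z)$:

```python
# Check Proposition 6.3: S of eta_{p,1} boxtimes (eta_{q,1})^{<-1>} equals S of the dilated measure.
import sympy as sp

z, p, q = sp.symbols('z p q', positive=True)

S_eta_p     = (p - z)/p**2                    # S of eta_{p,1} = mu_{p,1,1} (Lemma 4.1)
S_eta_q_inv = q**2/(1 + q + z)                # S of (eta_{q,1})^{<-1>} = pi_{1+q, q^{-2}} (Lemma 4.2)

lam = 1 + p/(1 + q)
S_mu  = (p - z)/(p*(p + (lam - 1)*z))         # S of mu_{p,1,lam} (Lemma 4.1 with t=p, theta=1)
S_rhs = (q**2/(1 + q)) * S_mu                 # dilation by (1+q)/q^2 divides the S-transform

assert sp.simplify(S_eta_p*S_eta_q_inv - S_rhs) == 0
```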

Finally, we investigate the sum of freely independent and identically distributed reciprocal Meixner-type free gamma random variables.

Theorem 6.4 Given $p>0$ and $n\in \mathbb {N}$ , let us consider freely independent random variables $G_1^{(p)},G_2^{(p)}, \dots , G_{2^n}^{(p)} \sim \eta _{p,1}$ in some $C^*$ -probability space $(\mathcal {A},\varphi )$ . Then,

$$ \begin{align*}\left(\frac{1}{G_1^{(p)}} + \frac{1}{G_2^{(p)}}+\cdots + \frac{1}{G_{2^n}^{(p)}} \right)^{-1} \overset{\mathrm{d}}{=} \left(2^n+\frac{2^n-1}{p}\right)^{-2}G^{(2^np+2^n-1)}, \end{align*} $$

for some $G^{(2^np+2^n-1)} \sim \eta _{2^np+2^n-1,1}$ .

Proof By an argument similar to that in Proposition 6.3, $(G_1^{(p)})^{-1}, (G_2^{(p)})^{-1}, \dots , (G_{2^n}^{(p)})^{-1}\in \mathcal {A}$ are also free. Denote by $\tau _p:=(\eta _{p,1})^{\langle -1\rangle } = \pi _{1+p,p^{-2}}$ for $p>0$ . Then, the distribution of the LHS coincides with $(\tau _p^{\boxplus 2^n})^{\langle -1\rangle }$ . Finally, we should prove that

(6.1) $$ \begin{align} \tau_p^{\boxplus 2^n} = D_{(2^n + \frac{2^n-1}{p})^2} (\tau_{2^np+2^n-1}). \end{align} $$

For $n=1$ , we get

$$ \begin{align*}\tau_p^{\boxplus 2} = \pi_{1+p,p^{-2}}^{\boxplus 2}=D_{p^{-2}}( \pi_{2+2p,1}) = D_{(1+2p)^2p^{-2}} (\pi_{1+(1+2p),(1+2p)^{-2}})= D_{(2+p^{-1})^2} (\tau_{1+2p}). \end{align*} $$

We assume that (6.1) holds true in the case when $n=k$ . Then,

$$ \begin{align*} \tau_p^{\boxplus 2^{k+1}} &= (\tau_p^{\boxplus 2^k})^{\boxplus 2} = D_{(2^k + \frac{2^k-1}{p})^2} (\tau_{2^kp+2^k-1})^{\boxplus 2}\\ &=D_{(2^k + \frac{2^k-1}{p})^2} D_{(2+ \frac{1}{2^k p +2^k-1})^2} (\tau_{1+2(2^kp +2^k-1)})\\ &=D_{(2^{k+1} + \frac{2^{k+1}-1}{p})^2} (\tau_{2^{k+1}p + 2^{k+1}-1}). \end{align*} $$

By induction, we obtain the desired formula (6.1).
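The only computation in the induction step is an identity between dilation constants, which admits a one-line symbolic check (not from the article), writing $m$ for $2^k$:

```python
# Verify (m + (m-1)/p)(2 + 1/(m p + m - 1)) = 2m + (2m - 1)/p, with m standing for 2^k.
import sympy as sp

p, m = sp.symbols('p m', positive=True)
lhs = (m + (m - 1)/p) * (2 + 1/(m*p + m - 1))
rhs = 2*m + (2*m - 1)/p
assert sp.simplify(lhs - rhs) == 0
```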

Corollary 6.5 Given $m \ge 2$ , we consider free copies $\{ G_1^{((2(m-1))^{-1})}, G_2^{((2(m-1))^{-1})}\}$ from $\eta _{(2(m-1))^{-1},1}$ and free copies $\{G_1^{((m-1)^{-1})}, \dots , G_m^{((m-1)^{-1})}\}$ from $\eta _{(m-1)^{-1},1}$ . Then,

$$ \begin{align*}\left( \frac{1}{G_1^{((2(m-1))^{-1})}} + \frac{1}{G_2^{((2(m-1))^{-1})}} \right)^{-1} \overset{\mathrm{d}}{=} \frac{1}{4m^2} (G_1^{((m-1)^{-1})}+ \cdots + G_m^{((m-1)^{-1})}). \end{align*} $$

Proof By putting $p=\frac {1}{2(m-1)}$ and $n=1$ in Theorem 6.4, we get

$$ \begin{align*} \left( \frac{1}{G_1^{(p)}} + \frac{1}{G_2^{(p)}} \right)^{-1} &\sim D_{\frac{p^2}{(2p+1)^2}} \eta_{2p+1,1}\\ &= D_{\frac{1}{4}\cdot \frac{(2p)^2}{(2p+1)^2}} \eta_{2p,1}^{\boxplus \frac{2p+1}{2p}} =D_{\frac{1}{4m^2}} \eta_{\frac{1}{m-1}, 1}^{\boxplus m}, \end{align*} $$

as desired.

In classical probability theory, it is known that, if $\Gamma _p \sim \gamma _{p,1}$ and $\Gamma _q \sim \gamma _{q,1}$ are independent, then the random variable

$$ \begin{align*}\left(1+\frac{\Gamma_p}{\Gamma_q} \right)^{-1} = \frac{\Gamma_p^{-1}}{\Gamma_p^{-1}+\Gamma_q^{-1}} \end{align*} $$

is distributed as a beta distribution $\beta (q,p)$ for $p,q>0$ . Below, we study free beta random variables in the sense of Meixner-type.

Theorem 6.6 Given $p>0$ , let $G_1^{(p)}, G_2^{(p)} \sim \eta _{p,1}$ be free random variables in a $C^\ast $ -probability space $(\mathcal {A},\varphi )$ . Define

$$ \begin{align*}B^{(p)} := \{(G_1^{(p)})^{-1} + (G_2^{(p)})^{-1} \}^{-\frac{1}{2}} (G_1^{(p)})^{-1} \{(G_1^{(p)})^{-1} + (G_2^{(p)})^{-1} \}^{-\frac{1}{2}} \in \mathcal{A}, \end{align*} $$

and $\mu _p := \mathcal {L}(B^{(p)})$ . Then, the following assertions hold:

  1. (1) Its S-transform is given by

    $$ \begin{align*}S_{\mu_p}(z)=\frac{p^4}{(1+p+z)(2p+1-z)} \end{align*} $$
    for z in a neighborhood of $(-1,0)$ .
  2. (2) Its R-transform is given by

    $$ \begin{align*}R_{\mu_p}(z)= \frac{p(z-p^3)-\sqrt{(3p+2)^2z^2-2p^5z+p^8}}{2z}, \qquad z\in \left(-\frac{p^3}{2(p+1)}, 0\right). \end{align*} $$
  3. (3) The measure $\mu _p$ is not freely infinitely divisible for any $p>0$ .

Proof The existence of the measure $\mu _p$ follows from the Riesz–Markov–Kakutani theorem. Note that $(G_1^{(p)})^{-1},(G_2^{(p)})^{-1} \sim \pi _{1+p,p^{-2}}$ are free in $(\mathcal {A},\varphi )$ . Hence, $(G_1^{(p)})^{-1}+(G_2^{(p)})^{-1}$ and $B^{(p)}$ are also free in $(\mathcal {A},\varphi )$ by the free Lukacs property (see [Reference SzpojankowskiSzp15]). Since

$$ \begin{align*}(G_1^{(p)})^{-1} = \{(G_1^{(p)})^{-1}+(G_2^{(p)})^{-1}\}^{\frac{1}{2}} B^{(p)} \{(G_1^{(p)})^{-1}+(G_2^{(p)})^{-1}\}^{\frac{1}{2}}, \end{align*} $$

we have

$$ \begin{align*}\pi_{1+p,p^{-2}} = \mu_p \boxtimes D_{\frac{p^2}{(2p+1)^2}} \eta_{2p+1,1} \end{align*} $$

by Theorem 6.4. Therefore, we get

$$ \begin{align*} S_{\mu_p}(z) &= \frac{S_{\pi_{1+p,p^{-2}}}(z)}{S_{D_{p^2(2p+1)^{-2}} (\eta_{2p+1,1})}(z)}\\ &=\frac{p^2}{1+p+z} \cdot \frac{p^2}{2p+1-z}=\frac{p^4}{(1+p+z)(2p+1-z)}. \end{align*} $$

Next, since $z\mapsto z S_{\mu _p}(z)$ is strictly increasing on $(-1,0)$ for any $p>0$ , we have $R_{\mu _p}^{\langle -1\rangle }(z)= z S_{\mu _p}(z)$ . Hence, we can compute the R-transform of $\mu _p$ by using the relation.

The complex equation $(3p+2)^2z^2 -2p^5 z+p^8=0$ has distinct roots

$$ \begin{align*}p^4 \cdot \frac{p-2 \sqrt{(p+1)(2p+1)} \ i}{(3p+2)^2} \quad \text{and} \quad p^4 \cdot \frac{p+2 \sqrt{(p+1)(2p+1)}\ i}{(3p+2)^2}, \end{align*} $$

where $i=\sqrt {-1}$ . This means that $\sqrt {(3p+2)^2z^2 -2p^5 z+p^8}$ has a branch point in $\mathbb {C}^-$ . Hence, $R_{\mu _p}$ does not admit an analytic continuation to $\mathbb {C}^-$ , and therefore the measure $\mu _p$ is not freely infinitely divisible.

7 Asymptotic roots of polynomials related to free gamma distributions

In this section, we briefly discuss the connection between orthogonal (Jacobi/Bessel) polynomials and the measure $\mu _{t,\theta ,\lambda }$ via finite free probability, as developed in [Reference MarcusMar21, Reference Marcus, Spielman and SrivastavaMSS22]. We begin by introducing the notation that will be used throughout this section.

  • For a polynomial p of degree d, we denote by $\widetilde {e}_k^{(d)}(p)$ the normalized k-th elementary symmetric polynomial in the d roots $\lambda _1(p),\dots , \lambda _d(p)$ of p. Explicitly,

    $$ \begin{align*}\widetilde{e}_k^{(d)}(p) : = \binom{d}{k}^{-1} \sum_{1\le i_1< \cdots < i_k \le d} \lambda_{i_1}(p)\dots \lambda_{i_k}(p), \quad k=1,\dots, d. \end{align*} $$

    Then, we can represent any polynomials p of degree d as

    $$ \begin{align*}p(x) = \prod_{i=1}^d (x-\lambda_i(p))=\sum_{k=0}^d (-1)^k \binom{d}{k} \widetilde{e}_k^{(d)}(p) x^{d-k}. \end{align*} $$
  • (Dilation) For $c\neq 0$ and a polynomial p of degree d, we define

    $$ \begin{align*}(D_c(p))(x):= c^d p\left(\frac{x}{c}\right). \end{align*} $$

    Then, one can see that $\widetilde {e}_k^{(d)}(D_c(p))= c^k\ \widetilde {e}_k^{(d)}(p)$ for $k=1,\dots , d$ .

  • (Empirical root measure) For a polynomial p of degree d, we define the probability measure

    $$ \begin{align*}\mathfrak{m}[[ p]] :=\frac{1}{d}\sum_{p(x)=0} \delta_x. \end{align*} $$

    The measure is called the empirical root measure of p. If p is real-rooted (resp., positive real-rooted), then $\mathfrak {m}[[p]] \in {\mathcal {P}}(\mathbb {R})$ (resp., $\mathfrak {m}[[p]] \in {\mathcal {P}}(\mathbb {R}_{>0})$ ).

7.1 Jacobi polynomial

For $a \in \mathbb {R}\setminus \{i/d: i=1,\dots , d-1\}$ and $b\in \mathbb {R}$ , we denote by $J^{(a,b)}_d$ the monic polynomial of degree d whose coefficients are given by

$$ \begin{align*}\widetilde{e}_k^{(d)}\left(J^{(a,b)}_d \right) := \frac{(bd)_k}{(ad)_k} \qquad \text{for} \ k=0, 1,\dots,d, \end{align*} $$

where $(x)_n:=x(x-1)\dots (x-n+1)$ and $(x)_0:=1$ . The polynomials $J^{(a,b)}_d$ are well-known as the Jacobi polynomials. According to [Reference Martínez-Finkelshtein, Morales and PeralesMFMP24b, (80)], the polynomial $J^{(a,b)}_d$ can be represented by a hypergeometric function as follows:

$$ \begin{align*}J_d^{(a,b)}(x)=(-1)^d\frac{ (bd)_d}{(ad)_d} {}_2F_1 (-d, ad-d+1; bd-d+1; x). \end{align*} $$

It is known that the polynomial is orthogonal with respect to the weight function

$$ \begin{align*}W_{d}^{(a,b)}(x):=x^{d(b-1)}(1-x)^{d(a-b-1)} \end{align*} $$

when $-bd+d-1 \notin \mathbb {Z}_{\ge 0}$ (see, e.g., [Reference Dominici, Johnston and JordaanDJJ13, Theorem 1]). Recently, the class of hypergeometric polynomials (including $J^{(a,b)}_d$ ) was studied in the framework of finite free probability theory (see, e.g., [Reference Martínez-Finkelshtein, Morales and PeralesMFMP24a, Reference Martínez-Finkelshtein, Morales and PeralesMFMP24b, Reference Arizmendi, Fujie, Perales and UedaAFPU24]).

In particular, we define

$$ \begin{align*}\widehat{J}^{(a,b)}_d(x):= J^{(a,b)}_d(-x). \end{align*} $$

By [Reference Martínez-Finkelshtein, Morales and PeralesMFMP24b, Section 5.2] and [Reference Dominici, Johnston and JordaanDJJ13, Proposition 4 and Theorem 5], the polynomial $\widehat {J}^{(a,b)}_d$ has $d$ distinct roots, all of which are nonnegative, when $a<0$ and $b>1$ . Furthermore, we define the monic real-rooted polynomial of degree d by

$$ \begin{align*}p_d^{(t,\theta,\lambda)}:= D_{t(\lambda-1)} (\widehat{J}^{\ (A, B) }_d), \qquad \text{where} \quad A=-\frac{t}{\theta} \quad \text{and} \quad B=\frac{t}{\theta(\lambda-1)} +\frac{1}{d}, \end{align*} $$

for $t,\theta>0$ and $1<\lambda \le 1+t/\theta $ . In this case, it is easy to check that $A<0$ and $B>1$ , and therefore $p_d^{(t,\theta ,\lambda )}$ also has $d$ distinct roots, all of which are nonnegative. Moreover, $p_d^{(t,\theta ,\lambda )}$ is orthogonal with respect to the weight function

(7.1) $$ \begin{align} W_d^{(A,B)}\left(-\frac{x}{t(\lambda-1)}\right) \propto x^{d(B-1)}(x+t(\lambda-1))^{d(A-B-1)}. \end{align} $$

According to recent work [Reference Arizmendi, Fujie, Perales and UedaAFPU24], the ratio of consecutive coefficients of a polynomial plays the role of the S-transform in the framework of finite free probability. In what follows, we apply the results of [Reference Arizmendi, Fujie, Perales and UedaAFPU24] to the sequence of polynomials $(p_d^{(t,\theta ,\lambda )})_{d\in \mathbb {N}}$ . A direct computation shows that

$$ \begin{align*}\frac{\widetilde{e}_{k-1}^{(d)}(p_d^{(t,\theta,\lambda)})}{\widetilde{e}_{k}^{(d)}(p_d^{(t,\theta,\lambda)})} = \frac{1}{t(\lambda-1)} \frac{\frac{t}{\theta}+\frac{k-1}{d}}{\frac{t}{\theta(\lambda-1)}-\frac{k-2}{d}}. \end{align*} $$

As $d\to \infty $ with $k/d\to z\in (0, 1)$ , the limiting value of the above consecutive coefficient ratio can be computed as follows:

$$ \begin{align*} \frac{\widetilde{e}_{k-1}^{(d)}(p_d^{(t,\theta,\lambda)})}{\widetilde{e}_{k}^{(d)}(p_d^{(t,\theta,\lambda)})} &\to \frac{1}{t(\lambda-1)} \frac{\frac{t}{\theta}+z}{\frac{t}{\theta(\lambda-1)}-z} \\ &= \frac{1}{t(\lambda-1)} \frac{(1+\frac{t}{\theta}) -1 - (-z)}{\frac{t}{\theta(\lambda-1)} +(-z)} \\ &= S_{D_{t(\lambda-1)} (f\beta'(\frac{t}{\theta(\lambda-1)}, 1+\frac{t}{\theta}))}(-z) \qquad \text{(by }({2.2})\text{ and }({4.4}))\\ &= S_{\mu_{t,\theta,\lambda}}(-z) \qquad \text{(by }{4.3}). \end{align*} $$

By [Reference Arizmendi, Fujie, Perales and UedaAFPU24, Theorem 1.1], the limiting behavior of the empirical root measure of $ p_d^{(t,\theta ,\lambda )}$ can be described as follows.

Proposition 7.1 For $t,\theta>0$ and $1<\lambda \le 1+t/\theta $ , we have

$$ \begin{align*}\mathfrak{m}[[p_d^{(t,\theta,\lambda)}]] \xrightarrow{w} \mu_{t,\theta,\lambda} \qquad \text{as} \qquad d\to\infty. \end{align*} $$
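Proposition 7.1 can be illustrated numerically. The sketch below (not part of the original article) builds $p_d^{(t,\theta,\lambda)}$ from its normalized elementary symmetric polynomials, computes its roots, and compares their mean and variance with the mean $t$ and variance $\theta t$ of $\mu_{t,\theta,\lambda}$ (cf. Example 3.6); the agreement improves as $d$ grows.

```python
# Empirical root measure of p_d^{(t,theta,lambda)} versus the first two moments of mu_{t,theta,lambda}.
import numpy as np
from math import comb

t, theta, lam, d = 1.0, 0.5, 2.0, 50
A, B = -t/theta, t/(theta*(lam - 1)) + 1/d

def falling(x, n):
    out = 1.0
    for j in range(n):
        out *= x - j
    return out

# normalized k-th elementary symmetric functions in the d roots of p_d^{(t,theta,lam)}
e = [(-t*(lam - 1))**k * falling(B*d, k) / falling(A*d, k) for k in range(d + 1)]
coeffs = [(-1)**k * comb(d, k) * e[k] for k in range(d + 1)]   # coefficient of x^{d-k}
roots = np.roots(coeffs).real

print(roots.mean(), t)          # close to t
print(roots.var(), theta*t)     # close to theta*t
```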

7.2 Bessel polynomial

For $a\in \mathbb {R} \setminus \{i/d: i=1, \dots , d-1\}$ , we define $B^{(a)}_d$ as the monic polynomial of degree d, whose coefficients are given by

$$ \begin{align*}\widetilde{e}_k^{(d)}\left(B^{(a)}_d \right) = \frac{d^k}{(a d)_k} \qquad \text{for} \ k=0,1,\dots, d. \end{align*} $$

The polynomials $B^{(a)}_d$ are well-known as the Bessel polynomials. According to [Reference Martínez-Finkelshtein, Morales and PeralesMFMP24b, Equation (80)], the polynomial $B^{(a)}_d$ can be represented by a hypergeometric polynomial as follows:

$$ \begin{align*}B^{(a)}_d(x) = \frac{(-1)^d}{(ad)_d} {}_2 F_0 (-d, a d-d+1; - \; x). \end{align*} $$

We define

$$ \begin{align*}\widehat{B}_d^{(a)}(x):= B_d^{(a)}(-x). \end{align*} $$

According to [Reference Martínez-Finkelshtein, Morales and PeralesMFMP24b], $\widehat {B}_d^{(a)}$ has $d$ distinct roots, all of which are nonnegative, whenever $a<0$ . Furthermore, we define the monic real-rooted polynomial of degree d by

$$ \begin{align*}p_d^{(t,\theta,1)} : = D_{t^2/\theta}\left(\widehat{B}_d^{(-t/\theta)}\right), \qquad t,\theta>0. \end{align*} $$

Since $-t/\theta <0$ , the polynomial $p_d^{(t,\theta ,1)}$ also has $d$ distinct roots, all of which are nonnegative. Then, we obtain

$$ \begin{align*} \frac{\widetilde{e}_{k-1}^{(d)}(p_d^{(t,\theta,1)})}{\widetilde{e}_{k}^{(d)}(p_d^{(t,\theta,1)})} = \frac{1}{t^2}\left(t +\theta\cdot \frac{k-1}{d}\right) \to \frac{1}{t^2} (t+\theta z) = S_{\mu_{t,\theta,1}}(-z) \end{align*} $$

as $k/d\to z \in (0,1)$ . According to [Reference Arizmendi, Fujie, Perales and UedaAFPU24], we obtain the following result.

Proposition 7.2 For $t,\theta>0$ , we get

$$ \begin{align*}\mathfrak{m}[[p_d^{(t,\theta,1)}]] \xrightarrow{w} \mu_{t,\theta,1} \qquad \text{as} \qquad d\to \infty. \end{align*} $$

7.3 Finite free version of convolution formula

In this section, we construct the finite analog of Theorem 4.3. To begin, we introduce a finite version of the free multiplicative convolution. For monic polynomials $p,q$ of degree d, their finite free multiplicative convolution $p\boxtimes _d q$ is defined as the monic polynomial whose coefficients satisfy

$$ \begin{align*}\widetilde{e}_k^{(d)} (p\boxtimes_d q) := \widetilde{e}_k^{(d)}(p)\widetilde{e}_k^{(d)}(q), \qquad k=1,\dots, d. \end{align*} $$

It is known that if $p$ and $q$ have only nonnegative real roots, then so does $p\boxtimes _d q$ . For the relationship between $\boxtimes $ and $\boxtimes _d$ , see [Reference Arizmendi, Garza-Vargas and PeralesAGP23, Reference Marcus, Spielman and SrivastavaMSS22].

Given $\lambda>0$ and $\theta>0$ , define the Laguerre polynomial $L_d^{(\lambda ,\theta )}$ to be the monic polynomial whose coefficients are

$$ \begin{align*}\widetilde{e}_k^{(d)}\left(L^{(\lambda,\theta)}_d \right) := \theta^k \frac{(\lambda d)_k}{d^k}, \qquad k=1,\dots, d. \end{align*} $$

The polynomial $L_d^{(\lambda ,\theta )}$ has d distinct, strictly positive roots whenever $\lambda>0$ . Moreover, it is known that $\mathfrak {m}[[L_d^{(\lambda ,\theta )}]]\xrightarrow {w} \pi _{\lambda ,\theta }$ as $d\to \infty $ .

Combining the above observations, we arrive at the following formula.

Proposition 7.3 For $t, \theta>0$ , $\lambda>1,$ and $d\in \mathbb {N}$ , we have

$$ \begin{align*}p_d^{(t,\theta,\lambda)} = p_d^{(t,\theta,1)} \boxtimes_d L_d^{(q+ 1/d,\ q^{-1})}, \end{align*} $$

where $q= \frac {t}{\theta (\lambda -1)}$ .

Proof We compare the k-th coefficients of $p_d^{(t,\theta ,\lambda )}$ and $ p_d^{(t,\theta ,1)} \boxtimes _d L_d^{(q+ 1/d,\ q^{-1})}$ . First, we have

$$ \begin{align*} \widetilde{e}_k^{(d)}(p_d^{(t,\theta,\lambda)}) &= t^k (\lambda-1)^k (-1)^{d-k}\ \widetilde{e}_k^{(d)} (J_d^{(-t/\theta,\ q+ 1/d)})\\ &= t^k (\lambda-1)^k (-1)^{d-k}\ \frac{(qd+1)_k}{(-\frac{t}{\theta}d)_k}. \end{align*} $$

On the other hand, we get

$$ \begin{align*} \widetilde{e}_k^{(d)}( p_d^{(t,\theta,1)} \boxtimes_d L_d^{(q+ 1/d,\ q^{-1})}) &=\widetilde{e}_k^{(d)}(p_d^{(t,\theta,1)})\widetilde{e}_k^{(d)}(L_d^{(q+ 1/d,\ q^{-1})})\\ &= \frac{t^{2k}}{\theta^k} (-1)^{d-k} \frac{d^k}{(-\frac{t}{\theta}d)_k} \times \frac{\theta^k(\lambda-1)^k}{t^k} \frac{(qd+1)_k}{d^k}\\ &= t^k (\lambda-1)^k (-1)^{d-k}\ \frac{(qd+1)_k}{(-\frac{t}{\theta}d)_k}, \end{align*} $$

as desired.

7.4 Comments

These results (Propositions 7.1 and 7.2) are not essentially new since Martínez-Finkelshtein et al. [Reference Martínez-Finkelshtein, Morales and PeralesMFMP24a, Reference Martínez-Finkelshtein, Morales and PeralesMFMP24b] and Arizmendi et al. [Reference Arizmendi, Fujie, Perales and UedaAFPU24] have already studied the relationship between the asymptotic behavior of the empirical root measures of Jacobi or Bessel polynomials and free probability. Nevertheless, we would like to emphasize that our analysis provides deeper insights into the behavior of the roots of Jacobi or Bessel polynomials, thanks to the understanding gained from the distributional properties of the Meixner-type free gamma distribution and its connection with free entropy. For instance, the weight function (7.1) of $p_d^{(t,\theta ,\lambda )}$ belongs to the Pearson class, which may be connected to the Gibbs measure $\rho _{t,\theta ,\lambda }$ associated with the potential $V_{t,\theta ,\lambda }$ discussed in Section 5.

Acknowledgements

The authors would like to thank Takahiro Hasebe (Hokkaido University) for fruitful discussions in relation to this project. The authors wish to express their sincere gratitude to the anonymous referee for carefully reading the manuscript and providing numerous valuable comments and suggestions, which have substantially improved the article. In particular, we are grateful for the referee’s observations regarding the relation between the measures $\mu _{t,\theta ,\lambda }$ and the centered free Meixner distributions (Proposition 3.2), its connection with Pearson distributions (Remark 5.3), and the orthogonality of $p_d^{(t,\theta ,\lambda )}$ , which have given us deeper insights than originally anticipated. We would like to express our heartfelt thanks once again.

Footnotes

N.S. was supported by JSPS Grant-in-Aid for Scientific Research (C) Grant No. 23K03133 and JSPS Open Partnership Joint Research Projects grant no. JPJSBP120249936. Y.U. was supported by JSPS Grant-in-Aid for Young Scientists Grant No. 22K13925.

1 In 2008, Pérez-Abreu and Sakuma [Reference Pérez-Abreu and SakumaPAS08] introduced another type of free gamma distribution via the Bercovici–Pata bijection, which maps the class of infinitely divisible distributions to their free counterparts. Therefore, we distinguish between two types of free gamma distributions by name.

2 Our R-transform differs from that given in [Reference AnshelevichAns03, p. 238] by a multiplicative factor of z.

3 See, e.g., [Reference Nica and SpeicherNS06] and the references therein for background on free independence.

4 A (Borel-) measure $\nu $ on $\mathbb {R}$ is called a Lévy measure, if $\nu (\{0\})=0$ and $\int _{\mathbb {R}}\min \{1,x^2\}\,\nu (\mathrm {d} x)<\infty $ .

5 To the best of the authors’ knowledge, the exact name for this correspondence is unknown, and little is known about its mathematical properties.

6 See [Reference Barndorff-Nielsen and ThorbjørnsenBNT02] for definition of free Lévy processes.

References

Anshelevich, M., Free martingale polynomials. J. Funct. Anal. 201(2003), no. 1, 228–261. https://doi.org/10.1016/S0022-1236(03)00061-2
Arizmendi, O., Fujie, K., Perales, D., and Ueda, Y., S-transform in finite free probability. Preprint, 2024. arXiv:2408.09337.
Arizmendi, O., Garza-Vargas, J., and Perales, D., Finite free cumulants: Multiplicative convolutions, genus expansion and infinitesimal distributions. Trans. Am. Math. Soc. 376(2023), no. 6, 4383–4420.
Arizmendi, O. and Hasebe, T., Classical and free infinite divisibility for Boolean stable laws. Proc. Am. Math. Soc. 142(2014), no. 5, 1621–1632. https://doi.org/10.1090/S0002-9939-2014-12111-3
Balakrishnan, N., Johnson, N. L., and Kotz, S., Continuous univariate distributions. Vol. 2, Wiley & Sons, Hoboken, 1995.
Barndorff-Nielsen, O. E. and Thorbjørnsen, S., Self-decomposability and Lévy processes in free probability. Bernoulli 8(2002), no. 3, 323–366.
Barndorff-Nielsen, O. E. and Thorbjørnsen, S., Classical and free infinite divisibility and Lévy processes. In: Quantum independent increment processes II, Lecture Notes in Mathematics, 1866, Springer, Berlin, 2006, pp. 33–159. https://doi.org/10.1007/11376637_2
Belinschi, S. and Bercovici, H., Atoms and regularity for measures in a partially defined free convolution semigroup. Math. Z. 248(2004), no. 4, 665–674. https://doi.org/10.1007/s00209-004-0671-y
Belinschi, S. T., Bożejko, M., Lehner, F., and Speicher, R., The normal distribution is $\boxplus$-infinitely divisible. Adv. Math. 226(2011), no. 4, 3677–3698. https://doi.org/10.1016/j.aim.2010.10.025
Bercovici, H. and Pata, V., Stable laws and domains of attraction in free probability theory. Ann. Math. 149(1999), no. 3, 1023–1060. With an appendix by Philippe Biane. https://doi.org/10.2307/121080
Bercovici, H. and Voiculescu, D., Lévy-Hincin type theorems for multiplicative and additive free convolution. Pac. J. Math. 153(1992), no. 2, 217–248.
Bercovici, H. and Voiculescu, D., Free convolution of measures with unbounded support. Indiana Univ. Math. J. 42(1993), no. 3, 733–773. https://doi.org/10.1512/iumj.1993.42.42033
Biane, P., Logarithmic Sobolev inequalities, matrix models and free entropy. Acta Math. Sin. 19(2003), no. 3, 497–506. https://doi.org/10.1007/s10114-003-0271-5
Bondesson, L., Generalized gamma convolutions and related classes of distributions and densities, Lecture Notes in Statistics, 76, Springer, 1992. https://doi.org/10.1007/978-1-4612-2948-3
Bożejko, M. and Bryc, W., On a class of free Lévy laws related to a regression problem. J. Funct. Anal. 236(2006), no. 1, 59–77. https://doi.org/10.1016/j.jfa.2005.09.010
Dominici, D., Johnston, S. J., and Jordaan, K., Real zeros of ${}_2{F}_1$ hypergeometric polynomials. J. Comput. Appl. Math. 247(2013), 152–161.
Dykema, K., Multilinear function series and transforms in free probability theory. Adv. Math. 208(2007), no. 1, 351–407. https://doi.org/10.1016/j.aim.2006.02.011
Fèral, D., The limiting spectral measure of the generalised inverse Gaussian random matrix model. C. R. Acad. Sci. Paris, Ser. I 342(2006), 519–522. https://doi.org/10.1016/j.crma.2006.01.017
Féral, D., On large deviations for the spectral measure of discrete Coulomb gas. In: Donati-Martin, C., Émery, M., Rouault, A., and Stricker, C. (eds.), Séminaire de Probabilités XLI, Lecture Notes in Mathematics, 1934, Springer, Berlin, 2008. https://doi.org/10.1007/978-3-540-77913-1_2
Ferreira, R. A. C. and Simon, T., Convolution of beta prime distribution. Trans. Am. Math. Soc. 376(2023), 855–890. https://doi.org/10.1090/tran/8748
Goldie, C., A class of infinitely divisible random variables. Proc. Camb. Philos. Soc. 63(1967), 1141–1143. https://doi.org/10.1017/s0305004100042225
Haagerup, U. and Schultz, H., Brown measures of unbounded operators affiliated with a finite von Neumann algebra. Math. Scand. 100(2007), no. 2, 209–263. https://doi.org/10.7146/math.scand.a-15023
Haagerup, U. and Thorbjørnsen, S., On the free gamma distributions. Indiana Univ. Math. J. 63(2014), no. 4, 1159–1194. https://doi.org/10.1512/iumj.2014.63.5288
Hasebe, T., Free infinite divisibility for beta distributions and related ones. Electron. J. Probab. 19(2014), no. 81, 33 pp. https://doi.org/10.1214/EJP.v19-3448
Hasebe, T., Free infinite divisibility for powers of random variables. ALEA, Lat. Am. J. Probab. Math. Stat. 13(2016), no. 1, 309–336. https://doi.org/10.30757/alea.v13-13
Hasebe, T. and Sakuma, N., Unimodality for free Lévy processes. Ann. Inst. H. Poincaré Probab. Statist. 53(2017), no. 2, 916–936. https://doi.org/10.1214/16-AIHP742
Hasebe, T., Sakuma, N., and Thorbjørnsen, S., The normal distribution is freely self-decomposable. Int. Math. Res. Not. 2019(2019), no. 6, 1758–1787. https://doi.org/10.1093/imrn/rnx171
Hasebe, T. and Szpojankowski, K., On free generalized inverse Gaussian distributions. Compl. Anal. Oper. Theory 13(2019), 3091–3116. https://doi.org/10.1007/s11785-018-0790-9
Hasebe, T. and Thorbjørnsen, S., Unimodality of the freely selfdecomposable probability laws. J. Theor. Probab. 29(2016), no. 3, 922–940. https://doi.org/10.1007/s10959-015-0595-y
Hasebe, T. and Ueda, Y., On the free Lévy measure of the normal distribution. Electron. J. Probab. 28(2023), no. 133, 1–19. https://doi.org/10.1214/23-ejp1035
Ismail, M. H. E. and Kelker, D. H., Special functions, Stieltjes transforms and infinite divisibility. SIAM J. Math. Anal. 10(1979), no. 5, 884–901. https://doi.org/10.1137/0510083
Johansson, K., On fluctuations of eigenvalues of random Hermitian matrices. Duke Math. J. 91(1998), 151–204. https://doi.org/10.1215/S0012-7094-98-09108-6
Maejima, M. and Sakuma, N., Selfsimilar free additive processes and freely selfdecomposable distributions. J. Theor. Probab. 36(2023), no. 3, 1667–1697. https://doi.org/10.1007/s10959-022-01227-4
Marcus, A. W., Polynomial convolutions and (finite) free probability. Preprint, 2021. arXiv:2108.07054.
Marcus, A. W., Spielman, D. A., and Srivastava, N., Finite free convolutions of polynomials. Probab. Theory Relat. Fields 182(2022), no. 3, 807–848.
Martínez-Finkelshtein, A., Morales, R., and Perales, D., Real roots of hypergeometric polynomials via finite free convolution. Int. Math. Res. Not. (2024). https://doi.org/10.1093/imrn/rnae120
Martínez-Finkelshtein, A., Morales, R., and Perales, D., Zeros of generalized hypergeometric polynomials via finite free convolution. Applications to multiple orthogonality. Constr. Approx. (2025), 1–70. https://doi.org/10.1007/s00365-025-09703-w
Mingo, J. A. and Speicher, R., Free probability and random matrices, Fields Institute Monographs, 35, Springer, New York, NY, 2017. https://doi.org/10.1007/978-1-4939-6942-5
Morishita, J. and Ueda, Y., Free infinite divisibility for generalized power distributions with free Poisson term. Probab. Math. Stat. 40(2020), no. 2, 245–267. https://doi.org/10.37190/0208-4147.40.2.4
Moschopoulos, P. G., The distribution of the sum of independent gamma random variables. Ann. Instit. Stat. Math. 37(1985), no. 3, 541–544. https://doi.org/10.1007/BF02481123
Nica, A. and Speicher, R., Lectures on the combinatorics of free probability, volume 335 of London Mathematical Society Lecture Note Series, Cambridge University Press, Cambridge, 2006. https://doi.org/10.1017/CBO9780511735127
Pearson, K., Contributions to the mathematical theory of evolution, II: Skew variation in homogeneous material. Philos. Trans. R. Soc. Lond. A. 186(1895), 343–414. https://doi.org/10.1098/rsta.1895.0010
Pérez-Abreu, V. and Sakuma, N., Free generalized gamma convolutions. Electron. Commun. Probab. 13(2008), 526–539. https://doi.org/10.1214/ECP.v13-1413
Saff, E. B. and Totik, V., Logarithmic potentials with external fields, Grundlehren der mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], 316, Springer-Verlag, Berlin, 1997, pp. xvi+505. https://doi.org/10.1007/978-3-662-03329-6
Saitoh, N. and Yoshida, H., The infinite divisibility and orthogonal polynomials with a constant recursion formula in free probability theory. Probab. Math. Stat. 21(2001), 159–170.
Sato, K., Lévy processes and infinitely divisible distributions, Cambridge University Press, Cambridge, 2013. Corrected paperback edition.
Schilling, R. L., Song, R., and Vondracek, Z., Bernstein functions. 2nd ed., De Gruyter & Co, London, 2012. https://doi.org/10.1515/9783110269338
Schmüdgen, K., Unbounded self-adjoint operators on Hilbert space, Graduate Texts in Mathematics, 265, Springer, Dordrecht, 2012. https://doi.org/10.1007/978-94-007-4753-1
Steutel, F. W., Note on the infinite divisibility of exponential mixtures. Ann. Math. Statist. 38(1967), 1303–1305. https://doi.org/10.1214/aoms/1177698806
Szpojankowski, K., On the Lukacs property for free random variables. Stud. Math. 228(2015), no. 1, 55–72. https://doi.org/10.4064/sm228-1-6
Voiculescu, D., The analogues of entropy and of Fisher’s information measure in free probability theory, I. Commun. Math. Phys. 155(1993), no. 1, 71–92.
Yoshida, H., Remarks on a free analogue of the Beta prime distribution. J. Theor. Probab. 33(2020), 1363–1400. https://doi.org/10.1007/s10959-019-00924-x