
Negative dependence in knockout tournaments

Published online by Cambridge University Press:  04 December 2025

Yuting Su*
Affiliation:
University of Science and Technology of China
Zhenfeng Zou*
Affiliation:
University of Science and Technology of China
Taizhong Hu*
Affiliation:
University of Science and Technology of China
*Postal address: Department of Statistics and Finance, School of Management, University of Science and Technology of China, Hefei, Anhui 230026, China.
**Postal address: School of Public Affairs, University of Science and Technology of China, Hefei, Anhui 230026, China. Email address: zfzou@ustc.edu.cn

Abstract

Negative dependence in tournaments has received attention in the literature. The property of negative orthant dependence (NOD) was proved for different tournament models with a separate proof for each model. For general round-robin tournaments and knockout tournaments with random draws, Malinovsky and Rinott (2023) unified and simplified many existing results in the literature by proving a stronger property, negative association (NA). For a knockout tournament with a non-random draw, they presented an example to illustrate that ${\boldsymbol{S}}$ is NOD but not NA. However, their proof is not correct. In this paper, we establish the properties of negative regression dependence (NRD), negative left-tail dependence (NLTD), and negative right-tail dependence (NRTD) for a knockout tournament with a random draw and with players being of equal strength. For a knockout tournament with a non-random draw and with equal strength, we prove that ${\boldsymbol{S}}$ is NA and NRTD, while ${\boldsymbol{S}}$ is, in general, not NRD or NLTD.

Information

Type
Original Article
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

1.1. Negative dependence

There is a long history of dependence modeling among multiple sources of randomness in probability, statistics, economics, finance, and operations research. Various notions of positive and negative dependence were introduced in the literature. The notions of negative dependence (except in the bivariate case) are not the mirror image of those of positive dependence. The structures of negative dependence can be more complicated. Popular notions of negative dependence include negative orthant dependence (NOD), negative association (NA [Reference Alam and Saxena2]), weak negative association (WNA [Reference Chen, Embrechts and Wang7]), negatively supermodular dependence (NSMD [Reference Hu11]), negative regression dependence [Reference Dubhashi and Ranjan10, Reference Hu and Xie13], strongly multivariate reverse regular of order 2 [Reference Karlin and Rinott16], pairwise counter-monotonicity [Reference Cheung and Lo8, Reference Lauzier, Lin and Wang17], joint mixability [Reference Puccetti and Wang21, Reference Wang and Wang24], and others.

Recall that a random vector ${\boldsymbol{X}}=(X_1, \ldots, X_n)$ is said to be smaller than another random vector ${\boldsymbol{Y}}=(Y_1, \ldots, Y_n)$ in the usual stochastic order, denoted by ${\boldsymbol{X}}\le_\textrm{st} {\boldsymbol{Y}}$ , if $\mathbb{E} [\varphi ({\boldsymbol{X}})]\le \mathbb{E} [\varphi({\boldsymbol{Y}})]$ holds for all increasing functions $\varphi$ for which the expectations exist [Reference Shaked and Shanthikumar23, Section 4B]. Also, we denote by $[{\boldsymbol{X}}|A]$ any random vector/variable whose distribution is the conditional distribution of ${\boldsymbol{X}}$ given event A. For any ${\boldsymbol{x}}\in \mathbb{R}^n$ and $J\subset [n] := \{1, 2, \ldots, n\}$ , let $\{X_j, j\in J\}$ , $\{X_j\le x_j, j\in J\}$ , and $\{X_j>x_j, j\in J\}$ be abbreviated by ${\boldsymbol{X}}_J$ , ${\boldsymbol{X}}_J\le {\boldsymbol{x}}_J$ , and ${\boldsymbol{X}}_J> {\boldsymbol{x}}_J$ , respectively. Throughout, ‘increasing’ and ‘decreasing’ are used in the weak sense, and $\stackrel {d}{=}$ means equality in distribution.

Definition 1. ([Reference Alam and Saxena2, Reference Joag-dev and Proschan15].) A random vector ${\boldsymbol{X}}$ is said to be NA if, for every pair of disjoint subsets $A_1, A_2\subset [n]$ ,

\[\textrm{Cov} (\psi_1({\boldsymbol{X}}_{A_1}), \psi_2({\boldsymbol{X}}_{A_2})) \le 0\]

whenever $\psi_1$ and $\psi_2$ are coordinatewise increasing functions such that the covariance exists.

Definition 2. ([Reference Hu11].) A random vector ${\boldsymbol{X}}$ is said to be NSMD if

\[\mathbb{E} [\psi({\boldsymbol{X}})] \le \mathbb{E} [\psi({\boldsymbol{X}}^\perp)],\]

where ${\boldsymbol{X}}^\perp =(X_1^\perp, \ldots, X_n^\perp)$ is a random vector of independent random variables with $X_i^\perp \stackrel {d}{=} X_i$ for each $i\in [n]$ , and $\psi$ is any supermodular function such that the expectations exist. A function $\psi\colon \mathbb{R}^n\to \mathbb{R}$ is said to be supermodular if $\psi({\boldsymbol{x}}\vee {\boldsymbol{y}}) +\psi({\boldsymbol{x}}\wedge {\boldsymbol{y}}) \ge \psi ({\boldsymbol{x}}) + \psi({\boldsymbol{y}})$ for all ${\boldsymbol{x}}, {\boldsymbol{y}}\in\mathbb{R}^n$ , where $\vee$ denotes the componentwise maximum and $\wedge$ denotes the componentwise minimum, that is,

\[{\boldsymbol{x}}\vee {\boldsymbol{y}}=(x_1\vee y_1, x_2\vee y_2, \ldots, x_n\vee y_n),\quad{\boldsymbol{x}} \wedge {\boldsymbol{y}} =(x_1\wedge y_1, x_2\wedge y_2, \ldots, x_n\wedge y_n).\]

Definition 3. ([Reference Dubhashi and Ranjan10].) Let ${{\boldsymbol{X}}}=(X_1, \ldots, X_n)$ be a random vector. ${\boldsymbol{X}}$ is said to be

(1) negatively regression dependent (NRD) if

    (1) \begin{equation} [{\boldsymbol{X}}_I\mid {\boldsymbol{X}}_J={\boldsymbol{x}}_J ]\ge_\textrm{st} [{\boldsymbol{X}}_I\mid {\boldsymbol{X}}_J ={\boldsymbol{x}}_J^\ast ],\end{equation}
    where ${\boldsymbol{x}}_J\le {\boldsymbol{x}}_J^\ast$ , and I and J are any disjoint subsets of [n];
(2) negatively left-tail dependent (NLTD) if (1) is replaced by

    (2) \begin{equation} [{\boldsymbol{X}}_I\mid {\boldsymbol{X}}_J\le{\boldsymbol{x}}_J ]\ge_\textrm{st} [{\boldsymbol{X}}_I\mid {\boldsymbol{X}}_J\le{\boldsymbol{x}}_J^\ast ];\end{equation}
(3) negatively right-tail dependent (NRTD) if (1) is replaced by

    (3) \begin{equation} [{\boldsymbol{X}}_I\mid {\boldsymbol{X}}_J>{\boldsymbol{x}}_J ]\ge_\textrm{st}[{\boldsymbol{X}}_I\mid {\boldsymbol{X}}_J >{\boldsymbol{x}}_J^\ast ].\end{equation}

It should be pointed out that, by a limiting argument, the symbols ‘ $\le$ ’ and ‘ $>$ ’ in (2) and (3) can be replaced by ‘ $<$ ’ and ‘ $\ge$ ’, respectively.

Definition 4. ([Reference Joag-dev and Proschan15].) A random vector ${\boldsymbol{X}}$ is said to be negatively lower-orthant dependent (NLOD) if $\mathbb{P}({\boldsymbol{X}}\le {\boldsymbol{x}}) \le \prod^n_{i=1}\mathbb{P}(X_i\le x_i)$ for all ${\boldsymbol{x}}\in\mathbb{R}^n$ , and negatively upper-orthant dependent (NUOD) if $\mathbb{P}({\boldsymbol{X}}> {\boldsymbol{x}}) \le \prod^n_{i=1}\mathbb{P}(X_i> x_i)$ for all ${\boldsymbol{x}}\in\mathbb{R}^n$ . ${\boldsymbol{X}}$ is said to be negatively orthant dependent (NOD) if ${\boldsymbol{X}}$ is both NLOD and NUOD.
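To make the orthant conditions in Definition 4 concrete, the following sketch (ours, not from the cited literature) checks NLOD and NUOD by brute force for a random vector with finite support; the dictionary representation and the name `is_nod` are illustrative choices. Since both sides of each inequality are step functions that are constant between support values, it suffices to test thresholds on the coordinatewise supports.

```python
from itertools import product
from math import prod

def is_nod(pmf, tol=1e-12):
    """Brute-force check of NLOD and NUOD (Definition 4).

    pmf: dict mapping support points (tuples of equal length) to
    probabilities.  Thresholds are taken on the coordinatewise supports,
    which suffices because the orthant probabilities are step functions.
    """
    points = list(pmf)
    n = len(points[0])
    grids = [sorted({p[i] for p in points}) for i in range(n)]
    for x in product(*grids):
        lower = sum(q for p, q in pmf.items() if all(p[i] <= x[i] for i in range(n)))
        upper = sum(q for p, q in pmf.items() if all(p[i] > x[i] for i in range(n)))
        lower_marg = prod(sum(q for p, q in pmf.items() if p[i] <= x[i]) for i in range(n))
        upper_marg = prod(sum(q for p, q in pmf.items() if p[i] > x[i]) for i in range(n))
        if lower > lower_marg + tol or upper > upper_marg + tol:
            return False
    return True

# A counter-monotone pair is NOD; a comonotone (identical) pair is not.
assert is_nod({(0, 1): 0.5, (1, 0): 0.5})
assert not is_nod({(0, 0): 0.5, (1, 1): 0.5})
```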

From Definition 3, it is known that ${\boldsymbol{X}}$ is NRD if and only if $-{\boldsymbol{X}}$ is NRD, and that ${\boldsymbol{X}}$ is NLTD if and only if $-{\boldsymbol{X}}$ is NRTD. In Definition 3, if $|J| =1$ , the corresponding NRD, NLTD, and NRTD are denoted by $\textrm{NRD}_1$ , $\textrm{NLTD}_1$ , and $\textrm{NRTD}_1$ [Reference Hu and Yang14]. $\textrm{NRD}_1$ is also called negative dependence through stochastic ordering in Block et al. [Reference Block, Savits and Shaked5]. The implications among the above notions of negative dependence are as follows.

(1) $\textrm{NRD}_1$ implies both $\textrm{NLTD}_1$ and $\textrm{NRTD}_1$ [Reference Barlow and Proschan3, Chapter 5], each of which in turn implies WNA [Reference Chen, Embrechts and Wang7].

(2) $\textrm{NRD}_1$ does not imply NA [Reference Joag-dev and Proschan15, Remark 2.5].

(3) NA implies NSMD [Reference Christofides and Vaggelatou9].

(4) NA does not imply NRD, NLTD, or NRTD (Example 1).

(5) NRTD does not imply NRD or NLTD (Example 4).

(6) Each of NA, WNA, NSMD, NRD, NLTD, and NRTD implies NOD.

As a corollary of their Proposition 24, Dubhashi and Ranjan [Reference Dubhashi and Ranjan10] claimed that NRD implies both NLTD and NRTD. The proof of Proposition 24 contains a critical gap; the following implication was used without proof:

(4) \begin{equation}{\boldsymbol{X}}\ \hbox{is NRD}\ \Longrightarrow\ [{\boldsymbol{X}}_I\mid {\boldsymbol{X}}_J={\boldsymbol{x}}_J, X_k>x_k ] \ge_\textrm{st} [{\boldsymbol{X}}_I\mid {\boldsymbol{X}}_J={\boldsymbol{x}}^\ast_J, X_k>x_k ]\end{equation}

whenever ${\boldsymbol{x}}_J \le {\boldsymbol{x}}_J^\ast$, $x_k\in\mathbb{R}$, and I and J are any disjoint and proper subsets of $[n]\backslash \{k\}$. However, whether the foundational implication (4) holds is still unknown, and hence whether NRD implies both NLTD and NRTD remains unresolved. Another unresolved question is whether NRD implies NA.

1.2. Tournaments

A tournament consists of competitions among several players, in which each match involves two players. The following two types of tournaments are considered in this paper.

General constant-sum round-robin tournaments [Reference Bruss and Ferguson6, Reference Moon20]. Assume that each of n players competes against each of the other $n-1$ players. When player i plays against player j, player i gets a random score $X_{ij}$ having a distribution function $F_{ij}$ with support on $[0, r_{ij}]$, $r_{ij}>0$, and $X_{ji}=r_{ij} -X_{ij}$ for $i < j$. We assume that all $\left(\begin{smallmatrix}n\\ 2\end{smallmatrix}\right)$ pairs of random scores $(X_{12}, X_{21}), \ldots, (X_{1n}, X_{n1}), \ldots, (X_{n-1,n}, X_{n,n-1})$ are independent. The total score for player i is defined by $S_i=\sum_{j=1, j\ne i}^n X_{ij}$ for $i\in [n]$, and the sum $\sum_{i=1}^{n} S_{i} = \sum_{i < j} r_{ij}$ of the total scores is constant. A simple round-robin tournament is a special case with $r_{ij}=1$ and $X_{ij}\in \{0,1\}$ for all $i < j$. Ross [Reference Ross22] considered a special case with $r_{ij}$ being an integer and $X_{ij}\sim B(r_{ij}, p_{ij})$, which means that players i and j play $r_{ij}$ independent games, each of which player i wins with probability $p_{ij}$.
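As a concrete illustration of this model, the following sketch (our own, not part of the cited papers) simulates one realization of the score vector ${\boldsymbol{S}}$ in Ross's binomial special case; the function name and the matrix conventions are illustrative assumptions.

```python
import numpy as np

def round_robin_scores(r, p, rng=None):
    """Simulate S = (S_1, ..., S_n) in Ross's special case: for i < j,
    players i and j play r[i][j] independent games, player i winning
    each with probability p[i][j], so X_ij ~ B(r_ij, p_ij) and
    X_ji = r_ij - X_ij.  Only the entries of r and p with i < j are used."""
    rng = rng or np.random.default_rng()
    n = len(r)
    S = np.zeros(n)
    for i in range(n):
        for j in range(i + 1, n):
            x_ij = rng.binomial(r[i][j], p[i][j])  # player i's score against j
            S[i] += x_ij
            S[j] += r[i][j] - x_ij                 # the pair is constant-sum
    return S
```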

Knockout tournaments [Reference Adler, Cao, Karp, Peköz and Ross1, Reference Malinovsky and Rinott19]. Consider a knockout tournament with $n =2^\ell$ players, in which player i defeats player j independently of all other duels with probability $p_{ij}$ for all $1\le i\ne j\le n$. The winners of one round move to the next round, and the defeated players are eliminated from the tournament. The tournament continues until all players but one have been eliminated, and that last player is declared the winner of the tournament. Let $S_i$ denote the number of games won by player $i\in [n]$.
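For later reference, here is a minimal simulator of this knockout model (our own sketch; `knockout_scores`, its bracket convention, and the zero-based player indices are assumptions, not notation from the cited papers). A deterministic draw corresponds to a fixed bracket; the random-draw model of Section 3.1, in which winners are re-paired at random in every round, corresponds to `redraw=True`.

```python
import numpy as np

def knockout_scores(order, P, rng=None, redraw=False):
    """Simulate the win counts S_1, ..., S_n of a knockout tournament
    with n = 2**l players.

    order: initial bracket (sequence of 0-based player indices);
    adjacent entries meet in the first round, and winners are paired
    in bracket order in later rounds.
    P[i][j]: probability that player i defeats player j.
    redraw: if True, the surviving players are re-paired uniformly at
    random in every round (the random-draw model).
    """
    rng = rng or np.random.default_rng()
    wins = [0] * len(P)
    alive = list(order)
    while len(alive) > 1:
        if redraw:
            alive = list(rng.permutation(alive))
        nxt = []
        for i, j in zip(alive[::2], alive[1::2]):
            w = i if rng.random() < P[i][j] else j
            wins[w] += 1
            nxt.append(w)
        alive = nxt
    return wins
```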

1.3. Motivation

Negative dependence in tournaments has received attention in the literature. The property of negative orthant dependence (NOD) was proved for different tournament models, with a separate proof for each model; see e.g. Malinovsky and Moon [Reference Malinovsky and Moon18]. For general round-robin tournaments and knockout tournaments with random draws, Malinovsky and Rinott [Reference Malinovsky and Rinott19] unified and simplified many existing results in the literature by proving a stronger property, NA, a generalization leading to a simple proof. For a knockout tournament with a non-random draw, they presented an example intended to illustrate that ${\boldsymbol{S}}$ is NOD but not NA. However, their proof is not correct. For more details, see the paragraph after Example 4.

The purpose of this note is to investigate negative regression dependence for the two types of tournaments described in Section 1.2. More precisely, a counterexample is given in Section 2 to show that, for a general constant-sum round-robin tournament, ${\boldsymbol{S}}$ does not possess the properties of NRD, NRTD, or NLTD. In Section 3 we establish the properties of NRD, NLTD, and NRTD for a knockout tournament with a random draw and with players of equal strength (Theorem 1) by proving that these properties are possessed by a random permutation (in fact, the random score vector ${\boldsymbol{S}}$ has a permutation distribution). For a knockout tournament with a non-random draw and with equal strength, we prove that ${\boldsymbol{S}}$ is NA (and hence NSMD) and NRTD (Theorems 2 and 3), while ${\boldsymbol{S}}$ is, in general, not NRD or NLTD (Example 4). This contrast between the random-draw and the non-random-draw settings is an interesting finding.

This paper is organized as follows. The models of round-robin and knockout tournaments are considered in Sections 2 and 3, respectively.

2. Constant-sum round-robin tournaments

For a general constant-sum round-robin tournament described in Section 1.2, Malinovsky and Rinott [Reference Malinovsky and Rinott19] proved that ${\boldsymbol{S}}=(S_1, S_2, \ldots, S_n)$ is NA. The next counterexample shows that ${\boldsymbol{S}}$ is not NRD, NLTD, or NRTD.

Example 1. Consider the case of three players ($n=3$), and let $X_{12}=1-X_{21}\sim B(1, 1/2)$, $X_{13}=5-X_{31}\sim U(\{0, 2, 5\})$, and $X_{23}=5-X_{32}\sim U(\{0,2, 5\})$, where $X_{12}$, $X_{13}$, and $X_{23}$ are independent, and $U(\{0,2,5\})$ is the discrete uniform distribution on $\{0, 2, 5\}$. Then $S_1=X_{12}+X_{13}$, $S_2=X_{21}+X_{23}$, and $S_3= X_{31}+ X_{32}$. Obviously, we have

\begin{align*}\mathbb{P}(S_3=0) & =\mathbb{P}(S_3=6) = \mathbb{P}(S_3=10) =\dfrac {1}{9},\\\mathbb{P}(S_3=3) & = \mathbb{P}(S_3=5) = \mathbb{P}(S_3=8) =\dfrac {2}{9}.\end{align*}

Let $f\colon \mathbb{N}^2\to\mathbb{R}$ be an increasing and symmetric function satisfying

\[f(0,1) =f(0,6)=f(1,2)=f(1,5)=1,\quad f(2,3) =f(5,6)=2.\]

Such an $f$ exists (e.g. $f(x,y)=1+\mathbf{1}\{x\wedge y\ge 2,\, x\vee y\ge 3\}$), and monotonicity together with symmetry forces the remaining values used below: $f(0,3)=1$ and $f(3,5)=f(2,6)=2$.

Then

\begin{align*}
\mathbb{E} [\,f(S_1,S_2)\mid S_3=0] & =\mathbb{E}[\,f(5+X_{12}, 5+X_{21})\mid X_{13}=X_{23}=5] = \dfrac {1}{2} [\,f(5,6)+ f(6,5)] =2,\\
\mathbb{E} [\,f(S_1,S_2)\mid S_3=3] & =\mathbb{E} [\,f(X_{13}+X_{12}, X_{23}+X_{21})\mid (X_{13}, X_{23})\in \{(5,2), (2,5)\}]\\ & = \dfrac {1}{4} [\,f(3,5)+ f(5,3)+ f(2,6)+f(6,2)] =2,\\
\mathbb{E} [\,f(S_1,S_2)\mid S_3=5] & =\mathbb{E} [\,f(X_{13}+X_{12}, X_{23}+X_{21})\mid (X_{13}, X_{23})\in \{(5,0), (0,5)\}]\\ & = \dfrac {1}{4} [\,f(1,5)+ f(5,1)+ f(0,6)+f(6,0)] =1,\\
\mathbb{E} [\,f(S_1,S_2)\mid S_3=6] & =\mathbb{E} [\,f(2+X_{12}, 2+X_{21})\mid X_{13}=2, X_{23}=2] = \dfrac {1}{2} [\,f(2,3)+ f(3,2)] =2,\\
\mathbb{E} [\,f(S_1,S_2)\mid S_3=8] & =\mathbb{E} [\,f(X_{13}+X_{12}, X_{23}+X_{21})\mid (X_{13}, X_{23})\in \{(2,0), (0,2)\}]\\ & = \dfrac {1}{4} [\,f(1,2)+ f(2,1)+ f(0,3)+f(3,0)] =1,\\
\mathbb{E} [\,f(S_1,S_2)\mid S_3=10] & =\mathbb{E}[\,f(X_{12}, X_{21})\mid X_{13}=X_{23}=0] = \dfrac {1}{2} [\,f(0,1)+ f(1,0)] =1.
\end{align*}

Hence

\begin{align*}\mathbb{E} [\,f(S_1,S_2)\mid S_3=5] =1 & < 2=\mathbb{E} [\,f(S_1,S_2)\mid S_3=6],\\\mathbb{E} [\,f(S_1,S_2)\mid S_3 \le 5] = \dfrac {8}{5} & < \dfrac {5}{3} =\mathbb{E} [\,f(S_1,S_2)\mid S_3 \le 6],\\\mathbb{E} [\,f(S_1,S_2)\mid S_3 \ge 5] = \dfrac {7}{6} & < \dfrac {5}{4} = \mathbb{E} [\,f(S_1,S_2)\mid S_3\ge 6].\end{align*}

This means that $(S_1, S_2, S_3)$ is not NRD, NLTD, or NRTD.
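These conditional expectations can be confirmed by exhaustive enumeration. The sketch below (ours) uses the explicit choice $f(x,y)=1+\mathbf{1}\{x\wedge y\ge 2,\, x\vee y\ge 3\}$ mentioned above; all 18 outcomes of $(X_{12}, X_{13}, X_{23})$ are equally likely.

```python
from fractions import Fraction
from itertools import product

f = lambda x, y: 1 + (min(x, y) >= 2 and max(x, y) >= 3)

# All 18 equally likely outcomes of (S1, S2, S3).
outcomes = [((x12 + x13), (1 - x12) + x23, (5 - x13) + (5 - x23))
            for x12, x13, x23 in product((0, 1), (0, 2, 5), (0, 2, 5))]

def cond_mean(event):                      # E[f(S1, S2) | event(S3)]
    sel = [f(s1, s2) for s1, s2, s3 in outcomes if event(s3)]
    return Fraction(sum(sel), len(sel))

assert cond_mean(lambda s3: s3 == 5) == 1 < 2 == cond_mean(lambda s3: s3 == 6)
assert cond_mean(lambda s3: s3 <= 5) == Fraction(8, 5) < Fraction(5, 3) == cond_mean(lambda s3: s3 <= 6)
assert cond_mean(lambda s3: s3 >= 5) == Fraction(7, 6) < Fraction(5, 4) == cond_mean(lambda s3: s3 >= 6)
```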

Ross [Reference Ross22] proved that ${\boldsymbol{S}}$ is $\textrm{NRD}_1$ and hence $\textrm{NLTD}_1$ and $\textrm{NRTD}_1$ when all $X_{ij}$ are log-concave, that is, $X_{ij}$ has a log-concave probability density function on $\mathbb{R}$ or a log-concave probability mass function on $\mathbb{Z}$ . It is still an open problem to investigate conditions on $F_{ij}$ under which ${\boldsymbol{S}}$ is NRD, NRTD, or NLTD.

3. Knockout tournaments

3.1. Knockout tournaments with a random draw

For a knockout tournament with $n=2^\ell$ players, a random draw means that in the first round, all $2^\ell$ players are randomly arranged into $2^{\ell-1}$ match pairs. The winners of these $2^{\ell-1}$ matches move to the second round, and they are randomly arranged into $2^{\ell-2}$ match pairs, and so on. Let $S_i$ denote the number of games won by player $i\in [n]$ .

For a knockout tournament with a random draw, Malinovsky and Rinott [Reference Malinovsky and Rinott19] proved that ${\boldsymbol{S}}=(S_1, \ldots, S_n)$ is NA (and hence NSMD) when the players are of equal strength, that is, $p_{ij}=1/2$ for all $i\ne j$ , and gave a counterexample to show that ${\boldsymbol{S}}$ is not NA without equal strength. This counterexample can also be used to illustrate that ${\boldsymbol{S}}$ is not NRD, NLTD, or NRTD in a knockout tournament with a random draw and without equal strength.

Example 2. Consider a knockout tournament with four players. Player 1 beats player 2 with probability 1, and loses to players 3 and 4 with probability 1. Player 2 beats players 3 and 4 with probability 1, and player 3 beats player 4 with probability 1. With a random draw, according to whether player 1 meets player 2, 3, or 4 in the first round (each with probability $1/3$), we have

\[{\boldsymbol{S}}=\begin{cases} (1, 0, 2, 0), & \hbox{with prob.}\ 1/3, \\(0, 2, 1, 0), & \hbox{with prob.}\ 1/3, \\(0, 2, 0, 1), & \hbox{with prob.}\ 1/3. \end{cases}\]

Then

\begin{align*}\mathbb{P}(S_3=2 \mid S_1=1) & =1,\quad \mathbb{P}(S_3=0 \mid S_1=0) = \mathbb{P}(S_3=1 \mid S_1=0) =\dfrac {1}{2},\end{align*}

which implies that

\begin{align*}\mathbb{E} [S_3\mid S_1=0] =\dfrac {1}{2} & < 2=\mathbb{E} [S_3 \mid S_1=1],\\\mathbb{E} [S_3\mid S_1\le 0] =\dfrac {1}{2} & < 1=\mathbb{E} [S_3]=\mathbb{E} [S_3\mid S_1\le 1],\\\mathbb{E} [S_3\mid S_1\ge 0] =1 & < 2=\mathbb{E} [S_3\mid S_1\ge 1].\end{align*}

This means that ${\boldsymbol{S}}$ is not NRD, NLTD, or NRTD. If the probability of 1 is replaced by $1-\epsilon$ for small $\epsilon>0$ , then the same result holds by a continuity argument.
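The displayed comparisons follow from the three-point distribution of ${\boldsymbol{S}}$ by direct arithmetic; this snippet (ours) verifies them exactly, and the same check applies verbatim to the four-point distribution of Example 3 below.

```python
from fractions import Fraction

# The three equally likely outcomes of S in Example 2.
support = [(1, 0, 2, 0), (0, 2, 1, 0), (0, 2, 0, 1)]

def e_s3_given(event):                     # E[S_3 | event(S_1)]
    sel = [s[2] for s in support if event(s[0])]
    return Fraction(sum(sel), len(sel))

assert e_s3_given(lambda s1: s1 == 0) == Fraction(1, 2) < 2 == e_s3_given(lambda s1: s1 == 1)
assert e_s3_given(lambda s1: s1 <= 0) == Fraction(1, 2) < 1 == e_s3_given(lambda s1: s1 <= 1)
assert e_s3_given(lambda s1: s1 >= 0) == 1 < 2 == e_s3_given(lambda s1: s1 >= 1)
```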

Under the assumption that the players have equal winning probabilities in each duel, the NRD, NLTD, and NRTD properties hold for ${\boldsymbol{S}}$, as stated in the next theorem.

Theorem 1. Consider a knockout tournament with $n=2^\ell$ players of equal strength. If the schedule of matches is random, then ${\boldsymbol{S}}$ is NRD, NLTD, and NRTD.

Proof. As pointed out by Malinovsky and Rinott [Reference Malinovsky and Rinott19] in the proof of their Proposition 2, the vector ${\boldsymbol{S}}$ is a random permutation of the following vector:

\[\biggl(\underbrace{0, \ldots, 0}_{2^{\ell-1}}, \underbrace{1, \ldots, 1}_{2^{\ell-2}},\ldots, \underbrace{k,\ldots, k}_{2^{\ell-k-1}}, \ldots, \underbrace{\ell-1}_{1}, \ell \biggr),\]

in which the component k ($k\in \{0, 1, \ldots, \ell-1\}$) appears $2^{\ell-k-1}$ times, and the component $\ell$ appears once. The desired result now follows from Lemma 1 below.
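This multiset structure can be sanity-checked by simulation, assuming the hypothetical knockout_scores helper sketched in Section 1.2 with per-round redraws:

```python
import numpy as np

# With a random draw, sorted(S) is always the fixed multiset
# (0,...,0, 1,...,1, ..., l-1, l), with k appearing 2**(l-k-1) times.
l, n = 3, 8
P = np.full((n, n), 0.5)                   # equal strength
rng = np.random.default_rng(0)
expected = sorted([k for k in range(l) for _ in range(2 ** (l - k - 1))] + [l])
for _ in range(1000):
    S = knockout_scores(range(n), P, rng, redraw=True)
    assert sorted(S) == expected
```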

A vector ${\boldsymbol{X}}=(X_1, \ldots, X_n)$ is a random permutation of ${\boldsymbol{x}}=(x_1, \ldots, x_n)$ if ${\boldsymbol{X}}$ takes each of the $n!$ permutations of ${\boldsymbol{x}}$ as a value with probability $1/n!$, where $x_1, \ldots, x_n$ are any real numbers. Throughout, when we write $[{\boldsymbol{W}}\mid {\boldsymbol{W}}\in A]$ for a random vector (variable) ${\boldsymbol{W}}$ and a suitably chosen set A, it is always assumed that $\mathbb{P}({\boldsymbol{W}}\in A)>0$.

Lemma 1. A random permutation is NRD, NLTD, and NRTD.

Proof. Let ${\boldsymbol{X}}$ be a random vector with permutation distribution on $\Lambda=\{x_1, x_2, \ldots, x_n\}$. First, consider the special case in which the $x_i$ are distinct; relabeling if necessary, assume without loss of generality that $\Lambda=[n]$.

(1) To prove the NRD property of ${\boldsymbol{X}}$ , it suffices to prove that, for any increasing function $\psi\colon \mathbb{R}^{n-k}\to \mathbb{R}$ , $\mathbb{E} [\psi({\boldsymbol{X}}_{[n]\backslash [k]}) \mid {\boldsymbol{X}}_{[k]}={\boldsymbol{r}}_{[k]}]$ is decreasing in ${\boldsymbol{r}}_{[k]}$ , where $k\in [n-1]$ . Without loss of generality, assume that $\psi$ is symmetric since the distribution of ${\boldsymbol{X}}$ is symmetric. For suitably chosen ${\boldsymbol{r}}_{[k]}$ and ${\boldsymbol{r}}_{[k]}'$ such that ${\boldsymbol{r}}_{[k]}\le {\boldsymbol{r}}_{[k]}'$ , denote $\{s_j, j\in [n-k]\}=[n]\backslash \{r_i, i\in [k]\}$ and $\{s_j', j\in [n-k]\}= [n]\backslash \{r_i', i\in [k]\}$ . Then $s_{(j)}\ge s'_{(j)}$ for $j\in [n-k]$ , where $s_{(1)}\le s_{(2)}\le \cdots\le s_{(n-k)}$ denotes the ordered values of $\{s_j, j\in [n-k]\}$ . Therefore

\begin{align*}\mathbb{E}\bigl[\psi({\boldsymbol{X}}_{[n]\backslash [k]})\mid {\boldsymbol{X}}_{[k]}={\boldsymbol{r}}_{[k]}\bigr] & =\psi({\boldsymbol{s}}_{[n-k]})=\psi(s_{(1)},\ldots, s_{(n-k)}) \\& \ge \psi\bigl(s'_{(1)}, \ldots, s'_{(n-k)}\bigr) \\&=\mathbb{E}\bigl[\psi({\boldsymbol{X}}_{[n]\backslash [k]})\mid {\boldsymbol{X}}_{[k]}={\boldsymbol{r}}'_{[k]}\bigr],\end{align*}

which implies that ${\boldsymbol{X}}$ is NRD.

(2) To prove the NRTD property of ${\boldsymbol{X}}$ , it suffices to prove that, for any increasing and symmetric function $\psi\colon \mathbb{R}^{n-k}\to \mathbb{R}$ , the function $\mathbb{E} \bigl[\psi({\boldsymbol{X}}_{[n]\backslash [k]}) \mid {\boldsymbol{X}}_{[k]}\ge {\boldsymbol{r}}_{[k]}\bigr]$ is decreasing in ${\boldsymbol{r}}_{[k]}$ , where $1\le k < n$ . By symmetry of the distribution of ${\boldsymbol{X}}$ , this is also equivalent to verifying that

(5) \begin{align}&\mathbb{E}\bigl[\psi({\boldsymbol{X}}_{[n]\backslash [k]})\mid X_1\ge r_1, {\boldsymbol{X}}_{[k]\backslash \{1\}}\ge {\boldsymbol{r}}_{[k]\backslash \{1\}}\bigr] \notag \\&\quad \ge\mathbb{E}\bigl[\psi({\boldsymbol{X}}_{[n]\backslash [k]})\mid X_1\ge r_1^\ast, {\boldsymbol{X}}_{[k]\backslash \{1\}}\ge {\boldsymbol{r}}_{[k]\backslash \{1\}}\bigr]\end{align}

whenever $r_1 < r_1^\ast$ . To prove (5), by a similar argument to that in the proof of Theorem 5.4.2 in [Reference Barlow and Proschan3], it is required to show that

(6) \begin{align}&\mathbb{E}\bigl[\psi({\boldsymbol{X}}_{[n]\backslash [k]})\mid X_1= r_1, {\boldsymbol{X}}_{[k]\backslash \{1\}}\ge {\boldsymbol{r}}_{[k]\backslash \{1\}}\bigr] \notag \\&\quad \ge\mathbb{E}\bigl[\psi({\boldsymbol{X}}_{[n]\backslash [k]})\mid X_1= r_1^\ast, {\boldsymbol{X}}_{[k]\backslash \{1\}}\ge {\boldsymbol{r}}_{[k]\backslash \{1\}}\bigr],\end{align}

where $r_1, r_1^\ast\in [n]$ with $r_1 < r_1^\ast$. For $k=0$, both sides in (6) reduce to $\mathbb{E} [\psi({\boldsymbol{X}}_{[n]})]$, an unconditional expectation. We include this special case $k=0$ for convenience in the induction below.

Let ${\boldsymbol{b}}=(b_1, \ldots, b_n)$ and ${\boldsymbol{c}}=(c_1, \ldots, c_n)$ be any two real vectors satisfying $b_1>c_1$ and $b_i=c_i$ for $i\in [n]\backslash \{1\}$ , and let ${\boldsymbol{Y}}$ and ${\boldsymbol{Z}}$ be two random vectors having respective permutation distributions on ${\boldsymbol{b}}$ and ${\boldsymbol{c}}$ . We claim that

(7) \begin{equation}\mathbb{E}\bigl[\psi({\boldsymbol{Y}}_{[n]\backslash [k]})\mid {\boldsymbol{Y}}_{[k]}\ge {\boldsymbol{x}}_{[k]}\bigr] \ge\mathbb{E}\bigl[\psi({\boldsymbol{Z}}_{[n]\backslash [k]})\mid {\boldsymbol{Z}}_{[k]}\ge {\boldsymbol{x}}_{[k]}\bigr]\end{equation}

for $k\in [n-1]$ and any ${\boldsymbol{x}}_{[k]}$. Now, we prove (6) and (7) simultaneously by induction on k. For $k=0$, (6) is trivial, and

\begin{align*}\mathbb{E}\bigl[\psi({\boldsymbol{Y}}_{[n]\backslash [k]})\mid {\boldsymbol{Y}}_{[k]}\ge {\boldsymbol{x}}_{[k]}\bigr] =\psi({\boldsymbol{b}})\ge \psi({\boldsymbol{c}})=\mathbb{E}\bigl[\psi({\boldsymbol{Z}}_{[n]\backslash [k]})\mid {\boldsymbol{Z}}_{[k]}\ge {\boldsymbol{x}}_{[k]}\bigr],\end{align*}

implying (7). That is, (6) and (7) hold for $k=0$ . Assume that (7) holds for $k=m-1$ . For $k=m$ , it is easy to see that

(8) \begin{align} \bigl[{\boldsymbol{X}}_{[n]\backslash [m]}\mid X_1=r_1,{\boldsymbol{X}}_{[m]\backslash \{1\}}\ge{\boldsymbol{r}}_{[m]\backslash \{1\}}\bigr]& \stackrel {d}{=}\bigl[\widetilde{{\boldsymbol{Y}}}_{[n-1]\backslash [m-1]}\mid \widetilde {{\boldsymbol{Y}}}_{[m-1]}\ge{\boldsymbol{r}}_{[m]\backslash \{1\}}\bigr], \end{align}
(9) \begin{align} \bigl[{\boldsymbol{X}}_{[n]\backslash [m]}\mid X_1=r_1^\ast,{\boldsymbol{X}}_{[m]\backslash \{1\}} \ge {\boldsymbol{r}}_{[m]\backslash \{1\}}\bigr] & \stackrel {d}{=}\bigl[\widetilde{{\boldsymbol{Z}}}_{[n-1]\backslash [m-1]}\mid \widetilde{{\boldsymbol{Z}}}_{[m-1]}\ge{\boldsymbol{r}}_{[m]\backslash \{1\}}\bigr], \end{align}

where $\widetilde{{\boldsymbol{Y}}}$ has a permutation distribution on $[n]\backslash\{r_1\}$ , and $\widetilde{{\boldsymbol{Z}}}$ has a permutation distribution on $[n]\backslash\{r_1^\ast\}$ . Thus (6) holds for $k=m$ by applying the induction assumption (7) with $k=m-1$ to (8) and (9). Therefore, by the symmetry of the distribution of ${\boldsymbol{Z}}$ , we conclude from (6) with $k=m$ that

\begin{align*}&\mathbb{E}\bigl[\psi({\boldsymbol{Z}}_{[n]\backslash [m]})\mid Z_i= c_1, {\boldsymbol{Z}}_{[m]\backslash \{i\}} \ge {\boldsymbol{x}}_{[m]\backslash \{i\}}\bigr]\\&\quad \ge \mathbb{E}\bigl[\psi({\boldsymbol{Z}}_{[n]\backslash [m]})\mid Z_i=c_1^\ast, {\boldsymbol{Z}}_{[m]\backslash \{i\}}\ge {\boldsymbol{x}}_{[m]\backslash \{i\}}\bigr]\end{align*}

when $c_1< c_1^\ast$ and $i\in [m]$ . Consequently, we have

(10) \begin{equation}\mathbb{E}\bigl[\psi({\boldsymbol{Z}}_{[n]\backslash [m]})\mid Z_i= c_1, {\boldsymbol{Z}}_{[m]\backslash \{i\}}\ge {\boldsymbol{x}}_{[m]\backslash \{i\}}\bigr] \ge \mathbb{E}\bigl[\psi({\boldsymbol{Z}}_{[n]\backslash [m]})\mid {\boldsymbol{Z}}_{[m]}\ge {\boldsymbol{x}}_{[m]}\bigr]\end{equation}

when $c_1<x_i$ for $i\in [m]$. Next, we show (7) for $k=m$. To this end, denote by $\mathscr{O}_n$ the set of all permutations on [n]. For each $\pi=(\pi(1), \ldots, \pi(n))\in\mathscr{O}_n$ and ${\boldsymbol{x}}=(x_1,\ldots, x_n)\in\mathbb{R}^n$, denote ${\boldsymbol{x}}^\pi=(x_{\pi(1)}, \ldots, x_{\pi(n)})$. Define the following sets of permutations on [n]:

\begin{align*}\Pi_0 & = \bigl\{\pi\in\mathscr{O}_n\colon {\boldsymbol{c}}^\pi_{[m]}\ge {\boldsymbol{x}}_{[m]} \bigr\},\\\Pi_i & =\bigl\{\pi\in\mathscr{O}_n\colon \pi(i)=1, {\boldsymbol{b}}^\pi_{[m]} \ge {\boldsymbol{x}}_{[m]}, c_1< x_i\bigr\},\quad i\in [m].\end{align*}

Then

\begin{align*}\bigl\{{\boldsymbol{Y}}_{[m]} \ge {\boldsymbol{x}}_{[m]}\bigr\} &= \bigcup^m_{i=0} \bigcup_{\pi\in\Pi_i} \{{\boldsymbol{Y}}={\boldsymbol{b}}^\pi\},\quad \bigl\{{\boldsymbol{Z}}_{[m]} \ge {\boldsymbol{x}}_{[m]}\bigr\} =\bigcup_{\pi\in\Pi_0} \{{\boldsymbol{Z}}={\boldsymbol{c}}^\pi\}.\end{align*}

Thus we have

(11) \begin{equation}\mathbb{E} \bigl[\psi\bigl({\boldsymbol{Y}}_{[n]\backslash [m]}\bigr) \mid {\boldsymbol{Y}}_{[m]}\ge {\boldsymbol{x}}_{[m]}\bigr] = \dfrac {\sum^m_{i=0} \sum_{\pi\in \Pi_i} \mathbb{P}({\boldsymbol{Y}}={\boldsymbol{b}}^\pi)\, \psi \bigl({\boldsymbol{b}}^\pi_{[n]\backslash [m]}\bigr)} {\sum^m_{i=0} \sum_{\pi\in \Pi_i} \mathbb{P}({\boldsymbol{Y}}={\boldsymbol{b}}^\pi)}.\end{equation}

Since ${\boldsymbol{b}}\ge {\boldsymbol{c}}$ and $\psi$ is increasing, it follows that

(12) \begin{align}\dfrac {\sum_{\pi\in \Pi_0} \mathbb{P}({\boldsymbol{Y}}={\boldsymbol{b}}^\pi)\, \psi ({\boldsymbol{b}}^\pi_{[n]\backslash [m]} )} {\sum_{\pi\in \Pi_0} \mathbb{P}({\boldsymbol{Y}}={\boldsymbol{b}}^\pi)}& \ge \dfrac {\sum_{\pi\in \Pi_0} \mathbb{P}({\boldsymbol{Y}}={\boldsymbol{c}}^\pi)\, \psi \big({\boldsymbol{c}}^\pi_{[n]\backslash [m]}\big )} {\sum_{\pi\in \Pi_0} \mathbb{P}({\boldsymbol{Y}}={\boldsymbol{c}}^\pi)} \nonumber \\& = \dfrac {\sum_{\pi\in \Pi_0} \mathbb{P}({\boldsymbol{Z}}={\boldsymbol{c}}^\pi)\, \psi \big ({\boldsymbol{c}}^\pi_{[n]\backslash [m]}\big )} {\sum_{\pi\in \Pi_0} \mathbb{P}({\boldsymbol{Z}}={\boldsymbol{c}}^\pi)} \nonumber \\& = \mathbb{E} \bigl[\psi\bigl({\boldsymbol{Z}}_{[n]\backslash [m]}\bigr) \mid {\boldsymbol{Z}}_{[m]} \ge {\boldsymbol{x}}_{[m]}\bigr]. \end{align}

Noting that, for each $i\in [m]$ with $\Pi_i \ne \emptyset$ and each $\pi\in\Pi_i$, we have $\pi(i)=1$ and hence ${\boldsymbol{b}}^{\pi}_{[n]\backslash [m]} = {\boldsymbol{c}}^{\pi}_{[n]\backslash [m]}$, we have

(13) \begin{align}\dfrac {\sum_{\pi\in \Pi_i} \mathbb{P}({\boldsymbol{Y}}={\boldsymbol{b}}^\pi)\, \psi \big ({\boldsymbol{b}}^\pi_{[n]\backslash [m]}\big )} {\sum_{\pi\in \Pi_i} \mathbb{P}({\boldsymbol{Y}}={\boldsymbol{b}}^\pi)}& = \dfrac {\sum_{\pi\in \Pi_i} \mathbb{P}({\boldsymbol{Z}}={\boldsymbol{c}}^\pi)\,\psi\big ({\boldsymbol{c}}^\pi_{[n]\backslash [m]}\big )} {\sum_{\pi\in \Pi_i} \mathbb{P}({\boldsymbol{Z}}={\boldsymbol{c}}^\pi)} \nonumber \\& = \mathbb{E} \bigl[\psi\bigl({\boldsymbol{Z}}_{[n]\backslash [m]}\bigr) \mid Z_i=c_1, {\boldsymbol{Z}}_{[m]\backslash\{i\}} \ge {\boldsymbol{x}}_{[m]\backslash\{i\}}\bigr]\nonumber \\& \ge \mathbb{E}\bigl[\psi\bigl({\boldsymbol{Z}}_{[n]\backslash [m]}\bigr)\mid {\boldsymbol{Z}}_{[m]}\ge {\boldsymbol{x}}_{[m]}\bigr], \end{align}

where the last inequality follows from (10). In view of (12) and (13), it follows from (11) that

\[\mathbb{E} \bigl[\psi\bigl({\boldsymbol{Y}}_{[n]\backslash [m]}\bigr)\mid {\boldsymbol{Y}}_{[m]}\ge {\boldsymbol{x}}_{[m]}\bigr] \ge \mathbb{E}\bigl[\psi\bigl({\boldsymbol{Z}}_{[n]\backslash [m]}\bigr)\mid {\boldsymbol{Z}}_{[m]}\ge {\boldsymbol{x}}_{[m]}\bigr],\]

which implies that (7) holds for $k=m$ . Therefore the desired results (6) and (7) hold by induction. This proves that ${\boldsymbol{X}}$ is NRTD.

(3) The NLTD property of ${\boldsymbol{X}}$ follows from the facts that $-{\boldsymbol{X}}$ also has a permutation distribution, and that ${\boldsymbol{X}}$ is NLTD if and only if $-{\boldsymbol{X}}$ is NRTD.

Finally, consider the general case $\Lambda=\{x_1, \ldots, x_n\}$ with $x_i=x_j$ for at least one pair $i\ne j$ . A careful check shows that the above proof for the special case is still valid for the general case. This proves the desired result.
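For small n, Lemma 1 can also be verified exhaustively. The sketch below (ours) checks the NRD statement for $n=4$, $J=\{1\}$, and $I=\{2,3\}$, using the fact that, on a finite set, ${\boldsymbol{A}}\ge_\textrm{st}{\boldsymbol{B}}$ if and only if $\mathbb{P}({\boldsymbol{A}}\in U)\ge \mathbb{P}({\boldsymbol{B}}\in U)$ for every upper set U of the support; this is a brute-force confirmation, not a substitute for the proof.

```python
from fractions import Fraction
from itertools import combinations, permutations

perms = list(permutations(range(1, 5)))          # uniform random permutation of [4]
support = sorted({p[1:3] for p in perms})        # possible values of (X_2, X_3)

def law(r):                                      # law of (X_2, X_3) given X_1 = r
    sel = [p[1:3] for p in perms if p[0] == r]
    return {pt: Fraction(sel.count(pt), len(sel)) for pt in support}

def upper_sets(points):
    ge = lambda a, b: a[0] >= b[0] and a[1] >= b[1]
    for k in range(len(points) + 1):
        for sub in combinations(points, k):
            U = set(sub)
            if all(b in U for a in U for b in points if ge(b, a)):
                yield U

def st_geq(mu, nu):                              # mu >=_st nu on the common support
    return all(sum(mu[pt] for pt in U) >= sum(nu[pt] for pt in U)
               for U in upper_sets(support))

# [(X_2, X_3) | X_1 = r] decreases stochastically as r increases.
assert all(st_geq(law(r), law(r + 1)) for r in (1, 2, 3))
```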

From the proof of Lemma 1, we conclude that if ${\boldsymbol{Y}}$ and ${\boldsymbol{Z}}$ have respective permutation distributions on ${\boldsymbol{b}}$ and ${\boldsymbol{c}}$ with ${\boldsymbol{b}}, {\boldsymbol{c}} \in \mathbb{R}^n$ such that ${\boldsymbol{b}}\ge {\boldsymbol{c}}$ , then

\begin{align*}[{\boldsymbol{Z}}_L \mid {\boldsymbol{Z}}_I \ge {\boldsymbol{x}}_I] & \le_\textrm{st} [{\boldsymbol{Y}}_L \mid {\boldsymbol{Y}}_I \ge {\boldsymbol{x}}_I], \\ [{\boldsymbol{Z}}_L \mid {\boldsymbol{Z}}_I \le {\boldsymbol{x}}_I] & \le_\textrm{st} [{\boldsymbol{Y}}_L \mid {\boldsymbol{Y}}_I \le {\boldsymbol{x}}_I],\end{align*}

for ${\boldsymbol{x}}\in\mathbb{R}^n$ , where I and L are two disjoint proper subsets of [n]. In fact, we have the following conjecture.

Conjecture 1. Let I, J, K, and L be four disjoint subsets of $[n]$, where one or two of I, J, and K may be empty. If ${\boldsymbol{X}}$ is a random vector with permutation distribution on $\{a_1, \ldots, a_n\}$, then, for any increasing function $\psi\colon \mathbb{R}^{|L|}\to \mathbb{R}$ and any suitably chosen ${\boldsymbol{x}}_I$, ${\boldsymbol{x}}_J$, and ${\boldsymbol{x}}_K$,

\[\mathbb{E} [\psi({\boldsymbol{X}}_L)\mid {\boldsymbol{X}}_I\ge {\boldsymbol{x}}_I, {\boldsymbol{X}}_J\le {\boldsymbol{x}}_J, {\boldsymbol{X}}_K= {\boldsymbol{x}}_K]\]

is decreasing in ${\boldsymbol{x}}_I$ , ${\boldsymbol{x}}_J$ , and ${\boldsymbol{x}}_K$ .

3.2. Knockout tournaments with a non-random draw

The next counterexample shows that ${\boldsymbol{S}}$ is not NRD, NLTD, or NRTD in a knockout tournament with a deterministic draw and without equal strength.

Example 3. Consider a knockout tournament with four players. Player 1 beats player 2 with probability $1/2$ , and loses to players 3 and 4 with probability 1. Player 2 beats players 3 and 4 with probability 1, and player 3 beats player 4 with probability $1/2$ . In the first round, players 1 and 2 are in one duel, and players 3 and 4 are in another duel. Then

\[{\boldsymbol{S}}=\begin{cases} (1, 0, 2, 0), & \hbox{with prob.}\ 1/4, \\(0, 2, 1, 0), & \hbox{with prob.}\ 1/4,\\(1, 0, 0, 2), & \hbox{with prob.}\ 1/4, \\(0, 2, 0, 1), & \hbox{with prob.}\ 1/4, \end{cases}\]

and hence

\begin{align*}\mathbb{P}(S_3=0 \mid S_1=1) & =\mathbb{P}(S_3=2\mid S_1=1)=\dfrac {1}{2},\\\mathbb{P}(S_3=0 \mid S_1=0) & = \mathbb{P}(S_3=1 \mid S_1=0) =\dfrac {1}{2}.\end{align*}

It is easy to see that

\begin{align*}\mathbb{E} [S_3\mid S_1=0] & =\dfrac {1}{2}<1=\mathbb{E} [S_3\mid S_1=1],\\\mathbb{E} [S_3\mid S_1\le 0] & =\dfrac {1}{2} <\dfrac {3}{4}=\mathbb{E} [S_3\mid S_1\le 1],\\\mathbb{E} [S_3\mid S_1\ge 0] & =\dfrac {3}{4} <1=\mathbb{E} [S_3\mid S_1\ge 1],\end{align*}

which implies that ${\boldsymbol{S}}$ is not NRD, NLTD, or NRTD.

Example 4 shows that ${\boldsymbol{S}}$ is not NRD or NLTD in a knockout tournament with a deterministic draw and with equal strength.

Table 1. Probability mass function of ${\boldsymbol{S}}$.

$(s_1, s_2, s_3, s_4)$: $(2,0,1,0)$, $(1,0,2,0)$, $(2,0,0,1)$, $(1,0,0,2)$, $(0,2,1,0)$, $(0,1,2,0)$, $(0,2,0,1)$, $(0,1,0,2)$; each outcome has probability $1/8$.

Example 4. Consider a knockout tournament with four players of equal strength. In the first round, player 1 plays against player 2, and player 3 against player 4. Then ${\boldsymbol{S}}$ has eight equally likely outcomes: the permutations of $(0, 0, 1, 2)$ in which exactly one of the first two coordinates and exactly one of the last two coordinates are positive; see Table 1. To see that ${\boldsymbol{S}}$ is not NRD or NLTD, note that

\begin{align*}\mathbb{P}(S_3=0\mid S_1=0) &=\dfrac {1}{2},\quad \mathbb{P}(S_3=1\mid S_1=0)=\mathbb{P}(S_3=2\mid S_1=0)=\dfrac {1}{4}, \\\mathbb{P}(S_3=0\mid S_1=1) &= \mathbb{P}(S_3=2 \mid S_1=1) =\dfrac {1}{2},\\\mathbb{P}(S_3=0\mid S_1=2) &= \mathbb{P}(S_3=1 \mid S_1=2) =\dfrac {1}{2}.\end{align*}

Then

\begin{align*}\mathbb{E} [S_3\mid S_1=0] & =\dfrac {3}{4}<1=\mathbb{E} [S_3\mid S_1=1],\\\mathbb{E} [S_3\mid S_1\le 0] & =\dfrac {3}{4} <\dfrac {5}{6}=\mathbb{E} [S_3\mid S_1\le 1],\end{align*}

which implies that ${\boldsymbol{S}}$ is not NRD or NLTD. However, in this example with four players, ${\boldsymbol{S}}$ is NRTD, as can be seen by observing that

\begin{align*}[(S_2, S_3, S_4)\mid S_1\ge 0] & \ge_\textrm{st} [(S_2, S_3, S_4)\mid S_1\ge 1] \ge_\textrm{st} [(S_2, S_3, S_4)\mid S_1\ge 2],\\ [(S_3, S_4)\mid S_1\ge 0, S_2\ge 0] & \ge_\textrm{st} [(S_3, S_4)\mid S_1\ge 1, S_2\ge 0],\\ [(S_2, S_4)\mid S_1\ge 0, S_3\ge 0] & \ge_\textrm{st} [(S_2, S_4)\mid S_1\ge 1, S_3\ge 0]\ge_\textrm{st} [(S_2, S_4)\mid S_1\ge 1, S_3\ge 1],\\ [(S_2, S_4)\mid S_1\ge 1, S_3\ge 0] & \ge_\textrm{st} [(S_2, S_4)\mid S_1\ge 2, S_3\ge 0]\ge_\textrm{st} [(S_2, S_4)\mid S_1\ge 2, S_3\ge 1].\end{align*}
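Both halves of Example 4 can be checked mechanically from Table 1. The sketch below (ours) verifies the failed NRD and NLTD comparisons exactly, and confirms the first displayed NRTD chain by enumerating all upper sets of the support of $(S_2, S_3, S_4)$, which characterizes $\ge_\textrm{st}$ on a finite set.

```python
from fractions import Fraction
from itertools import combinations

# The eight equally likely outcomes of S from Table 1.
support = [(2, 0, 1, 0), (1, 0, 2, 0), (2, 0, 0, 1), (1, 0, 0, 2),
           (0, 2, 1, 0), (0, 1, 2, 0), (0, 2, 0, 1), (0, 1, 0, 2)]

def e_s3_given(event):                    # E[S_3 | event(S_1)]
    sel = [s[2] for s in support if event(s[0])]
    return Fraction(sum(sel), len(sel))

assert e_s3_given(lambda s1: s1 == 0) == Fraction(3, 4) < 1 == e_s3_given(lambda s1: s1 == 1)
assert e_s3_given(lambda s1: s1 <= 0) == Fraction(3, 4) < Fraction(5, 6) == e_s3_given(lambda s1: s1 <= 1)

def st_geq(A, B):                         # empirical laws of two outcome lists
    pts = sorted(set(A) | set(B))
    ge = lambda a, b: all(x >= y for x, y in zip(a, b))
    for k in range(len(pts) + 1):
        for sub in combinations(pts, k):
            U = set(sub)
            if all(b in U for a in U for b in pts if ge(b, a)):
                if Fraction(sum(a in U for a in A), len(A)) < \
                   Fraction(sum(b in U for b in B), len(B)):
                    return False
    return True

tail = lambda t: [s[1:] for s in support if s[0] >= t]   # (S_2, S_3, S_4) given S_1 >= t
assert st_geq(tail(0), tail(1)) and st_geq(tail(1), tail(2))
```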

Malinovsky and Rinott [Reference Malinovsky and Rinott19] used Example 4 to show that ${\boldsymbol{S}}$ is not NA. However, their proof is not correct. They claimed that $\mathbb{E} [\,f_1(S_1, S_3)f_2(S_2, S_4)]=1/8$ , $\mathbb{E} [\,f_1(S_1, S_3)]=1/4$ , $\mathbb{E} [\,f_2(S_2, S_4)]=1/8$ , and thus

(14) \begin{equation}\textrm{Cov} (f_1(S_1, S_3), f_2(S_2, S_4))>0\end{equation}

for two increasing functions $f_1(x_1, x_3)$ and $f_2(x_2, x_4)$, where $f_1$ takes the value 0 everywhere apart from $f_1(0,1)=f_1(0,2)=1$, and $f_2$ takes the value 0 everywhere apart from $f_2(2, 0)=1$. Such functions $f_1$ and $f_2$ do not exist: monotonicity would force $f_1(k, 2)\ge f_1(k,1)\ge f_1(0,1)=1$ and $f_2(2, k)\ge f_2(2,0)=1$ for $k=1, 2$, contradicting the stated values. Thus $f_1$ and $f_2$ are not increasing, and (14) is not established. We will show in Theorem 2 that ${\boldsymbol{S}}$ is NA for a knockout tournament with a deterministic draw and with equal strength.

To establish the NA and NSMD properties of ${\boldsymbol{S}}$ , we need two useful lemmas.

Lemma 2. ([Reference Bäuerle4, Reference Hu and Pan12].) Let $\{X_\lambda, \lambda\in \Lambda\}$ be a family of random variables, where $\Lambda$ is a subset of $\mathbb{R}$ . Let $\{X_{i,\lambda}, \lambda\in\Lambda\}$ , $i\in [n]$ , be independent copies of $\{X_\lambda, \lambda\in\Lambda\}$ . For every function $\psi\colon \mathbb{R}^n\to\mathbb{R}$ , define

\begin{align*}g(\lambda_1, \lambda_2, \ldots, \lambda_n) & =\mathbb{E}[\psi (X_{1, \lambda_1}, X_{2, \lambda_2}, \ldots, X_{n,\lambda_n})],\end{align*}

where the expectation is assumed to exist. If $\psi$ is supermodular, and $X_\lambda$ is stochastically increasing in $\lambda$ , then g is a supermodular function defined on $\Lambda^n$ .

In the following lemma, when we consider the NSMD property, we always assume that the underlying probability space $(\Omega, \mathscr{F}, \mathbb{P})$ is atomless. Below, for two random vectors (variables) ${\boldsymbol{X}}$ and $\boldsymbol{\Theta}$, we say that ${\boldsymbol{X}}$ is stochastically increasing in $\boldsymbol{\Theta}$ if $[{\boldsymbol{X}} \mid \boldsymbol{\Theta} = \boldsymbol{\theta}] \leq_\textrm{st} [{\boldsymbol{X}} \mid \boldsymbol{\Theta} = \boldsymbol{\theta}']$ whenever $\boldsymbol{\theta} \leq \boldsymbol{\theta}'$.

Lemma 3. Let ${\boldsymbol{X}}^{(k)}= \bigl(X_1^{(k)}, \ldots, X_n^{(k)}\bigr)$, $k\in [m]$, be random vectors, and denote ${\boldsymbol{S}}^{(k)}= \bigl(S_1^{(k)}, \ldots, S^{(k)}_n\bigr)$ with $S^{(k)}_i=\sum^k_{\nu=1} X_i^{(\nu)}$ and $S^{(0)}_i=0$, $i\in [n]$. Assume that

(i) for all $k\in [m]$, $\bigl[{\boldsymbol{X}}^{(k)} \mid {\boldsymbol{S}}^{(k-1)}\bigr]$ is NA (respectively, NSMD);

(ii) for all $k\in [m]$ and $I\subset [n]$, $\bigl[{\boldsymbol{X}}_I^{(k)}\mid {\boldsymbol{S}}^{(k-1)}\bigr] \stackrel{d}{=} \bigl[{\boldsymbol{X}}_I^{(k)}\mid {\boldsymbol{S}}^{(k-1)}_I\bigr]$;

(iii) for all $k\in [m]$ and $I\subset [n]$, ${\boldsymbol{X}}_I^{(k)}$ is stochastically increasing in ${\boldsymbol{S}}^{(k-1)}_I$.

Then ${\boldsymbol{S}}^{(k)}$ is NA (respectively, NSMD) for $k\in [m]$ .

Proof. First, we prove the NA property of ${\boldsymbol{S}}^{(k)}$ by induction on $k\in [m]$. For $k=1$, ${\boldsymbol{S}}^{(1)}={\boldsymbol{X}}^{(1)}$ is NA by assumption (i). Assume that ${\boldsymbol{S}}^{(k)}$ is NA for some $k\in [m-1]$. Let $I_1$ and $I_2$ be two disjoint proper subsets of [n], and let $\psi_j\colon \mathbb{R}^{|I_j|}\to\mathbb{R}$ be an increasing function for $j=1, 2$. Then

\begin{align*}& \textrm{Cov} \bigl(\psi_1\bigl({\boldsymbol{S}}^{(k+1)}_{I_1}\bigr), \psi_2\bigl({\boldsymbol{S}}^{(k+1)}_{I_2}\bigr)\bigr) \\& \quad = \textrm{Cov} \bigl(\mathbb{E} \bigl[\psi_1\bigl({\boldsymbol{X}}_{I_1}^{(k+1)}+{\boldsymbol{S}}^{(k)}_{I_1}\bigr) \mid {\boldsymbol{S}}^{(k)}\bigr], \mathbb{E}\bigl[\psi_2\bigl({\boldsymbol{X}}_{I_2}^{(k+1)}+ {\boldsymbol{S}}^{(k)}_{I_2}\bigr) \mid {\boldsymbol{S}}^{(k)}\bigr]\bigr) \\&\quad \quad + \mathbb{E} \bigl[\textrm{Cov} \bigl(\psi_1\bigl({\boldsymbol{X}}_{I_1}^{(k+1)}+{\boldsymbol{S}}^{(k)}_{I_1}\bigr), \psi_2\bigl({\boldsymbol{X}}_{I_2}^{(k+1)}+{\boldsymbol{S}}^{(k)}_{I_2}\bigr)\mid {\boldsymbol{S}}^{(k)} \bigr)\bigr] \\& \quad \le \textrm{Cov} \bigl(\mathbb{E} \bigl[\psi_1\bigl({\boldsymbol{X}}_{I_1}^{(k+1)}+{\boldsymbol{S}}^{(k)}_{I_1}\bigr) \mid {\boldsymbol{S}}^{(k)}\bigr], \mathbb{E}\bigl[\psi_2\bigl({\boldsymbol{X}}_{I_2}^{(k+1)}+ {\boldsymbol{S}}^{(k)}_{I_2}\bigr) \mid {\boldsymbol{S}}^{(k)}\bigr]\bigr)\\& \quad = \textrm{Cov} \bigl(\varphi_1\big ({\boldsymbol{S}}^{(k)}\big ), \varphi_2\big ({\boldsymbol{S}}^{(k)}\big )\bigr),\end{align*}

where the inequality follows from assumption (i), and

\[\varphi_j\big ({\boldsymbol{S}}^{(k)}\big )=\mathbb{E}\bigl[\psi_j\bigl({\boldsymbol{X}}_{I_j}^{(k+1)}+{\boldsymbol{S}}^{(k)}_{I_j}\bigr)\mid {\boldsymbol{S}}^{(k)}\bigr], \quad j=1, 2.\]

By assumption (ii), it follows that $\varphi_j\big ({\boldsymbol{S}}^{(k)}\big )$ depends on ${\boldsymbol{S}}_{I_j}^{(k)}$ only, that is,

\[\varphi_j\big ({\boldsymbol{S}}^{(k)}\big )=\mathbb{E}\bigl[\psi_j\bigl({\boldsymbol{X}}_{I_j}^{(k+1)}+{\boldsymbol{S}}^{(k)}_{I_j}\bigr)\mid {\boldsymbol{S}}_{I_j}^{(k)}\bigr] \stackrel{\textrm{def}}{=} \varphi_j^\ast\big ({\boldsymbol{S}}_{I_j}^{(k)}\big ), \quad j=1, 2.\]

By assumption (iii), $\varphi_j^\ast\big ({\boldsymbol{s}}_{I_j}\big )$ is increasing in ${\boldsymbol{s}}_{I_j}$ . So we have

\[\textrm{Cov} \bigl(\psi_1\bigl({\boldsymbol{S}}^{(k+1)}_{I_1}\bigr), \psi_2\bigl({\boldsymbol{S}}^{(k+1)}_{I_2}\bigr)\bigr)=\textrm{Cov} \bigl(\varphi_1^\ast\big ({\boldsymbol{S}}_{I_1}^{(k)}\big ), \varphi_2^\ast\big ({\boldsymbol{S}}_{I_2}^{(k)}\big )\bigr)\le 0 ,\]

by the induction assumption that ${\boldsymbol{S}}^{(k)}$ is NA. This means that ${\boldsymbol{S}}^{(k+1)}$ is NA, and the NA property of ${\boldsymbol{S}}^{(k)}$, $k\in [m]$, follows by induction.

Next, we prove the NSMD property of ${\boldsymbol{S}}^{(k)}$ by induction on $k\in [m]$. For $k=1$, ${\boldsymbol{S}}^{(1)}={\boldsymbol{X}}^{(1)}$ is NSMD by assumption (i). Assume that ${\boldsymbol{S}}^{(k)}$ is NSMD for some $k\in [m-1]$. Let $\psi\colon \mathbb{R}^n\to\mathbb{R}$ be a supermodular function. Since the underlying probability space is atomless, by assumptions (i) and (ii), we have

\begin{align*}\mathbb{E} \bigl[\psi\bigl({\boldsymbol{S}}^{(k+1)}\bigr)\bigr] & = \mathbb{E}\bigl\{\mathbb{E} \bigl[\psi\bigl({\boldsymbol{X}}^{(k+1)}+{\boldsymbol{S}}^{(k)}\bigr) \mid {\boldsymbol{S}}^{(k)}\bigr] \bigr\} \\& \le \mathbb{E}\bigl\{\mathbb{E} \bigl[\psi\bigl({{\boldsymbol{Y}}}({\boldsymbol{S}}^{(k)} ) +{\boldsymbol{S}}^{(k)}\bigr) \mid {\boldsymbol{S}}^{(k)}\bigr] \bigr\}\\&= \mathbb{E} \bigl[\varphi\bigl({\boldsymbol{S}}^{(k)}\bigr) \bigr],\end{align*}

where $\varphi({\boldsymbol{s}})= \mathbb{E} \bigl[\psi\bigl({{\boldsymbol{Y}}}({\boldsymbol{s}}) +{\boldsymbol{s}}\bigr)\bigr]$ for ${\boldsymbol{s}}\in \mathbb{R}^n$ , and ${\boldsymbol{Y}}({\boldsymbol{s}})=(Y_1(s_1), \ldots, Y_n(s_n))$ is a vector of independent random variables, independent of all other random variables, such that

\[Y_i(x) \stackrel {d}{=} \bigl[X_i^{(k+1)}\mid S^{(k)}_i=x\bigr]\quad\text{for $i\in [n]$ and $x\in\mathbb{R}$.}\]

By assumption (iii) and Lemma 2, the function

\[\varphi({\boldsymbol{s}})= \mathbb{E} [\psi(Y_1(s_1) +s_1, \ldots, Y_n(s_n)+s_n)]\]

is supermodular in ${\boldsymbol{s}}\in\mathbb{R}^n$. By the induction assumption that ${\boldsymbol{S}}^{(k)}$ is NSMD, there exists a vector ${\boldsymbol{S}}^{(k)*}=\bigl(S^{(k)*}_1, \ldots, S^{(k)*}_n\bigr)$ of independent random variables such that $S^{(k)*}_i \stackrel {d}{=} S^{(k)}_i$ for $i\in [n]$ and

\[\mathbb{E} \bigl[\varphi\bigl({\boldsymbol{S}}^{(k)}\bigr) \bigr] \le \mathbb{E} \bigl[\varphi\bigl({\boldsymbol{S}}^{(k)*}\bigr) \bigr].\]

Define ${\boldsymbol{S}}^{(k+1)*} ={\boldsymbol{Y}}\bigl({\boldsymbol{S}}^{(k)*}\bigr)+ {\boldsymbol{S}}^{(k)*}$. Then the components of ${\boldsymbol{S}}^{(k+1)*}$ are independent, $S_i^{(k+1)*} \stackrel {d}{=} S_i^{(k+1)}$ for $i\in [n]$, and

\[\mathbb{E} \bigl[\psi\bigl({\boldsymbol{S}}^{(k+1)}\bigr)\bigr] \le \mathbb{E} \bigl[\varphi\bigl({\boldsymbol{S}}^{(k)*}\bigr) \bigr]= \mathbb{E} \bigl[\psi\bigl({{\boldsymbol{Y}}}\bigl({\boldsymbol{S}}^{(k)*} \bigr) +{\boldsymbol{S}}^{(k)*}\bigr)\bigr] =\mathbb{E} \bigl[\psi\bigl({\boldsymbol{S}}^{(k+1)*}\bigr)\bigr],\]

implying that ${\boldsymbol{S}}^{(k+1)}$ is NSMD. Therefore the desired result follows by induction.

Theorem 2. Consider a knockout tournament with $n=2^\ell$ players of equal strength, where $\ell\ge 2$ . If the schedule of matches is deterministic, then ${\boldsymbol{S}}$ is NA and hence NSMD.

Proof. It suffices to prove ${\boldsymbol{S}}$ is NA since NA implies NSMD. By a similar argument to that in the proof of Proposition 3 in [Reference Malinovsky and Rinott19], without loss of generality, assume that in the first round player $2i-1$ plays against player 2i for $i\in [n/2]$ . For $i\in [n]$ , denote

\[X_i^{(1)}=\begin{cases} 1, & \hbox{if player ${i}$ wins the first round}, \\0, & \hbox{if player ${i}$ loses the first round}. \end{cases}\]

Then the pairs $\big (X_{2i-1}^{(1)}, X^{(1)}_{2i}\big )$, $i\in [n/2]$, are independent and NA. By Property $\textrm{P}_7$ in [Reference Joag-dev and Proschan15], it follows that ${\boldsymbol{X}}^{(1)}=\big (X_1^{(1)}, \ldots, X^{(1)}_n\big )$ is NA. For $k\ge 2$, define

\[X_i^{(k)}=\begin{cases}1, & \hbox{if player ${i}$ wins the ${k}$th round}, \\0, & \hbox{otherwise}, \end{cases}\]

and $S^{(k)}_i=\sum^k_{j=1} X_i^{(j)}$ for $i\in [n]$. Note that if $X_i^{(k-1)}=0$ then $X^{(k)}_i=0$. Assumptions (i) and (iii) of Lemma 3 clearly hold. Given ${\boldsymbol{S}}^{(k-1)}$ and any $I\subset [n]$, only the players

\[\bigl\{i\in I\colon S^{(k-1)}_i=k-1\bigr\}\]

move to the kth round, and their scores are not affected by ${\boldsymbol{S}}^{(k-1)}_{[n]\backslash I}$ since the schedule of matches is deterministic. Thus assumption (ii) of Lemma 3 is satisfied. Therefore the NA property of ${\boldsymbol{S}}$ follows from Lemma 3.
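The covariance inequality in the definition of NA can be illustrated by simulation for the setting of Theorem 2; the sketch below (ours, reusing the hypothetical knockout_scores helper from Section 1.2 with its default deterministic bracket) estimates one such covariance, which should be non-positive up to Monte Carlo error.

```python
import numpy as np

n, reps = 8, 100_000
P = np.full((n, n), 0.5)                    # equal strength
rng = np.random.default_rng(1)
S = np.array([knockout_scores(range(n), P, rng) for _ in range(reps)])

# Disjoint sets I1 = {1, 3}, I2 = {2, 4} (1-based players) and increasing psi's.
psi1 = S[:, [0, 2]].max(axis=1)
psi2 = S[:, [1, 3]].sum(axis=1)
print(np.cov(psi1, psi2)[0, 1])             # expected to be <= 0 up to noise
```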

Motivated by Example 4, we have the next theorem concerning the NRTD property of ${\boldsymbol{S}}$ in a knockout tournament with a deterministic draw and players having equal strength.

Theorem 3. Consider a knockout tournament with $n=2^\ell$ players of equal strength, where $\ell\ge 2$ . If the schedule of matches is deterministic, then ${\boldsymbol{S}}$ is NRTD.

Proof. Since the schedule of matches is deterministic, without loss of generality, assume that in the first round, player $2k-1$ plays against player $2k$ for each $k\in [n/2]$; in the second round, the winner between players 1 and 2 plays against the winner between players 3 and 4, the winner between players 5 and 6 plays against the winner between players 7 and 8, and so on. In subsequent rounds, all matches are arranged in a similar way. To prove that ${\boldsymbol{S}}$ is NRTD, it suffices to prove that, for any non-empty set $I\varsubsetneq [n]$ and an increasing function $\psi\colon \mathbb{R}^{n-|I|}\to \mathbb{R}$,

(15) \begin{equation}\mathbb{E}\bigl[\psi\bigl({\boldsymbol{S}}_{[n]\backslash I}\bigr) \mid S_{i_0}\ge h-1, {\boldsymbol{S}}_{I\backslash\{i_0\}} \ge {\boldsymbol{s}}_{I\backslash\{i_0\}}\bigr] \ge \mathbb{E}\bigl[\psi\bigl({\boldsymbol{S}}_{[n]\backslash I}\bigr)\mid S_{i_0}\ge h, {\boldsymbol{S}}_{I\backslash\{i_0\}} \ge {\boldsymbol{s}}_{I\backslash\{i_0\}}\bigr]\end{equation}

for each $i_0\in I$ , where $h\in [\ell]$ and ${\boldsymbol{s}}_{I\backslash\{i_0\}}$ satisfy

\begin{equation*} \mathbb{P}\bigl(S_{i_0}\ge h-1, {\boldsymbol{S}}_{I\backslash\{i_0\}} \ge {\boldsymbol{s}}_{I\backslash\{i_0\}}\bigr)\ge \mathbb{P}\bigl(S_{i_0}\ge h, {\boldsymbol{S}}_{I\backslash\{i_0\}} \ge {\boldsymbol{s}}_{I\backslash\{i_0\}}\bigr)>0.\end{equation*}

Without loss of generality, assume $i_0=1$. From the strict positivity of the second probability, we know that $s_i\le h-1$ for all $i\in [2^h]\cap (I\backslash \{1\})$: if $S_1\ge h$, then every other player in $[2^h]$ is eliminated within the first h rounds and so scores at most $h-1$.

Define $K=[2^h]\cap ([n]\backslash I)$ and $J=([n]\backslash [2^h])\cap ([n]\backslash I)= [n]\backslash (I\cup K)$ , and denote

\begin{align*}E_0 & =\bigl\{S_1= h-1, {\boldsymbol{S}}_{I\backslash\{1\}} \ge {\boldsymbol{s}}_{I\backslash\{1\}}\bigr\},\\E_1 & =\bigl\{S_1\ge h-1, {\boldsymbol{S}}_{I\backslash\{1\}} \ge {\boldsymbol{s}}_{I\backslash\{1\}}\bigr\},\\E_2 & =\bigl\{S_1\ge h, {\boldsymbol{S}}_{I\backslash\{1\}} \ge {\boldsymbol{s}}_{I\backslash\{1\}}\bigr\}.\end{align*}

Obviously, $E_1=E_0\cup E_2$ and $E_0 \cap E_2=\emptyset$. From the specified schedule of matches, the outcomes of the first h rounds for players in $[2^h]$ do not change the distribution of ${\boldsymbol{S}}_{[n]\backslash [2^h]}$: only the single winner among the first $2^h$ players goes on to play against players in $[n]\backslash [2^h]$, and, by equal strength, his identity does not affect their scores. Then

(16) \begin{equation}[{\boldsymbol{S}}_J \mid E_0] \stackrel {d}{=} [{\boldsymbol{S}}_J \mid E_2].\end{equation}

Next, we prove that

(17) \begin{equation}[{\boldsymbol{S}}_K \mid E_2, {\boldsymbol{S}}_J={\boldsymbol{s}}_J] \le_\textrm{st} [{\boldsymbol{S}}_K \mid E_0, {\boldsymbol{S}}_J={\boldsymbol{s}}_J]\end{equation}

for all possible choices of ${\boldsymbol{s}}_J$. To simplify notation, define

\begin{align*}{\boldsymbol{Y}}_K & =[{\boldsymbol{S}}_K \mid E_0, {\boldsymbol{S}}_J={\boldsymbol{s}}_J], \quad{\boldsymbol{Z}}_K =[{\boldsymbol{S}}_K \mid E_2, {\boldsymbol{S}}_J={\boldsymbol{s}}_J].\end{align*}

Since the event $\{S_1\ge h\}$ means that player 1 wins all of the first h rounds, every other player in $[2^h]$ is eliminated within these rounds, and it follows that $Z_k \le h-1$ for each $k\in K$. To prove (17), let $\phi\colon \mathbb{R}^{|K|}\to\mathbb{R}$ be an increasing function. First, note that

(18) \begin{equation}\mathbb{P}({\boldsymbol{Z}}_K ={\boldsymbol{s}}_K) = \mathbb{P} ({\boldsymbol{Y}}_K={\boldsymbol{s}}_K)\end{equation}

whenever $s_k<h-1$ for all $k\in K$, because players $k\in K$ with such scores were knocked out in the first $h-1$ rounds. In view of (18), we have

\begin{align*}\mathbb{E} [\phi({\boldsymbol{Y}}_K)] & =\mathbb{E} [\phi ({\boldsymbol{Y}}_K)\cdot 1_{\{Y_k < h-1, k\in K\}} ] + \sum_{k\in K} \mathbb{E} [\phi ({\boldsymbol{Y}}_K)\cdot 1_{\{Y_j < h-1, j\in K\backslash\{k\}\}} \cdot 1_{\{Y_k\ge h-1\}}]\\ & \ge \mathbb{E} [\phi ({\boldsymbol{Z}}_K)\cdot 1_{\{Z_k < h-1, k\in K\}} ] + \sum_{k\in K} \mathbb{E} [\phi (h-1, {\boldsymbol{Y}}_{K\backslash \{k\}}) \cdot 1_{\{Y_j < h-1, j\in K\backslash\{k\}\}} ]\\& = \mathbb{E} [\phi ({\boldsymbol{Z}}_K)\cdot 1_{\{Z_k < h-1, k\in K\}} ] + \sum_{k\in K} \mathbb{E} [\phi (h-1, {\boldsymbol{Z}}_{K\backslash \{k\}}) \cdot 1_{\{Z_j < h-1, j\in K\backslash\{k\}\}} ]\\& = \mathbb{E} [\phi ({\boldsymbol{Z}}_K)\cdot 1_{\{Z_k < h-1, k\in K\}} ] + \sum_{k\in K} \mathbb{E} [\phi ({\boldsymbol{Z}}_K) \cdot 1_{\{Z_j < h-1, j\in K\backslash\{k\}\}} \cdot 1_{\{Z_k= h-1\}}]\\& = \mathbb{E} [\phi({\boldsymbol{Z}}_K)],\end{align*}

which implies ${\boldsymbol{Z}}_K\le_\textrm{st} {\boldsymbol{Y}}_K$ , that is, (17).

Next, we turn to proving (15). Note that

\begin{align*}\mathbb{E}\bigl[\psi\bigl({\boldsymbol{S}}_{[n]\backslash I}\bigr) \mid E_1\bigr]& =\dfrac { \mathbb{E}[\psi({\boldsymbol{S}}_J, {\boldsymbol{S}}_K)\cdot 1_{E_1}] }{\mathbb{P}(E_1)}\stackrel{\textrm{def}}{=} \dfrac {\eta_3 +\eta_4}{\eta_1+\eta_2},\end{align*}

where $\eta_1= \mathbb{P}(E_0)$ , $\eta_2 =\mathbb{P}(E_2)$ , and

\begin{align*}\eta_3 &=\mathbb{E}[\psi({\boldsymbol{S}}_J,{\boldsymbol{S}}_K)\cdot 1_{E_0}],\quad \eta_4 =\mathbb{E}[\psi({\boldsymbol{S}}_J,{\boldsymbol{S}}_K)\cdot 1_{E_2}].\end{align*}

On the other hand, given $S_1\ge h-1$ and ${\boldsymbol{S}}_{I\backslash\{1\}} \ge {\boldsymbol{s}}_{I\backslash\{1\}}$, player 1 plays a match against another player from $[2^h]$ in the hth round, and he wins or loses this match with probability $1/2$ each; hence the events $\{S_1=h-1\}$ and $\{S_1\ge h\}$ each occur with conditional probability $1/2$. So we have

\[\eta_1=\mathbb{P}(S_1=h-1 \mid E_1) \cdot \mathbb{P}(E_1) =\mathbb{P}(S_1\ge h \mid E_1) \cdot \mathbb{P}(E_1) = \mathbb{P}(E_2)= \eta_2.\]

Also, by (16) and (17), we have

\begin{align*}\eta_3 &=\sum_{{\boldsymbol{s}}_J}\mathbb{E}\bigl[\psi({\boldsymbol{S}}_J, {\boldsymbol{S}}_K)\cdot 1_{\{{\boldsymbol{S}}_J={\boldsymbol{s}}_J, E_0\}}\bigr]\\&= \eta_1 \sum_{{\boldsymbol{s}}_J} \mathbb{E}[\psi({\boldsymbol{s}}_J, {\boldsymbol{S}}_K) \mid {\boldsymbol{S}}_J={\boldsymbol{s}}_J, E_0 ]\cdot \mathbb{P}({\boldsymbol{S}}_J={\boldsymbol{s}}_J \mid E_0 )\\& \ge \eta_2 \sum_{{\boldsymbol{s}}_J} \mathbb{E}[\psi({\boldsymbol{s}}_J, {\boldsymbol{S}}_K) \mid {\boldsymbol{S}}_J={\boldsymbol{s}}_J, E_2 ]\cdot \mathbb{P}({\boldsymbol{S}}_J={\boldsymbol{s}}_J \mid E_2 )\\&= \sum_{{\boldsymbol{s}}_J} \mathbb{E}\bigl[\psi({\boldsymbol{S}}_J, {\boldsymbol{S}}_K)\cdot 1_{\{{\boldsymbol{S}}_J={\boldsymbol{s}}_J, E_2\}}\bigr] \\&= \eta_4.\end{align*}

Therefore

\[\mathbb{E}\bigl[\psi\bigl({\boldsymbol{S}}_{[n]\backslash I}\bigr) \mid E_1\bigr] =\dfrac {\eta_3+\eta_4}{\eta_1+\eta_2}\ge \dfrac {\eta_4}{\eta_2} = \mathbb{E}\bigl[\psi\bigl({\boldsymbol{S}}_{[n]\backslash I}\bigr) \mid E_2\bigr].\]

This proves (15).
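Inequality (15) can likewise be illustrated numerically; the following sketch (ours, again assuming the hypothetical knockout_scores helper with its deterministic bracket) compares the two conditional expectations for $I=\{1\}$ and $h=2$, with players indexed from 0 in the code.

```python
import numpy as np

n, reps = 8, 100_000
P = np.full((n, n), 0.5)
rng = np.random.default_rng(2)
S = np.array([knockout_scores(range(n), P, rng) for _ in range(reps)])

psi = S[:, 1:].max(axis=1)                  # an increasing function of S_2, ..., S_n
m1 = psi[S[:, 0] >= 1].mean()               # conditioning on S_1 >= h - 1
m2 = psi[S[:, 0] >= 2].mean()               # conditioning on S_1 >= h
print(m1, m2)                               # expect m1 >= m2 up to noise
```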

Remark 1. We give an intuitive interpretation of the NRTD result in Theorem 3, which can be regarded as a less rigorous proof. Observe that the event $S_k\ge h$ for some $k\in [n]$ means that player k won the first h rounds, and no further deterministic information about his score in the remaining $\ell-h$ rounds can be inferred. In the proof of Theorem 3, define

\begin{align*}{\boldsymbol{U}}_{[n]\backslash I} & =\bigl[{\boldsymbol{S}}_{[n]\backslash I}\mid E_1\bigr]=\bigl[{\boldsymbol{S}}_{[n]\backslash I}\mid E_0\cup E_2\bigr],\quad {\boldsymbol{V}}_{[n]\backslash I} = \bigl[{\boldsymbol{S}}_{[n]\backslash I}\mid E_2\bigr].\end{align*}

We compare $E_1$ and $E_2$. The difference is that, under $E_2$, player $i_0$ won the first h rounds, while, under $E_1$, he is only known to have won the first $h-1$ rounds. Intuitively, ${\boldsymbol{S}}_{[n]\backslash I}$ given $E_0$ tends to take larger values than given $E_2$, since under $E_0$ player $i_0$ loses in the hth round and one more win is left for the remaining players. As ${\boldsymbol{U}}_{[n]\backslash I}$ is a mixture of the conditional distributions given $E_0$ and given $E_2$, it is stochastically larger than ${\boldsymbol{V}}_{[n]\backslash I}$.

Acknowledgements

The authors are grateful to the Associate Editor and four anonymous referees for their comprehensive reviews of an earlier version of this paper.

Funding information

Z. Zou is supported by the National Natural Science Foundation of China (no. 12401625), the China Postdoctoral Science Foundation (no. 2024M753074), the Postdoctoral Fellowship Program of CPSF (GZC20232556), and the Fundamental Research Funds for the Central Universities (no. WK2040000108). T. Hu would like to acknowledge financial support from the National Natural Science Foundation of China (nos. 72332007, 12371476).

Competing interests

There were no competing interests to declare during the preparation or publication of this article.

References

Adler, I., Cao, Y., Karp, R., Peköz, E. A. and Ross, S. M. (2017). Random knockout tournaments. Operat. Res. 65, 1589–1596.
Alam, K. and Saxena, K. M. L. (1981). Positive dependence in multivariate distributions. Commun. Statist. Theory Meth. 10, 1183–1196.
Barlow, R. E. and Proschan, F. (1981). Statistical Theory of Reliability and Life Testing. To Begin With, Silver Spring, MD (first printed in 1975).
Bäuerle, N. (1997). Monotonicity results for MR/GI/1 queues. J. Appl. Prob. 34, 514–524.
Block, H. W., Savits, T. H. and Shaked, M. (1985). A concept of negative dependence through stochastic ordering. Statist. Prob. Lett. 3, 81–86.
Bruss, F. T. and Ferguson, T. S. (2018). Testing equality of players in a round-robin tournament. Math. Sci. 43, 125–136.
Chen, Y., Embrechts, P. and Wang, R. (2025). An unexpected stochastic dominance: Pareto distributions, dependence, and diversification. Operat. Res. 73, 1336–1344.
Cheung, K. C. and Lo, A. (2014). Characterizing mutual exclusivity as the strongest negative multivariate dependence structure. Insurance Math. Econom. 55, 180–190.
Christofides, T. C. and Vaggelatou, E. (2004). A connection between supermodular ordering and positive/negative association. J. Multivariate Anal. 88, 138–151.
Dubhashi, D. and Ranjan, D. (1998). Balls and bins: A study in negative dependence. Random Structures Algorithms 13, 99–124.
Hu, T. (2000). Negatively superadditive dependence of random variables with applications. Chinese J. Appl. Prob. Statist. 16, 133–144.
Hu, T. and Pan, X. (1999). Preservation of multivariate dependence under multivariate claim models. Insurance Math. Econom. 25, 171–179.
Hu, T. and Xie, C. (2006). Negative dependence in the balls and bins experiment with applications to order statistics. J. Multivariate Anal. 97, 1342–1354.
Hu, T. and Yang, J. (2004). Further developments on sufficient conditions for negative dependence of random variables. Statist. Prob. Lett. 66, 369–381.
Joag-dev, K. and Proschan, F. (1983). Negative association of random variables, with applications. Ann. Statist. 11, 286–295.
Karlin, S. and Rinott, Y. (1980). Classes of orderings of measures and related correlation inequalities, II: Multivariate reverse rule distributions. J. Multivariate Anal. 10, 499–516.
Lauzier, J.-G., Lin, L. and Wang, R. (2023). Pairwise counter-monotonicity. Insurance Math. Econom. 111, 279–287.
Malinovsky, Y. and Moon, J. W. (2022). On the negative dependence inequalities and maximal score in round-robin tournaments. Statist. Prob. Lett. 185, 109432.
Malinovsky, Y. and Rinott, Y. (2023). On tournaments and negative dependence. J. Appl. Prob. 60, 945–954.
Moon, J. W. (2013). Topics on Tournaments. Available at https://www.gutenberg.org/ebooks/42833.
Puccetti, G. and Wang, R. (2015). Extremal dependence concepts. Statist. Sci. 30, 485–517.
Ross, S. M. (2022). Team's seasonal win probabilities. Prob. Eng. Inf. Sci. 36, 988–998.
Shaked, M. and Shanthikumar, J. G. (2007). Stochastic Orders. Springer, New York.
Wang, B. and Wang, R. (2016). Joint mixability. Math. Operat. Res. 41, 808–826.