1. Introduction
An r-out-of-n system functions if at least r of its n components function. This includes parallel, fail-safe and series systems, corresponding to r = 1, $r=n-1$ and r = n, respectively. We denote the lifetimes of the components by $X_1, \ldots, X_n$, and the corresponding order statistics by $X_{1:n}\le \cdots\le X_{n:n}$. The lifetime of an r-out-of-n system is then given by $X_{n-r+1:n}$, and so the theory of order statistics has been used extensively to study the properties of $(n-r+1)$-out-of-n systems. For detailed information on order statistics and their applications, interested readers may refer to [Reference Arnold, Balakrishnan and Nagaraja1, Reference Balakrishnan and Rao3, Reference Balakrishnan and Rao4].
The Weibull distribution has been used in a wide variety of areas, ranging from engineering to finance. Numerous works have explored its properties and its many uses, highlighting the broad applicability of the Weibull distribution across different fields; see, for example, [Reference Lim, McDowell and Collop11, Reference Mudholkar, Srivastava and Kollia15].
A flexible family of statistical models is frequently required for data analysis to achieve flexibility while modeling real-life data. Several techniques have been devised to enhance the malleability of a given statistical distribution. One approach is to leverage already well-studied classical distributions, such as the gamma, Weibull and log-normal. Alternatively, one can increase the flexibility of a distribution by including an additional shape parameter; for instance, the Weibull distribution is obtained by taking powers of exponentially distributed random variables. Another popular strategy for achieving this objective, proposed by [Reference Marshall and Olkin12], is to add an extra parameter to any distribution function, resulting in a new family of distributions. To be specific, let G(x) and $\bar{G}(x) = 1-G(x)$ be the distribution and survival functions of a baseline distribution, respectively. We assume that the distributions have nonnegative support. Then, it is easy to verify that
\begin{equation}
F(x;\alpha)=\frac{G(x)}{1-\bar{\alpha}\,\bar{G}(x)},\qquad x,~\alpha\in(0,\infty),\ \bar{\alpha}=1-\alpha, \tag{1.1}
\end{equation}
and
\begin{equation}
F(x;\alpha)=\frac{\alpha\,G(x)}{1-\bar{\alpha}\,G(x)},\qquad x,~\alpha\in(0,\infty),\ \bar{\alpha}=1-\alpha, \tag{1.2}
\end{equation}
are both valid cumulative distribution functions (CDFs). Here, the newly added parameter α is referred to as the tilt parameter. When G(x) has probability density and hazard rate functions g(x) and $r_G(x)$, respectively, the hazard rate function of $F(x;\alpha)$ in (1.1) is seen to be
\begin{equation}
r_{F}(x;\alpha)=\frac{1}{1-\bar{\alpha}\,\bar{G}(x)}\,r_{G}(x),\qquad x,~\alpha\in(0,\infty),\ \bar{\alpha}=1-\alpha. \tag{1.3}
\end{equation}
Thus, if $r_{G}(x)$ is decreasing (increasing) in x, then for $0 \lt \alpha\le 1\,(\alpha\ge 1)$, $r_{F}(x;\alpha)$ is also decreasing (increasing) in x. This method has been used by different authors to introduce new extended families of distributions; see, for example, [Reference Mudholkar and Srivastava16].
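To make the construction concrete, the following minimal sketch (in Python, assuming NumPy and SciPy are available; the helper names tilted_cdf and tilted_hazard are ours) evaluates the tilt transform in (1.1) and the resulting hazard rate in (1.3) for a Weibull baseline.

```python
import numpy as np
from scipy.stats import weibull_min

def tilted_cdf(G, x, alpha):
    """Marshall-Olkin tilt of a baseline CDF G, as in (1.1)."""
    Gx = G(x)
    return Gx / (1.0 - (1.0 - alpha) * (1.0 - Gx))

def tilted_hazard(G, g, x, alpha):
    """Hazard rate of the tilted distribution, as in (1.3): r_F = r_G / (1 - alpha_bar * Gbar)."""
    Gx, gx = G(x), g(x)
    r_G = gx / (1.0 - Gx)
    return r_G / (1.0 - (1.0 - alpha) * (1.0 - Gx))

# Weibull baseline with shape 2; its hazard is increasing, and with 0 < alpha <= 1
# the tilted hazard should remain increasing, as noted above.
G = lambda x: weibull_min.cdf(x, c=2.0)
g = lambda x: weibull_min.pdf(x, c=2.0)
x = np.linspace(0.1, 3.0, 5)
print(tilted_cdf(G, x, alpha=0.5))
print(tilted_hazard(G, g, x, alpha=0.5))
```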
Stochastic comparisons of order statistics have been studied rather extensively, especially comparisons of various characteristics of the lifetimes of systems with Weibull components under different stochastic orderings. For example, one may see [Reference Fang, Ling and Balakrishnan6–Reference Khaledi and Kochar9, Reference Torrado20–Reference Zhao, Zhang and Qiao22], and the references therein, for stochastic comparisons of series and parallel systems with heterogeneous components having various lifetime distributions. The majority of existing research on the comparison of series and parallel systems has considered only the case of independent components. However, the operating environment of such technical systems is often subject to a range of factors, such as operating conditions, environmental conditions and the stress factors on the components. For this reason, it is prudent to take into account the dependence among the lifetimes of components. There are various ways to model this dependence, with the theory of copulas being a popular tool; for example, [Reference Nelsen17] provides a comprehensive account of copulas. Archimedean copulas form a class of multivariate dependence models used to describe the dependence between random variables. They are frequently used in financial applications, such as insurance, risk modeling and portfolio optimization. Many researchers have given consideration to the Archimedean copula due to its flexibility, as it includes the renowned Clayton, Ali–Mikhail–Haq and Gumbel–Hougaard copulas. Moreover, it also incorporates the independence copula as a special case. As such, comparison results established under an Archimedean copula for the joint distribution of components’ lifetimes in a system are general and naturally include the corresponding results for the case of independent components.
In this article, we consider the following family of distributions, known as the extended Weibull family of distributions, obtained by taking $G(x)=1-e^{-(x\lambda)^{k}},~x,~\lambda,~k \gt 0,$ as the baseline distribution in (1.1). The distribution function of the extended Weibull family is then given by
\begin{equation}
F_{X}(x)=\frac{1-e^{-(x\lambda)^{k}}}{1-\bar{\alpha}e^{-(x\lambda)^k}},\qquad x,~\lambda,~k,~\alpha \gt 0. \tag{1.4}
\end{equation}
We denote this by $X\sim EW(\alpha,\lambda,k),$ where α, λ and k are known as the tilt, scale and shape parameters, respectively. In (1.4), if we take α = 1 or k = 1, then the extended Weibull family of distributions reduces to the Weibull family of distributions or the extended exponential family of distributions (see [Reference Barmalzan, Ayat, Balakrishnan and Roozegar5]), respectively. Similarly, if we take both α = 1 and k = 1, the extended Weibull family of distributions reduces to the exponential family of distributions. Now, let us consider two sets of dependent variables $\{X_{1}, \ldots, X_{n}\}$ and $\{Y_{1}, \ldots, Y_{n}\},$ where, for $i=1,\ldots,n,$ $X_{i}\sim EW(\alpha_{i},\lambda_{i},k_{i})$ and $Y_{i}\sim EW(\beta_{i},\mu_{i},l_{i})$ are coupled by Archimedean (survival) copulas with possibly different generators. We then establish different ordering results between two series and parallel systems whose components follow extended Weibull distributions. The results obtained are based on the usual stochastic, star, Lorenz and dispersive orders.
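As a quick illustration (a minimal Python sketch; the helper name ew_cdf is ours), the CDF in (1.4) can be evaluated directly, and the reductions mentioned above can be checked numerically: α = 1 recovers the Weibull CDF, and k = 1 gives the extended exponential CDF.

```python
import numpy as np

def ew_cdf(x, alpha, lam, k):
    """CDF of the extended Weibull distribution EW(alpha, lambda, k) in (1.4)."""
    z = np.exp(-(x * lam) ** k)
    return (1.0 - z) / (1.0 - (1.0 - alpha) * z)

x = np.linspace(0.1, 3.0, 4)
# alpha = 1: reduces to the Weibull CDF 1 - exp(-(lambda*x)^k)
print(np.allclose(ew_cdf(x, 1.0, 2.0, 1.5), 1.0 - np.exp(-(2.0 * x) ** 1.5)))
# k = 1: reduces to the extended exponential CDF with tilt parameter alpha
print(ew_cdf(x, 0.5, 2.0, 1.0))
```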
The rest of this paper is organized as follows. In Section 2, we recall some basic stochastic orders and some important lemmas. The main results are presented in Section 3. The ordering results between two extreme order statistics are established in Section 3.1 for the case when the number of variables in the two sets of observations is the same and the dependent extended Weibull distributions have Archimedean (survival) copulas. Here, the ordering results are based on the usual stochastic, star, Lorenz, hazard rate, reversed hazard rate and dispersive orders. Finally, Section 4 presents a brief summary of the work.
Here, we focus on random variables which are defined on $(0,\infty)$, representing lifetimes. The terms “increasing” and “decreasing” are used in the nonstrict sense. Also, “$\overset{sign}{=}$” is used to denote that both sides of an equality have the same sign.
2. Preliminaries
In this section, we review some important definitions and well-known concepts of stochastic orders and majorization which are most pertinent to the ensuing discussions. Let $\boldsymbol{c} = \left(c_{1},\ldots,c_{n}\right)$ and $\boldsymbol{d} = \left(d_{1},\ldots,d_{n}\right)$ be two n-dimensional vectors such that $\boldsymbol{c},\boldsymbol{d}\in\mathbb{A},$ where $\mathbb{A} \subset \mathbb{R}^{n}$ and $\mathbb{R}^{n}$ is the n-dimensional Euclidean space. Also, denote the ordered elements of the vectors c and d by $c_{1:n}\leq \cdots \leq c_{n:n}$ and $d_{1:n}\leq\cdots \leq d_{n:n},$ respectively.
Definition 2.1. A vector c is said to be
- • majorized by another vector d (denoted by $\boldsymbol{c}\preceq^{m} \boldsymbol{d}$) if, for each $l=1,\ldots,n-1$, we have $\sum_{i=1}^{l}c_{i:n}\geq \sum_{i=1}^{l}d_{i:n}$ and $\sum_{i=1}^{n}c_{i:n}=\sum_{i=1}^{n}d_{i:n};$
- • weakly submajorized by another vector d (denoted by $\boldsymbol{c}\preceq_{w} \boldsymbol{d}$) if, for each $l=1,\ldots,n$, we have $\sum_{i=l}^{n}c_{i:n}\leq \sum_{i=l}^{n}d_{i:n};$
- • weakly supermajorized by another vector d (denoted by $\boldsymbol{c}\preceq^{w} \boldsymbol{d}$) if, for each $l=1,\ldots,n$, we have $\sum_{i=1}^{l}c_{i:n}\geq \sum_{i=1}^{l}d_{i:n}.$
Note that $\boldsymbol{c}\preceq^{m} \boldsymbol{d}$ implies both $\boldsymbol{c}\preceq_{w} \boldsymbol{d}$ and $\boldsymbol{c}\preceq^{w} \boldsymbol{d},$ but the converse is not always true. For an introduction to majorization orders and their applications, one may refer to [Reference Marshall, Olkin and Arnold13].
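These relations are easy to check numerically. The following minimal sketch (in Python; the function names are ours) verifies all three orders of Definition 2.1 for two example vectors.

```python
import numpy as np

def majorizes(d, c):
    """True if c is majorized by d, using increasing-order partial sums as in Definition 2.1."""
    c, d = np.sort(c), np.sort(d)
    partial_ok = all(c[:l].sum() >= d[:l].sum() for l in range(1, len(c)))
    return partial_ok and np.isclose(c.sum(), d.sum())

def weakly_submajorizes(d, c):
    """True if c is weakly submajorized by d: tail sums of c never exceed those of d."""
    c, d = np.sort(c), np.sort(d)
    return all(c[l:].sum() <= d[l:].sum() for l in range(len(c)))

def weakly_supermajorizes(d, c):
    """True if c is weakly supermajorized by d: head sums of c dominate those of d."""
    c, d = np.sort(c), np.sort(d)
    return all(c[:l].sum() >= d[:l].sum() for l in range(1, len(c) + 1))

c, d = [2.0, 2.0, 2.0], [1.0, 2.0, 3.0]   # equal totals, d more spread out
print(majorizes(d, c), weakly_submajorizes(d, c), weakly_supermajorizes(d, c))  # True True True
```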
Throughout this paper, we are concerned only with nonnegative random variables. Now, we discuss some stochastic orderings. For this purpose, let Y and Z be two nonnegative random variables with probability density functions fY and fZ, CDFs FY and FZ, survival functions $\bar F_{Y}=1-F_{Y}$ and $\bar F_{Z}=1-F_{Z},$ hazard rate functions $r_{Y}=f_{Y}/\bar F_{Y}$ and $r_{Z}=f_{Z}/\bar F_{Z},$ and reversed hazard rate functions $\tilde r_{Y}=f_{Y}/F_{Y}$ and $\tilde r_{Z}=f_{Z}/F_{Z},$ respectively.
Definition 2.2. A random variable Y is said to be smaller than Z in the
- • hazard rate order (denoted by $Y\leq_{hr}Z$) if $r_{Y}(x)\geq r_{Z}(x)$, for all $x;$
- • reversed hazard rate order (denoted by $Y\leq_{rh}Z$) if $\tilde r_{Y}(x)\leq \tilde r_{Z}(x)$, for all $x;$
- • usual stochastic order (denoted by $Y\leq_{st}Z$) if $\bar F_{Y}(x)\leq\bar F_{Z}(x)$, for all $x;$
- • dispersive order (denoted by $Y\le_{disp}Z$) if
\begin{equation*}
F^{-1}_{Y}(\beta)-F^{-1}_{Y}(\alpha)\le F^{-1}_{Z}(\beta)-F^{-1}_{Z}(\alpha)\ \text{whenever }\ 0 \lt \alpha\leq\beta \lt 1,
\end{equation*}
where $F^{-1}_{Y}(\cdot)$ and $F^{-1}_{Z}(\cdot)$ are the right-continuous inverses of $F_{Y}(\cdot)$ and $F_{Z}(\cdot),$ respectively;
- • star order (denoted by $Y\leq_{*}Z$) if $F^{-1}_{Z}F_{Y}(x)$ is star-shaped in $x,$ that is, ${F^{-1}_{Z}F_{Y}(x)}/{x}$ is increasing in $x\geq 0;$
- • Lorenz order (denoted by $Y\leq_{Lorenz}Z$) if
\begin{equation*}\frac{1}{E(Y)}\int_{0}^{F^{-1}_{Y}(u)}x\, dF_{Y}(x)\geq\frac{1}{E(Z)}\int_{0}^{F^{-1}_{Z}(u)}x\, dF_{Z}(x),\quad\text{for all } u\in (0,1].\end{equation*}
It is known that the star ordering implies the Lorenz ordering. One may refer to [Reference Shaked and Shanthikumar19] for an exhaustive discussion on stochastic orderings. Next, we introduce Schur-convex and Schur-concave functions.
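For intuition, the usual stochastic and dispersive orders can be probed numerically on a grid; the sketch below (Python, with SciPy assumed available) compares two Weibull distributions through their survival and quantile functions. This is only an illustration, not a proof device.

```python
import numpy as np
from scipy.stats import weibull_min

# Y ~ Weibull(shape 1.5, scale 1), Z ~ Weibull(shape 1.5, scale 2): same shape, larger scale.
Y = weibull_min(c=1.5, scale=1.0)
Z = weibull_min(c=1.5, scale=2.0)

x = np.linspace(0.01, 10.0, 500)
usual_st = np.all(Y.sf(x) <= Z.sf(x))            # Y <=_st Z: survival functions ordered

u = np.linspace(0.01, 0.99, 200)
dispersive = np.all(np.diff(Y.ppf(u)) <= np.diff(Z.ppf(u)) + 1e-12)  # Y <=_disp Z: quantile spacings ordered

print(usual_st, dispersive)  # both True for this scale family
```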
Lemma 2.1. (Theorem 3.A.4 of [Reference Marshall, Olkin and Arnold13]).
For an open interval $I\subset \mathbb{R}$, a continuously differentiable function $f:I^n\rightarrow \mathbb{R}$ is Schur-convex if and only if it is symmetric on $I^n$ and $(x_i-x_j)\big( \frac{\partial f(x)}{\partial x_i}-\frac{\partial f(x)}{\partial x_j}\big)\geq0$ for all i ≠ j and $x\in I^n$.
Now, we briefly describe the concept of Archimedean copulas. Let F and $\bar F$ be the joint distribution function and joint survival function of a random vector $\boldsymbol{X}=(X_1,\ldots,X_n)$. If there exist functions $C(\boldsymbol{v}):[0,1]^n\rightarrow [0,1]$ and $\hat {C}(\boldsymbol{v}):[0,1]^n\rightarrow [0,1]$ such that, for all $x_i,~i\in \mathcal I_n,$ where $\mathcal I_n$ is the index set,
\begin{equation*} F(x_1,\ldots,x_n)=C(F_1(x_1),\ldots,F_n(x_n))\end{equation*}
and
\begin{equation*}\bar{F}(x_1,\ldots,x_n)=\hat{C}(\bar{F_1}(x_1),\ldots,\bar{F_n}(x_n))\end{equation*}
hold, then $C(\boldsymbol{v})$ and $\hat{C}(\boldsymbol{v})$ are said to be the copula and survival copula of X, respectively. Here, $F_1,\ldots,F_n$ and $\bar{F_1},\ldots,\bar{F_n}$ are the univariate marginal distribution functions and survival functions of the random variables $X_1,\ldots,X_n$, respectively.
Now, suppose $\psi:[0,\infty)\rightarrow[0,1]$ is a non-increasing and continuous function with $\psi(0)=1$ and $\psi(\infty)=0,$ and let $\phi={\psi}^{-1}$, defined by $\psi^{-1}(v)=\sup\{x\geq 0:\psi(x) \gt v\},$ be its right-continuous inverse. Further, let ψ satisfy the conditions (1) $(-1)^i{\psi}^{(i)}(x)\geq 0,~ i=0,1,\ldots,d-2,$ and (2) $(-1)^{d-2}{\psi}^{(d-2)}$ is non-increasing and convex, which together mean that the generator ψ is d-monotone. A copula Cψ is said to be an Archimedean copula if it can be written as
\begin{equation*}C_{\psi}(v_1,\ldots,v_n)=\psi\big({\psi^{-1}(v_1)}+\cdots+\psi^{-1}(v_n)\big),~\text{for all } v_i\in[0,1],~i\in\mathcal{I}_n.\end{equation*}
For a detailed discussion on Archimedean copulas, one may refer to [Reference McNeil and Nešlehová14, Reference Nelsen17].
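As a concrete illustration, the sketch below (Python; the helper names are ours) evaluates an Archimedean copula by passing the marginals through the inverse generator, summing, and applying the generator. It uses the Gumbel–Hougaard-type generator $\psi(t)=e^{1-(1+t)^{\theta}}$ that appears later in Remark 3.2 and the counterexamples.

```python
import numpy as np

def psi(t, theta):
    """Generator psi(t) = exp(1 - (1 + t)^theta), theta >= 1 (as in Remark 3.2)."""
    return np.exp(1.0 - (1.0 + t) ** theta)

def psi_inv(v, theta):
    """Inverse generator: psi_inv(v) = (1 - log v)^(1/theta) - 1."""
    return (1.0 - np.log(v)) ** (1.0 / theta) - 1.0

def archimedean_copula(v, theta):
    """C_psi(v_1,...,v_n) = psi(psi_inv(v_1) + ... + psi_inv(v_n))."""
    v = np.asarray(v, dtype=float)
    return psi(np.sum(psi_inv(v, theta)), theta)

print(archimedean_copula([0.6, 0.7, 0.9], theta=3.0))
print(archimedean_copula([0.6, 0.7, 0.9], theta=1.0))  # theta = 1 gives exp(-t), i.e., the independence copula: 0.6*0.7*0.9
```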
Next, we present some important lemmas which are essential for the results developed in the following sections.
Lemma 2.2. (Lemma 7.1 of [Reference Li and Fang10]).
For two n-dimensional Archimedean copulas $C_{\psi_1}$ and $C_{\psi_2}$, if $\phi_2\circ\psi_1$ is super-additive, then $C_{\psi_1}(\boldsymbol{v})\leq C_{\psi_2}(\boldsymbol{v})$, for all $\boldsymbol{v}\in[0,1]^n.$ A function f is said to be super-additive if $f(x)+f(y)\leq f(x+y),$ for all x and y in the domain of $f.$ Here, $\phi_2$ is the right-continuous inverse of $\psi_2.$
Lemma 2.3. Let $f:(0,\infty)\rightarrow (0,\infty)$ be given by $f(x)=\dfrac{k\mathrm{e}^{kx}}{1-a\mathrm{e}^{-b^k\mathrm{e}^{kx}}},$ where $0\leq a \leq 1$ and $k,~b \gt 0$. Then, f(x) is increasing in x for all $x \in (0,\infty)$. Moreover, let $h:(0,\infty)\rightarrow (0,\infty)$ be given by $h(x)=\dfrac{kx^{k-1}}{1-a\mathrm{e}^{-\left(bx\right)^k}},$ where $a,~b\in (0,\infty)$ and $0 \lt k\leq 1.$ Then, h(x) is decreasing in x for all $x\in (0,\infty).$
Proof. Taking the derivative of f(x) with respect to x, we get
\begin{equation*}f^{'}(x)=\dfrac{k^2\left(\mathrm{e}^{b^k\mathrm{e}^{kx}}-a\left(b^k\mathrm{e}^{kx}+1\right)\right)\mathrm{e}^{b^k\mathrm{e}^{kx}+kx}}{\left(\mathrm{e}^{b^k\mathrm{e}^{kx}}-a\right)^2}.\end{equation*}
Now, as $e^x\geq x+1$ for $x\geq0$, we have $\mathrm{e}^{b^k\mathrm{e}^{kx}}\geq b^k\mathrm{e}^{kx}+1,$ which implies $\mathrm{e}^{b^k\mathrm{e}^{kx}}\geq a\left(b^k\mathrm{e}^{kx}+1\right)$ for $0\leq a\leq 1$. Hence, $f^{'}(x)\geq0$ and therefore f(x) is increasing in $x\in (0,\infty)$.
 Further, let $h(x)=kx^{k-1}g(x)$, where
\begin{equation*}g(x)=\dfrac{1}{1-a\mathrm{e}^{-\left(bx\right)^k}}.\end{equation*}
Taking the derivative of g(x) with respect to $x,$ we get
\begin{equation*}g^{'}(x)=-\dfrac{ak\left(bx\right)^k\mathrm{e}^{\left(bx\right)^k}}{x\left(\mathrm{e}^{\left(bx\right)^k}-a\right)^2}\leq0.\end{equation*}
Hence, as g(x) and $kx^{k-1}$ are both nonnegative and decreasing functions of $x\in (0,\infty)$ when $0 \lt k\leq 1$, we have h(x) to be decreasing in $x\in (0,\infty)$, as required.
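As a quick numerical sanity check on Lemma 2.3 (an illustration only, not part of the proof; the parameter values are arbitrary), one can evaluate f and h on a grid and confirm the claimed monotonicity.

```python
import numpy as np

a, b, k = 0.7, 1.2, 0.8     # 0 <= a <= 1, b > 0 and 0 < k <= 1, so both claims of Lemma 2.3 apply
x = np.linspace(0.05, 5.0, 400)

f = k * np.exp(k * x) / (1.0 - a * np.exp(-b**k * np.exp(k * x)))
h = k * x**(k - 1) / (1.0 - a * np.exp(-(b * x)**k))

print(np.all(np.diff(f) >= -1e-9))  # f increasing: True
print(np.all(np.diff(h) <= 1e-9))   # h decreasing (since k <= 1): True
```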
Lemma 2.4. Let $m_{1}(x): (0,\infty)\rightarrow(0,\infty)$ and $m_{2}(x): (0,\infty)\rightarrow(0,\infty)$ be functions given by $m_{1}(x)=\mathrm{e}^{\mathrm{e}^{kx}}-a(\mathrm{e}^{kx}+1)$ and
\begin{equation*}m_{2}(x)=\mathrm{e}^{2\mathrm{e}^{kx}}+a\left(\mathrm{e}^{2kx}-3\mathrm{e}^{kx}-2\right)\mathrm{e}^{\mathrm{e}^{kx}}+a^2(\mathrm{e}^{2kx}+3\mathrm{e}^{kx}+1),\end{equation*}
where $a \in (0,1)$. Then, both $m_1(x)$ and $m_2(x)$ are nonnegative for $x\in (0,\infty).$
Proof. Differentiating $m_{1}(x)$ with respect to x, we get
\begin{equation*}m_{1}^{'}(x)=k\mathrm{e}^{kx}\left(\mathrm{e}^{\mathrm{e}^{kx}}-a\right)\geq 0.\end{equation*}
Since, at $x=0,$ we have
\begin{equation*}m_{1}(0)=\mathrm{e}^{\mathrm{e}^{0}}-a(\mathrm{e}^{0}+1)=\mathrm{e}-2a\geq 0,\end{equation*}
the required result follows. Similarly, differentiating $m_{2}(x)$ with respect to x, we get
\begin{equation*}m_{2}^{'}(x)=k\mathrm{e}^{kx}\left(3a\mathrm{e}^{3\mathrm{e}^{kx}}+2\mathrm{e}^{2\mathrm{e}^{kx}}+\left(-3a\mathrm{e}^{kx}-5a\right)\mathrm{e}^{\mathrm{e}^{kx}}+2a^2\mathrm{e}^{kx}+3a^2\right).\end{equation*}
We now show that
\begin{equation}
k\mathrm{e}^{kx}\left(3a\mathrm{e}^{3\mathrm{e}^{kx}}+2\mathrm{e}^{2\mathrm{e}^{kx}}+\left(-3a\mathrm{e}^{kx}-5a\right)\mathrm{e}^{\mathrm{e}^{kx}}+2a^2\mathrm{e}^{kx}+3a^2\right)\geq 0. \tag{2.5}
\end{equation}
For this purpose, let us take $m(x)=\mathrm{e}^{kx}.$ Also, let us set
\begin{equation*}g(m)=3a\mathrm{e}^{2m}+2\mathrm{e}^{m}+\left(-3am-5a\right),\end{equation*}
where $m\geq 1.$ Upon taking the derivative with respect to m, we get $g^{'}(m)=6a\mathrm{e}^{2m}+2\mathrm{e}^m-3a$ for $m\geq 1$. As $g^{''}(m)=12a\mathrm{e}^{2m}+2\mathrm{e}^m\geq 0$, we have $g^{'}(m)$ to be an increasing function. Since $g^{'}(1)\geq 0,$ g is increasing on $m\geq 1$; moreover, $g(1)=3a\mathrm{e}^{2}+2\mathrm{e}-8a \gt 0$, so that $g(m)\geq 0$ for all $m\geq 1$, and we obtain the inequality in (2.5). Further, since at $x=0,$ we have
\begin{equation*}m_{2}(0)=\mathrm{e}^{2}-4a\mathrm{e}+5a^2\geq 0,\end{equation*}
the lemma is established.
Lemma 2.5. Let $m_3(\lambda): (0,\infty)\rightarrow(0,\infty)$ be a function given by
\begin{equation*}m_{3}(\lambda)=\frac{k\lambda (\lambda x)^{k-1}}{1-(1-a)\mathrm{e}^{-\left(x\lambda\right)^k}},
\end{equation*}
where $0\leq a\leq 1$ and $k\geq 1$. Then, $m_{3}(\lambda)$ is convex with respect to λ.
Proof. Taking the first- and second-order partial derivatives of $m_{3}(\lambda)$ with respect to $\lambda,$ we get
\begin{equation*}\frac{\partial m_{3}(\lambda)}{\partial\lambda}=\frac{k\lambda^{k-1}-(1-a)k\lambda^{k-1}\mathrm{e}^{-(\lambda x)^k}-(1-a) \lambda^{2k-1} x^k \mathrm{e}^{-(\lambda x)^k}}{\left(1-(1-a)\mathrm{e}^{-\left(x\lambda\right)^k}\right)^2}\end{equation*}
and
\begin{equation*}\frac{\partial^2 m_{3}(\lambda)}{\partial\lambda^2}=\frac{f_1(\lambda)}{\left(\mathrm{e}^{\left(x\lambda\right)^k}+a-1\right)^3},\end{equation*}
where $f_1 (\lambda)=\left(k-1\right)\mathrm{e}^{2\left(x\lambda\right)^{k}}+a\left(k\left(x\lambda\right)^{2k}-\left(k-1\right)\left(x\lambda\right)^{k}-2\left(k-1\right)\right)\mathrm{e}^{\left(x\lambda\right)^{k}}+a^{2}k\left(x\lambda\right)^{2k}+3a^{2}\left(k-1\right)\left(x\lambda\right)^{k}+a^{2}\left(k-1\right).$ To establish the required result, we only need to show that $f_1(\lambda)\geq 0.$ We first set $(x\lambda)^k=t$ and then observe that
\begin{equation*}\left(k-1\right)\left(\mathrm{e}^{2\left(x\lambda\right)^{k}}+a\left(\left(x\lambda\right)^{2k}-\left(x\lambda\right)^{k}-2\right)\mathrm{e}^{\left(x\lambda\right)^{k}}+a^{2}\left(x\lambda\right)^{2k}+3a^{2}\left(x\lambda\right)^{k}+a^{2}\right)\leq f_1(\lambda)\end{equation*}
for $k\geq 1$. As $e^t \geq t+1$ for $t\geq 0$, it is enough to show that
\begin{equation*}\left(1+t\right)^{2}+a\left(t^{2}-t-2\right)\left(t+1\right)+a^{2}t^{2}+3a^{2}t+a^{2}\geq 0.\end{equation*}
It is evident that the above polynomial equals $(1-a)^{2}\geq 0$ at t = 0. Now, upon differentiating the above expression with respect to t, we get
\begin{equation*}3at^2+2(a^2+1)t+(3a^2-3a+2)\geq 0,\end{equation*}
which proves that
\begin{equation*}\left(1+t\right)^{2}+a\left(t^{2}-t-2\right)\left(t+1\right)+a^{2}t^{2}+3a^{2}t+a^{2}\geq 0.\end{equation*}
Hence, we get $f_1(\lambda)\geq 0,$ as required.
3. Main results
In this section, we establish different comparison results between two series systems as well as between two parallel systems, wherein the systems’ components follow extended Weibull distributions with different parameters. The results obtained are in terms of the usual stochastic, dispersive and star orders, with the model parameters connected through different majorization orders. The main results established here are presented in the following subsection.
3.1. Ordering results based on an equal number of variables
Let us consider two sets (of equal size) of dependent variables $\{X_{1},\ldots, X_{n}\}$ and $\{Y_{1},\ldots, Y_{n}\},$ where Xi and Yi follow dependent extended Weibull distributions with parameter vectors $\boldsymbol{\alpha}=(\alpha_{1},\ldots,\alpha_{n})$, $\boldsymbol{\lambda}=(\lambda_{1},\ldots,\lambda_{n}),$ $\boldsymbol{k}=(k_{1},\ldots,k_{n})$ and $\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{n})$, $\boldsymbol{\mu}=(\mu_{1},\ldots,\mu_{n}),$ $\boldsymbol{l}=(l_{1},\ldots,l_{n}),$ respectively. In the following, we present some results for comparing two extreme order statistics according to their survival functions.
Theorem 3.1. Let $X_i\sim EW(\alpha, \lambda_i,k)$ $(i=1,\ldots,n)$ and $Y_i\sim EW(\alpha, \mu_i,k)$ $(i=1,\ldots,n)$ have their associated Archimedean survival copulas with generators $\psi_1$ and $\psi_2$, respectively. Further, suppose $\phi_2\circ\psi_1$ is super-additive and $\psi_1$ is log-concave. Then, for $0 \lt \alpha\leq 1$, we have
\begin{equation*}(\log{\lambda_1},\ldots,\log{\lambda_n})\succeq_{w}(\log{\mu_1}, \ldots, \log{\mu_n})\Rightarrow Y_{1:n}\succeq_{st}X_{1:n}.\end{equation*}
Proof. The distribution functions of $X_{1:n}$ and $Y_{1:n}$ can be written as
\begin{equation*}
F_{X_{1:n}}(x) = 1-\psi_{1}\Big[\sum_{m=1}^{n} \phi_{1}\Big(\frac{\alpha e^{-(x\lambda_m)^k}}{1-\bar{\alpha}e^{-(x\lambda_m)^k}}\Big)\Big]=1-\psi_{1}\left[\sum_{m=1}^{n}\phi_{1}(S_{1}(\lambda_{m}))\right]
\end{equation*}
and
\begin{equation*}
F_{Y_{1:n}}(x) = 1-\psi_{2}\Big[\sum_{m=1}^{n} \phi_{2}\Big(\frac{\alpha e^{-(x\mu_m)^k}}{1-\bar{\alpha}e^{-(x\mu_m)^k}}\Big)\Big]=1-\psi_{2}\left[\sum_{m=1}^{n}\phi_{2}(S_{1}(\mu_{m}))\right],
\end{equation*}
respectively, where $S_{1}(\mu_{m})=\frac{\alpha e^{-(x\mu_m)^k}}{1-\bar{\alpha}e^{-(x\mu_m)^k}}$. Now, from Lemma 2.2, the super-additivity of $\phi_{2}\circ\psi_{1}$ yields
\begin{equation*}
1-\psi_{2}\left[\sum_{m=1}^{n}\phi_{2}(S_{1}(\mu_{m}))\right]
\leq1-\psi_{1}\left[\sum_{m=1}^{n}\phi_{1}(S_{1}(\mu_{m}))\right].
\end{equation*}
Therefore, to establish the required result, we only need to prove that
\begin{equation*}
1-\psi_{1}\left[\sum_{m=1}^{n}\phi_{1}(S_{1}(\lambda_{m}))\right]
\geq1-\psi_{1}\left[\sum_{m=1}^{n}\phi_{1}(S_{1}(\mu_{m}))\right].
\end{equation*}
Now, let $\delta(\boldsymbol{e^v})=\psi_{1}\left[\sum_{m=1}^{n}\phi_{1}(S_{1}(e^{v_{m}}))\right],$ where $\boldsymbol{e^v}=(e^{v_{1}},\ldots,e^{v_{n}})$ and $(v_{1},\ldots,v_{n})=(\log{\lambda_{1}},\ldots, \log{\lambda_{n}}).$ Due to Theorem A.8 of [Reference Marshall, Olkin and Arnold13], we just have to show that $\delta(\boldsymbol{e^v})$ is increasing and Schur-convex in $\boldsymbol{v}.$ Taking the partial derivative of $\delta(\boldsymbol{e^v})$ with respect to $v_i$, for $i=1,\ldots,n,$ we have
\begin{equation}
\frac{\partial \delta(\boldsymbol{e^v})}{\partial v_{i}}=\eta(v_i)\chi(v_i) \psi_{1}'\Big[\sum_{m=1}^{n}\phi_{1}\Big(S_{1}(e^{v_{m}})\Big)\Big]\geq 0, \tag{3.6}
\end{equation}
where $\eta(v_i)= \frac{ke^{v_{i}k}}{1-\bar{\alpha}e^{-(xe^{v_{i}})^k}}$ and $\chi(v_i)= \frac{S_{1}({e^{v_{i}}})}{\psi_{1}'\Big(\phi_{1}\Big(S_{1}({e^{v_{i}}})\Big)\Big)},$ for $i={1,\ldots,n}.$ Therefore, from (3.6), we can see that $\delta(\boldsymbol{e^v})$ is increasing in $v_i,$ for $i=1,\ldots,n$. Now, the derivative of $\chi(v_i)$ with respect to $v_i$ is given by
\begin{align*}
{\Big[\psi_{1}'\Big(\phi_{1}\Big(S_{1}({e^{v_{i}}})\Big)\Big)\Big]^2}\frac{\partial \chi(v_{i})}{\partial v_{i}}&=ke^{v_{i}k}\frac{S_{1}({e^{v_{i}}})}{\psi_{1}'\Big(\phi_{1}\Big(S_{1}({e^{v_{i}}})\Big)\Big)}\Big[\Big(\psi_{1}'\Big(\phi_{1}\Big(S_{1}({e^{v_{i}}})\Big)\Big)\Big)^2\nonumber\\
&\quad-S_{1}({e^{v_{i}}})
\times{\psi_{1}^{''}\Big(\phi_{1}\Big(S_{1}({e^{v_{i}}})\Big)\Big)\Big]}\leq 0,
\end{align*}
since $\psi_1$ is decreasing and log-concave. This implies that $\chi(v_{i})$ is decreasing and nonpositive in $v_i,$ for $i=1,\ldots,n$. Also, $\eta(v_{i})$ is increasing and nonnegative in $v_i,$ from Lemma 2.3. Therefore, $\eta(v_{i})\chi(v_{i})$ is decreasing in $v_i,$ for $i=1,\ldots,n$. Next, we have
\begin{align*}
&(v_i-v_j)\Big(\frac{\partial \delta(\boldsymbol{e^v})}{\partial v_{i}}-\frac{\partial \delta(\boldsymbol{e^v})}{\partial v_{j}}\Big)\nonumber\\
&=(v_i-v_j)[\eta(v_i)\chi(v_i)-\eta(v_j)\chi(v_j)]\times\psi_{1}'\Big[\sum_{m=1}^{n}\phi_{1}\Big(S_{1}({e^{v_{m}}})\Big)\Big]\geq 0.
\end{align*}
Hence, $\delta(\boldsymbol{e^v})$ is Schur-convex in v from Lemma 2.1, which completes the proof of the theorem.
From Theorem 3.1, we see that if the tilt parameters α and β and the shape parameters k and l are the same and scalar-valued, then, under the stated assumptions, the lifetime $X_{1:n}$ is stochastically smaller than the lifetime $Y_{1:n}.$
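The conclusion of Theorem 3.1 can be illustrated numerically. The sketch below (Python; all helper names are ours) evaluates the survival functions of $X_{1:n}$ and $Y_{1:n}$ under Archimedean survival copulas built from the generator $\psi(t)=e^{1-(1+t)^{\theta}}$ of Remark 3.2, with parameter values chosen to satisfy the hypotheses of the theorem.

```python
import numpy as np

def psi(t, theta):             # generator psi(t) = exp(1 - (1 + t)^theta)
    return np.exp(1.0 - (1.0 + t) ** theta)

def psi_inv(u, theta):         # inverse generator: (1 - log u)^(1/theta) - 1
    return (1.0 - np.log(u)) ** (1.0 / theta) - 1.0

def ew_sf(x, alpha, lam, k):   # EW survival function: alpha*e^{-(x*lam)^k} / (1 - (1-alpha)*e^{-(x*lam)^k})
    z = np.exp(-(x * lam) ** k)
    return alpha * z / (1.0 - (1.0 - alpha) * z)

def series_sf(x, alpha, lams, k, theta):
    """Survival function of the series system X_{1:n} under this Archimedean survival copula."""
    return psi(sum(psi_inv(ew_sf(x, alpha, lam, k), theta) for lam in lams), theta)

alpha, k = 0.7, 1.4
lams, mus = (2.0, 1.5, 1.0), (0.8, 0.9, 1.0)   # (log lambda) weakly submajorizes (log mu)
theta1, theta2 = 3.0, 1.5                      # theta1 > theta2 >= 1: psi_1 log-concave, phi_2 o psi_1 super-additive (cf. Remark 3.2)
x = np.linspace(0.05, 5.0, 300)
sf_X = series_sf(x, alpha, lams, k, theta1)
sf_Y = series_sf(x, alpha, mus, k, theta2)
print(np.all(sf_X <= sf_Y + 1e-12))   # Theorem 3.1 predicts X_{1:n} <=_st Y_{1:n}: True
```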
Note that in Theorem 3.1 we have considered the tilt parameter α to lie in $(0,1].$ A natural question that arises is whether, under the same conditions, we can establish the corresponding inequality between the two largest order statistics with respect to the usual stochastic order. The following counterexample shows that the answer is negative.
Counterexample 3.1. Let $X_i \sim EW(\alpha,\lambda_i,k)$ ($i=1,2,3$) and $Y_i \sim EW(\alpha,\mu_i,k)$ ($i=1,2,3$). Set α = 0.55, $(\lambda_1, \lambda_2, \lambda_3)=(2.14,1.4,1)$ and $(\mu_1, \mu_2, \mu_3)=(0.77, 0.8, 0.8)$. It is then easy to see that $(\log{\lambda_1},\log{\lambda_2}, \log{\lambda_3})\succeq_{w}(\log{\mu_1}, \log{\mu_2},\log{\mu_3})$. Now, suppose we choose the Gumbel–Hougaard copula with parameters $\theta_1=3$ and $\theta_2=0.6$, and k = 1.63. Then, Figure 1 presents plots of $F_{X_{3:3}}(x)$ and $F_{Y_{3:3}}(x)$, from which it is evident that, when $k \geq 1$, the graph of $F_{X_{3:3}}(x)$ intersects that of $F_{Y_{3:3}}(x)$ for some $x\geq 0,$ so that the analogue of Theorem 3.1 for the largest order statistics does not hold. Here, the distribution function of $X_{3:3}$ is given by
\begin{align}
F_{X_{3:3}}(x)&=\exp\Bigg\{1-\Bigg(\left(1-\log\left(\frac{1-\mathrm{e}^{-\left(\lambda_1x\right)^k}}{1-\bar{\alpha}\mathrm{e}^{-\left(\lambda_1x\right)^k}}\right)\right)^\frac{1}{\theta_1}+\left(1-\log\left(\frac{1-\mathrm{e}^{-\left(\lambda_2x\right)^k}}{1-\bar{\alpha}\mathrm{e}^{-\left(\lambda_2x\right)^k}}\right)\right)^\frac{1}{\theta_1}\nonumber\\
&\quad+\left(1-\log\left(\frac{1-\mathrm{e}^{-\left(\lambda_3x\right)^k}}{1-\bar{\alpha}\mathrm{e}^{-\left(\lambda_3x\right)^k}}\right)\right)^\frac{1}{\theta_1}-2\Bigg)^{\theta_1}\Bigg\}.
\end{align}
Figure 1. Plots of ${F}_{X_{3:3}}(x)$ and ${F}_{Y_{3:3}}(x)$ in Counterexample 3.1, where the red line corresponds to ${F}_{X_{3:3}}(x)$ and the blue line corresponds to ${F}_{Y_{3:3}}(x)$.
 The distribution function of $Y_{3:3}$ can be similarly obtained upon replacing $(\lambda_{1},\lambda_{2},\lambda_{3})$ by $(\mu_{1},\mu_{2},\mu_{3})$.
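The crossing shown in Figure 1 can be reproduced numerically; the sketch below (Python, our own helper names) evaluates the two parallel-system distribution functions of Counterexample 3.1 on a grid and checks that their difference changes sign, so that no usual stochastic ordering holds.

```python
import numpy as np

def ew_cdf(x, alpha, lam, k):
    z = np.exp(-(x * lam) ** k)
    return (1.0 - z) / (1.0 - (1.0 - alpha) * z)

def parallel_cdf(x, alpha, lams, k, theta):
    """CDF of X_{n:n} under the Gumbel-Hougaard copula with generator psi(t) = exp(1 - (1 + t)^theta)."""
    s = sum((1.0 - np.log(ew_cdf(x, alpha, lam, k))) ** (1.0 / theta) - 1.0 for lam in lams)
    return np.exp(1.0 - (1.0 + s) ** theta)

alpha, k = 0.55, 1.63
lams, mus = (2.14, 1.4, 1.0), (0.77, 0.8, 0.8)
theta1, theta2 = 3.0, 0.6
x = np.linspace(0.05, 4.0, 400)
diff = parallel_cdf(x, alpha, lams, k, theta1) - parallel_cdf(x, alpha, mus, k, theta2)
print(diff.min() < 0 < diff.max())   # True: the two CDFs cross, as in Figure 1
```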
This counterexample shows that, in order to establish comparison results between the lifetimes of $X_{n:n}$ and $Y_{n:n},$ we require some other sufficient conditions.
Theorem 3.2. Let $X_i\sim EW(\alpha, \lambda_i,k)$ $(i=1,\ldots,n)$ and $Y_i\sim EW(\alpha, \mu_i,k)$ $(i=1,\ldots,n)$, where $0 \lt k\leq1,$ and let their associated Archimedean copulas have generators $\psi_1$ and $\psi_2$, respectively. Also, suppose $\phi_2\circ\psi_1$ is super-additive. Then, for $0 \lt \alpha\leq 1$, we have
\begin{equation*}\boldsymbol{\lambda}\succeq^{w}\boldsymbol{\mu}\Rightarrow X_{n:n}\succeq_{st}Y_{n:n}.\end{equation*}
Proof. The distribution functions of $X_{n:n}$ and $Y_{n:n}$ can be written as
\begin{equation*}
F_{X_{n:n}}(x) = \psi_{1}\Big[\sum_{m=1}^{n} \phi_{1}\Big(\frac{1-e^{-(x\lambda_m)^k}}{1-\bar{\alpha}e^{-(x\lambda_m)^k}}\Big)\Big]=\psi_{1}\left[\sum_{m=1}^{n}\phi_{1}\left(S_{2}(\lambda_{m})\right)\right]
\end{equation*}
and
\begin{equation*}
F_{Y_{n:n}}(x) =\psi_{2}\Big[\sum_{m=1}^{n} \phi_{2}\Big(\frac{1-e^{-(x\mu_m)^k}}{1-\bar{\alpha}e^{-(x\mu_m)^k}}\Big)\Big]= \psi_{2}\left[\sum_{m=1}^{n}\phi_{2}\left(S_{2}(\mu_{m})\right)\right],
\end{equation*}
where $S_{2}(\mu_{m})=\frac{1-e^{-(x\mu_m)^k}}{1-\bar{\alpha}e^{-(x\mu_m)^k}}.$ Now, from Lemma 2.2, the super-additivity of $\phi_{2}\circ\psi_{1}$ implies that
\begin{equation*}
\psi_{2}\left[\sum_{m=1}^{n}\phi_{2}\left(S_{2}(\mu_{m})\right)\right]
\geq \psi_{1}\left[\sum_{m=1}^{n}\phi_{1}\left(S_{2}(\mu_{m})\right)\right].
\end{equation*}
So, in order to establish the required result, we only need to prove that
\begin{equation*}
\psi_{1}\Big[\sum_{m=1}^{n} \phi_{1}\Big(S_{2}(\lambda_{m})\Big)\Big]\leq \psi_{1}\Big[\sum_{m=1}^{n} \phi_{1}\Big(S_{2}(\mu_{m})\Big)\Big].
\end{equation*}
Let us now define $\delta_{1}({\boldsymbol{\lambda}})=\psi_{1}\Big[\sum_{m=1}^{n} \phi_{1}\Big(S_{2}(\lambda_{m})\Big)\Big]$, where ${\boldsymbol{\lambda}}=(\lambda_1,\ldots,\lambda_n)$. Due to Theorem A.8 of [Reference Marshall, Olkin and Arnold13], we just have to show that $\delta_{1}(\boldsymbol{\lambda})$ is increasing and Schur-concave in λ. Taking the partial derivative of $\delta_1({\boldsymbol{\lambda}})$ with respect to $\lambda_i,$ for $i=1,\ldots,n,$ we get
\begin{equation*}
\frac{\partial \delta_{1}(\boldsymbol{\lambda})}{\partial \lambda_i}=\eta_1(\lambda_i)\chi_1(\lambda_i)\psi_{1}'\Big[\sum_{m=1}^{n}\phi_{1}\Big(S_{2}(\lambda_{m})\Big)\Big]\geq 0,
\end{equation*}
where $\eta_{1}(\lambda_i)= \frac{k\lambda_i^{k-1}}{1-\bar{\alpha}e^{-(x\lambda_i)^k}} \text{~~and~~} \chi_{1}(\lambda_i)= \frac{S_{2}(\lambda_{i})}{\psi_{1}'\Big(\phi_{1}\Big(S_{2}(\lambda_{i})\Big)\Big)}.$ Now, $\eta_{1}(\lambda_i)$ is nonnegative and decreasing in $\lambda_i$ for $0 \lt k\leq 1$, from Lemma 2.3, and $\chi_{1}(\lambda_i)$ is nonpositive. Taking the derivative of $\chi_{1}(\lambda_i)$ with respect to $\lambda_i$, we get
\begin{align*}
\frac{\partial \chi_{1}(\lambda_i)}{\partial \lambda_{i}}=&-\Big[\Big(\psi_{1}'\Big(\phi_{1}\Big(S_{2}(\lambda_{i})\Big)\Big)\Big)^2+(1-S_{2}(\lambda_{i}))\psi_{1}''\Big(\phi_{1}\Big(S_{2}(\lambda_{i})\Big)\Big)\Big]\nonumber\\
&\times k\lambda_i^{k-1}\frac{\frac{x^k\alpha e^{-(x\lambda_i)^k}}{(1-\bar{\alpha}e^{-(x\lambda_i)^k})^2}}{\psi_{1}'\Big(\phi_{1}\Big(S_{2}(\lambda_{i})\Big)\Big)}\times \frac{1}{\Big[\psi_{1}'\Big(\phi_{1}\Big(S_{2}(\lambda_{i})\Big)\Big)\Big]^2}\geq 0,
\end{align*}
which shows that $\chi_{1}(\lambda_i)$ is nonpositive and increasing in $\lambda_i,$ for $i={1,2,\ldots,n}$. Also, $\eta_{1}(\lambda_i)$ is nonnegative and decreasing. Hence, $\eta_{1}(\lambda_i) \chi_{1}(\lambda_i)$ is increasing in $\lambda_i,$ for $i={1,2,\ldots,n}$. Therefore, for $i\neq j,$
\begin{align*}
&(\lambda_i-\lambda_j)\Big(\frac{\partial \delta_{1}(\boldsymbol{\lambda})}{\partial \lambda_{i}}-\frac{\partial \delta_{1}(\boldsymbol{\lambda})}{\partial \lambda_{j}}\Big)\nonumber\\
&=(\lambda_i-\lambda_j)\psi_{1}'\Big[\sum_{m=1}^{n}\phi_{1}\Big(S_{2}(\lambda_{m})\Big)\Big][\eta_{1}(\lambda_i)\chi_{1}(\lambda_i)-\eta_{1}(\lambda_j)\chi_{1}(\lambda_j)]\leq 0,
\end{align*}
which shows that $\delta_1(\boldsymbol{\lambda})$ is Schur-concave in λ by Lemma 2.1. Hence, the theorem.
Remark 3.1. In Theorem 3.2, if we take $k=1,$ we simply get the result in Theorem 1 of [Reference Barmalzan, Ayat, Balakrishnan and Roozegar5].
Remark 3.2. It is important to note that the condition “$\phi_{2} \circ \psi_1$ is super-additive” in Theorem 3.2 is quite general and is easy to verify for many well-known Archimedean copulas. For example, for the Gumbel–Hougaard copula with generator $\psi(t)=e^{1-(1+t)^{\theta}}$ for $\theta \in [1,\infty)$, it is easy to see that $\log \psi(t)=1-(1+t)^{\theta}$ is concave in $t \in [0,\infty)$. Let us now set $\psi_{1}(t)=e^{1-(1+t)^{\alpha}}$ and $\psi_{2}(t)=e^{1-(1+t)^{\beta}}$. It can then be observed that $\phi_{2} \circ \psi_1(t)=(1+t)^{{\alpha/\beta}}-1.$ Taking the derivative of $\phi_{2} \circ \psi_1(t)$ twice with respect to t, it can be seen that $[\phi_{2} \circ \psi_1(t)]^{\prime \prime}=(\frac{\alpha}{\beta})(\frac{\alpha}{\beta} -1 ) (1+t)^{{\alpha/\beta}-2} \ge 0 $ for $\alpha \gt \beta \gt 1$; this convexity, together with $\phi_{2} \circ \psi_1(0)=0$, implies the super-additivity of $\phi_{2} \circ \psi_1(t)$.
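The super-additivity in Remark 3.2 can also be probed numerically. The following sketch (Python; the helper name compose is ours) samples pairs (x, y) and checks $f(x)+f(y)\le f(x+y)$ for $f=\phi_2\circ\psi_1$ built from the two generators above.

```python
import numpy as np

def compose(t, a, b):
    """phi_2(psi_1(t)) = (1 + t)**(a/b) - 1 for generators psi_i(t) = exp(1 - (1 + t)**theta_i)."""
    return (1.0 + t) ** (a / b) - 1.0

rng = np.random.default_rng(0)
a, b = 4.0, 1.5                        # a > b > 1, the case covered by Remark 3.2
x, y = rng.uniform(0.0, 10.0, 10_000), rng.uniform(0.0, 10.0, 10_000)
print(np.all(compose(x, a, b) + compose(y, a, b) <= compose(x + y, a, b) + 1e-9))  # True
```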
Now, we present an example demonstrating that if we consider two parallel systems whose components are mutually dependent through Gumbel–Hougaard copulas with parameters $\theta_1=8.9$ and $\theta_2=3.05$ and follow extended Weibull distributions, then, under the setup of Theorem 3.2, the survival function of one parallel system is less than that of the other.
Example 3.1. Let $X_i \sim EW(\alpha,\lambda_i,k)$ ($i=1,2$) and $Y_i \sim EW(\alpha,\mu_i,k)$ ($i=1,2$). Set α = 0.6, $(\lambda_1, \lambda_2)=(0.46,0.5)$ and $(\mu_1, \mu_2)=(1.7,0.43)$. It is then easy to see that $(\mu_1, \mu_2)\stackrel{w}{\preceq} (\lambda_1,\lambda_2)$. Now, suppose we choose the Gumbel–Hougaard copula with parameters $\theta_1=8.9$, $\theta_2=3.05$ and k = 0.9. In this case, the distribution function of $X_{2:2}$ is given by
\begin{equation*}F_{X_{2:2}} (x)=\exp\left\{1-\left(\left[1-\log\Bigg(\frac{1-e^{-(\lambda_1x)^k}}{1-\bar{\alpha}e^{-(\lambda_1x)^k}}\Bigg)\right]^{1/\theta_1}+\left[1-\log\Bigg( \frac{1-e^{-(\lambda_2x)^k}}{1-\bar{\alpha}e^{-(\lambda_2x)^k}} \Bigg)\right]^{1/\theta_1}-1\right)^{\theta_1} \right\} \end{equation*}
and the distribution function of $Y_{2:2}$ can be similarly obtained upon replacing $(\lambda_{1},\lambda_{2})$ by $(\mu_{1},\mu_{2})$ and $\theta_1$ by $\theta_2$. Then, $F_{X_{2:2}}(x) \leq F_{Y_{2:2}}(x)$, for all $x\geq 0$, as already proved in Theorem 3.2.
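The inequality claimed in this example can be spot-checked numerically. The following Python sketch (an illustration, not part of the paper) evaluates both distribution functions on a grid using the formula displayed above and reports the largest value of $F_{X_{2:2}}(x)-F_{Y_{2:2}}(x)$, which, according to Theorem 3.2, should not be positive; the grid range is an arbitrary choice.

```python
# Minimal numerical sketch: evaluate the two distribution functions of Example 3.1
# on a grid and report the largest value of F_{X_{2:2}}(x) - F_{Y_{2:2}}(x).
import numpy as np

alpha, k = 0.6, 0.9
lam, mu = (0.46, 0.5), (1.7, 0.43)
th1, th2 = 8.9, 3.05            # Gumbel-Hougaard copula parameters of the X- and Y-systems
abar = 1.0 - alpha

def ew_cdf(x, rate):
    """Marshall-Olkin extended Weibull cdf with tilt alpha, scale `rate`, shape k."""
    e = np.exp(-(rate * x) ** k)
    return (1.0 - e) / (1.0 - abar * e)

def parallel_cdf(x, rates, theta):
    """Cdf of the maximum of two EW lifetimes coupled by the Gumbel-Hougaard-type
    copula with generator psi(t) = exp(1 - (1 + t)^theta)."""
    s = sum((1.0 - np.log(ew_cdf(x, r))) ** (1.0 / theta) for r in rates) - 1.0
    return np.exp(1.0 - s ** theta)

x = np.linspace(0.01, 20.0, 2000)
diff = parallel_cdf(x, lam, th1) - parallel_cdf(x, mu, th2)
print("max of F_X - F_Y on the grid:", diff.max())   # expected <= 0, as the example asserts
```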
A natural question that arises here is whether we can extend Theorem 3.2 to the case $k\geq 1$. The answer to this question is negative, as the following counterexample illustrates.
Counterexample 3.2. Let $X_i \sim EW(\alpha,\lambda_i,k)$ ($i=1,2$) and $Y_i \sim EW(\alpha,\mu_i,k)$ ($i=1,2$). Set α = 0.6, $(\lambda_1, \lambda_2)=(0.46,0.5)$ and $(\mu_1, \mu_2)=(1.7,0.43)$. It is then evident that $(\mu_1, \mu_2)\stackrel{w}{\preceq} (\lambda_1,\lambda_2)$. Now, suppose we choose the Gumbel–Hougaard copula with parameters $\theta_1=8.9$, $\theta_2=3.05$ and k = 8.06, which violates the condition stated in Theorem 3.2. Figure 2 plots $F_{X_{2:2}}(x)$ and $F_{Y_{2:2}}(x)$, from which it is evident that, when $k \geq 1$, the graph of $F_{X_{2:2}}(x)$ intersects that of $F_{Y_{2:2}}(x)$ for some $x\geq 0$.

Figure 2. Plots of ${F}_{X_{2:2}}(x)$ and ${F}_{Y_{2:2}}(x)$ in Counterexample 3.2, where the red line corresponds to ${F}_{X_{2:2}}(x)$ and the blue line corresponds to ${F}_{Y_{2:2}}(x)$.
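For completeness, the following minimal sketch (an illustration, not part of the paper) repeats the computation of Example 3.1 with k = 8.06 and scans a grid for a sign change of $F_{X_{2:2}}(x)-F_{Y_{2:2}}(x)$, which is the crossing displayed in Figure 2; the grid range is an arbitrary choice.

```python
# Minimal numerical sketch: with k = 8.06 (Counterexample 3.2), scan a grid for a
# sign change of F_{X_{2:2}} - F_{Y_{2:2}}; a crossing is expected according to Figure 2.
import numpy as np

alpha, k, th1, th2 = 0.6, 8.06, 8.9, 3.05
abar = 1.0 - alpha
lam, mu = (0.46, 0.5), (1.7, 0.43)

def parallel_cdf(x, rates, theta):
    # Gumbel-Hougaard-type copula with generator psi(t) = exp(1 - (1 + t)^theta)
    g = [(1 - np.exp(-(r * x) ** k)) / (1 - abar * np.exp(-(r * x) ** k)) for r in rates]
    s = sum((1 - np.log(gi)) ** (1 / theta) for gi in g) - 1
    return np.exp(1 - s ** theta)

x = np.linspace(0.05, 5.0, 5000)
diff = parallel_cdf(x, lam, th1) - parallel_cdf(x, mu, th2)
print("sign change of F_X - F_Y detected on the grid:", diff.max() > 0 > diff.min())
```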
Now, we establish another result for the case when the shape parameters are connected in majorization order.
Theorem 3.3. Let $X_i\sim EW(\alpha, \lambda,k_i)$ $(i=1,\ldots,n)$ and $Y_i\sim EW(\alpha, \lambda,l_i)$ $(i=1,\ldots,n)$, and let the associated Archimedean copulas be with generators $\psi_1$ and $\psi_2$, respectively. Further, let $\phi_2\circ\psi_1$ be super-additive and $\alpha t\phi_1^{''}(t)+\phi_1^{'}(t)\geq 0$. Then, for $0 \lt \alpha\leq 1$, we have
\begin{equation*} \boldsymbol{l}\succeq^{m}\boldsymbol{k} \Rightarrow X_{n:n}\succeq_{st}Y_{n:n}.\end{equation*}
Proof. Similar to Theorem 3.2, to establish the required result, we only need to prove that
\begin{eqnarray*}
\boldsymbol{l}\succeq^{m}\boldsymbol{k} \Rightarrow\psi_{1}\Big[\sum_{m=1}^{n} \phi_{1}\Big(S_{2}(k_{m})\Big)\Big]&\leq& \psi_{1}\Big[\sum_{m=1}^{n} \phi_{1}\Big(S_{2}(l_{m})\Big)\Big],
\end{eqnarray*}
where $S_{2}(x)$ is as given in Theorem 3.2. For this purpose, let us define
\begin{equation*}
\delta_3(\boldsymbol{k})=\psi_1\Big[\sum_{m=1}^n \phi_1 \Big(S_{2}(k_{m})\Big)\Big],
\end{equation*}
where $\boldsymbol{k}=(k_1,\ldots,k_n)$. Upon differentiating $\delta_3(\boldsymbol{k})$ with respect to $k_i$, we get
\begin{equation*}
\frac{\partial \delta_3(\boldsymbol{k})}{\partial k_i} = \psi_1^{'}\Big[\sum_{m=1}^n \phi_1 \Big(S_{2}(k_{m})\Big)\Big] \phi_1^{'} \Big(S_{2}(k_{i})\Big)\frac{\alpha (\lambda x)^{k_i}\log(\lambda x)e^{-(\lambda x)^{k_i}}}{(1-\bar{\alpha}e^{-(\lambda x)^{k_i}})^2}.
\end{equation*}
Let us now define a function $I_3(k_i)$ as
\begin{equation*}
I_3(k_i) = \phi_1^{'} \Big(S_{2}(k_{i})\Big)\frac{\alpha (\lambda x)^{k_i}\log(\lambda x)e^{-(\lambda x)^{k_i}}}{(1-\bar{\alpha}e^{-(\lambda x)^{k_i}})^2},
\end{equation*}
which, upon differentiating with respect to $k_i$, yields
\begin{eqnarray*}
\frac{\partial I_3(k_i)}{\partial k_i}
&=& \phi_1^{''} \Big(S_{2}(k_{i})\Big)\bigg(\frac{\alpha (\lambda x)^{k_i}\log(\lambda x)e^{(\lambda x)^{k_i}}}{(e^{(\lambda x)^{k_i}}-\bar{\alpha})^2}\bigg)^2 \\
&&-\, \phi_1^{'} \Big(S_{2}(k_{i})\Big)\frac{\alpha (\lambda x)^{k_i}(\log(\lambda x))^2 e^{(\lambda x)^{k_i}}\big(((\lambda x)^{k_i}-1)e^{(\lambda x)^{k_i}}+\bar{\alpha}(\lambda x)^{k_i}+\bar{\alpha}\big)}{(e^{(\lambda x)^{k_i}}-\bar{\alpha})^3}\\
&=& \phi_1^{''} \Big(S_{2}(k_{i})\Big)S_{2}(k_{i})
- \phi_1^{'} \Big(S_{2}(k_{i})\Big)\frac{\big(((\lambda x)^{k_i}-1)e^{(\lambda x)^{k_i}}+\bar{\alpha}(\lambda x)^{k_i}+\bar{\alpha}\big)(1-e^{-(\lambda x)^{k_i}})}{\alpha (\lambda x)^{k_i}}.
\end{eqnarray*}
Now, since for $0\leq a\leq 1$ and $x\geq0$,
\begin{equation*}\frac{(1-x)e^x+ax+a}{x}(1-e^{-x})\geq -1,\end{equation*}
we obtain
\begin{equation*}\frac{\partial I_3(k_i)}{\partial k_i}\geq \phi_1^{''} \Big(\frac{1-e^{-(\lambda x)^{k_i}}}{1-\bar{\alpha}e^{-(\lambda x)^{k_i}}}\Big)\Big(\frac{1-e^{-(\lambda x)^{k_i}}}{1-\bar{\alpha}e^{-(\lambda x)^{k_i}}}\Big) + \phi_1^{'} \Big(\frac{1-e^{-(\lambda x)^{k_i}}}{1-\bar{\alpha}e^{-(\lambda x)^{k_i}}}\Big)\frac{1}{\alpha}\geq 0,\end{equation*}
as $\alpha t\phi_1^{''}(t)+\phi_1^{'}(t)\geq 0$, and so $I_3(k_i)$ is increasing in $k_i$, for $i=1,\ldots,n$. Now, for $i\neq j$,
\begin{align*}
&(k_i-k_j)\Big(\frac{\partial \delta_3(\boldsymbol{k})}{\partial k_{i}}-\frac{\partial \delta_{3}(\boldsymbol{k})}{\partial k_{j}}\Big)\nonumber\\
&=(k_i-k_j)\psi_{1}'\Big[\sum_{m=1}^{n}\phi_{1}\Big(S_{2}(k_{m})\Big)\Big][I_3(k_i)-I_3(k_j)]\leq 0,
\end{align*}
which implies that $\delta_3(\boldsymbol{k})$ is Schur-concave in $\boldsymbol{k}$ by Lemma 2.1. This completes the proof of the theorem.
Remark 3.3. It is useful to observe that the condition “$\phi_{2} \circ \psi_1$ is super-additive” in Theorem 3.5 is quite general and is easy to verify for many well-known Archimedean copulas. For example, we consider the copula with $\phi(t)=e^{\frac{\theta}{t}}-e^{\theta}$, for $t\geq 0$, which satisfies the relation
\begin{equation*}t\phi^{''}(t)+2\phi^{'}(t)\geq0,\end{equation*}
for $t\geq0$, and whose inverse is
\begin{equation*}\psi(t)=\frac{\theta}{\log(t+e^{\theta})}.\end{equation*}
Suppose we have two such copulas, with parameters $\theta_1$ and $\theta_2$, and we want to find the condition for $\phi_{2} \circ \psi_1$ to be super-additive. Taking the second derivative of $\phi_{2} \circ \psi_1$ with respect to t, we get
\begin{equation*}(\phi_{2} \circ \psi_1)^{''}(t)=\left(\frac{\theta_2}{\theta_1}\right)\left(\frac{\theta_2}{\theta_1}-1\right)(t+e^{\theta_1})^{\frac{\theta_2}{\theta_1}-2},\end{equation*}
which is nonnegative when $\theta_2\geq\theta_1$; this, together with $\phi_{2} \circ \psi_1(0)=0$, ensures the super-additivity of $\phi_{2} \circ \psi_1$.
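A small numerical check of this condition can be carried out as follows. The sketch below (an illustration, not part of the paper) verifies the super-additivity of $\phi_2\circ\psi_1$ for this generator pair on a grid, using the parameter values $\theta_1=2.2 \lt \theta_2=2.45$ that reappear in Example 3.2.

```python
# Minimal numerical sketch: spot-check that phi_2 o psi_1 is super-additive for
# the generator pair of Remark 3.3, with theta_1 = 2.2 < theta_2 = 2.45.
import numpy as np

th1, th2 = 2.2, 2.45
psi1 = lambda t: th1 / np.log(t + np.exp(th1))     # psi_1(t) = theta_1 / log(t + e^{theta_1})
phi2 = lambda u: np.exp(th2 / u) - np.exp(th2)     # phi_2(u) = e^{theta_2/u} - e^{theta_2}
h = lambda t: phi2(psi1(t))                        # composition phi_2 o psi_1

s = np.linspace(0.0, 50.0, 201)
S, T = np.meshgrid(s, s)
gap = h(S + T) - (h(S) + h(T))
print("minimum of h(s+t) - h(s) - h(t):", gap.min())   # expected >= 0 (up to rounding)
```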
For illustrating the result in Theorem 3.3, let us consider the following example.
Example 3.2. Let $X_i \sim EW(\alpha,\lambda,k_i)$ ($i=1,2,3$) and $Y_i \sim EW(\alpha,\lambda,l_i)$ ($i=1,2,3$). Set α = 0.5, λ = 4.83, $(k_1, k_2, k_3)=(3, 0.5, 1)$ and $(l_1, l_2, l_3)=(2, 1.5, 1)$. It is then easy to see that $ (k_1,k_2,k_3) \stackrel{m}{\preceq} (l_1, l_2, l_3)$. Now, suppose we choose the copula with $\phi(t)=e^{\frac{\theta}{t}}-e^{\theta}$, for $t\geq 0$, which satisfies the relation
\begin{equation*}\alpha t\phi_1^{''}(t)+\phi_1^{'}(t)\geq0,\end{equation*}
for $t\geq0$, as α = 0.5, and take the parameters $\theta_1=2.2$ and $\theta_2=2.45$, which ensure the super-additivity of $\phi_{2} \circ \psi_1$. The distribution function of $X_{3:3}$ is
\begin{equation}
F_{X_{3:3}} (x)=\frac{\theta_{1}}{\log\left(e^{\theta_1\frac{1-\bar{\alpha}{\mathrm{e}}^{-(\lambda x)^{k_{1}}}}{1-{\mathrm{e}}^{-(\lambda x)^{k_{1}}}}}+e^{\theta_1\frac{1-\bar{\alpha}{\mathrm{e}}^{-(\lambda x)^{k_{2}}}}{1-{\mathrm{e}}^{-(\lambda x)^{k_{2}}}}}+e^{\theta_1\frac{1-\bar{\alpha}{\mathrm{e}}^{-(\lambda x)^{k_{3}}}}{1-{\mathrm{e}}^{-(\lambda x)^{k_{3}}}}}-2\mathrm{e}^{\theta_{1}}\right)}
\end{equation}
and the distribution function of $Y_{3:3}$ can be similarly obtained upon replacing $\theta_1$ by $\theta_2$ and $(k_{1},k_{2},k_{3})$ by $(l_{1},l_{2},l_{3})$ in (3.8). Then, $F_{X_{3:3}}(x) \leq F_{Y_{3:3}}(x),$ for all $x\geq 0,$ as established in Theorem 3.3.
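The comparison claimed in Example 3.2 can be spot-checked numerically with the following Python sketch (an illustration, not part of the paper); it evaluates the displayed distribution function for both parameter vectors on a grid and reports the largest value of $F_{X_{3:3}}(x)-F_{Y_{3:3}}(x)$, which the example asserts to be nonpositive. The grid starts away from the origin only to avoid floating-point overflow in the exponentials.

```python
# Minimal numerical sketch: evaluate F_{X_{3:3}} and F_{Y_{3:3}} of Example 3.2
# on a grid and report the largest value of their difference.
import numpy as np

alpha, lam = 0.5, 4.83
abar = 1.0 - alpha
kvec, lvec = (3.0, 0.5, 1.0), (2.0, 1.5, 1.0)
th1, th2 = 2.2, 2.45       # copula parameters, psi(t) = theta / log(t + e^theta)

def parallel_cdf(x, shapes, theta):
    """Cdf of the maximum of three EW(alpha, lam, k_i) lifetimes under the copula
    with generator psi(t) = theta / log(t + e^theta)."""
    terms = [np.exp(theta * (1 - abar * np.exp(-(lam * x) ** k))
                    / (1 - np.exp(-(lam * x) ** k))) for k in shapes]
    return theta / np.log(sum(terms) - 2.0 * np.exp(theta))

x = np.linspace(0.05, 3.0, 1500)
diff = parallel_cdf(x, kvec, th1) - parallel_cdf(x, lvec, th2)
print("max of F_X - F_Y on the grid:", diff.max())   # expected <= 0, as the example asserts
```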
It is useful to observe that the condition “$\phi_{2} \circ \psi_1$ is super-additive” ensures that the copula with generator $\psi_2$ is more positively dependent than the copula with generator $\psi_1$. In Theorem 3.3, we have required $\phi_{2} \circ \psi_1$ to be super-additive, and this is important for establishing the inequality between the survival functions of $X_{n:n}$ and $Y_{n:n}$ when the parameter vectors $\boldsymbol{l}$ and $\boldsymbol{k}$ are comparable in the majorization order. We now present a counterexample which shows that, if this condition is violated, then the theorem does not hold.
Counterexample 3.3. Let $X_i \sim EW(\alpha,\lambda,k_i)$ ($i=1,2,3$) and $Y_i \sim EW(\alpha,\lambda,l_i)$ ($i=1,2,3$). Set α = 0.5, λ = 4.83, $(k_1, k_2, k_3)=(0.5, 1 , 3)$ and $(l_1, l_2, l_3)=(1, 1.5, 2)$. It is easy to see that $ (k_1,k_2,k_3) \stackrel{m}{\preceq} (l_1, l_2, l_3)$. Now, suppose we choose the copula with $\phi(t)=e^{\frac{\theta}{t}}-e^{\theta}$, for $t\geq 0$, which satisfies
\begin{equation*}\alpha t\phi^{''}(t)+\phi^{'}(t)\geq0,\end{equation*}
for $t\geq0$, as α = 0.5, but take the parameters $\theta_1=2.48$ and $\theta_2=2.24$, which violate the condition of super-additivity of $\phi_{2} \circ \psi_1$.
Figure 3 plots $F_{X_{3:3}}(x)$ and $F_{Y_{3:3}}(x)$, from which it is evident that, when the condition of super-additivity of $\phi_{2} \circ \psi_1$ in Theorem 3.3 is violated, $F_{X_{3:3}}(x)$ is greater than $F_{Y_{3:3}}(x)$ for some $x\geq 0$.

Figure 3. Plots of ${F}_{X_{3:3}}(x)$ and ${F}_{Y_{3:3}}(x)$ in Counterexample 3.3. Here, the red line corresponds to ${F}_{X_{3:3}}(x)$ and the blue line corresponds to ${F}_{Y_{3:3}}(x)$.
In Theorem 3.3, if we replace the condition $\alpha t\phi_1^{''}(t)+\phi_1^{'}(t)\geq 0$ by $t\phi_1^{''}(t)+\phi_1^{'}(t)\geq 0$, then we can also compare the smallest order statistics $X_{1:n}$ and $Y_{1:n}$ with respect to the usual stochastic order under the same conditions as in Theorem 3.3.
Theorem 3.4. Let $X_i\sim EW(\alpha, \lambda,k_i)$ $(i=1,\ldots,n)$ and $Y_i\sim EW(\alpha, \lambda,l_i)$ $(i=1,\ldots,n)$, and let the associated Archimedean survival copulas be with generators $\psi_1$ and $\psi_2$, respectively. Also, let $\phi_2\circ\psi_1$ be super-additive and $t\phi_1^{''}(t)+\phi_1^{'}(t)\geq 0$. Then, for $0 \lt \alpha\leq 1$, we have
\begin{equation*} \boldsymbol{l}\succeq^{m}\boldsymbol{k} \Rightarrow Y_{1:n}\succeq_{st}X_{1:n}.\end{equation*}
Proof. As done in Theorem 3.1, to establish the required result, we only need to prove that
\begin{equation*}
\psi_{1}\Big[\sum_{m=1}^{n} \phi_{1}\Big(S_{1}(k_{m})\Big)\Big]\leq \psi_{1}\Big[\sum_{m=1}^{n} \phi_{1}\Big(S_{1}(l_{m})\Big)\Big].
\end{equation*}
Let us define
\begin{eqnarray*}
\delta_4(\boldsymbol{k})&=& 1-\psi_1\Big[\sum_{m=1}^n \phi_1 \Big(S_{1}(k_{m})\Big)\Big],
\end{eqnarray*}
where $\boldsymbol{k}=(k_1,\ldots,k_n)$. Upon taking the partial derivative of $\delta_4(\boldsymbol{k})$ with respect to $k_i$, we get
\begin{eqnarray*}
\frac{\partial \delta_4(\boldsymbol{k})}{\partial k_i} = -\psi_1^{'}\Big[\sum_{m=1}^n \phi_1 \Big(S_{1}(k_{m})\Big)\Big] \phi_1^{'} \Big(S_{1}(k_{i})\Big)\frac{(-1)\,\alpha (\lambda x)^{k_i}\log(\lambda x)e^{-(\lambda x)^{k_i}}}{(1-\bar{\alpha}e^{-(\lambda x)^{k_i}})^2}.
\end{eqnarray*}
Next, let us define a function $I_4(k_i)$ as
\begin{eqnarray*}
I_4(k_i) = -\phi_1^{'} \Big(S_{1}(k_{i})\Big)\frac{(-1)\,\alpha (\lambda x)^{k_i}\log(\lambda x)e^{-(\lambda x)^{k_i}}}{(1-\bar{\alpha}e^{-(\lambda x)^{k_i}})^2},
\end{eqnarray*}
which, upon taking the partial derivative with respect to $k_i$, yields
\begin{eqnarray*}
\frac{\partial I_4(k_i)}{\partial k_i}
&=& -\phi_1^{''} \Big(S_{1}(k_{i})\Big)\bigg(\frac{\alpha (\lambda x)^{k_i}\log(\lambda x)e^{(\lambda x)^{k_i}}}{(e^{(\lambda x)^{k_i}}-\bar{\alpha})^2}\bigg)^2 \\
&&+ \phi_1^{'} \Big(S_{1}(k_{i})\Big)\frac{\alpha (\lambda x)^{k_i}(\log(\lambda x))^2 e^{(\lambda x)^{k_i}}\big(((\lambda x)^{k_i}-1)e^{(\lambda x)^{k_i}}+\bar{\alpha}(\lambda x)^{k_i}+\bar{\alpha}\big)}{(e^{(\lambda x)^{k_i}}-\bar{\alpha})^3}\\
&=& -\phi_1^{''} \Big(S_{1}(k_{i})\Big)S_{1}(k_{i}) + \phi_1^{'} \Big(S_{1}(k_{i})\Big)\frac{\big(((\lambda x)^{k_i}-1)e^{(\lambda x)^{k_i}}+\bar{\alpha}(\lambda x)^{k_i}+\bar{\alpha}\big)e^{-(\lambda x)^{k_i}}}{(\lambda x)^{k_i}}.
\end{eqnarray*}
Now, since for $ 0\leq a\leq 1$ and $x\geq0$,
\begin{equation*}\frac{(x-1)e^x+ax+a}{x}e^{-x}\leq 1,\end{equation*}
we obtain
\begin{equation*}\frac{\partial I_4(k_i)}{\partial k_i}\geq -\phi_1^{''} \Big(S_{1}(k_{i})\Big)S_{1}(k_{i}) + \phi_1^{'} \Big(S_{1}(k_{i})\Big),\end{equation*}
and since $t\phi_1^{''}(t)+\phi_1^{'}(t)\geq 0$, it follows that $I_4(k_i)$ is increasing in $k_i$, for $i=1,\ldots,n$. Finally, for $i\neq j$,
\begin{align*}
&(k_i-k_j)\Big(\frac{\partial \delta_4(\boldsymbol{k})}{\partial k_{i}}-\frac{\partial \delta_4(\boldsymbol{k})}{\partial k_{j}}\Big)\nonumber\\
&=(k_i-k_j)\psi_{1}'\Big[\sum_{m=1}^{n}\phi_{1}\Big(S_{1}(k_{m})\Big)\Big][I_4(k_i)-I_4(k_j)]\leq 0,
\end{align*}
which implies that $\delta_4(\boldsymbol{k})$ is Schur-concave in $\boldsymbol{k}$ by Lemma 2.1. This completes the proof of the theorem.
In all the previous theorems, we have developed results concerning the usual stochastic order between two extremes, where the tilt parameters for both sets of variables are the same and scalar-valued. Next, we prove another result for comparing two parallel systems of n dependent components following extended Weibull distributions, wherein the dependence is modeled by Archimedean copulas having different generators and the tilt parameters are connected in the weak submajorization order. To establish the following theorem, we need $\phi_2\circ\psi_1$ to be super-additive and $t\phi_1^{''}(t)+2\phi_1^{'}(t)\geq 0,$ where $\phi_1$ is the inverse of $\psi_1$.
Theorem 3.5. Let $X_i\sim EW(\alpha_i,\lambda,k)$ $(i=1,\ldots,n)$ and $Y_i\sim EW(\beta_i, \lambda, k)$ $(i=1,\ldots,n)$, and let the associated Archimedean copulas be with generators $\psi_1$ and $\psi_2$, respectively. Also, let $\phi_2\circ\psi_1$ be super-additive and
\begin{equation*}
t\phi_1^{''}(t)+2\phi_1^{'}(t)\geq 0.
\end{equation*}
Then, we have
\begin{equation*} \boldsymbol{\alpha}\succeq_{w}\boldsymbol{\beta} \Rightarrow X_{n:n}\succeq_{st}Y_{n:n}.\end{equation*}
Proof. Using the same idea as in Theorem 3.3, to establish the required result, we only need to prove that
\begin{equation*}
\psi_{1}\Big[\sum_{m=1}^{n} \phi_{1}\Big(S_{2}(\beta_{m})\Big)\Big]\geq \psi_{1}\Big[\sum_{m=1}^{n} \phi_{1}\Big(S_{2}(\alpha_{m})\Big)\Big].
\end{equation*}
Let us define $ \delta_5(\boldsymbol{\alpha})=\psi_{1}\Big[\sum_{m=1}^{n} \phi_{1}\Big(S_{2}(\alpha_{m})\Big)\Big].$ Now, differentiating $\delta_5(\boldsymbol{\alpha})$ with respect to $\alpha_i$, we get
\begin{eqnarray*}
\frac{\partial \delta_5(\boldsymbol{\alpha})}{\partial \alpha_i}&=&-\psi_1^{'}\Big[\sum_{m=1}^n \phi_1 \Big(S_{2}(\alpha_{m})\Big)\Big]\frac{S_{2}(\alpha_{i})\phi_1^{'} \Big(S_{2}(\alpha_{i})\Big)e^{-(\lambda x)^k}}{(1-\bar{\alpha_i}e^{-(\lambda x)^k} )} \leq 0.
\end{eqnarray*}
For $i=1,\dots,n$, let
\begin{equation*}
\chi_5(\alpha_i)=-\frac{S_{2}(\alpha_{i})\phi_1^{'} \Big(S_{2}(\alpha_{i})\Big)e^{-(\lambda x)^k}}{(1-\bar{\alpha_i}e^{-(\lambda x)^k} )}.
\end{equation*}
Upon taking the partial derivative of $\chi_5(\alpha_i)$ with respect to $\alpha_i$, we get
\begin{eqnarray*}
\frac{\partial \chi_5(\alpha_i)}{\partial \alpha_i}&=&\phi_1^{''}\Big(S_{2}(\alpha_{i})\Big)\Big(\frac{(-1)(1-e^{-(\lambda x)^k})e^{-(\lambda x)^k}}{(1-\bar{\alpha_i}e^{-(\lambda x)^k} )^2}\Big)^2+2\phi_1^{'}\Big(S_{2}(\alpha_{i})\Big)\frac{(1-e^{-(\lambda x)^{k}})e^{-2(\lambda x)^k}}{(1-\bar{\alpha_i}e^{-(\lambda x)^{k}})^3}\\
&=&\frac{(1-e^{-(\lambda x)^{k}})e^{-2(\lambda x)^k}}{(1-\bar{\alpha_i}e^{-(\lambda x)^{k}})^3}\Bigg[\phi_1^{''} \Big(S_{2}(\alpha_{i})\Big)S_{2}(\alpha_{i})+2\phi_1^{'} \Big(S_{2}(\alpha_{i})\Big)\Bigg]\geq 0,
\end{eqnarray*}
since $t\phi_1^{''}(t)+2\phi_1^{'}(t)\geq 0.$ Hence,
\begin{equation*}
(\alpha_i-\alpha_j)\Big(\frac{\partial \delta_5(\boldsymbol{\alpha})}{\partial \alpha_{i}}-\frac{\partial \delta_5(\boldsymbol{\alpha})}{\partial \alpha_{j}}\Big)
=(\alpha_i-\alpha_j)\psi_{1}'\Big[\sum_{m=1}^{n}\phi_{1}\Big(S_{2}(\alpha_{m})\Big)\Big][\chi_5(\alpha_i)-\chi_5(\alpha_j)]\leq 0,
\end{equation*}
and so, $\delta_5(\boldsymbol{\alpha})$ is decreasing and Schur-concave in $\boldsymbol{\alpha}$, from Lemma 2.1. This completes the proof of the theorem.
Similarly, we can also derive conditions under which two series systems are comparable when the tilt parameter vectors are connected in the weak supermajorization order, as done in the following theorem.
Theorem 3.6. Let $X_i\sim EW(\alpha_i,\lambda,k)$ $(i=1,\ldots,n)$ and $Y_i\sim EW(\beta_i, \lambda, k)$ $(i=1,\ldots,n)$, and let the associated Archimedean survival copulas be with generators $\psi_1$ and $\psi_2$, respectively. Further, let $\phi_2\circ\psi_1$ be super-additive. Then,
\begin{equation*} \boldsymbol{\alpha}\succeq^{w}\boldsymbol{\beta} \Rightarrow X_{1:n}\succeq_{st}Y_{1:n}.\end{equation*}
Proof. First, to establish the required result, we only need to prove that
\begin{equation*}
\psi_{1}\Big[\sum_{m=1}^{n} \phi_{1}\Big(S_{1}(\beta_{m})\Big)\Big]\geq \psi_{1}\Big[\sum_{m=1}^{n} \phi_{1}\Big(S_{1}(\alpha_{m})\Big)\Big].
\end{equation*}
Let us define $\delta_{51}(\boldsymbol{\alpha})=\psi_1\Big[\sum_{m=1}^n \phi_1 \Big(S_{1}(\alpha_{m})\Big)\Big].$ Upon differentiating $\delta_{51}(\boldsymbol{\alpha})$ with respect to $\alpha_i$, we get
\begin{eqnarray*}
\frac{\partial \delta_{51}(\boldsymbol{\alpha})}{\partial \alpha_i}&=&\psi_1^{'}\Big[\sum_{m=1}^n \phi_1 \Big(S_{1}(\alpha_{m})\Big)\Big]\phi_1^{'} \Big(S_{1}(\alpha_{i})\Big)\frac{(1-e^{-(\lambda x)^k})e^{-(\lambda x)^k}}{(1-\bar{\alpha_i}e^{-(\lambda x)^k} )^2} \geq 0.
\end{eqnarray*}
Now, for $i=1,\dots,n$, let $\chi_{51}(\alpha_i)=\frac{\phi_1^{'} \Big(S_{1}(\alpha_{i})\Big)(1-e^{-(\lambda x)^k})e^{-(\lambda x)^k}}{(1-\bar{\alpha_i}e^{-(\lambda x)^k} )^2}.$ Upon taking the partial derivative of $\chi_{51}(\alpha_i)$ with respect to $\alpha_i$, we get
\begin{eqnarray*}
\frac{\partial \chi_{51}(\alpha_i)}{\partial \alpha_i}&=&\phi_1^{''}\Big(S_{1}(\alpha_{i})\Big)\Big(\frac{(1-e^{-(\lambda x)^k})e^{-(\lambda x)^k}}{(1-\bar{\alpha_i}e^{-(\lambda x)^k} )^2}\Big)^2-2\phi_1^{'}\Big(S_{1}(\alpha_{i})\Big)\frac{(1-e^{-(\lambda x)^{k}})e^{-2(\lambda x)^k}}{(1-\bar{\alpha_i}e^{-(\lambda x)^{k}})^3}\\
&=&\frac{(1-e^{-(\lambda x)^{k}})e^{-2(\lambda x)^k}}{(1-\bar{\alpha_i}e^{-(\lambda x)^{k}})^3}\Bigg[\phi_1^{''} \Big(S_{1}(\alpha_{i})\Big)S_{1}(\alpha_{i})-2\phi_1^{'} \Big(S_{1}(\alpha_{i})\Big)\Bigg]\geq 0.
\end{eqnarray*}
Hence,
\begin{align*}
(\alpha_i-\alpha_j)\Big(\frac{\partial \delta_{51}(\boldsymbol{\alpha})}{\partial \alpha_{i}}-\frac{\partial \delta_{51}(\boldsymbol{\alpha})}{\partial \alpha_{j}}\Big)
&=(\alpha_i-\alpha_j)\psi_{1}'\Big[\sum_{m=1}^{n}\phi_{1}\Big(S_{1}(\alpha_{m})\Big)\Big]\left[\chi_{51}(\alpha_i)-\chi_{51}(\alpha_j)\right]\leq 0,
\end{align*}
and so we have $\delta_{51}(\boldsymbol{\alpha})$ to be increasing and Schur-concave in $\boldsymbol{\alpha}$, from Lemma 2.1. This completes the proof of the theorem.
For the purpose of illustrating Theorem 3.5, we present the following example.
Example 3.3. Let $X_i \sim EW(\alpha_i,\lambda,k)$ and $Y_i \sim EW(\beta_i,\lambda,k)$ for $i=1,2,3$. Set k = 5.67, λ = 5.37, $(\alpha_1, \alpha_2, \alpha_3)=(0.4, 0.9, 0.1)$ and $(\beta_1, \beta_2, \beta_3)=(0.5, 0.8, 0.1)$. It is now easy to see that $(\alpha_1, \alpha_2, \alpha_3)\succeq_{w}(\beta_1, \beta_2, \beta_3)$. Now, suppose we choose the copula with $\phi(t)=e^{\frac{\theta}{t}}-e^{\theta}$, for $t\geq 0$, which satisfies
\begin{equation*} t\phi^{''}(t)+2\phi^{'}(t)\geq0,\end{equation*}
for $t\geq0,$ and take the parameters $\theta_1=2.4$ and $\theta_2=3.7$, which ensure the super-additivity of $\phi_{2} \circ \psi_1$. The distribution function of $X_{3:3}$ is
\begin{equation*}
F_{X_{3:3}} (x)=\frac{\theta_{1}}{\log\left(e^{\theta_1\frac{1-\bar{\alpha_{1}}{\mathrm{e}}^{-(\lambda x)^{k}}}{1-{\mathrm{e}}^{-(\lambda x)^{k}}}}+e^{\theta_1\frac{1-\bar{\alpha_{2}}{\mathrm{e}}^{-(\lambda x)^{k}}}{1-{\mathrm{e}}^{-(\lambda x)^{k}}}}+e^{\theta_1\frac{1-\bar{\alpha_{3}}{\mathrm{e}}^{-(\lambda x)^{k}}}{1-{\mathrm{e}}^{-(\lambda x)^{k}}}}-2\mathrm{e}^{\theta_{1}}\right)}.
\end{equation*}
In a similar fashion, we can also obtain the distribution function of $Y_{3:3}$ upon replacing $(\alpha_{1},\alpha_{2},\alpha_{3})$ by $(\beta_{1},\beta_{2},\beta_{3})$ and $\theta_1$ by $\theta_{2}.$ Then, $F_{X_{3:3}}(x) \leq F_{Y_{3:3}}(x) ,$ for all $x\geq 0,$ as established in Theorem 3.5.
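As with the earlier examples, the claim of Example 3.3 can be spot-checked numerically; the following sketch (an illustration, not part of the paper) evaluates $F_{X_{3:3}}$ and $F_{Y_{3:3}}$ on a grid and reports the largest value of their difference. Replacing the parameters by those of Counterexample 3.4 in the same routine should instead reveal a sign change, in line with Figure 4.

```python
# Minimal numerical sketch: evaluate F_{X_{3:3}} and F_{Y_{3:3}} of Example 3.3
# on a grid and report the largest value of their difference.
import numpy as np

k, lam = 5.67, 5.37
alphas, betas = (0.4, 0.9, 0.1), (0.5, 0.8, 0.1)   # tilt parameters of the X- and Y-components
th1, th2 = 2.4, 3.7                                # copula parameters, psi(t) = theta / log(t + e^theta)

def parallel_cdf(x, tilts, theta):
    w = np.exp(-(lam * x) ** k)
    terms = [np.exp(theta * (1 - (1 - a) * w) / (1 - w)) for a in tilts]
    return theta / np.log(sum(terms) - 2.0 * np.exp(theta))

# the grid starts away from 0 to avoid overflow of exp(theta/(1 - e^{-(lam x)^k}))
x = np.linspace(0.12, 1.0, 1000)
diff = parallel_cdf(x, alphas, th1) - parallel_cdf(x, betas, th2)
print("max of F_X - F_Y on the grid:", diff.max())   # expected <= 0, as the example asserts
```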
As in Counterexample 3.3, in the counterexample below, we show that if we violate the condition “$\phi_2\circ\psi_1$ is super-additive” in Theorem 3.5, then the distribution functions of $X_{n:n}$ and $Y_{n:n}$ cross each other.
Counterexample 3.4. Let $X_i \sim EW(\alpha_i,\lambda,k)$ and $Y_i \sim EW(\beta_i,\lambda,k),$ for $i=1,2,3$. Set k = 3.16, λ = 12.5, $(\alpha_1, \alpha_2, \alpha_3)=(0.82,0.85,0.95)$ and $(\beta_1, \beta_2, \beta_3)=(0.4,0.84,0.87)$. It is easy to see that $(\alpha_1, \alpha_2, \alpha_3)\succeq_{w}(\beta_1, \beta_2, \beta_3)$. Now, suppose we choose the copula with $\phi(t)=e^{\frac{\theta}{t}}-e^{\theta}$, for $t\geq 0$, which satisfies
\begin{equation*} t\phi^{''}(t)+2\phi^{'}(t)\geq0,\end{equation*}
for $t\geq0,$ but take the parameters $\theta_1=22.6$ and $\theta_2=10.7$, which violate the condition of super-additivity of $\phi_{2} \circ \psi_1$. Figure 4 plots $F_{X_{3:3}}(x)$ and $F_{Y_{3:3}}(x)$, and it is evident from it that, when the condition of super-additivity of $\phi_{2} \circ \psi_1$ in Theorem 3.5 is violated, $F_{X_{3:3}}(x)$ is greater than $F_{Y_{3:3}}(x)$ for some $x\geq 0$.

Figure 4. Plots of ${F}_{X_{3:3}}(x)$ and ${F}_{Y_{3:3}}(x)$ as in Counterexample 3.4. Here, the red line corresponds to ${F}_{X_{3:3}}(x)$ and the blue line corresponds to ${F}_{Y_{3:3}}(x)$.
Next, we establish another result with regard to the comparison of $X_{1:n}$ and $Y_{1:n}$ in terms of the hazard rate order, where the Archimedean survival copula is taken to be the independence copula with the same generator.
Theorem 3.7. Let $X_i\sim EW(\alpha,\lambda_{i},k)$ and $Y_i\sim EW(\alpha, \mu_{i}, k)$, for $i=1,\ldots,n$, and let the associated Archimedean survival copulas have generators $\psi_1(x)=\psi_2(x)=e^{-x}$, that is, the $X_i$'s and $Y_i$'s are independent random variables. Also, let $0 \lt \alpha\leq 1$ and ${k}\geq 1$. Then,
\begin{equation*} \boldsymbol{\lambda}\succeq^{m}\boldsymbol{\mu} \Rightarrow Y_{1:n}\succeq_{hr}X_{1:n}.\end{equation*}
Proof. Under the independence copula, the hazard rate function of $X_{1:n}$ is given by
\begin{align}
r_{X_{1:n}}(x)&=\sum_{i=1}^{n}\frac{k\lambda_{i}(\lambda_{i} x)^{k-1}}{1-\bar{\alpha}e^{-(\lambda_{i} x)^{k}}}.
\end{align}
By applying Lemma 2.5, it is easy to observe that $r_{X_{1:n}}(x)$ is convex in $\boldsymbol{\lambda}$. Now, upon using Proposition C.1 of [Reference Marshall, Olkin and Arnold13], we observe that $r_{X_{1:n}}(x)$ is Schur-convex with respect to $\boldsymbol{\lambda}$, which proves the theorem.
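The Schur-convexity used in this proof can be illustrated numerically. The sketch below (not part of the paper) evaluates the hazard rate in the above display for two illustrative scale vectors with $\boldsymbol{\lambda}=(3,1)$ majorizing $\boldsymbol{\mu}=(2,2)$; the values α = 0.7 and k = 1.5 are assumptions made only for this illustration.

```python
# Minimal numerical sketch: compute the hazard rate of X_{1:n} from the display in
# Theorem 3.7 for lambda = (3, 1) and mu = (2, 2), where lambda majorizes mu, and
# report the minimum of the difference over a grid of x values.
# alpha = 0.7 and k = 1.5 are illustrative assumptions, not values from the paper.
import numpy as np

alpha, k = 0.7, 1.5
abar = 1.0 - alpha

def min_hazard(x, rates):
    """Hazard rate of the series system (minimum) of independent EW components."""
    return sum(k * r * (r * x) ** (k - 1) / (1 - abar * np.exp(-(r * x) ** k)) for r in rates)

x = np.linspace(0.01, 5.0, 1000)
gap = min_hazard(x, (3.0, 1.0)) - min_hazard(x, (2.0, 2.0))
print("min of r_lambda - r_mu:", gap.min())   # expected >= 0 by the Schur-convexity argument
```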
We then present two more results concerning the hazard rate order and the reversed hazard rate order between the extreme order statistics when the tilt parameters are connected in the weak supermajorization order, under the independence copula with generator $\psi_1(x)=\psi_2(x)=e^{-x},~x \gt 0.$ The proofs of these two results can be completed by using Lemma 3.2 of [Reference Balakrishnan, Barmalzan and Haidari2].
Theorem 3.8. Let $X_i\sim EW(\alpha_{i},\lambda,k)$ and $Y_i\sim EW(\beta_{i}, \lambda, k)$, for $i=1,\ldots,n$, and let the associated Archimedean (survival) copulas be with generators $\psi_1=\psi_2=e^{-x}$. Then,
\begin{equation*} \boldsymbol{\alpha}\succeq^{w}\boldsymbol{\beta} \Rightarrow Y_{n:n}\succeq_{rh}X_{n:n}~(Y_{1:n}\succeq_{hr}X_{1:n}).\end{equation*}
We now derive some conditions on the model parameters for comparing the extremes with respect to the dispersive, star and Lorenz orders, when the variables are dependent and follow extended Weibull distributions structured with an Archimedean copula having the same generator.
Before stating our main results, we first present the following two lemmas, which will be used in their proofs.
Lemma 3.1. ([Reference Saunders and Moran18])
Let $X_a$ be a random variable with distribution function $F_a$, for each $a\in(0,\infty),$ such that
- (i) $F_a$ is supported on some interval $(x^{(a)}_{-},x^{(a)}_{+})\subset(0,\infty)$ and has density $f_a$ which does not vanish in any subinterval of $(x^{(a)}_{-},x^{(a)}_{+});$
- (ii) the derivative of $F_a$ with respect to a exists and is denoted by $F^{'}_{a}.$
Then, $X_{a}\geq_{*}X_{a^{*}},$ for $a,~a^{*}\in (0,\infty)$ and $a \gt a^{*},$ iff $F^{'}_{a}(x)/xf_{a}(x)$ is decreasing in $x.$
We now establish some sufficient conditions for the comparison of two extremes in the sense of star order, with the first result being for parallel systems and the second one being for series systems.
Theorem 3.9. Let $X_i \sim EW (\alpha,\lambda_1,k)$ $(i=1,\ldots,p)$ and $X_j \sim EW (\alpha,\lambda_2,k)$ $(j=p+1,\ldots,n)$, and $Y_i \sim EW (\alpha,\mu_1,k)$ $(i=1,\ldots,p)$ and $Y_j \sim EW (\alpha,\mu_2,k )$ $(j=p+1,\ldots,n)$, be variables with a common Archimedean copula having generator $\psi$. Then, if
\begin{equation*}
(\alpha+\bar{\alpha}t)\log\left(\frac{t}{\alpha+\bar{\alpha}t}\right)\left[\alpha+2\bar{\alpha}t-\frac{(\alpha+\bar{\alpha}
t)t\phi''(1-t)}{\phi'(1-t)}\right]
\end{equation*}
is increasing with respect to $t \in [0,1]$ and $0 \lt k\leq 1$, we have
\begin{eqnarray*}
(\lambda_1-\lambda_2)(\mu_1-\mu_2) \ge 0\quad \mathrm{and}\quad
\frac{\lambda_{2:2}}{\lambda_{1:2}} \ge \frac{\mu_{2:2}}{\mu_{1:2}}
\Longrightarrow
Y_{n:n} \le_{*} X_{n:n},
\end{eqnarray*}
where $p+q=n$.
Proof. Assume that $(\lambda_1-\lambda_2)(\mu_1-\mu_2) \ge 0$. Now, without loss of generality, let us assume that $\lambda_1\le\lambda_2$ and $\mu_1\le\mu_2$. The distribution function of $X_{n:n}$ is
\begin{equation}
F_{X_{n:n}}(x)=\psi\left[p\phi\left(\frac{1-e^{- (\lambda_1 x)^k}}{1-\bar{\alpha}e^{- (\lambda_1 x)^k}}
\right)+q\phi\left(\frac{1-e^{-(\lambda_2 x)^k}}{1-\bar{\alpha} e^{-(\lambda_2 x)^k}}\right)\right], \qquad x\in(0,\infty),
\end{equation}
and the distribution function of $Y_{n:n}$ can be obtained from (3.10) upon replacing $(\lambda_{1},\lambda_{2})$ by $(\mu_{1},\mu_{2})$, where $q=n-p$. In this case, the proof can be completed by considering the following two cases.
Case (i): $\lambda_1+\lambda_2=\mu_1+\mu_2.$ For convenience, let us assume that $\lambda_1+\lambda_2=\mu_1+\mu_2=1$. Set $\lambda_2=\lambda$, $\mu_2=\mu$, $\lambda_1=1-\lambda$ and $\mu_1=1-\mu$. Then, under this setting, the distribution functions of $X_{n:n}$ and $Y_{n:n}$ are
\begin{eqnarray*}
F_{\lambda}(x)=\psi\left[p\phi\left(\frac{1-e^{-(1-\lambda)^k x^k}}{1-\bar{\alpha}e^{-(1-\lambda)^k x^k}}
\right)+q\phi\left(\frac{1-e^{-\lambda^k x^k}}{1-\bar{\alpha} e^{-\lambda^k x^k}}\right)\right], \qquad x\in(0,\infty),
\end{eqnarray*}
\begin{eqnarray*}
F_{\mu}(x)=\psi\left[p\phi\left(\frac{1-e^{-(1-\mu)^k x^k}}{1-\bar{\alpha}e^{-(1-\mu)^k x^k}}
\right)+q\phi\left(\frac{1-e^{-\mu^k x^k}}{1-\bar{\alpha} e^{-\mu^k x^k}}\right)\right], \qquad x\in(0,\infty),
\end{eqnarray*}
respectively. Now, to obtain the required result, it is sufficient, by Lemma 3.1, to show that $\frac{F'_\lambda(x)}{x f_\lambda (x)}$ is decreasing in $x \in(0,\infty),$ for $\lambda \in(1/2,1]$. The derivative of $F_\lambda$ with respect to λ is
\begin{eqnarray*}
F'_\lambda(x)&=&\psi'\left[p\phi\left(\frac{1-e^{-(1-\lambda)^{k} x^k}}{1-\bar{\alpha}e^{-(1-\lambda)^{k} x^k}}
\right)+q\phi\left(\frac{1-e^{-\lambda^k x^k}}{1-\bar{\alpha} e^{-\lambda^k x^k}}\right)\right]\\
&&\times \left[\frac{-p\alpha x^k e^{-(1-\lambda)^k x^k}}{(1-\bar{\alpha} e^{-(1-\lambda)^k x^k})^2}
\phi'\left(\frac{1-e^{-(1-\lambda)^k x^k}}{1-\bar{\alpha}e^{-(1-\lambda)^k x^k}} \right)k(1-\lambda)^{k-1}\right.\\
&&\left.+\,\frac{q\alpha x^k e^{-\lambda^k x^k}}{(1-\bar{\alpha}e^{-\lambda^k x^k})^2}\phi'\left(\frac{1-e^{-\lambda^k x^k}}{1-\bar{\alpha}e^{-\lambda^k x^k}}\right)k\lambda^{k-1}\right].
\end{eqnarray*}
On the other hand, the density function corresponding to $F_\lambda$ has the form
\begin{eqnarray*}
f_\lambda(x)&=&\psi'\left[p\phi\left(\frac{1-e^{-(1-\lambda)^k x^k}}{1-\bar{\alpha}e^{-(1-\lambda)^k x^k}}
\right)+q\phi\left(\frac{1-e^{-\lambda^k x^k}}{1-\bar{\alpha} e^{-\lambda^k x^k}}\right)\right]\\
&&\times \left[\frac{p\alpha (1-\lambda)^k e^{-(1-\lambda)^k x^k}}{(1-\bar{\alpha} e^{-(1-\lambda)^k x^k})^2}
\phi'\left(\frac{1-e^{-(1-\lambda)^k x^k}}{1-\bar{\alpha}e^{-(1-\lambda)^k x^k}} \right)\right.\\
&& \left. +\,\frac{q\alpha \lambda^k e^{-\lambda^k x^k}}{(1-\bar{\alpha}e^{-\lambda^k x^k})^2}\phi'\left(\frac{1-e^{-\lambda^k x^k}}{1-\bar{\alpha}e^{-\lambda^k x^k}}\right)\right]kx^{k-1}.
\end{eqnarray*}
So, we have
\begin{eqnarray*}
\frac{F'_\lambda(x)}{xf_\lambda(x)}&=&\frac{\frac{-p\alpha x^k e^{-(1-\lambda)^k x^k}}{(1-\bar{\alpha} e^{-(1-\lambda)^k x^k})^2}
\phi'\left(\frac{1-e^{-(1-\lambda)^k x^k}}{1-\bar{\alpha}e^{-(1-\lambda)^k x^k}} \right)k(1-\lambda)^{k-1}+\frac{q\alpha x^k e^{-\lambda^k x^k}}{(1-\bar{\alpha}e^{-\lambda^k x^k})^2}\phi'\left(\frac{1-e^{-\lambda^k x^k}}{1-\bar{\alpha}e^{-\lambda^k x^k}}\right)k\lambda^{k-1}}{\frac{p\alpha (1-\lambda)^k e^{-(1-\lambda)^k x^k}}{(1-\bar{\alpha} e^{-(1-\lambda)^k x^k})^2}
\phi'\left(\frac{1-e^{-(1-\lambda)^k x^k}}{1-\bar{\alpha}e^{-(1-\lambda)^k x^k}} \right)kx^k+\frac{q\alpha \lambda^k e^{-\lambda^k x^k}}{(1-\bar{\alpha}e^{-\lambda^k x^k})^2}\phi'\left(\frac{1-e^{-\lambda^k x^k}}{1-\bar{\alpha}e^{-\lambda^k x^k}}\right)kx^{k}}\\
&=&\left(\lambda+\frac{\frac{p\alpha x^k e^{-(1-\lambda)^k x^k}}{(1-\bar{\alpha} e^{-(1-\lambda)^k x^k})^2}\phi'\left(\frac{1-e^{-(1-\lambda)^k x^k}}{1-\bar{\alpha}e^{-(1-\lambda)^k x^k}}\right)k(1-\lambda)^{k-1}}{\frac{-p\alpha x^k e^{-(1-\lambda)^k x^k}}{(1-\bar{\alpha} e^{-(1-\lambda)^k x^k})^2}
\phi'\left(\frac{1-e^{-(1-\lambda)^k x^k}}{1-\bar{\alpha}e^{-(1-\lambda)^k x^k}} \right)k(1-\lambda)^{k-1}+\frac{q\alpha x^k e^{-\lambda^k x^k}}{(1-\bar{\alpha}e^{-\lambda^k x^k})^2}\phi'\left(\frac{1-e^{-\lambda^k x^k}}{1-\bar{\alpha}e^{-\lambda^k x^k}}\right)k\lambda^{k-1}}\right)^{-1}\\
&=&\left(\lambda+\left(\frac{\frac{qe^{-\lambda^k x^k}}{(1-\bar{\alpha}e^{-\lambda^k x^k})^2}\phi'\left(\frac{1-e^{-\lambda^k x^k}}{1-\bar{\alpha}e^{-\lambda^k x^k}}\right)\lambda^{k-1}}{\frac{p e^{-(1-\lambda)^k x^k}}{(1-\bar{\alpha} e^{-(1-\lambda)^k x^k})^2}\phi'\left(\frac{1-e^{-(1-\lambda)^k x^k}}{1-\bar{\alpha}e^{-(1-\lambda)^k x^k}}\right)(1-\lambda)^{k-1}}-1 \right)^{-1}\right)^{-1}.
\end{eqnarray*}
Thus, it suffices to show that, for $\lambda\in (1/2,1]$,
 \begin{equation*}
\Delta(x)=\frac{\frac{e^{-\lambda^k x^k}}{(1-\bar{\alpha}e^{-\lambda^k x^k})^2}\phi'\left(\frac{1-e^{-\lambda x^k}}{1-\bar{\alpha}e^{-\lambda^k x^k}}\right)\lambda^{k-1}}{\frac{e^{-(1-\lambda)^k x^k}}{(1-\bar{\alpha} e^{-(1-\lambda)^k x^k})^2}\phi'\left(\frac{1-e^{(1-\lambda)^k x^k}}{1-\bar{\alpha}e^{(1-\lambda)^k x^k}}\right)(1-\lambda)^{k-1}}
\end{equation*}
is decreasing in $x\in(0,\infty)$. Now, let us set $t_1=\frac{\alpha e^{-\lambda^k x^k}}{1-\bar{\alpha}e^{-\lambda^k x^k}}$ and $t_2=\frac{\alpha e^{-(1-\lambda)^k x^k}}{1-\bar{\alpha}e^{-(1-\lambda)^k x^k}}$. From the fact that $\lambda \in (1/2, 1]$, we have $t_1 \lt t_2$ for all $x\in (0,\infty)$, and so
 \begin{eqnarray*}
\Delta(x)&=&\frac{t_1^2\phi'(1-t_1)e^{\lambda^k x^k}}{t_2^2\phi'(1-t_2)e^{(1-\lambda)^k x^k}}\\
&=&\frac{t_1(\alpha+\bar{\alpha}t_1)\phi'(1-t_1)}{t_2(\alpha+\bar{\alpha}t_2)\phi'(1-t_2)}
\end{eqnarray*}
from which we get the derivative of $\Delta(x)$ with respect to x to be
 \begin{eqnarray*}
\Delta'(x)&\stackrel{sgn}{=}&\left[t_1'(\alpha+\bar{\alpha}t_1)\phi'(1-t_1)-t_1(\alpha+\bar{\alpha}t_1)t_1'\phi''(1-t_1)+\bar{\alpha}t_1t_1'\phi'(1-t_1)\right]\\
&&\times\, t_2(\alpha+\bar{\alpha}t_2)\phi'(1-t_2)-\left[t_2'(\alpha+\bar{\alpha}t_2)\phi'(1-t_2)-t_2(\alpha+\bar{\alpha}t_2)t_2'\phi''(1-t_2)\right.\\
&&\left.+\,\bar{\alpha}t_2t_2'\phi'(1-t_2)\right]\times t_1(\alpha+\bar{\alpha}t_1)\phi'(1-t_1)\\
&\stackrel{sgn}{=}&\frac{t_1'}{t_1}\left[\alpha+2\bar{\alpha}t_1-\frac{(\alpha+\bar{\alpha}
t_1)t_1\phi''(1-t_1)}{\phi'(1-t_1)}\right]-\frac{t_2'}{t_2}\left[\alpha+2\bar{\alpha}t_2-\frac{(\alpha+\bar{\alpha}
t_2)t_2\phi''(1-t_2)}{\phi'(1-t_2)}\right].
\end{eqnarray*}
It is easy to show that the derivatives of $t_1$ and $t_2$ with respect to x are
 \begin{eqnarray*}
t_1'=\frac{-\lambda^k t_1 k x^{k-1}}{1-\bar{\alpha}e^{-\lambda^k x^k}}=\frac{-\lambda^k k x^{k-1}}{\alpha}(\alpha+\bar{\alpha}t_1)t_1=\frac{k(\alpha+\bar{\alpha}t_1)t_1}{\alpha x}\log\left(\frac{t_1}{\alpha+\bar{\alpha}t_1}\right),
\end{eqnarray*}
\begin{eqnarray*}
t_2'=\frac{-(1-\lambda)^k k x^{k-1} t_2}{1-\bar{\alpha}e^{-(1-\lambda)^k x^k}}=\frac{-(1-\lambda)^k k x^{k-1}}{\alpha}(\alpha+\bar{\alpha}t_2)t_2=\frac{k(\alpha+\bar{\alpha}t_2)t_2}{\alpha x}\log\left(\frac{t_2}{\alpha+\bar{\alpha}t_2}\right).
\end{eqnarray*}
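As a quick numerical sanity check of these closed forms (not part of the argument), one may compare them with a central finite-difference derivative; the following minimal Python sketch uses arbitrary illustrative values of λ, k and α:
\begin{verbatim}
import numpy as np

def t1(x, lam, k, alpha):
    # t_1(x) = alpha*exp(-lam^k x^k) / (1 - (1-alpha)*exp(-lam^k x^k))
    u = np.exp(-(lam ** k) * x ** k)
    return alpha * u / (1.0 - (1.0 - alpha) * u)

def t1_prime_closed(x, lam, k, alpha):
    # closed form: k*(alpha + (1-alpha)*t)*t/(alpha*x) * log(t/(alpha + (1-alpha)*t))
    t = t1(x, lam, k, alpha)
    s = alpha + (1.0 - alpha) * t
    return k * s * t / (alpha * x) * np.log(t / s)

lam, k, alpha, h = 0.7, 0.8, 0.4, 1e-6   # illustrative values only
for x in (0.5, 1.0, 2.0):
    fd = (t1(x + h, lam, k, alpha) - t1(x - h, lam, k, alpha)) / (2 * h)
    print(x, fd, t1_prime_closed(x, lam, k, alpha))   # last two columns agree
\end{verbatim}
The same check applies to $t_2'$ upon replacing $\lambda$ by $1-\lambda$.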
Hence, we get
 \begin{eqnarray*}
\Delta'(x)&\stackrel{sgn}{=}& k(\alpha+\bar{\alpha}t_1)\log\left(\frac{t_1}{\alpha+\bar{\alpha}t_1}\right)\left[\alpha+2\bar{\alpha}t_1-\frac{(\alpha+\bar{\alpha}
t_1)t_1\phi''(1-t_1)}{\phi'(1-t_1)}\right]\\
&&-k(\alpha+\bar{\alpha}t_2)\log\left(\frac{t_2}{\alpha+\bar{\alpha}t_2}\right)\left[\alpha+2\bar{\alpha}t_2-\frac{(\alpha+\bar{\alpha}
t_2)t_2\phi''(1-t_2)}{\phi'(1-t_2)}\right].
\end{eqnarray*}
As $t_1 \lt t_2$, $\Delta' \lt 0$ if
 \begin{equation*}(\alpha+\bar{\alpha}t)\log\left(\frac{t}{\alpha+\bar{\alpha}t}\right)\left[\alpha+2\bar{\alpha}t-\frac{(\alpha+\bar{\alpha}
t)t\phi''(1-t)}{\phi'(1-t)}\right]\end{equation*}
is increasing in $t\in[0,1]$.
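For instance, for the independence copula generator $\psi(x)=e^{-x}$ (so that $\phi(t)=-\log t$, $\phi'(t)=-1/t$ and $\phi''(t)=1/t^2$) and α = 1, the above expression reduces to
 \begin{equation*}
\log t\left[1-\frac{t\,\phi''(1-t)}{\phi'(1-t)}\right]=\log t\left[1+\frac{t}{1-t}\right]=\frac{\log t}{1-t},
\end{equation*}
whose derivative has the same sign as $\frac{1-t}{t}+\log t\geq 0$ on $(0,1]$, so that the condition is indeed satisfied in this special case (see also Remark 3.4).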
 Case (ii): $\lambda_1+\lambda_2\neq \mu_1+\mu_2.$ In this case, we can write $\lambda_1+\lambda_2=c(\mu_1+\mu_2)$, where c is a positive scalar. We then have $(c\mu_1,c\mu_2)\stackrel{m}{\preceq}(\lambda_1 , \lambda_2)$. Let $W_{n:n}$ be the lifetime of a parallel system having n dependent extended Weibull distributed components whose lifetimes have an Archimedean copula with generator ψ, where $W_i \sim EW(\alpha, c\mu_1, k)$ ($i=1,\ldots, p$) and $W_{j} \sim EW(\alpha, c\mu_2, k)$ ($\,j=p+1,\ldots,n$). From the result in Case (i), we then have $W_{n:n} \le_{*} X_{n:n}$. But, since the star order is scale invariant, it then follows that $Y_{n:n} \le_{*} X_{n:n}$.
Theorem 3.10. Let $X_i \sim EW (\alpha,\lambda_1,k)$ $(i=1,\ldots,p)$ and $X_j \sim EW (\alpha,\lambda_2,k)$ $(j=p+1,\ldots,n)$, and $Y_i \sim EW (\alpha,\mu_1,k)$ $(i=1,\ldots,p)$ and $Y_j \sim EW (\alpha,\mu_2,k)$ $(\,j=p+1,\ldots,n)$ be variables with a common Archimedean survival copula having generator ψ. Then, if
 \begin{equation*}(\alpha+\bar{\alpha}t)\log\left(\frac{t}{\alpha+\bar{\alpha}t}\right)\left((\alpha +2\bar{\alpha}t)+(\alpha t+\bar{\alpha}t^2)\frac{\phi''(t)}{\phi'(t)}\right)\end{equation*}
is decreasing with respect to $t \in [0,1]$ and $0\le k\leq 1$, we have
 \begin{equation*}
(\lambda_1-\lambda_2)(\mu_1-\mu_2) \ge 0\,\,\,\,\,\text{and}\,\,\,\,\,\,
\frac{\lambda_{2:2}}{\lambda_{1:2}} \ge \frac{\mu_{2:2}}{\mu_{1:2}}
\Longrightarrow
Y_{1:n} \le_{*} X_{1:n},
\end{equation*}
where $p+q=n$.
Proof. Assume that $(\lambda_1-\lambda_2)(\mu_1-\mu_2) \ge 0$. Now, without loss of generality, let us assume that $\lambda_1\le\lambda_2$ and $\mu_1\le\mu_2$. The distribution functions of $X_{1:n}$ and $Y_{1:n}$ are
 \begin{equation*}
F_{X_{1:n}}(x)=1-\psi\left[p\phi\left(\frac{\alpha e^{-(\lambda_1 x)^k}}{1-\bar{\alpha} e^{-(\lambda_1 x)^k}}\right)+q\phi\left(\frac{\alpha e^{- (\lambda_2 x)^k}}{1-\bar{\alpha}e^{- (\lambda_2 x)^k}} \right)\right],\quad x\in (0,\infty),
\end{equation*}
and the distribution function of $Y_{1:n}$ can be obtained simply upon replacing $(\lambda_{1},\lambda_{2})$ by $(\mu_{1},\mu_{2}),$ where $q=n-p$. In this case, the proof can be completed by considering the following two cases.
 Case (i): $\lambda_1+\lambda_2=\mu_1+\mu_2.$ For convenience, let us assume that $\lambda_1+\lambda_2=\mu_1+\mu_2=1$. Set $\lambda_1=\lambda$, $\mu_1=\mu$, $\lambda_2=1-\lambda$ and $\mu_2=1-\mu$. Then, under this setting, the distribution functions of $X_{1:n}$ and $Y_{1:n}$ are
 \begin{equation*}
F_{\lambda}(x)=1-\psi\left[p\phi\left(\frac{\alpha e^{-\lambda^k x^k}}{1-\bar{\alpha} e^{-\lambda^k x^k}}\right)+q\phi\left(\frac{\alpha e^{-(1-\lambda)^k x^k}}{1-\bar{\alpha}e^{-(1-\lambda)^k x^k}} \right)\right],\quad x\in (0,\infty),
\end{equation*}
and
 \begin{eqnarray*}
F_{\mu}(x)=1-\psi\left[p\phi\left(\frac{\alpha e^{-\mu^k x^k}}{1-\bar{\alpha} e^{-\mu^k x^k}}\right)+q\phi\left(\frac{\alpha e^{-(1-\mu)^k x^k}}{1-\bar{\alpha}e^{-(1-\mu)^k x^k}} \right)\right],\quad x\in (0,\infty),
\end{eqnarray*}
respectively. Now, to obtain the required result, it is sufficient to show that $\frac{F'_\lambda(x)}{x f_\lambda (x)}$ is decreasing in $x \in(0,\infty)$ for $\alpha \in[0,1/2)$. The derivative of $F_\lambda$ with respect to λ is
 \begin{eqnarray*}
F'_\lambda(x)&=&-\psi'\left[p\phi\left(\frac{\alpha e^{-\lambda^k x^k}}{1-\bar{\alpha}e^{-\lambda^k x^k}}
\right)+q\phi\left(\frac{\alpha e^{-(1-\lambda)^k x^k}}{1-\bar{\alpha} e^{-(1-\lambda)^k x^k}}\right)\right]\\
&&\times \left[p\frac{-\alpha x^k e^{-\lambda^k x^k}}{(1-\bar{\alpha} e^{-\lambda^k x^k})^2}
\phi'\left(\frac{\alpha e^{-\lambda^k x^k}}{1-\bar{\alpha}e^{-\lambda^k x^k}} \right)k\lambda^{k-1}+q\frac{\alpha x^k e^{-(1-\lambda)^k x^k}}{(1-\bar{\alpha}e^{-(1-\lambda)^k x^k})^2}\right.\\
&&\left.\phi'\left(\frac{\alpha e^{-(1-\lambda)^k x^k}}{1-\bar{\alpha}e^{-(1-\lambda)^k x^k}}\right)k(1-\lambda)^{k-1}\right].
\end{eqnarray*}
On the other hand, the density function corresponding to $F_\lambda$ has the form
 \begin{eqnarray*}
f_\lambda(x)&=&-\psi'\left[p\phi\left(\frac{\alpha e^{-\lambda^k x^k}}{1-\bar{\alpha}e^{-\lambda^k x^k}}
\right)+q\phi\left(\frac{\alpha e^{-(1-\lambda)^k x^k}}{1-\bar{\alpha} e^{-(1-\lambda)^k x^k}}\right)\right]\\
&&\times \left[p\frac{-\alpha \lambda^k e^{-\lambda^k x^k}}{(1-\bar{\alpha} e^{-\lambda^k x^k})^2}
\phi'\left(\frac{\alpha e^{-\lambda^k x^k}}{1-\bar{\alpha}e^{-\lambda^k x^k}} \right)-q\frac{(1-\lambda)^k \alpha e^{-(1-\lambda)^k x^k}}{(1-\bar{\alpha}e^{-(1-\lambda)^k x^k})^2}\phi'\left(\frac{\alpha e^{-(1-\lambda)^k x^k}}{1-\bar{\alpha}e^{-(1-\lambda)^k x^k}}\right)\right]kx^{k-1}.
\end{eqnarray*}
So, we have
 \begin{eqnarray*}
\frac{F'_\lambda(x)}{xf_\lambda(x)}&=& \frac{p\frac{-\alpha x^k e^{-\lambda^k x^k}}{(1-\bar{\alpha} e^{-\lambda^k x^k})^2}
\phi'\left(\frac{\alpha e^{-\lambda^k x^k}}{1-\bar{\alpha}e^{-\lambda^k x^k}} \right)\lambda^{k-1}+q\frac{\alpha x^k e^{-(1-\lambda)^k x^k}}{(1-\bar{\alpha}e^{-(1-\lambda)^k x^k})^2}\phi'\left(\frac{\alpha e^{-(1-\lambda)^k x^k}}{1-\bar{\alpha}e^{-(1-\lambda)^k x^k}}\right)(1-\lambda)^{k-1}}{p\frac{-\alpha x^k\lambda^k e^{-\lambda^k x^k}}{(1-\bar{\alpha} e^{-\lambda^k x^k})^2}
\phi'\left(\frac{\alpha e^{-\lambda^k x^k}}{1-\bar{\alpha}e^{-\lambda^k x^k}} \right)-q\frac{(1-\lambda)^k x^k \alpha e^{-(1-\lambda)^k x^k}}{(1-\bar{\alpha}e^{-(1-\lambda)^k x^k})^2}\phi'\left(\frac{\alpha e^{-(1-\lambda)^k x^k}}{1-\bar{\alpha}e^{-(1-\lambda)^k x^k}}\right)}\\
&=&\left(\lambda+\frac{\frac{-q\alpha x^k e^{-(1-\lambda)^k x^k}}{(1-\bar{\alpha} e^{-(1-\lambda)^k x^k})^2}\phi'\left(\frac{\alpha e^{-(1-\lambda)^k x^k}}{1-\bar{\alpha}e^{-(1-\lambda)^k x^k}}\right)(1-\lambda)^{k-1}}{-p\frac{\alpha x^k e^{-\lambda^k x^k}}{(1-\bar{\alpha} e^{-\lambda^k x^k})^2}
\phi'\left(\frac{\alpha e^{-\lambda^k x^k}}{1-\bar{\alpha}e^{-\lambda^k x^k}} \right)\lambda^{k-1}+q\frac{\alpha x^k e^{-(1-\lambda)^k x^k}}{(1-\bar{\alpha}e^{-(1-\lambda)^k x^k})^2}\phi'\left(\frac{\alpha e^{-(1-\lambda)^k x^k}}{1-\bar{\alpha}e^{-(1-\lambda)^k x^k}}\right)(1-\lambda)^{k-1}} \right)^{-1}\\
&=&\left(\lambda+\left[\frac{p\frac{e^{-\lambda^k x^k}}{(1-\bar{\alpha}e^{-\lambda^k x^k})^2}\phi'\left(\frac{\alpha e^{-\lambda^k x^k}}{1-\bar{\alpha}e^{-\lambda^k x^k}}\right)\lambda^{k-1}}{q\frac{e^{-(1-\lambda)^k x^k}}{(1-\bar{\alpha} e^{-(1-\lambda)^k x^k})^2}\phi'\left(\frac{\alpha e^{-(1-\lambda)^k x^k}}{1-\bar{\alpha}e^{-(1-\lambda)^k x^k}}\right)(1-\lambda)^{k-1}}-1 \right]^{-1}\right)^{-1}.
\end{eqnarray*}
We can then conclude that $\frac{F'_\lambda (x)}{xf_\lambda (x)}$ is decreasing if $\Omega(x)=\frac{e^{\lambda^k x^k}\left(\frac{\alpha e^{-\lambda^k x^k}}{1-\bar{\alpha}e^{-\lambda^k x^k}}\right)^2\phi'\left(\frac{\alpha e^{-\lambda^k x^k}}{1-\bar{\alpha}e^{-\lambda^k x^k}}\right)}{e^{(1-\lambda)^k x^k}\left(\frac{\alpha e^{-(1-\lambda)^k x^k}}{1-\bar{\alpha} e^{-(1-\lambda)^k x^k}}\right)^2\phi'\left(\frac{\alpha e^{-(1-\lambda)^k x^k}}{1-\bar{\alpha}e^{-(1-\lambda)^k x^k}}\right)}$ is decreasing for $x\in (0,\infty)$. Let $t_1=\frac{\alpha e^{-\lambda^k x^k}}{1-\bar{\alpha}e^{-\lambda^k x^k}}$ and $t_2=\frac{\alpha e^{-(1-\lambda)^k x^k}}{1-\bar{\alpha}e^{-(1-\lambda)^k x^k}}$. Then, $e^{\lambda^k x^k}=\frac{\alpha+\bar{\alpha}t_1}{t_1}$ and $e^{(1-\lambda)^k x^k}=\frac{\alpha+\bar{\alpha}t_2}{t_2}$, and so
 \begin{equation*}\Omega(x)=\frac{t_1(\alpha+\bar{\alpha}t_1)\phi'\left(t_1\right)}{t_2(\alpha+\bar{\alpha}t_2)\phi'\left(t_2\right)},\end{equation*}
whose derivative with respect to x is
 \begin{eqnarray*}
\Omega'(x)&=&\left(\frac{t_1(\alpha+\bar{\alpha}t_1)\phi'\left(t_1\right)}{t_2(\alpha+\bar{\alpha}t_2)\phi'\left(t_2\right)}\right)'\\
&\stackrel{sgn}{=}&\left((\alpha t_1'+2\bar{\alpha}t_1t_1')\phi'(t_1)+(\alpha t_1+\bar{\alpha}t_1^2)t_1'\phi''(t_1)\right)\times (\alpha t_2+\bar{\alpha}t_2^2)\phi'(t_2)\\
&&-\left((\alpha t_2'+2\bar{\alpha}t_2t_2')\phi'(t_2)+(\alpha t_2+\bar{\alpha}t_2^2)t_2'\phi''(t_2)\right)\times (\alpha t_1+\bar{\alpha}t_1^2)\phi'(t_1)\\
&\stackrel{sgn}{=}&\frac{t_1'}{t_1}\left((\alpha +2\bar{\alpha}t_1)+(\alpha t_1+\bar{\alpha}t_1^2)\frac{\phi''(t_1)}{\phi'(t_1)}\right)-\frac{t_2'}{t_2}\left((\alpha +2\bar{\alpha}t_2)+(\alpha t_2+\bar{\alpha}t_2^2)\frac{\phi''(t_2)}{\phi'(t_2)}\right).
\end{eqnarray*}
It is easy to show that $t_1'=\frac{k(\alpha+\bar{\alpha}t_1)t_1}{\alpha x}\log\left(\frac{t_1}{\alpha+\bar{\alpha}t_1}\right)$ and $t_2'=\frac{k(\alpha+\bar{\alpha}t_2)t_2}{\alpha x}\log\left(\frac{t_2}{\alpha+\bar{\alpha}t_2}\right)$. Hence, we have
 \begin{eqnarray*}
\Omega'(x)&\stackrel{sgn}{=}&(\alpha+\bar{\alpha}t_1)\log\left(\frac{t_1}{\alpha+\bar{\alpha}t_1}\right)\left((\alpha +2\bar{\alpha}t_1)+(\alpha t_1+\bar{\alpha}t_1^2)\frac{\phi''(t_1)}{\phi'(t_1)}\right)\\
&-&(\alpha+\bar{\alpha}t_2)\log\left(\frac{t_2}{\alpha+\bar{\alpha}t_2}\right)\left((\alpha +2\bar{\alpha}t_2)+(\alpha t_2+\bar{\alpha}t_2^2)\frac{\phi''(t_2)}{\phi'(t_2)}\right).
\end{eqnarray*}
Now, as $t_2 \lt t_1$, $\Omega(x)$ is decreasing if
 \begin{equation*}(\alpha+\bar{\alpha}t)\log\left(\frac{t}{\alpha+\bar{\alpha}t}\right)\left((\alpha +2\bar{\alpha}t)+(\alpha t+\bar{\alpha}t^2)\frac{\phi''(t)}{\phi'(t)}\right)\end{equation*}
is decreasing in $t\in[0,1]$, as required.
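As an illustration of this condition (a minimal numerical sketch, not part of the proof), one may evaluate the expression on a grid for the Clayton generator $\psi(x)=(1+ax)^{-1/a}$ of Remark 3.4, for which $\phi(t)=(t^{-a}-1)/a$, $\phi'(t)=-t^{-a-1}$ and $\phi''(t)=(a+1)t^{-a-2}$; for α = 1 it simplifies to $-a\log t$, which is clearly decreasing:
\begin{verbatim}
import numpy as np

a, alpha = 2.0, 1.0        # illustrative Clayton parameter; alpha = 1 as in Remark 3.4
abar = 1.0 - alpha

def phi_d1(t):             # phi'(t) for the Clayton generator
    return -t ** (-a - 1.0)

def phi_d2(t):             # phi''(t) for the Clayton generator
    return (a + 1.0) * t ** (-a - 2.0)

def condition(t):          # the displayed expression
    s = alpha + abar * t
    return s * np.log(t / s) * ((alpha + 2.0 * abar * t)
                                + (alpha * t + abar * t ** 2) * phi_d2(t) / phi_d1(t))

t = np.linspace(0.01, 0.99, 500)
print(np.all(np.diff(condition(t)) <= 1e-12))   # True: decreasing on this grid
\end{verbatim}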
 Case (ii): $\lambda_1+\lambda_2\neq \mu_1+\mu_2.$ In this case, we can write $\lambda_1+\lambda_2=c(\mu_1+\mu_2)$, where c is a positive scalar. We then have $(c\mu_1,c\mu_2)\stackrel{m}{\preceq}(\lambda_1 , \lambda_2)$. Let $W_{1:n}$ be the lifetime of a series system having n dependent extended Weibull distributed components whose lifetimes have an Archimedean survival copula with generator ψ, where $W_i \sim EW(\alpha, c\mu_1, k)$ ($i=1,\ldots, p$) and $W_{j} \sim EW(\alpha, c\mu_2, k)$ ($\,j=p+1,\ldots,n$). From the result in Case (i), we then have $W_{1:n} \le_{*} X_{1:n}$. But, since the star order is scale invariant, it then follows that $Y_{1:n} \le_{*} X_{1:n}$.
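To illustrate the comparison numerically, the following rough Python sketch (with all parameter values chosen arbitrarily and α = 1, so that the condition of Remark 3.4 applies) computes the distribution function of $X_{1:n}$ displayed above under the Clayton survival copula and inspects the star-order ratio $F_{X_{1:n}}^{-1}(F_{Y_{1:n}}(x))/x$, which should be increasing in x when $Y_{1:n} \le_{*} X_{1:n}$ holds:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

alpha, k, a = 1.0, 0.8, 2.0            # illustrative EW and Clayton parameters
lam, mu = (0.5, 2.5), (1.0, 2.0)       # lambda's more spread out than mu's, equal sums

def sf_ew(x, rate):                    # survival function of EW(alpha, rate, k)
    u = np.exp(-(rate * x) ** k)
    return alpha * u / (1.0 - (1.0 - alpha) * u)

def psi(s):                            # Clayton generator
    return (1.0 + a * s) ** (-1.0 / a)

def phi(t):                            # its inverse
    return (t ** (-a) - 1.0) / a

def cdf_min(x, rates):                 # F_{1:n}(x) = 1 - psi(sum_i phi(Fbar(rate_i * x)))
    return 1.0 - psi(sum(phi(sf_ew(x, r)) for r in rates))

def quantile_min(p, rates):            # numerical inverse of cdf_min
    return brentq(lambda x: cdf_min(x, rates) - p, 1e-9, 50.0)

for x in (0.2, 0.5, 1.0, 2.0, 4.0):
    print(x, quantile_min(cdf_min(x, mu), lam) / x)
\end{verbatim}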
 It is important to mention that $X\leq_{*}Y$ implies $X\leq_{Lorenz}Y.$ Therefore, from Theorems 3.9 and 3.10, we readily obtain the following two corollaries.
Corollary 3.1. Let $X_i \sim EW (\alpha,\lambda_1,k)$ $(i=1,\ldots,p)$ and $X_j \sim EW (\alpha,\lambda_2,k)$ $(j=p+1,\ldots,n)$, and $Y_i \sim EW (\alpha,\mu_1,k)$ $(i=1,\ldots,p)$ and $Y_j \sim EW (\alpha,\mu_2,k)$ $(j=p+1,\ldots,n)$ be variables with a common Archimedean survival copula having generator ψ. Then, if
 \begin{equation*}(\alpha+\bar{\alpha}t)\log\left(\frac{t}{\alpha+\bar{\alpha}t}\right)\left((\alpha +2\bar{\alpha}t)+(\alpha t+\bar{\alpha}t^2)\frac{\phi''(t)}{\phi'(t)}\right)\end{equation*}
is decreasing with respect to $t \in [0,1]$ and $0\le k\leq 1$, we have
 \begin{equation*}
(\lambda_1-\lambda_2)(\mu_1-\mu_2) \ge 0\,\,\,\,\,\text{and}\,\,\,\,\,\,
\frac{\lambda_{2:2}}{\lambda_{1:2}} \ge \frac{\mu_{2:2}}{\mu_{1:2}}
\Longrightarrow
Y_{1:n} \le_{Lorenz} X_{1:n},
\end{equation*}
where $p+q=n$.
Corollary 3.2. Let $X_i \sim EW (\alpha,\lambda_1,k)$ $(i=1,\ldots,p)$ and $X_j \sim EW (\alpha,\lambda_2,k)$ $(j=p+1,\ldots,n)$, and $Y_i \sim EW (\alpha,\mu_1,k)$ $(i=1,\ldots,p)$ and $Y_j \sim EW (\alpha,\mu_2,k )$ $(j=p+1,\ldots,n)$ be variables with a common Archimedean copula having generator ψ. Then, if
 \begin{equation*}
(\alpha+\bar{\alpha}t)\log\left(\frac{t}{\alpha+\bar{\alpha}t}\right)\left[\alpha+2\bar{\alpha}t-\frac{(\alpha+\bar{\alpha}t)t\phi''(1-t)}{\phi'(1-t)}\right]
\end{equation*}
is increasing with respect to $t \in [0,1]$ and $0\le k\leq 1$, we have
 \begin{equation*}
(\lambda_1-\lambda_2)(\mu_1-\mu_2) \ge 0\,\,\,\,\,\text{and}\,\,\,\,\,\,
\frac{\lambda_{2:2}}{\lambda_{1:2}} \ge \frac{\mu_{2:2}}{\mu_{1:2}}
\Longrightarrow
Y_{n:n} \le_{Lorenz} X_{n:n},
\end{equation*}
where $p+q=n$.
 We now present some conditions for comparing the smallest order statistics in terms of the dispersive order. In the following theorem, we use the notation $I_+=\{(\lambda_{1},\ldots,\lambda_{n}):0 \lt \lambda_{1}\leq\cdots\leq\lambda_{n}\}$ and $D_+=\{(\lambda_{1},\ldots,\lambda_{n}):\lambda_{1}\geq\cdots\geq\lambda_{n} \gt 0\}.$
Theorem 3.11. Let $X_i\sim EW(\alpha, \lambda_i,k)$ $(i=1,\ldots,n)$ and $Y_i\sim EW(\alpha, \lambda ,k)$ $(i=1,\ldots,n)$ have a common Archimedean survival copula with generator $\psi,$ where $0 \leq k \leq 1$ and $0 \leq \alpha \leq 1$. Then, $\lambda \leq (\lambda_1\cdots\lambda_n)^{\frac{1}{n}}$ implies $Y_{1:n} \succeq_{disp} X_{1:n}$, provided ${\psi}/{\psi^{'}}$ is decreasing and concave.
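As a simple check of this statement in a special case (not part of the original development), take the independence generator $\psi(x)=e^{-x}$ and α = 1, so that the survival functions of $X_{1:n}$ and $Y_{1:n}$ are $\exp\{-\sum_{i=1}^{n}(\lambda_i x)^k\}$ and $\exp\{-n(\lambda x)^k\}$, respectively. The difference between the quantile functions of $Y_{1:n}$ and $X_{1:n}$ is then
 \begin{equation*}
\left(-\log(1-u)\right)^{1/k}\left[\left(n\lambda^k\right)^{-1/k}-\Big(\sum_{i=1}^{n}\lambda_i^k\Big)^{-1/k}\right],\qquad u\in(0,1),
\end{equation*}
which is increasing in u whenever $n\lambda^k\leq \sum_{i=1}^{n}\lambda_i^k$; the latter follows from $\lambda \leq (\lambda_1\cdots\lambda_n)^{\frac{1}{n}}$ and the arithmetic-geometric mean inequality, in agreement with the conclusion $Y_{1:n} \succeq_{disp} X_{1:n}$.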
Proof. First, let us consider the function
 \begin{equation*}F(x)= \frac{1-e^{-x^{k}}}{\left(1-\bar{\alpha} e^{-x^{k}}\right)}.\end{equation*}
Let us define another function
 \begin{equation*}\bar{h}(x) = \frac{x f(x)}{\bar{F}(x)}. \end{equation*}
Then,
 \begin{equation*}
\bar{h}(e^x) = \frac{e^x f(e^x)}{\bar{F}(e^x)} = \frac{ke^{xk}}{\left(1-\bar{\alpha} e^{-e^{xk}}\right)}.
\end{equation*}
Upon taking the derivative of $\bar{h}(e^x)$ with respect to x, from Lemma 2.4, we get
 \begin{equation}
\frac{\partial \bar{h}(e^x)}{\partial x}= \dfrac{k^2\left(\mathrm{e}^{\mathrm{e}^{kx}}-\bar{\alpha}\mathrm{e}^{kx}-\bar{\alpha}\right)\mathrm{e}^{\mathrm{e}^{kx}+kx}}{\left(\mathrm{e}^{\mathrm{e}^{kx}}-\bar{\alpha}\right)^2} \geq 0.
\end{equation}
From (3.11), we have $\bar{h}(e^x)$ to be increasing in x and so h(x) is increasing in x. Again, the second-order partial derivative of $\bar{h}(e^x)$ with respect to x is given by
 \begin{equation}
\frac{\partial^2 \bar{h}(e^x)}{\partial x^2} = \dfrac{k^3\left(\mathrm{e}^{2\mathrm{e}^{kx}}+\left(\bar{\alpha}\mathrm{e}^{2kx}-3\bar{\alpha}\mathrm{e}^{kx}-2\bar{\alpha}\right)\mathrm{e}^{\mathrm{e}^{kx}}+\bar{\alpha}^2\mathrm{e}^{2kx}+3\bar{\alpha}^2\mathrm{e}^{kx}+\bar{\alpha}^2\right)\mathrm{e}^{\mathrm{e}^{kx}+kx}}{\left(\mathrm{e}^{\mathrm{e}^{kx}}-\bar{\alpha}\right)^3} \geq 0,
\end{equation}
and so we see from Lemma 2.4 that $\bar{h}(e^x)$ is convex for all x. Now,
 \begin{equation*}
\log{\bar{F}(x)} = \log\left(1-\frac{\left(1-e^{-x^{k}}\right)}{\left(1-\bar{\alpha}e^{-x^{k}}\right)}\right)
\end{equation*}
whose second partial derivative is
 \begin{equation}
\frac{\partial^2 \log{\bar{F}(x)}}{\partial x^2} = -\dfrac{kx^{k-2}\mathrm{e}^{x^k}\left(\left(k-1\right)\mathrm{e}^{x^k}-\bar{\alpha}kx^k-\bar{\alpha}k+\bar{\alpha}\right)}{\left(\mathrm{e}^{x^k}-\bar{\alpha}\right)^2} \geq 0,
\end{equation}
where $0 \leq k \leq 1$ and $0 \leq \alpha \leq 1$. Hence, from (3.13), we see that $\bar{F}(x)$ is log-convex in $x \geq 0$. Under the considered set-up, $X_{1:n}$ and $Y_{1:n}$ have their distribution functions as $F_1(x)=1-\psi(\sum_{k=1}^{n} \phi(\bar{F}(\lambda_k x)))$ and $H_1(x)=1-\psi(n\phi(\bar{F}(\lambda x)))$ for $x\geq 0$, and their density functions as
 \begin{equation*}
f_1(x)= \psi^{'}\Big(\sum_{k=1}^{n} \phi(\bar{F}(\lambda_k x))\Big)\sum_{k=1}^{n} \frac{\lambda_k h(\lambda_k x) \bar{F}(\lambda_k x)}{\psi^{'}(\phi(\bar{F}(\lambda_k x)))}
\end{equation*}
and
 \begin{equation}
h_1(x)= \psi^{'}(n\phi(\bar{F}(\lambda x))) \frac{n\lambda h(\lambda x) \bar{F}(\lambda x)}{\psi^{'}(\phi(\bar{F}(\lambda x)))},
\end{equation}
\begin{equation}
h_1(x)= \psi^{'}(n\phi(\bar{F}(\lambda x))) \frac{n\lambda h(\lambda x) \bar{F}(\lambda x)}{\psi^{'}(\phi(\bar{F}(\lambda x)))},
\end{equation} respectively. Now, let us denote  $L_1(x;\lambda)= \bar{F}^{-1}(\psi((1/n)\sum_{k=1}^{n} \phi(\bar{F}(\lambda_k x))))$. Then, for
$L_1(x;\lambda)= \bar{F}^{-1}(\psi((1/n)\sum_{k=1}^{n} \phi(\bar{F}(\lambda_k x))))$. Then, for  $x\geq0$,
$x\geq0$,  $H_1^{-1}(F_1(x))=(1/\lambda)L_1(x;\lambda)$ and
$H_1^{-1}(F_1(x))=(1/\lambda)L_1(x;\lambda)$ and
 \begin{equation}
h_1(H_1^{-1}(F_1(x)))= \psi^{'}\Big(\sum_{k=1}^{n} \phi(\bar{F}(\lambda_k x))\Big)\frac{n\lambda h(L_1(x;\lambda))\psi((1/n)\sum_{k=1}^{n} \phi(\bar{F}(\lambda_k x)))}{\psi^{'}\Big((1/n)\sum_{k=1}^{n} \phi(\bar{F}(\lambda_k x))\Big)}.
\end{equation}
Again, the concavity property of $\psi/\psi^{'}$ yields
 \begin{equation*}
\frac{\psi(\frac{1}{n} \sum_{k=1}^{n} \phi(\bar{F}(\lambda_k x)))}{\psi^{'}(\frac{1}{n} \sum_{k=1}^{n} \phi(\bar{F}(\lambda_k x)))}
\geq \frac{1}{n}\sum_{k=1}^{n}\frac{\psi(\phi(\bar{F}(\lambda_k x)))}{\psi^{'}(\phi(\bar{F}(\lambda_k x)))}.
\end{equation*}
As h(x) is increasing and $\psi/\psi^{'}$ is decreasing, $\log{\bar{F}(e^x)}$ is concave and $\log{\psi}$ is convex. Now, using the given assumption that $\lambda\leq (\lambda_1\cdots\lambda_n)^{\frac{1}{n}}$, and the fact that $\log{\bar{F}(x)}\leq 0$ is decreasing, we have
 \begin{eqnarray*}
\log{\bar{F}(\lambda x)}\geq \log{\bar{F}((\Pi_{k=1}^{n}\lambda_k x)^{\frac{1}{n}})}\geq \frac{1}{n}\sum_{k=1}^{n} \log{\bar{F}\left(\exp\left(\frac{1}{n}\sum_{k=1}^{n}\log(\lambda_k x)\right)\right)}.
\end{eqnarray*}
Observe that $\log{\bar{F}(e^x)}$ is concave, $\log{\psi}$ is convex, and $\boldsymbol{\lambda} \in I_+ \text{ or } D_+$. Hence, from Chebychev’s inequality, it follows that
 \begin{align}
\log{\bar{F}(\lambda x)}-\log{\psi\left(\frac{1}{n}\sum_{k=1}^{n}\phi\big(\bar{F}(\lambda_k x)\big)\right)}&\geq
\frac{1}{n}\sum_{k=1}^{n} \log{\bar{F}\left(\exp\left(\frac{1}{n}\sum_{k=1}^{n}\log(\lambda_k x)\right)\right)}-\log{\psi\left(\frac{1}{n}\sum_{k=1}^{n}\phi\big(\bar{F}(\lambda_k x)\big)\right)}\nonumber\\
&\geq
\frac{1}{n}\sum_{k=1}^{n} \log{\bar{F}(\lambda_k x)}-\frac{1}{n}\sum_{k=1}^{n} \log{\bar{F}(\lambda_k x)}\geq 0.
\end{align}
So, from (3.16), we have $L_1(x;\lambda)\geq \lambda x$. Moreover, we have h(x) to be decreasing as $\bar{h}(x)$ is increasing and so, h(x) is convex. Therefore, using $\lambda \leq (\prod_{k=1}^{n}\lambda_k)^{\frac{1}{n}},$ we have
 \begin{align*}
\lambda h(L_1(x;\lambda))&\leq \frac{1}{x}\lambda x h(\lambda x) \nonumber\\
&\leq \frac{1}{x}\big(\Pi_{k=1}^{n}\lambda_k x\big)^{\frac{1}{n}} h\big(\big(\Pi_{k=1}^{n}\lambda_k x\big)^{\frac{1}{n}}\big)\nonumber\\
&=\frac{1}{x}\exp\left(\frac{1}{n}\sum_{k=1}^{n}\log(\lambda_k x)\right)h\left(\exp\left(\frac{1}{n}\sum_{k=1}^{n}\log(\lambda_k x)\right)\right).
\end{align*}
Once again, by using Chebychev’s inequality, the increasing property of $\bar{h},$ the decreasing property of ${\psi}/{{\psi}^{'}}$ and $\boldsymbol{\lambda} \in I_+ \text{ or } D_+$, we obtain
 \begin{align}
\frac{1}{n}\sum_{k=1}^{n} \frac{\lambda_k h(\lambda_k x)\bar{F}(\lambda_k x)}{\psi^{'}(\phi(\bar{F}(\lambda_k x)))}&=\frac{1}{n}\sum_{k=1}^{n} \frac{\lambda_k h(\lambda_k x)\psi(\phi(\bar{F}(\lambda_k x)))}{\psi^{'}(\phi(\bar{F}(\lambda_k x)))}\nonumber\\
&\leq \frac{1}{n}\sum_{k=1}^{n} \lambda_k h(\lambda_k x) \frac{1}{n}\sum_{k=1}^{n} \frac{\psi(\phi(\bar{F}(\lambda_k x)))}{\psi^{'}(\phi(\bar{F}(\lambda_k x)))}\nonumber\\
&\leq \frac{1}{n}\sum_{k=1}^{n} \lambda_k h(\lambda_k x) \frac{\psi\left(\frac{1}{n}\sum_{k=1}^{n} \phi(\bar{F}(\lambda_k x))\right)}{\psi^{'}\left(\frac{1}{n}\sum_{k=1}^{n} \phi(\bar{F}(\lambda_k x))\right)} \nonumber\\
&\leq \lambda L_1(x;\lambda)\frac{\psi\left(\frac{1}{n}\sum_{k=1}^{n} \phi(\bar{F}(\lambda_k x))\right)}{\psi^{'}\left(\frac{1}{n}\sum_{k=1}^{n} \phi(\bar{F}(\lambda_k x))\right)}.
\end{align}
Now, using the inequalities in (3.14), (3.15) and (3.17), we obtain, for all $x\geq 0$,
 \begin{align*}
& h_1(H_1^{-1}(F_1(x)))-f_1(x)\nonumber\\
&=\frac{1}{n}\sum_{k=1}^{n} \frac{\lambda_k h(\lambda_k x)\bar{F}(\lambda_k x)}{\psi^{'}(\phi(\bar{F}(\lambda_k x)))}- \frac{1}{n}\sum_{k=1}^{n} \lambda_k h(\lambda_k x) \frac{\psi\left(\frac{1}{n}\sum_{k=1}^{n} \phi(\bar{F}(\lambda_k x))\right)}{\psi^{'}\left(\frac{1}{n}\sum_{k=1}^{n} \phi(\bar{F}(\lambda_k x))\right)}\leq 0,
\end{align*}
which yields $f_1(F_1^{-1}(x))\geq h_1(H_1^{-1}(x)),$ for all $x\in(0,1).$ This completes the proof of the theorem.
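The monotonicity, convexity and log-convexity claims in (3.11)-(3.13) can also be checked numerically on a grid; the following minimal Python sketch (with arbitrary illustrative values of k and α in [0, 1]) does so via finite differences:
\begin{verbatim}
import numpy as np

k, alpha = 0.7, 0.5                     # illustrative values with 0 <= k, alpha <= 1
abar = 1.0 - alpha

def sf(x):                              # baseline survival function Fbar(x)
    u = np.exp(-x ** k)
    return alpha * u / (1.0 - abar * u)

def hbar_exp(x):                        # hbar(e^x) = k e^{kx} / (1 - abar*exp(-e^{kx}))
    return k * np.exp(k * x) / (1.0 - abar * np.exp(-np.exp(k * x)))

x = np.linspace(-3.0, 3.0, 400)
y = hbar_exp(x)
print(np.all(np.diff(y) >= 0))          # True: hbar(e^x) is increasing, cf. (3.11)
print(np.all(np.diff(y, 2) >= -1e-10))  # True: hbar(e^x) is convex, cf. (3.12)

z = np.linspace(0.05, 5.0, 400)
print(np.all(np.diff(np.log(sf(z)), 2) >= -1e-10))  # True: log Fbar is convex, cf. (3.13)
\end{verbatim}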
Remark 3.4. Consider the Clayton copula with generator $\psi(x)=(1+ax)^{-1/a},$ where $a\geq 0,$ and the independence copula with generator $\psi(x)=e^{-x},~x \gt 0.$ After some mathematical calculations, we can see that both these generators satisfy all the conditions in Theorem 3.11. Moreover, these generators also satisfy the conditions in Theorems 3.9, 3.10 and Corollaries 3.1, 3.2 for α = 1.
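Indeed, for the Clayton generator, $\psi'(x)=-(1+ax)^{-\frac{1}{a}-1}$, so that $\psi(x)/\psi'(x)=-(1+ax)$, while for the independence generator $\psi(x)/\psi'(x)\equiv -1$; in both cases $\psi/\psi'$ is decreasing (in the weak sense) and concave, which verifies the corresponding requirement of Theorem 3.11.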
Remark 3.5. It is of importance to mention that it may be possible to extend the results established in the previous section to the case when the number of variables in the two sets is random. We are currently looking into this problem with random sample sizes and hope to report the findings in a future paper.
4. Concluding remarks
 The purpose of this article is to establish ordering results between two given sets of dependent variables $\{X_{1},\ldots,X_{n}\}$ and $\{Y_{1},\ldots,Y_{n}\},$ wherein the underlying variables are from the extended Weibull family of distributions. The random variables are then associated with Archimedean (survival) copulas with different generators. Several conditions are presented for the stochastic comparisons of extremes in the sense of the usual stochastic, star, Lorenz, hazard rate, reversed hazard rate and dispersive orders.
Acknowledgments
The author Sangita Das gratefully acknowledges the financial support for this research work under the NPDF, grant No: PDF/2022/00471, SERB, Government of India. Also, she wants to thank the Theoretical Statistics and Mathematics Unit, Indian Statistical Institute, Bangalore, for the financial support and the hospitality during her stay. The authors also express their sincere thanks to the Editor and the anonymous reviewers for all their useful comments and suggestions on an earlier version of this manuscript, which led to this much improved version.
Conflict of interest
No potential conflict of interest was reported by the authors.