
Free fermionic probability theory and K-theoretic Schubert calculus

Published online by Cambridge University Press:  09 December 2025

Shinsuke Iwao
Affiliation:
Keio University, Japan; E-mail: iwao-s@keio.jp
Kohei Motegi*
Affiliation:
Tokyo University of Marine Science and Technology, Japan
Travis Scrimshaw
Affiliation:
Hokkaido University, Japan; E-mail: tcscrims@gmail.com
*Corresponding author. E-mail: kmoteg0@kaiyodai.ac.jp

Abstract

For each of the four particle processes given by Dieker and Warren, we show the n-step transition kernels are given by the (dual) (weak) refined symmetric Grothendieck functions up to a simple overall factor. We do so by encoding the particle dynamics as the basis of free fermions first introduced by the first author, which we translate into deformed Schur operators acting on partitions. We provide a direct combinatorial proof of this relationship in each case, where the defining tableaux naturally describe the particle motions.

Information

Type
Discrete Mathematics
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

1 Introduction

An asymmetric simple exclusion process (ASEP) is a probabilistic model for particles on a lattice domain, typically one dimensional, such that each position can be occupied by at most one particle. As such, it has been used as a simple model for a diverse range of natural processes, such as transportation through microscopic channels [Reference Chou and LohseCL99], vehicle traffic moving in a single lane [Reference Chowdhury, Santen and SchadschneiderCSS00], or the dynamics of ribosomes along RNA [Reference MacDonald, Gibbs and PipkinMGP68] (the earliest known publication as far as the authors are aware). The study of such particle systems is an active area of research, with some recent mathematical articles being [Reference Ayyer, Goldstein, Lebowitz and SpeerAGLS23, Reference Ayyer, Mandelshtam and MartinAMM23, Reference Ayyer, Mandelshtam and MartinAMM24, Reference Ayyer and NadeauAN22, Reference Borodin and BufetovBB21, Reference Bisi, Liao, Saenz and ZygourasBLSZ23, Reference Corwin, Matveev and PetrovCMP21, Reference Corteel, Mandelshtam and WilliamsCMW22, Reference Cantini and ZahraCZ22, Reference Kim and WilliamsKW23, Reference Petrov and SaenzPS22, Reference Quastel and SarkarQS23].

We will focus on the case when the particles only move in one direction (here, to the right) on a $\mathbb {Z}$ lattice. This is known as a totally asymmetric simple exclusion process (TASEP) on a line. Given that TASEP can be interpreted as a model for electrons moving down a wire, in this introduction we will be representing states using the description of free fermions. The question becomes how to encode the dynamics of the TASEP considered in terms of operators acting on the free fermions.

The versions of TASEP we will focus on are the four variations that were studied by Dieker and Warren [Reference Dieker and WarrenDW08], where the particles will all lie on $\mathbb {Z}$ starting from the step initial condition, where the j-th particle starts at site $-j$ for all $j \geq 1$ , and move in discrete time. (In [Reference Dieker and WarrenDW08], they used a “bosonic” formulation that can easily be translated into the fermionic description we use in the introduction; see Section 2.4 for a precise relationship.) These TASEP variations have been studied before by various authors and sometimes using different models; we refer the reader to [Reference Dieker and WarrenDW08] for further connections and references. The particles will stay in order, so we can identify a partition $\lambda $ with the positions of particles by having the j-th particle be at position $\lambda _j - j$ . All four variations are based on random matrices $[w_{ji}]_{i,j}$ with $w_{ji}$ either being Bernoulli or geometric random variables and taking either a (zero temperature) first or last passage percolation model. Translating this to the motion of the particles, the $w_{ji}$ specifies how many steps the j-th particle wants to move at time i, and the first (resp. last) passage percolation corresponds to the particles either being blocked by smaller particles (resp. pushing the smaller particles). (See Section 2.4 for a precise description.) In order to encode the dynamics using free fermions, we will show the transition probabilities can be described using symmetric functions coming from the K-theory of a classical algebraic variety, the Grassmannian.

In more detail, the Grassmannian $\operatorname {Gr}(k, n)$ is the set of k-dimensional subspaces of $\mathbb {C}^n$. This has a natural action of the group of invertible upper triangular $n \times n$ matrices B, which acts with finitely many orbits on $\operatorname {Gr}(k, n)$ indexed by partitions $\lambda $ inside a $k \times (n-k)$ rectangle. The closures of these orbits (under the Zariski topology) are known as Schubert varieties, and they give a CW decomposition of $\operatorname {Gr}(k, n)$. Hence, they give rise to a basis for the cohomology ring $H^{\bullet }(\operatorname {Gr}(k, n), \mathbb {Z})$, where under Borel’s isomorphism [Reference BorelBor53] the cohomology class indexed by $\lambda $ corresponds to the Schur function $s_{\lambda }({\mathbf {x}})$. This construction can be extended to the (connective) K-theory ring of $\operatorname {Gr}(k, n)$ by using the Bott–Samelson resolution of Schubert varieties, where now the K-theory class indexed by $\lambda $ corresponds [Reference Lascoux and SchützenbergerLS82, Reference Lascoux and SchützenbergerLS83] to the (symmetric) Grothendieck function $G_{\lambda }({\mathbf {x}}; \beta )$. When working with symmetric functions, it is natural from representation theory to consider the Schur functions as an orthonormal basis as they are characters of the irreducible representations of the Lie group of all invertible $n \times n$ matrices. Therefore, we can take the dual basis $\{g_{\lambda }({\mathbf {x}}; \beta )\}_{\lambda }$ to $\{G_{\lambda }({\mathbf {x}}; \beta )\}_{\lambda }$ in (a completion of) the ring of symmetric functions over all partitions $\lambda $. Additionally, there is an algebra involution $\omega $ defined by $\omega s_{\lambda } = s_{\lambda '}$, where $\lambda '$ is the conjugate partition of $\lambda $, that defines the “weak” versions $J_{\lambda }({\mathbf {x}}; \beta ) := \omega G_{\lambda }({\mathbf {x}}; \beta )$ and $j_{\lambda }({\mathbf {x}}; \beta ) = \omega g_{\lambda }({\mathbf {x}}; \beta )$.

These symmetric functions have combinatorial descriptions using variations of the classical semistandard tableaux description of $s_{\lambda }({\mathbf {x}})$ (see, e.g., [Reference StanleySta99]). The Schur decomposition of $G_{\lambda }({\mathbf {x}}; \beta )$ was shown by Lenart [Reference LenartLen00], and expressing $G_{\lambda }({\mathbf {x}}; \beta )$ as a generating function of set-valued tableaux was given by Buch [Reference Skovsted BuchBuc02]. Lam and Pylyavskyy subsequently gave [Reference Lam and PylyavskyyLP07, Thm. 9.15, Prop. 9.22] a combinatorial interpretation of $J_{\lambda }({\mathbf {x}}; \beta )$ , $g_{\lambda }({\mathbf {x}}; \beta )$ , and $j_{\lambda }({\mathbf {x}}; \beta )$ as multiset-valued tableaux, reverse plane partitions, and valued-set tableaux, respectively. Later work of Galashin, Grinberg, and Liu [Reference Galashin, Grinberg and LiuGGL16] then refined the parameter $\beta $ into a family $\boldsymbol {\beta }$ for $g_{\lambda }({\mathbf {x}}; \boldsymbol {\beta })$ , and the dual version $G_{\lambda }({\mathbf {x}}; \boldsymbol {\beta })$ was introduced by Chan and Pflueger [Reference Chan and PfluegerCP21]. In a different direction, Yeliussizov [Reference YeliussizovYel17] introduced a combination of $G_{\lambda }({\mathbf {x}}; \beta )$ and $J_{\lambda }({\mathbf {x}}; \beta )$ as the canonical Grothendieck polynomials and their duals (along with a combinatorial interpretation), and the refined version $G_{\lambda }({\mathbf {x}}; \boldsymbol {\alpha }, \boldsymbol {\beta })$ and the duals $g_{\lambda }({\mathbf {x}}; \boldsymbol {\alpha }, \boldsymbol {\beta })$ were introduced by Hwang et al. [Reference Hwang, Jang, Soo Kim, Song and SongHJK+24, Reference Hwang, Jang, Soo Kim, Song and SongHJK+25] (also with combinatorial formulas). The Schur function decomposition allows these combinatorial objects to be deconstructed into pairs of tableaux by RSK-type algorithms [Reference Lam and PylyavskyyLP07, Reference Hawkes and ScrimshawHS20, Reference Pan, Pappe, Poh and SchillingPPPS22].

The first connection between the TASEP models and the K-theoretic Schubert calculus was noted by Yeliussizov [Reference YeliussizovYel20] when ${\mathbf {x}} = \boldsymbol {\beta } = \sqrt {q}$, where he showed that the geometric last passage percolation – Case A in [Reference Dieker and WarrenDW08] – probabilities equaled a dual Grothendieck polynomial up to an overall simple factor. This was later generalized to $w_{ji}$ having its parameter $\pi _j x_i$ in [Reference Motegi and ScrimshawMS25]. We can also connect this to a more classical version of TASEP in discrete time on $\mathbb {Z}$ starting from the step initial condition, where the i-th particle moves to the right with probability $x_i$ if the site is free, and $w_{ji}$ records how long it waits before it can move (see, e.g., [Reference JohanssonJoh00] or [Reference Motegi and ScrimshawMS25, App. A]). Taking the obvious time-dependent refinement of [Reference Dieker and WarrenDW08], the determinant formula is readily seen to be the Jacobi–Trudi formula for the natural skew version $g_{\lambda /\mu }({\mathbf {x}}; \boldsymbol {\beta })$ [Reference Amanov and YeliussizovAY22, Reference KimKim22, Reference KimKim21, Reference Hwang, Jang, Soo Kim, Song and SongHJK+24, Reference Hwang, Jang, Soo Kim, Song and SongHJK+25]. By applying the $\omega $ involution (see also specialized versions of the Jacobi–Trudi formulas of [Reference Hwang, Jang, Soo Kim, Song and SongHJK+24, Reference Hwang, Jang, Soo Kim, Song and SongHJK+25]), we also obtain the Bernoulli last passage percolation [Reference Dieker and WarrenDW08, Case D].

However, we are interested in trying to understand the relationship at the level of local dynamics; in particular, to address the question of describing the TASEP dynamics using free fermions. To make this precise, we want to describe the n-step transition kernel $\mathsf {P}_{X,n}(\lambda |\mu )$ in Case X (for $X = A,B,C,D$ as given in [Reference Dieker and WarrenDW08]) from the particles starting at positions $\mu $ and ending at positions $\lambda $. Building upon the work of the first author [Reference IwaoIwa20, Reference IwaoIwa23, Reference IwaoIwa22], our previous work [Reference Iwao, Motegi and ScrimshawIMS24] introduced a free fermion description of $G_{\lambda /\!\!/\mu }({\mathbf {x}}; \boldsymbol {\alpha }, \boldsymbol {\beta })$ and $g_{\lambda /\mu }({\mathbf {x}}; \boldsymbol {\alpha }, \boldsymbol {\beta })$. This leads to a Jacobi–Trudi formula [Reference Iwao, Motegi and ScrimshawIMS24, Thm. 4.1] for $G_{\lambda /\!\!/\mu }({\mathbf {x}}; \boldsymbol {\alpha }, \boldsymbol {\beta })$, from which appropriate specializations give formulas (up to an overall simple factor) for the transition probabilities of the first passage percolation cases [Reference Dieker and WarrenDW08, Case B,C]. Likewise, the generalized Schur operators in [Reference IwaoIwa20], which act on partitions and come from the current operators, behave exactly like the TASEP pushing and blocking dynamics. Therefore, our main goal in this paper is to introduce refined versions of these operators from [Reference IwaoIwa20] and show the correspondence between transition probabilities of four types of discrete time TASEP and refined (dual) Grothendieck polynomials. To state the correspondence, we now introduce the four types of TASEP. Let the Weyl chamber $\Omega _\ell $ be

(1.1) $$ \begin{align} \Omega_\ell=\{ (z_1,z_2,\dots,z_\ell) \in \mathbb{Z}^\ell: z_1>z_2>\cdots>z_\ell \}, \end{align} $$

for $\ell \ge 2$ .

We introduce four types of evolution of particles $X_t^{\mathrm {A}}$, $X_t^{\mathrm {B}}$, $X_t^{\mathrm {C}}$ and $X_t^{\mathrm {D}}$ on $\Omega _\ell $. Each process corresponds to one of Cases A, B, C and D of the TASEP version of the model of Dieker–Warren [Reference Dieker and WarrenDW08]. We consider the discrete time evolution from time zero to time n with time step one.

(A) Geometric distribution with pushing behavior:

From time t to time $t+1$ , the evolution of the particle system $X_t^{\mathrm {A}} \in \Omega _\ell $ is defined as

(1.2a) $$ \begin{align} X_{t+1}^{\mathrm{A}}(k)=\max(X_t^{\mathrm{A}}(k),X_{t+1}^{\mathrm{A}}(k+1)+1)+\xi^{\mathrm{A}}(k,t+1), \end{align} $$

for $k=1,\dots ,\ell -1$ and $X_{t+1}^{\mathrm {A}}(\ell )=X_t^{\mathrm {A}}(\ell )+\xi ^{\mathrm {A}}(\ell ,t+1)$ . $\xi ^{\mathrm {A}}(k,t)$ are random variables satisfying $\mathsf {P}(\xi ^{\mathrm {A}}(k,t)=r)=(1-\pi _k x_t) (\pi _k x_t)^{r}$ , $r \in \mathbb {Z}_{\ge 0}$ , where $\pi _k$ , $x_t$ are real numbers satisfying $0 < \pi _k x_t <1$ for all $k,t$ .

(B) Bernoulli distribution with blocking behavior:

From time t to time $t+1$ , the evolution of the particle system $X_t^{\mathrm {B}} \in \Omega _\ell $ is defined as

(1.2b) $$ \begin{align} X_{t+1}^{\mathrm{B}}(k)=\min(X_t^{\mathrm{B}}(k)+\xi^{\mathrm{B}}(k,t+1) ,X_{t+1}^{\mathrm{B}}(k-1)-1), \end{align} $$

for $k=2,\dots ,\ell $ and $X_{t+1}^{\mathrm {B}}(1)=X_t^{\mathrm {B}}(1)+\xi ^{\mathrm {B}}(1,t+1)$ . $\xi ^{\mathrm {B}}(k,t)$ are random variables satisfying $\displaystyle \mathsf {P}(\xi ^{\mathrm {B}}(k,t)=1)=\frac {\rho _k x_t}{1+\rho _k x_t}$ , $\displaystyle \mathsf {P}(\xi ^{\mathrm {B}}(k,t)=0)=\frac {1}{1+\rho _k x_t}$ , where $\rho _k$ , $x_t$ are real numbers satisfying $0 < \rho _k x_t$ for all $k,t$ .

(C) Geometric distribution with blocking behavior:

From time t to time $t+1$ , the evolution of the particle system $X_t^{\mathrm {C}} \in \Omega _\ell $ is defined as

(1.2c) $$ \begin{align} X_{t+1}^{\mathrm{C}}(k)=\min(X_t^{\mathrm{C}}(k)+\xi^{\mathrm{C}}(k,t+1),X_{t}^{\mathrm{C}}(k-1)-1), \end{align} $$

for $k=2,\dots ,\ell $ and $X_{t+1}^{\mathrm {C}}(1)=X_t^{\mathrm {C}}(1)+\xi ^{\mathrm {C}}(1,t+1)$ . $\xi ^{\mathrm {C}}(k,t)$ are random variables satisfying $\mathsf {P}(\xi ^{\mathrm {C}}(k,t)=r)=(1-\pi _k x_t) (\pi _k x_t)^{r}$ , $r \in \mathbb {Z}_{\ge 0}$ , where $\pi _k$ , $x_t$ are real numbers satisfying $0 < \pi _k x_t <1$ for all $k,t$ .

(D) Bernoulli distribution with pushing behavior:

From time t to time $t+1$ , the evolution of the particle system $X_t^{\mathrm {D}} \in \Omega _\ell $ is defined as

(1.2d) $$ \begin{align} X_{t+1}^{\mathrm{D}}(k)=\max(X_t^{\mathrm{D}}(k)+\xi^{\mathrm{D}}(k,t+1) ,X_{t+1}^{\mathrm{D}}(k+1)+1), \end{align} $$

for $k=1,\dots ,\ell -1$ and $X_{t+1}^{\mathrm {D}}(\ell )=X_t^{\mathrm {D}}(\ell )+\xi ^{\mathrm {D}}(\ell ,t+1)$ . $\xi ^{\mathrm {D}}(k,t)$ are random variables satisfying $\displaystyle \mathsf {P}(\xi ^{\mathrm {D}}(k,t)=1)=\frac {\rho _k x_t}{1+\rho _k x_t}$ , $\displaystyle \mathsf {P}(\xi ^{\mathrm {D}}(k,t)=0)=\frac {1}{1+\rho _k x_t}$ , where $\rho _k$ , $x_t$ are real numbers satisfying $0 < \rho _k x_t$ for all $k,t$ .

For a weakly decreasing sequence of integers $\lambda =(\lambda _1,\lambda _2,\dots ,\lambda _\ell )$ (that is, $\lambda _1 \ge \lambda _2 \ge \cdots \ge \lambda _\ell $), we write $X=\tilde {\lambda }$ if $X \in \Omega _\ell $ satisfies $X(j)=\lambda _j-j$, $j=1,2,\dots ,\ell $.
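The four update rules (1.2a)–(1.2d) are easy to experiment with directly. The following is a minimal simulation sketch of one time step of each process, assuming NumPy; the helper names (step, geometric, bernoulli) and the homogeneous choice of parameters are ours and are only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def geometric(p):
    """Sample r with P(r) = (1 - p) p^r, r = 0, 1, 2, ... (as in Cases A and C)."""
    return rng.geometric(1 - p) - 1     # numpy's geometric is supported on {1, 2, ...}

def bernoulli(rho_x):
    """Sample r in {0, 1} with P(1) = rho_x / (1 + rho_x) (as in Cases B and D)."""
    return int(rng.random() < rho_x / (1 + rho_x))

def step(case, X, pi, rho, x_t):
    """One update X_t -> X_{t+1} following (1.2a)-(1.2d); X is strictly decreasing, 0-indexed."""
    ell = len(X)
    Y = [None] * ell
    if case in ('A', 'D'):              # pushing: compute particle ell first, then ell-1, ..., 1
        for k in range(ell - 1, -1, -1):
            if case == 'A':
                xi = geometric(pi[k] * x_t)
                Y[k] = xi + (X[k] if k == ell - 1 else max(X[k], Y[k + 1] + 1))
            else:
                xi = bernoulli(rho[k] * x_t)
                Y[k] = X[k] + xi if k == ell - 1 else max(X[k] + xi, Y[k + 1] + 1)
    else:                               # blocking: compute particle 1 first, then 2, ..., ell
        for k in range(ell):
            if case == 'B':
                xi = bernoulli(rho[k] * x_t)
                Y[k] = X[k] + xi if k == 0 else min(X[k] + xi, Y[k - 1] - 1)
            else:                       # Case C blocks by the *old* position of particle k-1
                xi = geometric(pi[k] * x_t)
                Y[k] = X[k] + xi if k == 0 else min(X[k] + xi, X[k - 1] - 1)
    return Y

# step initial condition X_0(k) = -k, five particles, five time steps
ell, n = 5, 5
pi, rho = [0.3] * ell, [0.5] * ell
for case in 'ABCD':
    X = [-(k + 1) for k in range(ell)]
    for t in range(1, n + 1):
        X = step(case, X, pi, rho, x_t=0.8)
    print(case, X)
```

Note that the pushing cases A and D must be updated from the last particle to the first, since $X_{t+1}(k)$ depends on $X_{t+1}(k+1)$, whereas the blocking cases B and C are updated from the first particle onward.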

Theorem 1.1. Suppose $\ell (\lambda ), \lambda _1 \leq \ell $ . Suppose $\pi _j x_i \in (0, 1)$ and $\rho _j x_i> 0$ for all i and j. Set $\alpha _j = \rho _{j+1}$ and $\beta _j = \pi _{j+1}$ . The transition probabilities of the four particle systems $X=X^{\mathrm {A}},X^{\mathrm {B}}, X^{\mathrm {C}}, X^{\mathrm {D}}$ from the initial configuration $X_0=\tilde {\mu }$ at time zero to the final configuration $X_n=\tilde {\lambda }$ at time n are given by

$$ \begin{align*} \mathsf{P}(X^{\mathrm{A}}_n=\tilde{\lambda}|X^{\mathrm{A}}_0=\tilde{\mu}) & = \prod_{j=1}^{\ell} \prod_{i=1}^n (1 - \pi_j x_i) \boldsymbol{\pi}^{\lambda/\mu} g_{\lambda/\mu}({\mathbf{x}}_n; \boldsymbol{\pi}^{-1}) =:\mathsf{P}_{A,n}(\lambda | \mu) , \\ \mathsf{P}(X^{\mathrm{B}}_n=\tilde{\lambda}|X^{\mathrm{B}}_0=\tilde{\mu}) & = \frac{\boldsymbol{\rho}^{\lambda/\mu}}{\displaystyle \prod_{i=1}^n (1 + \rho_1 x_i)} J_{\lambda' /\!\!/ \mu'}({\mathbf{x}}_n; \boldsymbol{\alpha})=:\mathsf{P}_{B,n}(\lambda | \mu) , \\ \mathsf{P}(X^{\mathrm{C}}_n=\tilde{\lambda}|X^{\mathrm{C}}_0=\tilde{\mu}) & = \prod_{i=1}^n (1 - \pi_1 x_i) \boldsymbol{\pi}^{\lambda/\mu} G_{\lambda /\!\!/ \mu}({\mathbf{x}}_n; \boldsymbol{\beta}) =:\mathsf{P}_{C,n}(\lambda | \mu) , \\ \mathsf{P}(X^{\mathrm{D}}_n=\tilde{\lambda}|X^{\mathrm{D}}_0=\tilde{\mu}) & = \frac{\boldsymbol{\rho}^{\lambda/\mu}}{\displaystyle\prod_{j=1}^{\ell} \prod_{i=1}^n (1 + \rho_j x_i)} j_{\lambda'/\mu'}({\mathbf{x}}_n; \boldsymbol{\rho}^{-1}) =:\mathsf{P}_{D,n}(\lambda | \mu). \end{align*} $$

Here, $\boldsymbol {\pi }^{\lambda /\mu }=\prod _{j=1}^\ell \pi _j^{\lambda _j-\mu _j}$, and $\boldsymbol {\rho }^{\lambda /\mu }$ is defined similarly. See Section 2.1 for combinatorial definitions of the skew refined Grothendieck polynomials and their dual and weak versions. There are several expressions and methods to derive determinant forms of the skew refined (dual) Grothendieck polynomials. One type of determinant form, in which each matrix element is given by a summation, is often referred to as a Jacobi–Trudi determinant formula. See [Reference Amanov and YeliussizovAY22, Reference KimKim22, Reference KimKim21] and [Reference Hwang, Jang, Soo Kim, Song and SongHJK+24, Thm. 6.1] for the canonical refined version derived by combinatorial arguments. An algebraic approach using the free fermion technique was introduced in [Reference IwaoIwa20, Reference IwaoIwa23, Reference IwaoIwa22], and the canonical refined version is given in [Reference Iwao, Motegi and ScrimshawIMS24, Thm. 4.1]. These Jacobi–Trudi type determinants, together with the overall factors and specialized to ${\mathbf {x}}_n=1$, correspond to the determinant forms of transition probabilities for the time homogeneous case in [Reference Dieker and WarrenDW08, Thm. 1] and [Reference JohanssonJoh10, Thm. 2.1]. For example, the Jacobi–Trudi determinant for the dual Grothendieck polynomials is given by [Reference Hwang, Jang, Soo Kim, Song and SongHJK+24, Thm. 6.1], [Reference Iwao, Motegi and ScrimshawIMS24, Thm. 4.1], and [Reference Motegi and ScrimshawMS25, Thm. 4.15]:

(1.3) $$ \begin{align} g_{\lambda/\mu}({\mathbf{x}}_n;\mathbf{t}) = \mathrm{det} \Bigg[\sum_{m \ge 0} h_{\lambda_i-\mu_j-i+j-m}({\mathbf{x}}_n ) \alpha_{m}^{ij}(\mathbf{t}) \Bigg]_{i,j=1}^{\ell}, \end{align} $$

where $\alpha _m^{ij}(\mathbf {t})=h_m(t_j,\dots ,t_{i-1})$ (a homogeneous symmetric function) for $i \geq j$ and $\alpha _m^{ij}(\mathbf {t})=e_m(-t_i,\dots ,-t_{j-1})$ (an elementary symmetric function) for $i<j$ . Multiplying by the normalization factor $\prod _{j=1}^{\ell } \prod _{i=1}^n (1 - \pi _j x_i) \boldsymbol {\pi }^{\lambda /\mu }$ gives the determinant form for transition probabilities for Case A [Reference Motegi and ScrimshawMS25, Cor. 4.18], which becomes the expression for the time homogeneous version [Reference Dieker and WarrenDW08, Thm. 1.A] and [Reference JohanssonJoh10, Thm. 2.1] by setting ${\mathbf {x}}_n=1$ (specializing all ${\mathbf {x}}$ -variables to 1). The Jacobi–Trudi determinant expression for the Grothendieck polynomials is [Reference Iwao, Motegi and ScrimshawIMS24, Thm. 4.1]

(1.4) $$ \begin{align} G_{\lambda /\!\!/ \mu}({\mathbf{x}}_n;\mathbf{t}) = \mathrm{det} \Bigg[\sum_{m \ge 0} h_{\lambda_i-\mu_j-i+j+m}({\mathbf{x}}_n ) \beta_{m}^{ij}(\mathbf{t}) \Bigg]_{i,j=1}^{\ell}, \end{align} $$

where $\beta _m^{ij}(\mathbf {t})=h_m(t_i,\dots ,t_{j-1})$ for $i \leq j$ and $\beta _m^{ij}(\mathbf {t})=e_m(-t_j,\dots ,-t_{i-1})$ for $i>j$ , and the determinants for the weak version $j_{\lambda /\!\!/ \mu }({\mathbf {x}}_n;\mathbf {t})$ , $J_{\lambda /\!\!/ \mu }({\mathbf {x}}_n;\mathbf {t})$ are obtained from $g_{\lambda /\!\!/ \mu }({\mathbf {x}}_n;\mathbf {t})$ , $G_{\lambda /\!\!/ \mu }({\mathbf {x}}_n;\mathbf {t})$ respectively, by applying the $\omega $ -involution on the ${\mathbf {x}}$ -variables, which interchanges $h_j({\mathbf {x}}_n)$ and $e_j({\mathbf {x}}_n)$ .

If we rewrite each of the sums in the matrix elements of the Jacobi–Trudi determinants as a contour integral, the determinant forms become discrete time versions of Schütz type formulas. For example, we have [Reference Iwao, Motegi and ScrimshawIMS24, Thm. 4.19]

(1.5) $$ \begin{align} g_{\lambda/\mu}({\mathbf{x}}_n;\mathbf{t}) = \mathrm{det} \Bigg[ \frac{1}{2 \pi \mathbf{i}} \oint_{\gamma} \frac{\prod_{k=1}^{j-1}(1-t_k z) }{\prod_{k=1}^{i-1}(1-t_k z) } \frac{1}{\prod_{k=1}^n (1-x_k z) z^{\lambda_i-\mu_j-i+j+1}} dz \Bigg]_{i,j=1}^{\ell}, \end{align} $$

where $\gamma $ is a small circle centered at the origin; changing the integration variable to its inverse and multiplying by overall factors gives [Reference Johansson and RahmanJR22, Thm. 1]. This type of determinant form is the starting point for finally obtaining the Fredholm determinant expressions for multipoint distributions in several papers. See [Reference Johansson and RahmanJR22] for Case A and, more generally, [Reference Matetski and RemenikMR23b, Reference Matetski and RemenikMR23a] for all cases, which they further generalized to more generic models with sequential and parallel updates. See also [Reference Bisi, Liao, Saenz and ZygourasBLSZ23] for Case B, where a nonintersecting lattice path construction of the transition probabilities was given, which further led them to the Fredholm determinant expressions. There are several formulas for the refined Grothendieck polynomials and their variants derived, for example, in [Reference Hwang, Jang, Soo Kim, Song and SongHJK+24, Reference Hwang, Jang, Soo Kim, Song and SongHJK+25, Reference Iwao, Motegi and ScrimshawIMS24], which may be useful for further studies of transition probabilities. We discuss some applications in Section 6.
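To make the relation between (1.3) and (1.5) concrete, the following is a small SymPy sketch, with hand-rolled $h_m$ and $e_m$ and our own helper names, that builds each matrix entry of (1.3) for the shape $\lambda = (2,2)$, $\mu = \emptyset$ in two x-variables, checks it against the coefficient of $z^{\lambda _i - \mu _j - i + j}$ extracted from the integrand of (1.5), and prints the resulting determinant.

```python
import itertools
import sympy as sp

def h(m, xs):
    """Complete homogeneous symmetric polynomial h_m; h_0 = 1, h_m = 0 for m < 0."""
    if m < 0:
        return sp.Integer(0)
    return sum((sp.Mul(*c) for c in itertools.combinations_with_replacement(xs, m)), sp.Integer(0))

def e(m, xs):
    """Elementary symmetric polynomial e_m; e_0 = 1, e_m = 0 for m < 0 or m > len(xs)."""
    if m < 0 or m > len(xs):
        return sp.Integer(0)
    return sum((sp.Mul(*c) for c in itertools.combinations(xs, m)), sp.Integer(0))

lam, mu, n = [2, 2], [0, 0], 2
ell = len(lam)
x = sp.symbols(f'x1:{n+1}')
t = sp.symbols(f't1:{ell}')          # t_1, ..., t_{ell-1}
z = sp.symbols('z')

def alpha(m, i, j):
    """alpha_m^{ij}(t) as described below (1.3); i, j are 1-based."""
    if i >= j:
        return h(m, t[j-1:i-1])
    return e(m, [-tk for tk in t[i-1:j-1]])

def entry_jt(i, j):
    """Matrix entry of the Jacobi-Trudi formula (1.3)."""
    K = lam[i-1] - mu[j-1] - i + j
    return sp.expand(sum(h(K - m, x) * alpha(m, i, j) for m in range(K + 1)))

def entry_int(i, j):
    """The same entry via (1.5): the coefficient of z^K in the integrand times z^{K+1}."""
    K = lam[i-1] - mu[j-1] - i + j
    if K < 0:
        return sp.Integer(0)
    num = sp.Mul(*[1 - tk*z for tk in t[:j-1]])
    den = sp.Mul(*[1 - tk*z for tk in t[:i-1]]) * sp.Mul(*[1 - xk*z for xk in x])
    return sp.series(num / den, z, 0, K + 1).removeO().coeff(z, K)

for i in range(1, ell + 1):
    for j in range(1, ell + 1):
        assert sp.expand(entry_jt(i, j) - entry_int(i, j)) == 0

g = sp.expand(sp.Matrix(ell, ell, lambda a, b: entry_jt(a + 1, b + 1)).det())
print(g)   # g_{(2,2)}(x1, x2; t1), per (1.3)
```

For this shape the determinant is $h_2({\mathbf {x}}_2)^2 - \bigl (h_3({\mathbf {x}}_2) - t_1 h_2({\mathbf {x}}_2)\bigr )\bigl (h_1({\mathbf {x}}_2) + t_1\bigr )$, whose $t_1 = 0$ specialization is the classical Jacobi–Trudi determinant for $s_{(2,2)}$.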

We also remark that a different, space-inhomogeneous version of inhomogeneous TASEP was studied in [Reference AssiotisAss20, Reference PetrovPet20], where correlation kernels were obtained as determinants with matrix elements involving double integrals. In [Reference AssiotisAss20], this was done by introducing and generalizing the analysis of the process on Gelfand–Tsetlin patterns originally due to [Reference Borodin and FerrariBF14], and in [Reference PetrovPet20] this was derived in a different way by identifying the process with a certain Schur process. It is an open question to derive these types of determinants directly from the approach used in this paper.

Later, rather than the TASEP version, we use the equivalent bosonic version of Dieker–Warren [Reference Dieker and WarrenDW08], which is more suitable for the proofs given in this paper, in which the i-th particle is shifted by i steps to the right. See Section 2.4 for the precise statement.

Our proof shows these generalized Schur operators satisfy the Knuth relations, which have proven useful in the study of symmetric functions, such as in [Reference Fomin and GreeneFG98, Reference FominFom95], and we then use the Markov property to reduce the equivalence to a computation for $n = 1$. We give two extensions that could be used to build the K-theoretic symmetric functions, but only the ones for $G_{\lambda /\!\!/\mu }({\mathbf {x}}; \boldsymbol {\alpha }, \boldsymbol {\beta })$ satisfy the Knuth relations (Theorem 3.2 and [Reference Iwao, Motegi and ScrimshawIMS23, Thm. 3.11]). This allows us to describe the transition functions for a new particle process in Section 8 that involves $\boldsymbol {\alpha }$ acting as “local current” parameters, where the rate depends on the position of the particle. We prove the analogous result to Theorem 1.1 for this new particle process in Theorem 8.1 with $G_{\lambda /\!\!/\mu }({\mathbf {x}}; \boldsymbol {\alpha }, \boldsymbol {\beta })$. We also describe extensions of this process to the other cases. However, this process can only be described “bosonically,” hence it cannot be applied to the slow bond problem (which requires a “fermionic” presentation; see Remark 8.2). When $\pi _i = 0$, our new model becomes a special case of the model from [Reference Knizel, Petrov and SaenzKPS19] (see Remark 8.5 for the precise relationship).

We also give another proof of Theorem 1.1 in Section 5 through a direct combinatorial bijection on the tableaux description, using more refined conditional probabilities. Essentially, it is given by showing that the branching rules [Reference Iwao, Motegi and ScrimshawIMS24, Prop. 4.5] correspond to the Markov property and showing the result directly for $n = 1$. Our proof can be seen as analogous to using the (dual) RSK bijection and the Schur decomposition in Case A and Case D that was described in [Reference Motegi and ScrimshawMS25] (compare also the intertwining kernels of [Reference Dieker and WarrenDW08] with [Reference Iwao, Motegi and ScrimshawIMS24, Thm. 4.4]). As a consequence, we can explicitly describe how the tableaux encode the movement of the particles.

Let us mention one technical point about the Case C result in Theorem 1.1. We need to take the Taylor series expansion of $f(\zeta ) = (1 + \zeta )^{-1}$ (around $\zeta = 0$ ) in order to obtain the equality with the combinatorial formula with $J_{\lambda /\!\!/ \mu }({\mathbf {x}}_n; \boldsymbol {\alpha })$ . Thus, strictly speaking, to do the Taylor series expansion we should require that $\rho _i x_j \in (0, 1)$ . We can make a change to the combinatorial description in terms of rational functions to address this, and, in principle, we would then need to prove new free fermionic and Jacobi–Trudi formulas. We leave this for the interested reader.

We describe some additional results we obtain from Theorem 1.1. Using the skew Cauchy identity [Reference Iwao, Motegi and ScrimshawIMS24, Thm. 4.6], we give determinantal formulas for the multipoint distributions in Section 6 for all cases. We also give another proof for Case A using a refinement of [Reference Motegi and ScrimshawMS25, Cor. 3.14]; this expansion has a natural geometric interpretation as coming from the K-homology classes of structure sheaves of Schubert varieties studied in the work of Takigiku [Reference TakigikuTak18a, Reference TakigikuTak18b] (see also [Reference TakigikuTak19]) at $\boldsymbol {\beta } = 1$ (as opposed to ideal sheaves of boundaries of Schubert varieties in [Reference Lam and PylyavskyyLP07]). For the Case C multipoint distributions, a similar geometric construction for the underlying symmetric functions can likely be given from [Reference Wheeler and Zinn-JustinWZJ19]. It would be interesting to see how these geometric interpretations relate to the integrability and determinant (and integral) formulas. We show that taking the continuous time limit for the blocking behavior recovers the classical continuous time TASEP. We prove that the pushing behavior satisfies the same master equation, but with different boundary conditions.

Let us discuss how our results relate to two other independent works that appeared while this paper was being prepared. The first is by Bisi, Liao, Saenz, and Zygouras [Reference Bisi, Liao, Saenz and ZygourasBLSZ23], who also studied the time-dependent version of [Reference Dieker and WarrenDW08, Case B]. However, the techniques and results in their work [Reference Bisi, Liao, Saenz and ZygourasBLSZ23] are (generally) different from what we obtained here. It is likely that our results could lead to new proofs of some of the formulas in [Reference Bisi, Liao, Saenz and ZygourasBLSZ23]. The second is a Grothendieck measure that was studied in [Reference Gavrilova and PetrovGP24], which with $\boldsymbol {\beta } = \beta $ comes from the Cauchy-type identity in [Reference Motegi and SakaiMS13, Cor. 5.4] with the dual Grothendieck functions $\overline {G}_{\lambda }({\mathbf {y}}; \boldsymbol {\beta })$ being rescaled. This Cauchy-type identity is different from the one considered in [Reference Hwang, Jang, Soo Kim, Song and SongHJK+24, Reference Hwang, Jang, Soo Kim, Song and SongHJK+25, Reference Iwao, Motegi and ScrimshawIMS24] (instead it appears to be related to [Reference Motegi and SakaiMS13]; see also [Reference Gorbounov and KorffGK17]), and so it does not relate to our results. Lastly, shortly before this paper appeared on the arXiv, independent work by Assiotis [Reference AssiotisAss23] was posted, which also studied a position-inhomogeneous version of the TASEP cases we study here. More specifically, in [Reference AssiotisAss23] determinantal correlation functions are constructed, some limit results are proven based on generalizing Toeplitz matrices, and intertwinings of Markov semigroups are given. Thus, [Reference AssiotisAss23] has related but distinct results from ours using different techniques.

We conclude the introduction by mentioning some potential applications of our work. Since we are allowing any initial configuration $\mu $ with finite distance from the step initial condition, we can approximate the flat initial condition by taking $\mu $ to be a sufficiently large staircase partition. As such, we expect to be able to compute generalizations of results on flat initial conditions such as [Reference Borodin, Ferrari, Prähofer and SasamotoBFPS07, Reference Borodin, Ferrari and SasamotoBFS08] with probabilities depending on the particles. Furthermore, we believe that limit shapes/densities can be computed using the fermionic Fock space description following [Reference OkounkovOko01, Reference Okounkov and ReshetikhinOR03], although currently an explicit description of the projection operator as an element in the Clifford algebra is not known. However, because of [Reference Gavrilova and PetrovGP24, Prop. 1.2], it is likely that the projection operator cannot be used with Wick’s theorem.

This paper is organized as follows. In Section 2, we give some background on refined (dual) Grothendieck polynomials, the related combinatorics, and the stochastic processes we consider. In Section 3, we describe our Schur operators for canonical and dual Grothendieck polynomials. In Section 4, we prove Theorem 1.1 using our Schur operators. In Section 5, we prove Theorem 1.1 using a direct combinatorial argument. In Section 6, we give our formulas for the multipoint distributions in each case. In Section 7, we show the continuous time limits of our TASEP processes. In Section 8, we describe our new blocking-behavior particle process with “local current” parameters $\boldsymbol {\alpha }$ that is the common generalization of Case B and Case C. In Section 9, we offer some concluding remarks on our work.

Numerous additional examples can be found on the arXiv version of this paper [Reference Iwao, Motegi and ScrimshawIMS23].

2 Background

Let $\lambda = (\lambda _1, \lambda _2, \dotsc , \lambda _{\ell })$ be a partition, that is, a weakly decreasing finite sequence of positive integers. We denote the set of all partitions by $\mathcal {P}$. We draw the Young diagrams of our partitions using English convention. We will often extend partitions with additional entries at the end being $0$, and let $\ell (\lambda )$ denote the largest index $\ell $ such that $\lambda _{\ell }> 0$. Let $\lambda '$ denote the conjugate partition. We often write our partitions as words. A hook is a partition $\lambda $ of the form $a1^{m} = (a, 1, \dotsc , 1)$ with $1$ appearing m times, where the arm is $a-1$ and the leg is m. For $\mu \subseteq \lambda $, a skew shape $\lambda / \mu $ is the Young diagram formed by removing $\mu $ from $\lambda $, and we identify $\lambda / \emptyset = \lambda $.

Let ${\mathbf {x}} = (x_1, x_2, \ldots )$ denote a countably infinite sequence of indeterminates. We will often set all but finitely many of the indeterminates ${\mathbf {x}}$ to $0$ , which we denote as ${\mathbf {x}}_n := (x_1, \dotsc , x_n, 0, 0, \ldots )$ . We make similar definitions for any other sequence of indeterminates, such as ${\mathbf {y}} = (y_1, y_2, \ldots )$ . We also require infinite sequences of parameters $\boldsymbol {\alpha } = (\alpha _1, \alpha _2, \ldots )$ and $\boldsymbol {\beta } = (\beta _1, \beta _2, \ldots )$ , which we often treat as indeterminates.

2.1 K-theoretic symmetric functions

A semistandard tableau is a filling of a Young diagram $\lambda / \mu $ with positive integers such that the rows are weakly increasing left-to-right and the columns are strictly increasing top-to-bottom. The weight of a semistandard tableau T is

$$\begin{align*}\operatorname{\mathrm{wt}}(T) = \prod_{i=1}^{\infty} x_i^{m_i}, \end{align*}$$

where $m_i$ is the number of i’s that appear in T. This is a finite product since there are finitely many boxes in $\lambda / \mu $ .

The skew Schur function is the generating function

$$\begin{align*}s_{\lambda / \mu}({\mathbf{x}}) = \sum_T \operatorname{\mathrm{wt}}(T), \end{align*}$$

where we sum over all semistandard tableaux T of shape $\lambda / \mu $. The Schur functions $\{ s_{\lambda } \}_{\lambda \in \mathcal {P}}$ form a basis for the ring of symmetric functions, and so we can define the Hall inner product by declaring the Schur functions to be an orthonormal basis

$$\begin{align*}\left\langle s_{\lambda}, s_{\mu} \right\rangle = \delta_{\lambda\mu}. \end{align*}$$

Furthermore, we have a natural grading on symmetric functions by having the degree of $s_{\lambda }$ be $\left \lvert \lambda \right \rvert $ . There is also an algebra involution defined by $\omega s_{\lambda / \mu } = s_{\lambda ' / \mu '}$ . We refer the reader to [Reference StanleySta99, Ch. 7] and [Reference MacdonaldMac15, Ch. I] for more details on symmetric functions.
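Since everything in this subsection is defined by explicit tableau sums, small cases can be enumerated directly. The following is a brute-force SymPy sketch of the Schur function definition above (the helper names are ours); it is only practical for small shapes and few variables.

```python
import itertools
import sympy as sp

def ssyt(shape, nmax):
    """All semistandard Young tableaux of the given (straight) shape with entries <= nmax."""
    cells = [(i, j) for i in range(len(shape)) for j in range(shape[i])]
    for vals in itertools.product(range(1, nmax + 1), repeat=len(cells)):
        T = dict(zip(cells, vals))
        rows_ok = all(T[i, j] <= T[i, j+1] for (i, j) in cells if (i, j+1) in T)
        cols_ok = all(T[i, j] < T[i+1, j] for (i, j) in cells if (i+1, j) in T)
        if rows_ok and cols_ok:
            yield T

def schur(shape, xs):
    """s_shape(x_1, ..., x_n) as the generating function over SSYT."""
    return sp.expand(sum(sp.Mul(*(xs[v - 1] for v in T.values())) for T in ssyt(shape, len(xs))))

x = sp.symbols('x1:4')               # three variables
print(schur([2, 1], x))              # s_{(2,1)}(x1, x2, x3)
# sanity checks: s_{(1,1)} = e_2 and s_{(2)} = h_2
assert sp.expand(schur([1, 1], x) - (x[0]*x[1] + x[0]*x[2] + x[1]*x[2])) == 0
assert sp.expand(schur([2], x) - sum(x[i]*x[j] for i in range(3) for j in range(i, 3))) == 0
```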

A hook-valued tableau of skew shape $\lambda / \mu $ is a filling of the Young diagram by hook shaped tableaux satisfying the local conditions $\max (\mathsf {a}) \leq \min (\mathsf {b})$ whenever $\mathsf {b}$ is the entry directly to the right of $\mathsf {a}$ and $\max (\mathsf {a}) < \min (\mathsf {c})$ whenever $\mathsf {c}$ is the entry directly below $\mathsf {a}$ (provided the requisite box exists). Note that this is a generalization of the semistandard conditions as the conditions are equivalent to the usual standard ones when $\mathsf {a},\mathsf {b},\mathsf {c}$ all consist of a single entry.

For $\mu \subseteq \lambda $, the refined canonical Grothendieck function is the generating function

$$\begin{align*}G_{\lambda / \mu}({\mathbf{x}}; \boldsymbol{\alpha}, \boldsymbol{\beta}) = \sum_T \prod_{\mathsf{b} \in T} (-\alpha_i)^{a(\mathsf{b})} (-\beta_j)^{b(\mathsf{b})} \operatorname{\mathrm{wt}}(\mathsf{b}), \end{align*}$$

where we sum over all hook-valued tableaux T of shape $\lambda / \mu $ , product over all entries $\mathsf {b}$ in T with $a(\mathsf {b})$ (resp. $b(\mathsf {b})$ ) the arm (resp. leg) of the shape of $\mathsf {b}$ and i (resp. j) the row (resp. column) of the entry. We indicate various specializations and relation with the literature in Table 1. We note that technically the canonical Grothendieck functions lie in the completion of the ring of symmetric functions given by the grading; so we are allowed to take infinite sums with finite sums in each graded component. However, this does not affect our computations or results, and so we suppress this distinction in this paper. A basis for (the completion of) symmetric functions is given by $\{G_{\lambda }({\mathbf {x}}; \boldsymbol {\alpha }, \boldsymbol {\beta })\}_{\lambda \in \mathcal {P}}$ since

$$\begin{align*}G_{\lambda}({\mathbf{x}}; \boldsymbol{\alpha}, \boldsymbol{\beta}) = s_{\lambda}({\mathbf{x}}) + \sum_{\lambda \subsetneq \mu} (-1)^{\left\lvert \mu \right\rvert - \left\lvert \lambda \right\rvert} E_{\lambda}^{\mu}(\boldsymbol{\alpha}, \boldsymbol{\beta}) s_{\mu}({\mathbf{x}}), \end{align*}$$

where $E_{\lambda }^{\mu }(\boldsymbol {\alpha }, \boldsymbol {\beta }) \in \mathbb {Z}_{\geq 0}[\boldsymbol {\alpha }, \boldsymbol {\beta }]$ [Reference Hawkes and ScrimshawHS20, Reference Hwang, Jang, Soo Kim, Song and SongHJK+24, Reference Hwang, Jang, Soo Kim, Song and SongHJK+25].

Table 1 The relationship between our sign choices and some other papers in the literature.

We note that if $\boldsymbol {\alpha } = 0$, then all entries of the hook-valued tableau must be column shapes, which we can equate with sets, recovering the set-valued tableau description of [Reference Chan and PfluegerCP21], which refines [Reference Skovsted BuchBuc02]. Likewise, if $\boldsymbol {\beta } = 0$, then we have row shapes that we equate with multisets, refining [Reference Lam and PylyavskyyLP07]. Similarly, we call $G_{\lambda }({\mathbf {x}}; \boldsymbol {\beta }) := G_{\lambda }({\mathbf {x}}; 0, \boldsymbol {\beta })$ a refined Grothendieck function and $J_{\lambda }({\mathbf {x}}; \boldsymbol {\alpha }) := G_{\lambda }({\mathbf {x}}; \boldsymbol {\alpha }, 0)$ a refined weak Grothendieck function.

The dual canonical Grothendieck functions are defined as the dual basis to the canonical Grothendieck functions under the Hall inner product. A combinatorial definition was given in [Reference Hwang, Jang, Soo Kim, Song and SongHJK+25], which can be seen as the obvious refinement of the rim border tableaux description of [Reference YeliussizovYel17]. As such, we can extend the definition to skew shapes. For our purposes, we will only use the dual basis to the (weak) Grothendieck functions (that is, $\boldsymbol {\alpha } = 0$ or $\boldsymbol {\beta } = 0$), and thus we restrict to describing the combinatorics of the special cases for the dual basis to the (weak) Grothendieck functions following [Reference Lam and PylyavskyyLP07, Thm. 9.15]. A reverse plane partition (resp. valued-set tableau) is a semistandard Young tableau where we are allowed to merge boxes in the same column (resp. row) with the entry considered to be aligned at the bottom (resp. right). The weight is the same as for semistandard tableaux. We then have the generating functions

$$ \begin{align*} g_{\lambda/\mu}({\mathbf{x}}; \boldsymbol{\beta}) &= g_{\lambda/\mu}({\mathbf{x}}; 0, \boldsymbol{\beta}) = \sum_T \prod_{j=1}^{\ell-1} \beta_j^{r_j} \operatorname{\mathrm{wt}}(T), \\ j_{\lambda/\mu}({\mathbf{x}}; \boldsymbol{\alpha}) &= g_{\lambda/\mu}({\mathbf{x}}; \boldsymbol{\alpha}, 0) = \sum_T \prod_{j=1}^{\lambda_1-1} \alpha_j^{c_j} \operatorname{\mathrm{wt}}(T), \end{align*} $$

where $r_j$ (resp. $c_j$ ) is the number of boxes in row (resp. column) j that have been merged with the box below (resp. to the right) and we sum over all reverse plane partitions (resp. valued-set tableaux) of shape $\lambda / \mu $ . The definition of valued-set tableau follows [Reference Hawkes and ScrimshawHS20], which is conjugate to that of [Reference Lam and PylyavskyyLP07].

We note that our description of reverse plane partitions matches the classical definition by simply filling in the merged boxes with the entry of the merged box. The inverse map is merging duplicated entries in the same column. Thus, the description of $g_{\lambda / \mu }({\mathbf {x}}; \boldsymbol {\beta })$ matches that introduced in [Reference Galashin, Grinberg and LiuGGL16]. We have described reverse plane partitions as above to better demonstrate the following symmetry that motivated the definition of canonical Grothendieck polynomials, even though we will write our reverse plane partitions below using the classical description.
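To illustrate the reverse plane partition description, here is a small brute-force SymPy sketch for the straight shape $(2,2)$ in two variables, following the classical description and the weight convention stated above (the helper names are ours).

```python
import itertools
import sympy as sp

lam, n = [2, 2], 2                       # straight shape (2, 2), entries at most n
x = sp.symbols(f'x1:{n+1}')
b = sp.symbols(f'b1:{len(lam)}')         # b[j] stands for beta_{j+1}

def rpps(shape, nmax):
    """Classical reverse plane partitions: rows and columns weakly increase, entries <= nmax."""
    cells = [(i, j) for i in range(len(shape)) for j in range(shape[i])]
    for vals in itertools.product(range(1, nmax + 1), repeat=len(cells)):
        T = dict(zip(cells, vals))
        if all(T[i, j] <= T[i, j+1] for (i, j) in cells if (i, j+1) in T) and \
           all(T[i, j] <= T[i+1, j] for (i, j) in cells if (i+1, j) in T):
            yield T

def weight(T):
    """x-weight: each value counts once per column containing it; beta_j per box of row j equal to the box below."""
    w = sp.Integer(1)
    for j in {c for (_, c) in T}:                        # columns
        for v in {T[i, c] for (i, c) in T if c == j}:    # distinct values in column j
            w *= x[v - 1]
    for (i, j) in T:
        if (i + 1, j) in T and T[i, j] == T[i + 1, j]:
            w *= b[i]                                    # box in (1-based) row i+1 merged with the box below it
    return w

g = sp.expand(sum(weight(T) for T in rpps(lam, n)))
print(g)                                                 # g_{(2,2)}(x1, x2; beta_1)
# beta = 0 keeps only column-strict fillings, recovering s_{(2,2)}(x1, x2) = x1^2 x2^2
assert sp.expand(g.subs({b[0]: 0}) - x[0]**2 * x[1]**2) == 0
# and the result is symmetric in the x-variables
assert sp.expand(g - g.subs({x[0]: x[1], x[1]: x[0]}, simultaneous=True)) == 0
```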

Theorem 2.1 [Reference Hwang, Jang, Soo Kim, Song and SongHJK+24, Thm. 1.7].

We have

$$\begin{align*}\omega G_{\lambda/\mu}({\mathbf{x}}; \boldsymbol{\alpha}, \boldsymbol{\beta}) = G_{\lambda'/\mu'}({\mathbf{x}}; \boldsymbol{\beta}, \boldsymbol{\alpha}), \qquad\qquad \omega g_{\lambda/\mu}({\mathbf{x}}; \boldsymbol{\alpha}, \boldsymbol{\beta}) = g_{\lambda'/\mu'}({\mathbf{x}}; \boldsymbol{\beta}, \boldsymbol{\alpha}). \end{align*}$$

As a consequence of Theorem 2.1, we have the relationship

$$\begin{align*}\omega G_{\lambda/\mu}({\mathbf{x}}; \boldsymbol{\alpha}) = J_{\lambda/\mu}({\mathbf{x}}; \boldsymbol{\alpha}), \qquad\qquad \omega g_{\lambda/\mu}({\mathbf{x}}; \boldsymbol{\alpha}) = j_{\lambda/\mu}({\mathbf{x}}; \boldsymbol{\alpha}), \end{align*}$$

between the (dual) weak Grothendieck functions and the (dual) Grothendieck functions. These are a refinement of [Reference Lam and PylyavskyyLP07, Prop. 9.22].

For the canonical Grothendieck polynomials, we note that the skew shape description is not natural from the perspective of the branching rules (a precise description is given below), the skewing operator [Reference IwaoIwa22, Sec. 4], or the coproduct formula [Reference Skovsted BuchBuc02, Sec. 5]. Refining [Reference Skovsted BuchBuc02, Eq. (6.4)] and [Reference YeliussizovYel17, Prop. 8.8], we define [Reference Iwao, Motegi and ScrimshawIMS24, Sec. 4.1]

(2.1) $$ \begin{align} G_{\lambda/\!\!/\mu}({\mathbf{x}}; \boldsymbol{\alpha}, \boldsymbol{\beta}) := \sum_{\nu \subseteq \mu} \prod_{(i,j) \in \mu/\nu} -(\alpha_i + \beta_j) G_{\lambda / \nu}({\mathbf{x}}; \boldsymbol{\alpha}, \boldsymbol{\beta}), \end{align} $$

where $\nu $ is formed by removing some of the corners of $\mu $ (that is, boxes $(i, \mu _i)$ such that $\mu _i> \mu _{i+1}$ ). We remark that we have the following identity by the same proof as [Reference Iwao, Motegi and ScrimshawIMS24, Thm. 4.4].

Proposition 2.2. We have

$$\begin{align*}\omega G_{\lambda/\!\!/\mu}({\mathbf{x}}; \boldsymbol{\alpha}, \boldsymbol{\beta}) = G_{\lambda'/\!\!/\mu'}({\mathbf{x}}; \boldsymbol{\beta}, \boldsymbol{\alpha}). \end{align*}$$

Equation (2.1) allows us to give the following (cf. [Reference YeliussizovYel17, Prop. 8.7, 8.8]).

Proposition 2.3 (Branching rules [Reference Iwao, Motegi and ScrimshawIMS24, Prop. 4.5]).

We have

$$ \begin{align*} G_{\lambda/\mu}({\mathbf{x}}, {\mathbf{y}}; \boldsymbol{\alpha}, \boldsymbol{\beta}) & = \sum_{\nu \subseteq \lambda} G_{\lambda/\!\!/\nu}({\mathbf{y}}; \boldsymbol{\alpha}, \boldsymbol{\beta}) G_{\nu/\mu}({\mathbf{x}}; \boldsymbol{\alpha}, \boldsymbol{\beta}), \\ G_{\lambda/\!\!/\mu}({\mathbf{x}}, {\mathbf{y}}; \boldsymbol{\alpha}, \boldsymbol{\beta}) & = \sum_{\mu \subseteq \nu \subseteq \lambda} G_{\lambda/\!\!/\nu}({\mathbf{y}}; \boldsymbol{\alpha}, \boldsymbol{\beta}) G_{\nu/\!\!/\mu}({\mathbf{x}}; \boldsymbol{\alpha}, \boldsymbol{\beta}), \\ g_{\lambda/\mu}({\mathbf{x}}, {\mathbf{y}}; \boldsymbol{\alpha}, \boldsymbol{\beta}) & = \sum_{\mu \subseteq \nu \subseteq \lambda} g_{\lambda / \nu}({\mathbf{y}}; \boldsymbol{\alpha}, \boldsymbol{\beta}) g_{\nu/\mu}({\mathbf{x}}; \boldsymbol{\alpha}, \boldsymbol{\beta}). \end{align*} $$

As was shown in [Reference Iwao, Motegi and ScrimshawIMS24, Eq. (4.3)], we have

$$\begin{align*}G_{\lambda/\mu}({\mathbf{x}}; \boldsymbol{\alpha}, \boldsymbol{\beta})=\prod_{(i,j)\in \mu/\lambda}(\alpha_i+\beta_j)\cdot G_{\lambda/(\lambda\cap \mu)}({\mathbf{x}}; \boldsymbol{\alpha}, \boldsymbol{\beta}), \end{align*}$$

and in particular, we do not have the vanishing property for $G_{\lambda /\mu }({\mathbf {x}}; \boldsymbol {\alpha }, \boldsymbol {\beta })$ that $G_{\lambda /\!\!/\mu }({\mathbf {x}}; \boldsymbol {\alpha }, \boldsymbol {\beta })$ exhibits: that $G_{\lambda /\!\!/\mu }({\mathbf {x}}; \boldsymbol {\alpha }, \boldsymbol {\beta }) = 0$ whenever $\mu \not \subseteq \lambda $.

We will also need the skew Cauchy formula from [Reference Iwao, Motegi and ScrimshawIMS24, Thm. 4.6] (nonskew versions can be found in [Reference Hwang, Jang, Soo Kim, Song and SongHJK+24, Reference Hwang, Jang, Soo Kim, Song and SongHJK+25] or as a consequence of [Reference Chan and PfluegerCP21, Rem. 3.9]). This is a refined version of [Reference YeliussizovYel19, Thm. 1.1].

Theorem 2.4 (Skew Cauchy formula).

We have

$$\begin{align*}\sum_{\lambda} G_{\lambda /\!\!/ \mu}({\mathbf{x}}; \boldsymbol{\alpha}, \boldsymbol{\beta}) g_{\lambda / \nu}({\mathbf{y}}; \boldsymbol{\alpha}, \boldsymbol{\beta}) = \prod_{i,j} \frac{1}{1 - x_i y_j} \sum_{\eta} G_{\nu /\!\!/ \eta}({\mathbf{x}}; \boldsymbol{\alpha}, \boldsymbol{\beta})g_{\mu / \eta}({\mathbf{y}}; \boldsymbol{\alpha}, \boldsymbol{\beta}). \end{align*}$$

2.2 Supersymmetric functions

We set some additional standard notation from symmetric function theory. We (again) refer the reader to the standard textbooks [Reference MacdonaldMac15, Reference StanleySta99] for more information. Let

$$ \begin{align*} e_m({\mathbf{x}}) & = \sum_{i_1> \cdots > i_m} x_{i_1} \dotsm x_{i_m}, & e_{\lambda}({\mathbf{x}}) & = e_{\lambda_1} \cdots e_{\lambda_{\ell}}, \\ h_m({\mathbf{x}}) & = \sum_{i_1 \leq \cdots \leq i_m} x_{i_1} \dotsm x_{i_m}, & h_{\lambda}({\mathbf{x}}) & = h_{\lambda_1} \cdots h_{\lambda_{\ell}}, \\ p_m({\mathbf{x}}) & = \sum_{i=1}^{\infty} x_i^m, & p_{\lambda}({\mathbf{x}}) & = p_{\lambda_1} \cdots p_{\lambda_{\ell}}, \end{align*} $$

denote the elementary, homogeneous, and power sum symmetric functions, respectively. We consider $e_0({\mathbf {x}}) = h_0({\mathbf {x}}) = p_0({\mathbf {x}}) = 1$. We have $\omega p_m({\mathbf {x}}) = (-1)^{m-1} p_m({\mathbf {x}}) = -p_m(-{\mathbf {x}})$ for $m> 0$. The power sum symmetric functions generate the ring of symmetric functions as a polynomial ring (over $\mathbb {Q}$), and the monomials (in the variables $p_m := p_m({\mathbf {x}})$) form a basis. Thus, we can define polynomials by the equations

$$\begin{align*}E_{\mu}(p_1, p_2, \ldots) = e_{\mu}({\mathbf{x}}), \qquad\qquad H_{\mu}(p_1, p_2, \ldots) = h_{\mu}({\mathbf{x}}), \qquad\qquad S_{\mu}(p_1, p_2, \ldots) = s_{\mu}({\mathbf{x}}). \end{align*}$$

For example, $E_{21}(p_1, p_2, \ldots ) = \frac {1}{2} p_1^3 - \frac {1}{2} p_2 p_1$ .
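These polynomials can be generated mechanically from Newton's identities (a standard fact not restated here); a short SymPy sketch with our own helper names reproducing the example above:

```python
import itertools
import sympy as sp

N = 6
p = sp.symbols(f'p1:{N+1}')          # p[k] stands for the power sum p_{k+1}

def H(m):
    """H_m(p_1, p_2, ...), i.e. h_m in the power sums, via Newton's identity m*h_m = sum_i p_i h_{m-i}."""
    if m == 0:
        return sp.Integer(1)
    return sp.expand(sum(p[i - 1] * H(m - i) for i in range(1, m + 1)) / m)

def E(m):
    """E_m(p_1, p_2, ...), i.e. e_m in the power sums, via m*e_m = sum_i (-1)^{i-1} p_i e_{m-i}."""
    if m == 0:
        return sp.Integer(1)
    return sp.expand(sum((-1)**(i - 1) * p[i - 1] * E(m - i) for i in range(1, m + 1)) / m)

# the example from the text: E_{21} = E_2 E_1 = p1^3/2 - p2 p1/2
assert sp.expand(E(2) * E(1) - (p[0]**3 / 2 - p[1] * p[0] / 2)) == 0
# sanity check against an actual alphabet x1, x2, x3
x = sp.symbols('x1:4')
to_x = {p[k]: sum(xi**(k + 1) for xi in x) for k in range(N)}
h3 = sum(sp.Mul(*c) for c in itertools.combinations_with_replacement(x, 3))
e3 = sp.Mul(*x)
assert sp.expand(H(3).subs(to_x) - h3) == 0
assert sp.expand(E(3).subs(to_x) - e3) == 0
print("E_2 E_1 =", sp.expand(E(2) * E(1)))
```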

Now we recall some particular supersymmetric functions; we refer the reader to [Reference MacdonaldMac15, Ch. I] for more details. We define the supersymmetric elementary, homogeneous, powersum, and Schur functions as

$$ \begin{align*} e_m({\mathbf{x}}/{\mathbf{y}}) & = \sum_{k=0}^m (-1)^{m-k} e_k({\mathbf{x}}) h_{m-k}({\mathbf{y}}), \\ h_m({\mathbf{x}}/{\mathbf{y}}) & = \sum_{k=0}^m (-1)^{m-k} h_k({\mathbf{x}}) e_{m-k}({\mathbf{y}}), \\ p_m({\mathbf{x}}/{\mathbf{y}}) & = p_m({\mathbf{x}}) - p_m({\mathbf{y}}), \\ s_{\lambda}({\mathbf{x}}/{\mathbf{y}}) & = \sum_{\mu} (-1)^{\left\lvert \lambda \right\rvert - \left\lvert \mu \right\rvert} s_{\mu}({\mathbf{x}}) s_{\lambda' / \mu'}({\mathbf{y}}). \end{align*} $$

When ${\mathbf {y}} = \emptyset $ , that is we have set all of the ${\mathbf {y}}$ indeterminates to $0$ , we have $f({\mathbf {x}} / {\mathbf {y}}) = f({\mathbf {x}})$ for any supersymmetric function f. The involution $\omega $ also extends to supersymmetric functions by $ \omega s_{\lambda / \mu }({\mathbf {x}} / {\mathbf {y}}) = s_{\lambda '/\mu '}({\mathbf {x}} / {\mathbf {y}}). $
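The supersymmetric $h_m$ and $e_m$ above are finite sums and hence easy to test directly. Here is a minimal SymPy sketch (our own helpers) checking the specialization ${\mathbf {y}} = \emptyset $ and the cancellation $h_m({\mathbf {x}}/{\mathbf {x}}) = e_m({\mathbf {x}}/{\mathbf {x}}) = 0$ for $m \geq 1$, which is the standard identity $\sum _k (-1)^k e_k h_{m-k} = 0$ in disguise.

```python
import itertools
import sympy as sp

def e(m, xs):
    """Elementary symmetric polynomial e_m(xs)."""
    if m < 0 or m > len(xs):
        return sp.Integer(0)
    return sum((sp.Mul(*c) for c in itertools.combinations(xs, m)), sp.Integer(0))

def h(m, xs):
    """Complete homogeneous symmetric polynomial h_m(xs)."""
    if m < 0:
        return sp.Integer(0)
    return sum((sp.Mul(*c) for c in itertools.combinations_with_replacement(xs, m)), sp.Integer(0))

def h_super(m, xs, ys):
    """h_m(x/y) = sum_k (-1)^{m-k} h_k(x) e_{m-k}(y)."""
    return sp.expand(sum((-1)**(m - k) * h(k, xs) * e(m - k, ys) for k in range(m + 1)))

def e_super(m, xs, ys):
    """e_m(x/y) = sum_k (-1)^{m-k} e_k(x) h_{m-k}(y)."""
    return sp.expand(sum((-1)**(m - k) * e(k, xs) * h(m - k, ys) for k in range(m + 1)))

x = sp.symbols('x1:4')
y = sp.symbols('y1:3')
for m in range(1, 5):
    # setting y to the empty alphabet recovers the ordinary h_m and e_m
    assert sp.expand(h_super(m, x, ()) - h(m, x)) == 0
    assert sp.expand(e_super(m, x, ()) - e(m, x)) == 0
    # the cancellation h_m(x/x) = e_m(x/x) = 0 for m >= 1
    assert sp.expand(h_super(m, x, x)) == 0 and sp.expand(e_super(m, x, x)) == 0
print(h_super(2, x, y))   # h_2(x/y)
```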

The supersymmetric functions can also be described in terms of plethystic substitution. While we will not give a detailed account, we will briefly review the relevant descriptions for understanding the results in [Reference Hwang, Jang, Soo Kim, Song and SongHJK+24, Reference Hwang, Jang, Soo Kim, Song and SongHJK+25] and refer the reader to [Reference Loehr and RemmelLR11] and [Reference MacdonaldMac15, Ch. I] for a more detailed description. Let $X = x_1 + x_2 + \cdots $ and $Y = y_1 + y_2 + \cdots $ . For a symmetric function f, we define $f[X] = f(x_1, x_2, \ldots )$ , and if $Z = z_1 + z_2 + \cdots + z_n$ , then we have $f[Z] = f(z_1, z_2, \dotsc , z_n, 0, 0, \ldots )$ . We also can define

$$ \begin{align*} h_m[X - Y] = h_m({\mathbf{x}}/{\mathbf{y}}), \qquad\qquad e_m[X - Y] = e_m({\mathbf{x}}/{\mathbf{y}}), \qquad\qquad p_m[X - Y] = p_m({\mathbf{x}}/{\mathbf{y}}). \end{align*} $$

As a consequence, we have that $h_m[-Y] = (-1)^m e_m({\mathbf {y}})$ and $e_m[-Y] = (-1)^m h_m({\mathbf {y}})$ . Furthermore, we have the well-known plethystic identities

$$ \begin{align*} h_m\big( ({\mathbf{x}} \sqcup {\mathbf{x}}^{\prime})/({\mathbf{y}} \sqcup {\mathbf{y}}^{\prime}) \big) & = h_m[X + X^{\prime} - Y - Y^{\prime}] = \sum_{a+b=m} h_a({\mathbf{x}}/{\mathbf{y}}) h_b({\mathbf{x}}^{\prime}/{\mathbf{y}}^{\prime}), \\ e_m\big( ({\mathbf{x}} \sqcup {\mathbf{x}}^{\prime})/({\mathbf{y}} \sqcup {\mathbf{y}}^{\prime}) \big) & = e_m[X + X^{\prime} - Y - Y^{\prime}] = \sum_{a+b=m} e_a({\mathbf{x}}/{\mathbf{y}}) e_b({\mathbf{x}}^{\prime}/{\mathbf{y}}^{\prime}), \end{align*} $$

(see, e.g., [Reference Hwang, Jang, Soo Kim, Song and SongHJK+24, Prop. 2.1]). Next, we recall the notation given in [Reference Hwang, Jang, Soo Kim, Song and SongHJK+24, Def. 2.3]:

$$\begin{align*}h_m[X \ominus Y] := \sum_{a-b=m} h_a[X] h_b[Y], \qquad\qquad e_m[X \ominus Y] := \sum_{a-b=m} e_a[X] e_b[Y]. \end{align*}$$

We note that these can have infinite nonzero terms and be nonzero even when m is negative.

In order to avoid confusion with the plethystic negative and negating the variables, we will not use plethystic notation, and instead follow [Reference Iwao, Motegi and ScrimshawIMS24], where we write $h_m({\mathbf {x}} /\!\!/ {\mathbf {y}}) := h_m[X \ominus Y]$ and $e_m({\mathbf {x}} /\!\!/ {\mathbf {y}}) := e_m[X \ominus Y]$ .

Additionally, note the ordering of the variables for the elementary symmetric functions. We have chosen this so that the definitions extend to the noncommutative symmetric functions introduced by Fomin and Greene [Reference Fomin and GreeneFG98]. We summarize the results that we need as follows. For a sequence of linear operators $\{\tau _i \colon {\mathbf {k}}[{\mathcal {P}}] \to \mathbf {k}[\mathcal {P}] \}_{i=1}^{\infty }$, the weak Knuth relations are

(2.2a) $$ \begin{align} \tau_j \tau_i \tau_k & = \tau_j \tau_k \tau_i & &\text{for all } i \geq j> k, \quad i - k \geq 2, \end{align} $$
(2.2b) $$ \begin{align} \tau_i \tau_k \tau_j & = \tau_k \tau_i \tau_j & &\text{for all } i> j \geq k, \quad i - k \geq 2,\end{align} $$
(2.2c) $$ \begin{align} (\tau_i + \tau_{i+1}) \tau_{i+1} \tau_i & = \tau_{i+1} \tau_i (\tau_i + \tau_{i+1}) & & \text{for all } i. \end{align} $$

The (strong) Knuth relations are formed by removing the requirement $i - k \geq 2$ from (2.2).

Theorem 2.5 [Reference Fomin and GreeneFG98].

Let $\{\tau _i \colon \mathbf {k}[\mathcal {P}] \to \mathbf {k}[\mathcal {P}] \}_{i=1}^{\infty }$ denote a sequence of linear operators that satisfy the weak Knuth relations. Then the Schur functions $\{ s_{\lambda }(\tau _1, \tau _2, \ldots ) \}_{\lambda \in \mathcal {P}}$ commute and $s_{\nu /\lambda }(\tau _1, \tau _2, \ldots ) = \sum _{\mu } c_{\lambda ,\mu }^{\nu } s_{\mu }(\tau _1, \tau _2, \ldots )$ , where $c_{\lambda ,\mu }^{\nu }$ are the usual Littlewood–Richardson coefficients.

2.3 Free fermions and Schur operators

We describe the free-fermion presentation of the (dual) canonical Grothendieck polynomials from [Reference Iwao, Motegi and ScrimshawIMS24]. For more details, we refer the reader to [Reference Alexandrov and ZabrodinAZ13, Reference KacKac90, Reference Miwa, Jimbo and DateMJD00]. Let $\mathbf {k}$ be a field of characteristic $0$. We consider the unital associative $\mathbf {k}$-algebra $\mathcal {A}$ of free fermions generated by $\{\psi _n, \psi _n^* \mid n \in \mathbb {Z}\}$ with relations

$$\begin{align*}\psi_m \psi_n + \psi_n \psi_m = \psi_m^* \psi_n^* + \psi_n^* \psi_m^* = 0, \qquad\qquad \psi_m \psi_n^* + \psi_n^* \psi_m = \delta_{m,n}, \end{align*}$$

known as the canonical anticommuting relations. This is a Clifford algebra arising from the canonical bilinear form of an infinite dimensional vector space V with a basis $\{v_i\}_{i \in \mathbb {Z}}$ with its (restricted) dual space $V^*$ spanned by $\{v_i^*\}_{i \in \mathbb {Z}}$ . As such, there is an antialgebra involution on $\mathcal {A}$ defined by $\psi _n \leftrightarrow \psi _n^\ast $ ; that is $(xy)^\ast =y^\ast x^\ast $ for any $x,y\in \mathcal {A}$ . We will also define the fields

$$\begin{align*}\psi(z) = \sum_{n \in \mathbb{Z}} \psi_n z^n, \qquad\qquad \psi^*(w) = \sum_{n \in \mathbb{Z}} \psi_n^* w^{-n}. \end{align*}$$

The current operators are defined as

$$\begin{align*}a_k := \sum_{i \in \mathbb{Z}} \psi_i \psi_{i+k}^*, \end{align*}$$

and satisfy the Heisenberg algebra relations and duality

$$\begin{align*}[a_m, a_k] = m \delta_{m,-k}, \qquad\qquad a_k^* = a_{-k}. \end{align*}$$

We will use the Hamiltonian

$$\begin{align*}H({\mathbf{x}} / {\mathbf{y}}) := \sum_{k> 0} \frac{p_k({\mathbf{x}}/{\mathbf{y}})}{k} a_k, \end{align*}$$

and the corresponding exponential $e^{H({\mathbf {x}}/{\mathbf {y}})}$. These satisfy the relations (see, e.g., [Reference IwaoIwa23, Eq. (17), Eq. (18)])

(2.3a) $$ \begin{align} e^{H({\mathbf{x}}/{\mathbf{y}})} \psi_k e^{-H({\mathbf{x}}/{\mathbf{y}})}& = \sum_{i=0}^{\infty} h_i({\mathbf{x}}/{\mathbf{y}}) \psi_{k-i}, \end{align} $$
(2.3b) $$ \begin{align} e^{-H({\mathbf{x}}/{\mathbf{y}})} \psi^*_k e^{H({\mathbf{x}}/{\mathbf{y}})}& = \sum_{i=0}^{\infty} h_i({\mathbf{x}}/{\mathbf{y}}) \psi^*_{k+i}. \end{align} $$

Note that $-H({\mathbf {x}}/{\mathbf {y}}) = H({\mathbf {y}}/{\mathbf {x}})$. Let $H^*({\mathbf {x}}/{\mathbf {y}}) = (H({\mathbf {x}}/{\mathbf {y}}))^*$ denote the dual Hamiltonian. For $k> 0$, we have the relations (see, e.g., [Reference Alexandrov and ZabrodinAZ13, Eq. (2.4)])

(2.4) $$ \begin{align} [a_{-k}, e^{H(t)}] = -t^k e^{H(t)}, \qquad\qquad [e^{H^*(t)}, a_k] = -t^k e^{H^*(t)}, \end{align} $$

and $[a_k, e^{H(t)}] = [e^{H^*(t)}, a_{-k}] = 0$. We will also use the operator

$$\begin{align*}J({\mathbf{x}}/{\mathbf{y}}) := \omega H({\mathbf{x}} / {\mathbf{y}}) = -H(-{\mathbf{x}} / -{\mathbf{y}}). \end{align*}$$

Therefore, we can write

(2.5) $$ \begin{align} e^{H({\mathbf{x}} / {\mathbf{y}})} = \sum_{k=0}^{\infty} H_k(a_1, a_2, \ldots) p_k({\mathbf{x}} / {\mathbf{y}}), \qquad\qquad e^{J({\mathbf{x}} / {\mathbf{y}})} = \sum_{k=0}^{\infty} E_k(a_1, a_2, \ldots) p_k({\mathbf{x}} / {\mathbf{y}}). \end{align} $$
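Relation (2.4) can also be checked concretely in the bosonic picture, in which (by the standard boson–fermion correspondence, a fact not restated in this section) $a_{-k}$ for $k > 0$ acts on symmetric functions as multiplication by the power sum $p_k$ and $e^{H(t)}$ acts as the alphabet shift $p_j \mapsto p_j + t^j$. The following SymPy sketch is only an illustration of (2.4) under that identification.

```python
import sympy as sp

N = 6                                   # track the power sums p_1, ..., p_N
t = sp.symbols('t')
p = sp.symbols(f'p1:{N+1}')

def eH(f):
    """Action of e^{H(t)} in the bosonic picture: the alphabet shift p_j -> p_j + t^j."""
    return sp.expand(f.subs({p[j]: p[j] + t**(j + 1) for j in range(N)}))

def a_minus(k, f):
    """Action of a_{-k} (k > 0) in the bosonic picture: multiplication by p_k."""
    return sp.expand(p[k - 1] * f)

f = p[0]**2 * p[2] + 3 * p[1]           # a sample element, p_1^2 p_3 + 3 p_2
for k in (1, 2, 3):
    lhs = a_minus(k, eH(f)) - eH(a_minus(k, f))    # [a_{-k}, e^{H(t)}] applied to f
    rhs = -t**k * eH(f)                            # the right-hand side of (2.4)
    assert sp.expand(lhs - rhs) == 0
print("relation (2.4) holds on the sample element")
```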

We will consider the spinor representation $\mathcal {F}$ of semi-infinite wedge products subject to a finiteness condition, but we will not present this here in detail (see, e.g., [Reference KacKac90, Reference Kac, Raina and RozhkovskayaKRR13, Reference Miwa, Jimbo and DateMJD00] for more information). This is sometimes referred to as the fermionic Fock space. Instead, we will realize $\mathcal {F}$ as the cyclic $\mathcal {A}$-representation generated by the vacuum vector $\lvert 0 \rangle $ that satisfies the relations

$$\begin{align*}\psi_n \lvert 0 \rangle = \psi_m^* \lvert 0 \rangle = 0, \qquad\qquad n < 0, \quad m \geq 0. \end{align*}$$

Therefore, we can describe the basis as the vectors

$$\begin{align*}\psi_{n_1} \psi_{n_2} \cdots \psi_{n_r} \psi_{m_1}^* \psi_{m_2}^* \cdots \psi_{m_s}^* \lvert 0 \rangle, \qquad (r,s \geq 0,\ n_1 > \cdots > n_r \geq 0 > m_s > \cdots > m_1). \end{align*}$$

We define the vectors $\lvert m \rangle $ and $\langle m \rvert $ as

$$\begin{align*}\lvert m \rangle = \begin{cases} \psi_{m-1} \dotsm \psi_0 \lvert 0 \rangle & \text{if } m \geq 0, \\ \psi_m^* \dotsm \psi_{-1}^* \lvert 0 \rangle & \text{if } m < 0, \end{cases} \qquad\qquad \langle m \rvert = \begin{cases} \langle 0 \rvert \psi_0^* \dotsm \psi_{m-1}^* & \text{if } m \geq 0, \\ \langle 0 \rvert \psi_{-1} \dotsm \psi_m & \text{if } m < 0. \end{cases} \end{align*}$$

Note that

$$\begin{align*}e^{H({\mathbf{x}}/{\mathbf{y}})} \lvert m \rangle = \lvert m \rangle, \qquad\qquad \langle m \rvert e^{H^*({\mathbf{x}}/{\mathbf{y}})} = \langle m \rvert, \end{align*}$$

for all m. For finitely many (noncommutative) expressions $\Psi _1,\dots ,\Psi _\ell $ , we will use the notation

$$\begin{align*}\prod_{1\leq i\leq \ell}^{\rightarrow} \Psi_i=\Psi_1\Psi_2\cdots \Psi_\ell, \qquad\qquad \prod_{1\leq i\leq \ell}^{\leftarrow} \Psi_i=\Psi_\ell\cdots \Psi_2\Psi_1, \end{align*}$$

to indicate the order of multiplication. We will use the vectors

$$ \begin{align*} \lvert \lambda \rangle^{\sigma}_{[\boldsymbol{\alpha},\boldsymbol{\beta}]} & := \prod^{\rightarrow}_{1 \leq i \leq \ell} \left( e^{-H(A_{\sigma_i-1})} \psi_{\lambda_i-i} e^{H(\beta_i)} e^{H(A_{\sigma_i-1})} \right) \lvert -\ell \rangle, \\ \lvert \lambda \rangle_{\sigma}^{[\boldsymbol{\alpha},\boldsymbol{\beta}]} & := \prod^{\rightarrow}_{1 \leq i \leq \ell} \left( e^{H^*(A_{\sigma_i})} \psi_{\lambda_i-i} e^{-H^*(\beta_i)} e^{-H^*(A_{\sigma_i})} \right) { e^{H^\ast(A_{\sigma_{\ell}})}} \lvert -\ell \rangle, \end{align*} $$

where $\sigma = (\sigma _1, \sigma _2, \dotsc , \sigma _{\ell }) \in \mathbb {Z}_{\geq 0}^{\ell }$ and $A_k = -\boldsymbol {\alpha }_k = (-\alpha _1, \dotsc , - \alpha _k)$ . When $\sigma = \lambda $ , we simply write $\lvert \lambda \rangle _{[\boldsymbol {\alpha },\boldsymbol {\beta }]} := \lvert \lambda \rangle ^{\lambda }_{[\boldsymbol {\alpha },\boldsymbol {\beta }]}$ and $\lvert \lambda \rangle ^{[\boldsymbol {\alpha },\boldsymbol {\beta }]} := \lvert \lambda \rangle _{\lambda }^{[\boldsymbol {\alpha },\boldsymbol {\beta }]}$ . We use the notation

$$\begin{align*}\lvert \lambda \rangle_{[\boldsymbol{\beta}]} := \lvert \lambda \rangle^\sigma_{[0,\boldsymbol{\beta}]}, \qquad\qquad \lvert \lambda \rangle^{[\boldsymbol{\beta}]} := \lvert \lambda \rangle_\sigma^{[0,\boldsymbol{\beta}]}, \qquad\qquad \lvert \lambda \rangle := \lvert \lambda \rangle_{\sigma}^{[0,0]} = \lvert \lambda \rangle^{\sigma}_{[0,0]} \end{align*}$$

for brevity (note that these are independent of $\sigma $ ). We will typically restrict ourselves to the subspace $\mathcal {F}^0$ , which we describe as the span of either of the bases [Reference Iwao, Motegi and ScrimshawIMS24, Thm. 3.10]

$$\begin{align*}\{\lvert \lambda \rangle_{[\boldsymbol{\alpha},\boldsymbol{\beta}]}\}_{\lambda \in \mathcal{P}}, \qquad\qquad \{\lvert \lambda \rangle^{[\boldsymbol{\alpha},\boldsymbol{\beta}]}\}_{\lambda \in \mathcal{P}}. \end{align*}$$

There is also the dual $\mathcal {A}$ -representation $\mathcal {F}^*$ , which has a canonical bilinear pairing that satisfies

$$\begin{align*}\langle k | m \rangle = \delta_{km}, \qquad\qquad (\langle w \rvert X) \lvert v \rangle = \langle w \rvert (X \lvert v \rangle), \end{align*}$$

for all $k,m \in \mathbb {Z}$ , $X \in \mathcal {A}$ , $\langle w \rvert \in \mathcal {F}^*$ , and $\lvert v \rangle \in \mathcal {F}$ . We abbreviate $\langle X \rangle = \langle 0 \rvert X \lvert 0 \rangle $ , and note that $\lvert k \rangle ^* = \langle k \rvert $ . Define

$$\begin{align*}{}_{[\boldsymbol{\alpha},\boldsymbol{\beta}]}\langle \lambda \rvert_{\sigma} = (\lvert \lambda \rangle_{\sigma}^{[\boldsymbol{\alpha},\boldsymbol{\beta}]})^*, \qquad\qquad {}^{[\boldsymbol{\alpha},\boldsymbol{\beta}]}\langle \lambda \rvert^{\sigma} = (\lvert \lambda \rangle^{\sigma}_{[\boldsymbol{\alpha},\boldsymbol{\beta}]})^*, \end{align*}$$

and similar abbreviations as above. With respect to this inner product, we have the orthonormal bases [Reference Iwao, Motegi and ScrimshawIMS24, Thm. 3.10]

(2.6) $$ \begin{align} {}_{[\boldsymbol{\alpha},\boldsymbol{\beta}]}\langle \lambda | \mu \rangle_{[\boldsymbol{\alpha},\boldsymbol{\beta}]} = {}^{[\boldsymbol{\alpha},\boldsymbol{\beta}]}\langle \lambda | \mu \rangle^{[\boldsymbol{\alpha},\boldsymbol{\beta}]} = \delta_{\lambda\mu}. \end{align} $$

Moreover, there is an isomorphism from $\mathcal {F}^0$ to symmetric functions defined by $\lvert v \rangle \mapsto \langle 0 \rvert e^{H({\mathbf {x}}/{\mathbf {y}})} \lvert v \rangle $ , which satisfies [Reference Iwao, Motegi and ScrimshawIMS24, Cor. 4.2, Eq. (4.1)]

(2.7) $$ \begin{align} G_{\lambda /\!\!/ \mu}({\mathbf{x}}; \boldsymbol{\alpha}, \boldsymbol{\beta}) = {}^{[\boldsymbol{\alpha},\boldsymbol{\beta}]} \langle \mu \rvert e^{H({\mathbf{x}})} \lvert \lambda \rangle^{[\boldsymbol{\alpha},\boldsymbol{\beta}]}, \qquad\qquad g_{\lambda / \mu}({\mathbf{x}}; \boldsymbol{\alpha}, \boldsymbol{\beta}) = {}_{[\boldsymbol{\alpha},\boldsymbol{\beta}]} \langle \mu \rvert e^{H({\mathbf{x}})} \lvert \lambda \rangle_{[\boldsymbol{\alpha},\boldsymbol{\beta}]}. \end{align} $$

From [Reference Iwao, Motegi and ScrimshawIMS24, Thm. 4.3], we also have

(2.8) $$ \begin{align} G_{\lambda' /\!\!/ \mu'}({\mathbf{x}}; \boldsymbol{\alpha}, \boldsymbol{\beta}) = {}^{[\boldsymbol{\alpha},\boldsymbol{\beta}]} \langle \mu \rvert e^{J({\mathbf{x}})} \lvert \lambda \rangle^{[\boldsymbol{\alpha},\boldsymbol{\beta}]}, \qquad\qquad g_{\lambda' / \mu'}({\mathbf{x}}; \boldsymbol{\alpha}, \boldsymbol{\beta}) = {}_{[\boldsymbol{\alpha},\boldsymbol{\beta}]} \langle \mu \rvert e^{J({\mathbf{x}})} \lvert \lambda \rangle_{[\boldsymbol{\alpha},\boldsymbol{\beta}]}. \end{align} $$

2.4 Particle processes

We describe the four different versions of the discrete totally asymmetric simple exclusion process (TASEP) given in [Reference Dieker and WarrenDW08]. All of these processes will be considered using a bosonic presentation following [Reference Dieker and WarrenDW08], where they are given by particles labeled $1, \dotsc , \ell $ that can occupy the same sites on the lattice $\mathbb {Z}_{\geq 0}$ . The positions of these particles will remain in order and only move to the right (i.e., increase in value), and so we can index states by partitions $\lambda $ , where $\lambda _i$ corresponds to the position of the i-th particle. To obtain a fermionic presentation, and to justify calling this a TASEP, simply move the i-th particle from position $\lambda _i$ to $\lambda _i - i$ ; hence, we can identify the shape $\lambda = \emptyset $ with the step initial condition:

Unless otherwise noted, we will consider our particle configurations using the bosonic description.

Example 2.6. We identify $\lambda =(3,3,1,0)$ with the (bosonic) particle distribution

with the first and second particles in position $3$ , the third particle in position $1$ , and the fourth particle in position $0$ . In terms of the fermionic presentation, we have

The j-th particle at time i will attempt to move $w_{ji}$ (which we take as a random variable) steps to the right according to either the geometric or Bernoulli distribution

$$ \begin{gather*} \mathsf{P}_{Ge}(w_{ji} = k) = (1 - \pi_j x_i) (\pi_j x_i)^k \quad (k \in \mathbb{Z}_{\geq 0}), \\ \mathsf{P}_{Be}(w_{ji} = 1) = \frac{\rho_j x_i}{1 + \rho_j x_i} \quad \text{and} \quad \mathsf{P}_{Be}(w_{ji} = 0) = (1 + \rho_j x_i)^{-1}, \end{gather*} $$

respectively, where $\pi _j x_i \in (0, 1)$ and $\rho _j x_i> 0$ . We can assemble all of these random variables, ranging over all particles and time, into a (random) matrix $W = [w_{ji}]_{i,j}$ . Since we are working in discrete time, we need a rule to determine the behavior when two particles decide to move simultaneously that results in a conflict. There are two natural ways to resolve this.

  • The larger particle pushes the smaller particle.

  • The smaller particle blocks the larger particle.

For the geometric (resp. Bernoulli) distribution, we will update the particles from largest-to-smallest (resp. smallest-to-largest). As such, we obtain four different variations of discrete TASEP, which we organize using the same indexing as [Reference Dieker and WarrenDW08]:

  1. 1. Geometric distribution with pushing behavior.

  2. 2. Bernoulli distribution with blocking behavior.

  3. 3. Geometric distribution with blocking behavior.

  4. 4. Bernoulli distribution with pushing behavior.

Table 2 provides a summary of these cases and Figure 1 gives an example of the blocking versus pushing behavior. We also give a recursive description of these processes following (1.2); here we give the equivalent bosonic version. For $\ell \ge 2$ , we introduce $ \Omega _\ell ^b:=\{ (z_1,z_2,\dots ,z_\ell ) \in \mathbb {Z}_{ \geq 0}^\ell : z_1 \geq z_2 \geq \cdots \geq z_\ell \} $ and define four types of evolution of particles $Y_t^{\mathrm {A}}$ , $Y_t^{\mathrm {B}}$ , $Y_t^{\mathrm {C}}$ and $Y_t^{\mathrm {D}}$ on $\Omega _\ell ^b$ .

Table 2 Summary of the four cases of discrete TASEP that we consider in this paper.

Figure 1 Examples of the third particle making a jump of $6$ steps with the pushing (left) and blocking (right) behaviors.

Definition 2.7. The four particle processes are defined by having their positions at time $t+1$ given recursively as

$$ \begin{align*} Y_{t+1}^{\mathrm{A}}(\ell)&=Y_t^{\mathrm{A}}(\ell)+\xi^{\mathrm{A}}(\ell,t+1), \ \ \ Y_{t+1}^{\mathrm{B}}(1)=Y_t^{\mathrm{B}}(1)+\xi^{\mathrm{B}}(1,t+1), \nonumber \\ Y_{t+1}^{\mathrm{C}}(1)&=Y_t^{\mathrm{C}}(1)+\xi^{\mathrm{C}}(1,t+1), \ \ \ Y_{t+1}^{\mathrm{D}}(\ell)=Y_t^{\mathrm{D}}(\ell)+\xi^{\mathrm{D}}(\ell,t+1),\end{align*} $$

and

(2.9a) $$ \begin{align} Y_{t+1}^{\mathrm{A}}(k) &= \max(Y_t^{\mathrm{A}}(k), Y_{t+1}^{\mathrm{A}}(k+1))+\xi^{\mathrm{A}}(k,t+1), \end{align} $$
(2.9b) $$ \begin{align} Y_{t+1}^{\mathrm{B}}(k) &= \min(Y_t^{\mathrm{B}}(k)+\xi^{\mathrm{B}}(k,t+1) ,Y_{t+1}^{\mathrm{B}}(k-1)),\end{align} $$
(2.9c) $$ \begin{align}Y_{t+1}^{\mathrm{C}}(k) &= \min(Y_t^{\mathrm{C}}(k)+\xi^{\mathrm{C}}(k,t+1),Y_{t}^{\mathrm{C}}(k-1)), \end{align} $$
(2.9d) $$ \begin{align} Y_{t+1}^{\mathrm{D}}(k) &= \max(Y_t^{\mathrm{D}}(k)+\xi^{\mathrm{D}}(k,t+1) ,Y_{t+1}^{\mathrm{D}}(k+1)),\end{align} $$

for $k=1,\dots ,\ell -1$ for Cases (A), (D) and $k=2,\dots ,\ell $ for Cases (B), (C), where $\xi ^{\mathrm {A}}(k,t+1)$ and $\xi ^{\mathrm {C}}(k,t+1)$ (resp. $\xi ^{\mathrm {B}}(k,t+1)$ and $\xi ^{\mathrm {D}}(k,t+1)$ ) are random variables having distribution $\mathsf {P}_{Ge}$ (resp. $\mathsf {P}_{Be}$ ).

For a sequence of integers $\lambda =(\lambda _1,\lambda _2,\dots ,\lambda _\ell )$ with $\lambda _1 \ge \lambda _2 \ge \cdots \ge \lambda _\ell $ , we write $Y=\lambda $ if $Y \in \Omega _\ell ^b$ satisfies $Y(j)=\lambda _j$ for $j=1,2,\dots ,\ell $ .
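
To make the four update rules concrete, the following Python sketch (not taken from [Reference Dieker and WarrenDW08]; the helper names and numerical parameter values are ours) samples the jumps $\xi $ from $\mathsf {P}_{Ge}$ and $\mathsf {P}_{Be}$ and performs a single time step of each recursion, with particle j stored at index $j-1$ .

```python
import random

def sample_geometric(p):
    """P(k) = (1 - p) * p**k for k >= 0, with p = pi_j * x_i in (0, 1)."""
    k = 0
    while random.random() < p:
        k += 1
    return k

def sample_bernoulli(r):
    """P(1) = r / (1 + r) and P(0) = 1 / (1 + r), with r = rho_j * x_i > 0."""
    return 1 if random.random() < r / (1.0 + r) else 0

def step_A(Y, pi, x):   # (2.9a): geometric jumps, pushing; update k = ell, ..., 1
    Y, ell = list(Y), len(Y)
    for k in range(ell - 1, -1, -1):
        pushed = Y[k + 1] if k + 1 < ell else 0         # Y_{t+1}(k+1), already updated
        Y[k] = max(Y[k], pushed) + sample_geometric(pi[k] * x)
    return Y

def step_B(Y, rho, x):  # (2.9b): Bernoulli jumps, blocking; update k = 1, ..., ell
    Y, ell = list(Y), len(Y)
    for k in range(ell):
        jump = Y[k] + sample_bernoulli(rho[k] * x)
        Y[k] = jump if k == 0 else min(jump, Y[k - 1])  # blocked by the new Y_{t+1}(k-1)
    return Y

def step_C(Y, pi, x):   # (2.9c): geometric jumps, blocked by the *old* position of k-1
    old = list(Y)
    new = [old[0] + sample_geometric(pi[0] * x)]
    for k in range(1, len(old)):
        new.append(min(old[k] + sample_geometric(pi[k] * x), old[k - 1]))
    return new

def step_D(Y, rho, x):  # (2.9d): Bernoulli jumps, pushing; update k = ell, ..., 1
    Y, ell = list(Y), len(Y)
    for k in range(ell - 1, -1, -1):
        pushed = Y[k + 1] if k + 1 < ell else 0
        Y[k] = max(Y[k] + sample_bernoulli(rho[k] * x), pushed)
    return Y

Y0 = [0, 0, 0]                        # three particles in the step initial condition
pi, rho, x = [0.5, 0.4, 0.3], [1.0, 0.8, 0.6], 0.5
print("A:", step_A(Y0, pi, x), "B:", step_B(Y0, rho, x),
      "C:", step_C(Y0, pi, x), "D:", step_D(Y0, rho, x))
```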

The bosonic version of Theorem 1.1 is that the transition probabilities of the four particle systems $Y=Y^{\mathrm {A}},Y^{\mathrm {B}}, Y^{\mathrm {C}}, Y^{\mathrm {D}}$ from the initial configuration $Y_0=\mu $ at time zero to the final configuration $Y_n=\lambda $ at time n are given by

$$ \begin{align*} \mathsf{P}(Y^{\mathrm{A}}_n=\lambda|Y^{\mathrm{A}}_0=\mu) & =\mathsf{P}_{A,n}(\lambda | \mu), \ \mathsf{P}(Y^{\mathrm{B}}_n=\lambda|Y^{\mathrm{B}}_0=\mu) =\mathsf{P}_{B,n}(\lambda | \mu), \\ \mathsf{P}(Y^{\mathrm{C}}_n=\lambda|Y^{\mathrm{C}}_0=\mu) & =\mathsf{P}_{C,n}(\lambda | \mu), \ \mathsf{P}(Y^{\mathrm{D}}_n=\lambda|Y^{\mathrm{D}}_0=\mu) =\mathsf{P}_{D,n}(\lambda | \mu). \end{align*} $$

Recall that $\mathsf {P}_{A,n}(\lambda | \mu )$ , $\mathsf {P}_{B,n}(\lambda | \mu )$ , $\mathsf {P}_{C,n}(\lambda | \mu )$ , and $\mathsf {P}_{D,n}(\lambda | \mu )$ are Grothendieck polynomials multiplied by simple overall factors, given explicitly in Theorem 1.1. By a slight abuse of notation, we also use the same symbols for the corresponding transition probabilities; in Section 4, where we give the proof, these symbols are to be read as transition probabilities.

Let us give an equivalent two-dimensional description. Let $G(j, i)$ denote the position of the j-th particle at time i in the bosonic formulation. For the pushing behavior, we can further realize it as a directed last-passage percolation model on W. To see the last-passage percolation, define

$$\begin{align*}G(k, n) = \max_{\Pi} \sum_{(j,i) \in \Pi} w_{ji}, \qquad\qquad \mathbf{G}(n) = \big(G(1, n), G(2, n), \dotsc, G(\ell, n) \big), \end{align*}$$

where the maximum is taken over a certain set of paths from $(\ell , 1)$ to $(k, n)$ . The paths are given in the natural matrix coordinates, with $(r, c)$ being the r-th row and c-th column. For the geometric distribution, the paths use unit steps to the right or up; that is, from position $(j_a, i_a)$ the next position is either $(j_{a+1}, i_{a+1}) = (j_a, i_a+1)$ or $(j_a-1, i_a)$ . Translating this into the update rule described in [Reference Dieker and WarrenDW08] (cf. (2.9a)), we have

$$\begin{align*}G(j,i) = \max(G(j,i-1), G(j+1,i)) + w_{ji}. \end{align*}$$

Note $G(j,i)$ corresponds to $Y_i(j)$ and $w_{ji}$ corresponds to $\xi (j,i)$ in the previous bosonic particle process description.

On the other hand, the Bernoulli distribution uses paths such that for every time $w_{j_a i_a} = 1$ , we must have the next step move right $(j_{a+1}, i_{a+1}) = (j_a, i_a+1)$ , and we also allow paths to end at $(k', n)$ for some $k \leq k' \leq \ell $ . Then from a simple argument using conditional probability, we see that the position of the particles at time n is given by $\mathbf {G}(n)$ (see, e.g., [Reference Dieker and WarrenDW08, Reference JohanssonJoh00, Reference Motegi and ScrimshawMS25]). To obtain the positions of the particles for the blocking behavior, this becomes a first-passage percolation model as we replace $\max $ by $\min $ , but we have to make some other changes. The first is that we shift the indices so that the $w_{ji}$ correspond to the weights on the horizontal edges $(i,j-1) \to (i, j)$ with the other edges being SE diagonal (resp. vertical) edges with weight $0$ for the geometric (resp. Bernoulli) case. The other is that we instead start at $(0, 0)$ .
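
The agreement between the recursion for $G(j,i)$ and the Case A particle positions can be checked directly. The following sketch is ours and uses zero boundary values for the recursion, which is harmless here since all weights are nonnegative.

```python
import random

def geom(p):
    """Geometric sample: P(k) = (1 - p) * p**k."""
    k = 0
    while random.random() < p:
        k += 1
    return k

def lpp(w):
    """Last-passage times from G(j, i) = max(G(j, i-1), G(j+1, i)) + w_{ji},
       with zero boundary values.  w[j][i] is the jump of particle j+1 at
       time i+1 (0-based arrays).  Returns an ell x n matrix of positions."""
    ell, n = len(w), len(w[0])
    G = [[0] * (n + 1) for _ in range(ell + 1)]
    for i in range(1, n + 1):
        for j in range(ell - 1, -1, -1):
            G[j][i] = max(G[j][i - 1], G[j + 1][i]) + w[j][i - 1]
    return [row[1:] for row in G[:ell]]

def push_step(Y, wcol):
    """One step of (2.9a) using the prescribed jumps wcol (no randomness)."""
    Y = list(Y)
    for k in range(len(Y) - 1, -1, -1):
        Y[k] = max(Y[k], Y[k + 1] if k + 1 < len(Y) else 0) + wcol[k]
    return Y

pi, x, ell, n = [0.3, 0.4, 0.5], 0.5, 3, 4
w = [[geom(pi[j] * x) for _ in range(n)] for j in range(ell)]
G = lpp(w)
Y = [0] * ell
for i in range(n):   # column i of G agrees with the particle positions Y_i
    Y = push_step(Y, [w[j][i] for j in range(ell)])
    assert Y == [G[j][i] for j in range(ell)]
print("LPP times reproduce the Case A particle positions:", G)
```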

Example 2.8. Consider $\ell = 3$ and $n = 4$ . Then the correspondence between a random matrix and the motion of particles in Case A, resp. Case C, is given by

First note that all of these transition probabilities $\mathsf {P}_X(\lambda | \mu ) := \mathsf {P}_X(\mathbf {G}(1) = \lambda | \mathbf {G}(0) = \mu )$ in Case X are independent of the value of $\ell $ provided $\ell (\lambda ) \leq \ell $ . Indeed, the j-th particle for $j> \ell (\lambda )$ must be fixed in place, which occurs with probability $1$ . Note that by taking $\ell \to \infty $ , we can effectively ignore the value of $\ell $ if desired and identify states with elements of the fermionic Fock space $\mathcal {F}$ (considered with the shifted positions $\lambda _j - j$ ). Furthermore, the step initial condition becomes $\lvert 0 \rangle $ , and this is sometimes known as the Dirac sea.
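
For concreteness, the translation between the bosonic coordinates (the partition $\lambda $ ) and the fermionic coordinates (the shifted positions $\lambda _j - j$ ) can be written as a tiny helper; the function name and the cutoff $\ell $ below are ours.

```python
def bosonic_to_fermionic(lam, ell):
    """Shifted positions lambda_j - j for j = 1, ..., ell (padding with zero rows).
    The step initial condition lambda = () maps to -1, -2, -3, ... (the Dirac sea)."""
    lam = list(lam) + [0] * (ell - len(lam))
    return [lam[j - 1] - j for j in range(1, ell + 1)]

# The partition of Example 2.6:
print(bosonic_to_fermionic((3, 3, 1, 0), 6))   # [2, 1, -2, -4, -5, -6]
```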

3 Schur operators for canonical Grothendieck polynomials

We begin by briefly reviewing the Fomin–Greene theory of symmetric functions in noncommutative variables. Let $\mathbf {k} [\mathcal {P}]$ denote the $\mathbf {k}$ -module with basis indexed by $\mathcal {P}$ , the set of all partitions.

3.1 Noncommutative blocking operators

We denote by $\kappa _i \colon \mathbf {k}[\mathcal {P}] \to \mathbf {k}[\mathcal {P}]$ the i-th (row) Schur operator, which adds a box to the i-th row of a partition $\lambda $ if $\lambda _i < \lambda _{i-1}$ (that is, we can add the box and obtain a partition) and is $0$ otherwise. The Schur operators satisfy the Knuth relations.

We define the linear operator $U_i^{(\boldsymbol {\alpha }, \boldsymbol {\beta })}$ by

$$\begin{align*}U_i^{(\boldsymbol{\alpha}, \boldsymbol{\beta})} := \kappa_i + \Theta_i,\quad \text{where}\quad \Theta_i \cdot \lambda := \begin{cases} -\alpha_{\lambda_i} \lambda & \text{if } \lambda_i < \lambda_{i-1}, \\ \beta_{i-1} \lambda & \text{if } \lambda_i = \lambda_{i-1}, \end{cases} \end{align*}$$

for any $\lambda \in \mathcal {P}$ . We consider $\lambda _0 = \infty $ and $\alpha _0 = 0$ (although our proofs could have $\alpha _0$ be an arbitrary parameter). When there is no ambiguity in the parameters, we will simply write $U_i := U_i^{(\boldsymbol {\alpha },\boldsymbol {\beta })}$ . When $\boldsymbol {\alpha } = 0$ and $\boldsymbol {\beta } = \beta $ , the operators $\{U_i^{(0, \beta )}\}_{i=1}^{\infty }$ are the operators introduced in [Reference IwaoIwa20, Sec. 6].
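
To make the action of $U_i^{(\boldsymbol {\alpha },\boldsymbol {\beta })}$ concrete, here is a small Python sketch (the helper name and the numerical parameter values are ours, purely for illustration) that applies a single $U_i$ to a linear combination of partitions stored as a dictionary.

```python
from collections import defaultdict

def apply_U(i, vec, alpha, beta):
    """U_i = kappa_i + Theta_i acting on {partition (tuple): coefficient}.
    alpha[m] plays the role of alpha_m (with alpha[0] = 0) and beta[m] of beta_m;
    the convention lambda_0 = infinity is built in.  A sketch, not the paper's code."""
    out = defaultdict(float)
    for lam, c in vec.items():
        parts = list(lam) + [0] * max(0, i - len(lam))   # pad so that row i exists
        prev = parts[i - 2] if i >= 2 else float("inf")  # lambda_{i-1}
        cur = parts[i - 1]                               # lambda_i
        if cur < prev:
            mu = parts[:]                                # kappa_i: add a box in row i
            mu[i - 1] += 1
            while mu and mu[-1] == 0:
                mu.pop()
            out[tuple(mu)] += c
            out[lam] += -alpha[cur] * c                  # Theta_i gives -alpha_{lambda_i}
        else:                                            # lambda_i = lambda_{i-1}
            out[lam] += beta[i - 1] * c                  # Theta_i gives beta_{i-1}
    return dict(out)

alpha = [0.0, 0.2, 0.3, 0.4, 0.5, 0.6]   # alpha_0 = 0 as in the text
beta = [0.0, 0.7, 0.8, 0.9, 1.0]
lam = {(4, 2, 1, 1): 1.0}
print(apply_U(2, lam, alpha, beta))   # row 2: 2 < 4, so a box is added (plus a -alpha_2 term)
print(apply_U(4, lam, alpha, beta))   # row 4: 1 = 1, so we only get beta_3 * lambda
```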

Lemma 3.1. The operators $\mathbf {U} = \{U_i\}_{i=1}^{\infty }$ satisfy the weak Knuth relations.

Proof. Since $[\Theta _i,\Theta _j]=0$ for any $i,j$ and

(3.1) $$ \begin{align} [\kappa_i,\Theta_j]=0\quad \text{for } i\neq j-1,j, \end{align} $$

the “nonlocal commutativity” $U_iU_j=U_jU_i$ for $|i-j|\geq 2$ immediately follows from $\kappa _i\kappa _j=\kappa _j\kappa _i$ for $|i-j|\geq 2$ . This fact implies (2.2a) and (2.2b).

Define a linear operator $T_i \colon \mathbf {k}[\mathcal {P}] \to \mathbf {k}[\mathcal {P}]$ by

$$\begin{align*}T_i\cdot \lambda:= \begin{cases} 0 & \text{if } \lambda_{i+1} < \lambda_{i}, \\ \kappa_i\cdot \lambda & \text{if } \lambda_{i+1} = \lambda_{i} \end{cases}\qquad \text{for any } \lambda\in\mathcal{P}. \end{align*}$$

Note that, for any partition $\lambda $ , $T_i\cdot \lambda \neq 0$ if and only if $\lambda _{i+1}=\lambda _i<\lambda _{i-1}$ . We need the following commutation relations to prove (2.2c):

(3.2a) $$ \begin{align} T_i \kappa_i & = \kappa_{i+1}^2T_i=0, \end{align} $$
(3.2b) $$ \begin{align} \Theta_{i+1} \kappa_{i+1}T_i &= \beta_i \kappa_{i+1}T_i. \end{align} $$
(3.2c) $$ \begin{align} [\kappa_i, \kappa_{i+1}] & = -\kappa_{i+1}T_i, \end{align} $$
(3.2d) $$ \begin{align} \Theta_{i+1} T_i & = T_i\Theta_i, \end{align} $$
(3.2e) $$ \begin{align} [\kappa_i, \Theta_{i+1}] & = \beta_i T_i - \Theta_{i+1}T_i. \end{align} $$

Equation (3.2a) follows from the facts that (i) $\mu :=\kappa _i\cdot \lambda $ satisfies $\mu _{i+1}<\mu _{i}$ whenever $\mu \neq 0$ and that (ii) $\nu :=T_i\cdot \lambda $ satisfies $\nu _{i}=\nu _{i+1}+1$ whenever $\nu \neq 0$ for any $\lambda \in \mathcal {P}$ . Equation (3.2b) follows from the fact that $\zeta :=\kappa _{i+1}T_i\cdot \lambda $ satisfies $\zeta _{i+1}=\zeta _i$ whenever $\zeta \neq 0$ . Equation (3.2c) is proved by noting that $\lambda _{i+1}<\lambda _{i}\Rightarrow \zeta = [\kappa _i,\kappa _{i+1}]\cdot \lambda = 0$ and $\lambda _{i+1}=\lambda _{i}\Rightarrow \zeta =[\kappa _i,\kappa _{i+1}]\cdot \lambda =-\kappa _{i+1}\kappa _i\cdot \lambda $ . To prove (3.2d), it suffices to check that $\Theta _{i+1}T_i\cdot \lambda =T_i\Theta _{i}\cdot \lambda =\alpha _{\lambda _{i}}\kappa _i\cdot \lambda $ whenever $T_i\cdot \lambda \neq 0$ . To prove (3.2e), we consider the following three cases: (a) If $T_i\cdot \lambda \neq 0$ ( $\Leftrightarrow \lambda _{i+1}=\lambda _i<\lambda _{i-1} \Leftrightarrow T_i\cdot \lambda =\kappa _i\cdot \lambda \neq 0$ ), we have $\kappa _i\Theta _{i+1}\cdot \lambda =\beta _i\kappa _i\cdot \lambda =\beta _iT_i\cdot \lambda $ . (b) If $\lambda _{i+1}<\lambda _i$ , we have $\kappa _i\Theta _{i+1}\cdot \lambda =\Theta _{i+1}\kappa _i\cdot \lambda =\beta _i\kappa _i\cdot \lambda $ . (c) If $\lambda _{i}=\lambda _{i-1}$ , we have $\kappa _i\Theta _{i+1}\cdot \lambda =\Theta _{i+1}\kappa _i\cdot \lambda =0$ . In each case, (3.2e) holds.

Thus we have

$$ \begin{align*} [U_i,U_{i+1}]=[\kappa_i+\Theta_i,\kappa_{i+1}+\Theta_{i+1}] \stackrel{({\scriptstyle 3.1})}{=} [\kappa_i,\kappa_{i+1}]+[\kappa_{i},\Theta_{i+1}] \stackrel{({\scriptstyle 3.2c}),({\scriptstyle 3.2e})}{=}(\beta_i-U_{i+1})T_i, \end{align*} $$
$$ \begin{align*} [U_i,U_{i+1}]U_i =(\beta_i-U_{i+1})T_iU_i \stackrel{({\scriptstyle 3.2a})}{=}(\beta_i-U_{i+1})T_i\Theta_i \stackrel{({\scriptstyle 3.2d})}{=}(\beta_i-U_{i+1})\Theta_{i+1}T_i. \end{align*} $$

and

$$ \begin{align*} U_{i+1}[U_{i},U_{i+1}] =U_{i+1}(\beta_i-U_{i+1})T_i =(\beta_i-U_{i+1})U_{i+1}T_i =(\beta_i-U_{i+1})(\kappa_{i+1}+\Theta_{i+1})T_i. \end{align*} $$

Therefore, we obtain

$$ \begin{align*} [U_i+U_{i+1},U_{i+1}U_i] &=[U_i,U_{i+1}]U_i-U_{i+1}[U_{i},U_{i+1}] =(\beta_i-U_{i+1})\kappa_{i+1}T_i\\ & \stackrel{({\scriptstyle 3.2a})}{=}(\beta_i-\Theta_{i+1})\kappa_{i+1}T_i \stackrel{({\scriptstyle 3.2b})}{=}0, \end{align*} $$

which implies (2.2c).
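
The two identities used above, $U_iU_j=U_jU_i$ for $|i-j|\geq 2$ and $[U_i+U_{i+1},U_{i+1}U_i]=0$ , can be sanity-checked numerically. The following sketch (which repeats the apply_U helper from the previous code block so that it is self-contained; the random parameter values and test partitions are ours) applies both sides of each identity to a few partitions and compares the resulting linear combinations.

```python
from collections import defaultdict

def apply_U(i, vec, alpha, beta):
    out = defaultdict(float)
    for lam, c in vec.items():
        parts = list(lam) + [0] * max(0, i - len(lam))
        prev = parts[i - 2] if i >= 2 else float("inf")
        cur = parts[i - 1]
        if cur < prev:
            mu = parts[:]
            mu[i - 1] += 1
            while mu and mu[-1] == 0:
                mu.pop()
            out[tuple(mu)] += c
            out[lam] += -alpha[cur] * c
        else:
            out[lam] += beta[i - 1] * c
    return dict(out)

def apply_word(word, lam, alpha, beta):
    """Apply the operator product U_{word[0]} ... U_{word[-1]} (rightmost acts first)."""
    vec = {tuple(lam): 1.0}
    for i in reversed(word):
        vec = apply_U(i, vec, alpha, beta)
    return vec

def close(u, v, tol=1e-12):
    return all(abs(u.get(k, 0.0) - v.get(k, 0.0)) < tol for k in set(u) | set(v))

alpha = [0.0, 0.21, 0.33, 0.45, 0.52, 0.66, 0.71, 0.28, 0.39, 0.44]
beta = [0.11, 0.72, 0.83, 0.94, 0.35, 0.46, 0.57, 0.68, 0.79, 0.81]
for lam in [(4, 2, 1, 1), (3, 3, 1), (2, 2, 2), ()]:
    for i in range(1, 5):
        # far commutativity:  U_i U_{i+2} = U_{i+2} U_i
        assert close(apply_word([i, i + 2], lam, alpha, beta),
                     apply_word([i + 2, i], lam, alpha, beta))
        # the relation established above:  [U_i + U_{i+1}, U_{i+1} U_i] = 0
        lhs, rhs = defaultdict(float), defaultdict(float)
        for w in ([i, i + 1, i], [i + 1, i + 1, i]):
            for k, c in apply_word(w, lam, alpha, beta).items():
                lhs[k] += c
        for w in ([i + 1, i, i], [i + 1, i, i + 1]):
            for k, c in apply_word(w, lam, alpha, beta).items():
                rhs[k] += c
        assert close(lhs, rhs)
print("commutation checks passed")
```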

Theorem 3.2. We have

(3.3) $$ \begin{align} {}^{[\boldsymbol{\alpha},\boldsymbol{\beta}]} \langle \lambda \rvert S_{\mu}(a_1, a_2, \ldots) = {}^{[\boldsymbol{\alpha},\boldsymbol{\beta}]} \langle s_{\mu}(\mathbf{U}/\boldsymbol{\beta}) \cdot \lambda \rvert. \end{align} $$

To prove Theorem 3.2, we need the following computations. For simplicity, let $v(\lambda; \sigma ) := {}^{[\boldsymbol {\alpha },\boldsymbol {\beta }]}\langle \lambda \rvert ^{\sigma }$ . Let $\epsilon _k$ be the k-th standard basis vector in $\mathbb {Z}^N$ for some $N \gg 1$ . For any subset $K = \{k_1, \dotsc , k_m \} \subseteq [N]$ , let $\epsilon _K = \epsilon _{k_1} + \cdots + \epsilon _{k_m}$ .

Lemma 3.3. For any sequences $\lambda $ and $\sigma $ (not necessarily partitions), we have

$$\begin{align*}v(\lambda + \epsilon_k; \sigma) = v(\lambda + \epsilon_k; \sigma + \epsilon_k) - \alpha_{\sigma_k} v(\lambda; \sigma). \end{align*}$$

Proof. This follows by applying to the definition of $v(\lambda + \epsilon _k; \sigma )$ the relation

$$\begin{align*}\psi_{\lambda_k +1 - k}^* e^{H^*(\gamma)} = e^{H^*(\gamma)} \psi_{\lambda_k + 1 - k}^* + \gamma \psi^*_{\lambda_k - k} e^{H^*(\gamma)}, \end{align*}$$

for $\gamma = -\alpha _{\sigma _k}$ , which follows from the $\ast $ version of Equation (2.3a).

Lemma 3.4. Suppose $\lambda _k = \lambda _{k-1}$ and $\sigma _k = \sigma _{k-1}$ . Then

$$\begin{align*}v(\lambda + \epsilon_k; \sigma) = \beta_{k-1} v(\lambda; \sigma). \end{align*}$$

Proof. This follows from the definition of $v(\lambda; \sigma )$ and the rectification lemma [Reference Iwao, Motegi and ScrimshawIMS24, Lemma 3.6]. Indeed, similar to the previous lemma, we have

$$\begin{align*}\psi^*_{\lambda_k - (k-1)} e^{-H^*(\beta_{k-1})} \psi^*_{\lambda_k + 1 - k} = \beta_{k-1} \psi^*_{\lambda_k - (k-1)} e^{-H^*(\beta_{k-1})} \psi^*_{\lambda_k - k} \end{align*}$$

by the $\ast $ version of Equation (2.3a) and recalling $(\psi ^*_{\lambda _k - k})^2 = 0$ .

Lemma 3.5. Let $\lambda $ be a partition. If $K = \{k_1 < k_2 < \dotsc < k_m\}$ , then

$$\begin{align*}{}^{[\boldsymbol{\alpha},\boldsymbol{\beta}]} \langle \lambda + \epsilon_K \rvert_{\lambda} = \sum_{\mu} C_{\lambda}^{\mu} \cdot {}^{[\boldsymbol{\alpha},\boldsymbol{\beta}]} \langle \mu \rvert, \end{align*}$$

where the coefficients $C_{\lambda }^{\mu }$ are given by

$$\begin{align*}U_{k_n} \dotsm U_{k_2} U_{k_1} \cdot \lambda = \sum_{\mu} C_{\lambda}^{\mu} \cdot \mu. \end{align*}$$

Proof. This follows by repeatedly applying Lemma 3.3 and Lemma 3.4 and noting that these two cases precisely encode the action of the operator $U_k$ .

Proof of Theorem 3.2.

Because the operators $\{U_i\}_{i=1}^{\infty }$ satisfy the weak Knuth relations (Lemma 3.1), the Fomin–Greene theorem [Reference Fomin and GreeneFG98] implies that the noncommutative Schur functions $s_{\lambda }(\mathbf {U} / \boldsymbol {\beta })$ span a commutative algebra (under the natural multiplication), as this algebra is generated by the $e_{i}(\mathbf {U} / \boldsymbol {\beta })$ for $i=0,1,2,\dots $ . Therefore, to prove the theorem, it suffices to show (3.3) for the case when $s_\lambda =e_i$ :

$$\begin{align*}{}^{[\boldsymbol{\alpha},\boldsymbol{\beta}]}\langle \lambda \rvert E_i(a_1, a_2, \ldots) = {}^{[\boldsymbol{\alpha},\boldsymbol{\beta}]}\langle e_{i}(\mathbf{U} / \boldsymbol{\beta}) \cdot \lambda \rvert. \end{align*}$$

By applying the commutator relation in (2.4), we have

$$\begin{align*}{}^{[\boldsymbol{\alpha}, \boldsymbol{\beta}]}\langle \lambda \rvert a_i = \sum_{k=1}^{\infty} {}^{[\boldsymbol{\alpha},\boldsymbol{\beta}]}\langle \lambda + i \epsilon_k \rvert_{\lambda} - \sum_{k=1}^{\infty} \beta_k^i \cdot {}^{[\boldsymbol{\alpha},\boldsymbol{\beta}]}\langle \lambda \rvert = \sum_{k=1}^{\infty} {}^{[\boldsymbol{\alpha},\boldsymbol{\beta}]}\langle \lambda + i \epsilon_k \rvert_{\lambda} - p_i(\boldsymbol{\beta}) \cdot {}^{[\boldsymbol{\alpha},\boldsymbol{\beta}]}\langle \lambda \rvert. \end{align*}$$

Next, write $E_i(p_1, p_2, \ldots; \boldsymbol {\beta }) = e_i({\mathbf {x}} \sqcup \boldsymbol {\beta })$ , and by noting the above equation comes from $p_i({\mathbf {x}} \sqcup \boldsymbol {\beta })$ under the identification $p_i({\mathbf {x}}) \equiv a_i$ , we have

(3.4) $$ \begin{align} {}^{[\boldsymbol{\alpha},\boldsymbol{\beta}]}\langle \lambda \rvert E_i(a_1, a_2, \ldots; \boldsymbol{\beta}) = \sum_{\substack{K \subseteq \mathbb{Z} \\ \left\lvert K \right\rvert = i}} {}^{[\boldsymbol{\alpha},\boldsymbol{\beta}]}\langle \lambda + \epsilon_K \rvert_{\lambda} = {}^{[\boldsymbol{\alpha},\boldsymbol{\beta}]}\langle e_i(\mathbf{U}) \cdot \lambda \rvert, \end{align} $$

where the last equality is by Lemma 3.5. By expanding the plethysm for the left-hand side and solving for ${}^{[\boldsymbol {\alpha },\boldsymbol {\beta }]}\langle \lambda \rvert E_i(a_1, a_2, \ldots )$ , we obtain the desired result.

3.2 Noncommutative pushing operators

We define an operator $u_j^{(\boldsymbol {\alpha },\boldsymbol {\beta })}$ recursively as follows. Consider a partition $\mu $ , and let k be minimal such that $\mu _k = \mu _j$ . Let $\nu := \overline {\mu + \epsilon _j}$ be the smallest partition that contains $\mu + \epsilon _j$ (a box added to row j); that is, we have added a box to all rows $k \leq i \leq j$ . Then

(3.5) $$ \begin{align} u_j^{(\boldsymbol{\alpha},\boldsymbol{\beta})} \cdot \mu = \beta_k \cdots \beta_{j-1} \nu + \alpha_{\mu_j+1} \sum_{i=k}^j \prod_{a=k}^{i-1} (\alpha_{\mu_j+1} + \beta_a) \prod_{a=i}^{j-1} \beta_a u_i^{(\boldsymbol{\alpha},\boldsymbol{\beta})} \cdot \nu. \end{align} $$

Note that the result is well-defined in the completion (by the degree) $\mathbf {k} [\![ \mathcal {P} ]\!]$ since each partition only has finitely many contributions. As we will always be looking for a specific term in this sum, working in the completion will not be consequential. We will show the operators $\mathbf {u}^{(\boldsymbol {\alpha },\boldsymbol {\beta })} = (u_1^{(\boldsymbol {\alpha },\boldsymbol {\beta })}, u_2^{(\boldsymbol {\alpha },\boldsymbol {\beta })}, \ldots )$ correspond to separating out the action of the current operator $a_1$ on ${}_{[\boldsymbol {\alpha },\boldsymbol {\beta }]} \langle \mu \rvert $ .

Lemma 3.6. For any sequences $\mu $ and $\sigma $ (not necessarily partitions), we have

(3.6) $$ \begin{align} {}_{[\boldsymbol{\alpha},\boldsymbol{\beta}]} \langle \mu \rvert_{\sigma} = {}_{[\boldsymbol{\alpha},\boldsymbol{\beta}]}\langle \mu \rvert_{\sigma + \epsilon_j} + \alpha_{\sigma_j+1} {}_{[\boldsymbol{\alpha},\boldsymbol{\beta}]}\langle \mu + \epsilon_j \rvert_{\sigma + \epsilon_j}. \end{align} $$

Proof. We rewrite (2.3b) as

$$\begin{align*}\psi_{\mu_j-j}^* = e^{-H(-\alpha_{\sigma_j+1})} \psi^*_{\mu_j-j} e^{H(-\alpha_{\sigma_j+1})} + \alpha_{\sigma_j+1} e^{-H(-\alpha_{\sigma_j+1})} \psi^*_{\mu_j+1-j} e^{H(-\alpha_{\sigma_j+1})}, \end{align*}$$

and so the claim follows.

Lemma 3.7. For any sequences $\mu $ and $\sigma $ such that $\mu _j = \mu _{j-1}$ and $\sigma _j = \sigma _{j-1}$ , we have

(3.7) $$ \begin{align} {}_{[\boldsymbol{\alpha},\boldsymbol{\beta}]} \langle \mu + \epsilon_j \rvert_{\sigma} = \beta_{j-1} \cdot {}_{[\boldsymbol{\alpha},\boldsymbol{\beta}]}\langle \mu + \epsilon_{j-1} + \epsilon_j \rvert_{\sigma}. \end{align} $$

Proof. The claim follows from (the star version of) the rectification lemma [Reference Iwao, Motegi and ScrimshawIMS24, Lemma 3.6].

Lemma 3.8. We have

$$\begin{align*}{}_{[\boldsymbol{\alpha}, \boldsymbol{\beta}]} \langle \mu + \epsilon_j \rvert = {}_{[\boldsymbol{\alpha}, \boldsymbol{\beta}]} \langle u_j^{(\boldsymbol{\alpha},\boldsymbol{\beta})} \cdot \mu \rvert. \end{align*}$$

Proof. Consider $k \leq j$ minimal such that $\mu _j = \mu _k$ and $\sigma _j = \sigma _m$ for all $k \leq m \leq j$ . Recall that for $X = \{i_1, \dotsc , i_m\}$ , we set $\epsilon _X = \epsilon _{i_1} + \cdots + \epsilon _{i_m}$ . Let $\nu = \mu + \epsilon _{[k,j]} = \overline {\mu + \epsilon _j}$ and $v(\mu; \sigma ) := {}_{[\boldsymbol {\alpha },\boldsymbol {\beta }]} \langle \mu \rvert _{\sigma }$ . By repeated applications of (3.7), we have

$$\begin{align*}v(\mu + \epsilon_j; \mu) = \prod_{i=k}^{j-1} \beta_i v(\nu; \mu). \end{align*}$$

Then by applying (3.6) to each $i \in [k, j]$ and then using (3.7), we compute

$$ \begin{align*} v(\mu + \epsilon_j; \mu) & = \prod_{i=k}^{j-1} \beta_i \sum_{X \subseteq [k,j]} \alpha_{\mu_j+1}^{\left\lvert X \right\rvert} v(\nu + \epsilon_X; \nu) \\ & = \prod_{i=k}^{j-1} \beta_i \sum_{X \subseteq [k,j]} \alpha_{\mu_j+1}^{\left\lvert X \right\rvert} \prod_{i \in [k,j) \setminus X} \beta_i v(\overline{\nu + \epsilon_X}; \nu) \\ & = \prod_{i=k}^{j-1} \beta_i \cdot \nu + \prod_{a=k}^{j-1} \beta_a \cdot \alpha_{\mu_j+1} \sum_{i=k}^j \prod_{m=k}^{i-1} (\alpha_{\mu_j+1} + \beta_m) v(\overline{\nu + \epsilon_i}; \nu) \\ & = \prod_{i=k}^{j-1} \beta_i \cdot \nu + \alpha_{\mu_j+1} \sum_{i=k}^j \prod_{m=k}^{i-1} (\alpha_{\mu_j+1} + \beta_m) \prod_{a=i}^{j-1} \beta_a v(\nu + \epsilon_i; \nu). \end{align*} $$

However, this is precisely the recursion formula defining $u_j^{(\boldsymbol {\alpha },\boldsymbol {\beta })}$ , and the claim follows.

For brevity, we will simply write $u_j := u_j^{(\boldsymbol {\alpha },\boldsymbol {\beta })}$ until noted otherwise.

Remark 3.9. Another way to see that $\mathbf {u}$ corresponds to the action of the current operator $a_1$ is to first note that $a_1 = \frac {d}{dt} [e^{H(t)}] \big \rvert _{t=0}$ . Therefore, in the expansion of ${}_{[\boldsymbol {\alpha },\boldsymbol {\beta }]} \langle \mu \rvert a_1$ , we can compute the coefficient of ${}_{[\boldsymbol {\alpha },\boldsymbol {\beta }]} \langle \lambda \rvert $ by using (2.6) and computing

$$\begin{align*}{}_{[\boldsymbol{\alpha},\boldsymbol{\beta}]} \langle \mu \rvert a_1 \lvert \lambda \rangle_{[\boldsymbol{\alpha},\boldsymbol{\beta}]} = \frac{d}{dt} \left[ {}_{[\boldsymbol{\alpha},\boldsymbol{\beta}]} \langle \mu \rvert e^{H(t)} \lvert \lambda \rangle_{[\boldsymbol{\alpha},\boldsymbol{\beta}]} \right] \Bigr\rvert_{t=0} = \frac{d}{dt} [ g_{\lambda/\mu}(t; \boldsymbol{\alpha}, \boldsymbol{\beta}) ] \Bigr\rvert_{t=0}. \end{align*}$$

We can give a precise formula by using the combinatorial description given in [Reference Hwang, Jang, Soo Kim, Song and SongHJK+25, Def. 4.1] as a single marked reverse plane partition. Indeed, in order for there to be a nonzero contribution, we can only have a single connected component such that the topmost-rightmost box contributes a t. We leave the details to the interested reader.

The $\mathbf {u}^{(\boldsymbol {\alpha },\boldsymbol {\beta })}$ operators do not satisfy the (weak) Knuth relations as demonstrated in the arXiv version of this paper [Reference Iwao, Motegi and ScrimshawIMS23, Ex. 3.11]. However, we can see that the operators $\mathbf {u}^{(0,\boldsymbol {\beta })}$ do, and in fact, this holds in general. This is a straightforward direct computation that refines [Reference IwaoIwa22, Sec. 6.1].

Lemma 3.10. The operators $\mathbf {u}^{(0,\boldsymbol {\beta })}$ satisfy the Knuth relations.

Theorem 3.11. Recall ${}_{[\boldsymbol {\beta }]} \langle \lambda \rvert = {}_{[0,\boldsymbol {\beta }]} \langle \lambda \rvert $ . We have

$$\begin{align*}{}_{[\boldsymbol{\beta}]} \langle \lambda \rvert S_{\mu}(a_1, a_2, \ldots) \equiv {}_{[\boldsymbol{\beta}]} \langle s_{\mu}(\mathbf{u}^{(0,\boldsymbol{\beta})}) \cdot \lambda \rvert\quad \mod{M_{[\boldsymbol{\beta}]}^{\ell\perp}}. \end{align*}$$

Proof. Similarly to the proof of Theorem 3.2, it suffices to prove the theorem for the case when $s_\lambda =e_i$ ( $i=0,1,2,\dots $ ):

$$\begin{align*}{}_{[\boldsymbol{\beta}]}\langle \lambda \rvert E_i(a_1, a_2, \ldots) = {}_{[\boldsymbol{\beta}]} \langle e_i(\mathbf{u}^{(0,\boldsymbol{\beta})}) \cdot \lambda \rvert. \end{align*}$$

To do so, we first compute

$$\begin{align*}{}_{[\boldsymbol{\beta}]}\langle \lambda \rvert a_i = {}_{[\boldsymbol{\beta}]}\langle \lambda \rvert P_i(a_1, a_2, \ldots) \equiv \sum_{j=1}^{\ell} v(\lambda + i \epsilon_j, \lambda)\quad \mod{M_{[\boldsymbol{\beta}]}^{\ell\perp}} \end{align*}$$

from the fact $[e^{H(\gamma )}, a_i] = 0$ for all $i> 0$ . Hence, Lemma 3.8 implies

$$\begin{align*}{}_{[\boldsymbol{\beta}]}\langle \lambda \rvert E_i(a_1, a_2, \ldots) = \sum_{j_1 < \cdots < j_i} {}_{[\boldsymbol{\beta}]} \langle u_{j_i}^{(0,\boldsymbol{\beta})} \cdots u_{j_1}^{(0,\boldsymbol{\beta})} \cdot \lambda \rvert, \end{align*}$$

and the claim follows.

4 Operator dynamics

In this section, we describe the dynamics of particle processes of Dieker and Warren [Reference Dieker and WarrenDW08] using the deformed Schur operators $U_i^{(0,\boldsymbol {\beta })}$ and $u_i^{(0, \boldsymbol {\beta })}$ defined in Section 3 acting on the corresponding state of free fermions.

As we will see below, the geometric distribution will correspond to using homogeneous (noncommutative) symmetric functions in terms of these operators, whereas the Bernoulli distribution uses the elementary symmetric functions.

For this section, our proof of Theorem 1.1 will consist of showing the claim for a single time step. The general case will follow from the branching rules (Proposition 2.3) and the Markov property, where both of these say the general case can be described as a product of the single steps summed over all possible routes from $\mu $ to $\lambda $ . In more detail, consider Case A as an example, but all other cases are similar. We assume

(4.1) $$ \begin{align} \displaystyle \mathsf{P}(Y_{n}^{\mathrm{A}}=\lambda|Y_{0}^{\mathrm{A}}=\mu )=\prod_{j=1}^{\ell} \prod_{i=1}^n (1-\pi_j x_i) \boldsymbol{\pi}^{\lambda/\mu} g_{\lambda/\mu}(x_1,\dots,x_n;\boldsymbol{\pi}^{-1}), \end{align} $$

and show that the same statement holds with n replaced by $n+1$ . In the next subsection, we show

(4.2) $$ \begin{align} \mathsf{P}(Y_k^{\mathrm{A}}=\lambda|Y_{k-1}^{\mathrm{A}}=\mu)= \prod_{j=1}^\ell (1-\pi_j x_k) \boldsymbol{\pi}^{\lambda/\mu} g_{\lambda/\mu}(x_k;\boldsymbol{\pi}^{-1}), \end{align} $$

which corresponds to $n=1$ case of (4.1). By the Markov property, the transition probabilities satisfy

(4.3) $$ \begin{align} \mathsf{P}(Y_{n+1}^{\mathrm{A}}=\lambda|Y_{0}^{\mathrm{A}}=\mu )= \sum_{ \mu \subseteq \nu \subseteq \lambda } \mathsf{P}(Y_{n+1}^{\mathrm{A}}=\lambda|Y_{n}^{\mathrm{A}}=\nu) \mathsf{P}(Y_n^{\mathrm{A}}=\nu|Y_{0}^{\mathrm{A}}=\mu). \end{align} $$

Inserting (4.1) and (4.2) into the right-hand side of (4.3) and applying the branching rule for the refined dual Grothendieck polynomials, we have

(4.4) $$ \begin{align} \mathsf{P}(Y_{n+1}^{\mathrm{A}}=\lambda|Y_{0}^{\mathrm{A}}=\mu )=& \sum_{ \mu \subseteq \nu \subseteq \lambda } \prod_{j=1}^\ell (1-\pi_j x_{n+1}) \boldsymbol{\pi}^{\lambda/\nu} g_{\lambda/\nu}(x_{n+1};\boldsymbol{\pi}^{-1}) \nonumber \\ &\times \prod_{j=1}^{\ell} \prod_{i=1}^n (1-\pi_j x_i) \boldsymbol{\pi}^{\nu/\mu} g_{\nu/\mu}(x_1,\dots,x_n;\boldsymbol{\pi}^{-1}) \nonumber \\ =&\prod_{j=1}^{\ell} \prod_{i=1}^{n+1} (1-\pi_j x_i) \boldsymbol{\pi}^{\lambda/\mu} \sum_{ \mu \subseteq \nu \subseteq \lambda } g_{\lambda/\nu}(x_{n+1};\boldsymbol{\pi}^{-1}) g_{\nu/\mu}(x_1,\dots,x_n;\boldsymbol{\pi}^{-1}) \nonumber \\ =&\prod_{j=1}^{\ell} \prod_{i=1}^{n+1} (1-\pi_j x_i) \boldsymbol{\pi}^{\lambda/\mu} g_{\lambda/\mu}(x_1,\dots,x_n,x_{n+1};\boldsymbol{\pi}^{-1}), \end{align} $$

which completes the induction.

4.1 Pushing operators

Suppose the particles are at positions given by the partition $\mu $ . Then it is easy to see that the action $u_j \cdot \mu $ corresponds to the j-th particle trying to move one step to the right at time i. Indeed, if the j-th particle is also at a site containing smaller particles, then taking the smallest partition containing $\mu + \epsilon _j$ corresponds to pushing the smaller particles. More specifically, if this pushes s particles, then we obtain the scalar $\beta _{j-s} \cdots \beta _{j-1}$ (if $s = 0$ , then we simply obtain the resulting partition).
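
In code, the $\boldsymbol {\alpha } = 0$ specialization of the pushing operator (the case used in this section) can be sketched as follows; the helper name and the numerical values of $\boldsymbol {\beta }$ are ours. With $\boldsymbol {\beta } = \boldsymbol {\pi }^{-1}$ this reproduces the scalar appearing in Lemma 4.2 below.

```python
def push(mu, j, beta):
    """u_j^{(0, beta)} acting on a partition mu (1-indexed row j): add a box in
    row j and push the particles k, ..., j-1 sharing that position, picking up
    the scalar beta_k * ... * beta_{j-1}.  Returns (scalar, resulting partition).
    A sketch of the alpha = 0 specialization only."""
    mu = list(mu) + [0] * max(0, j - len(mu))
    k = j
    while k > 1 and mu[k - 2] == mu[j - 1]:
        k -= 1                                   # k = smallest row with mu_k = mu_j
    coeff, lam = 1.0, mu[:]
    for r in range(k, j + 1):
        lam[r - 1] += 1                          # boxes added to rows k, ..., j
        if r < j:
            coeff *= beta[r]                     # beta_r for each pushed particle r
    while lam and lam[-1] == 0:
        lam.pop()
    return coeff, tuple(lam)

beta = [0.0, 0.7, 0.8, 0.9]   # beta[r] plays the role of beta_r; index 0 is unused
print(push((3, 1, 1), 3, beta))   # particle 3 moves and pushes particle 2: (0.8, (3, 2, 2))
```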

Example 4.1. Consider $4$ particles. The action

is identified with the particle motion

where the arrows denote the particle being pushed.

We rewrite the action of our noncommutative operators to match the form of Theorem 1.1.

Lemma 4.2. Let $\boldsymbol {\beta } = \boldsymbol {\pi }^{-1}$ and $u_j = u_j^{(0, \pi ^{-1})}$ . Then for any $(j_1, j_2, \dotsc , j_k)$ , we have

$$\begin{align*}u_{j_1} u_{j_2} \cdots u_{j_k} \cdot \mu = \frac{\pi_{j_1} \pi_{j_2} \cdots \pi_{j_k}}{\boldsymbol{\pi}^{\lambda / \mu}} \cdot \lambda. \end{align*}$$

Proof. It is sufficient to prove this when $k = 1$ since $\boldsymbol {\pi }^{\lambda / \mu } = \boldsymbol {\pi }^{\lambda /\nu } \boldsymbol {\pi }^{\nu / \mu }$ for any $\nu $ . By definition, $u_j \cdot \mu = \beta _{j-s} \cdots \beta _{j-2} \beta _{j-1} \cdot \lambda $ for some s (that is determined by $\mu $ ). Since $\boldsymbol {\pi }^{\lambda / \mu } = \pi _{j-s} \cdots \pi _{j-2} \pi _{j-1} \pi _j$ , we have $u_j \cdot \mu = \frac {\pi _j}{\boldsymbol {\pi }^{\lambda /\mu }} \cdot \lambda $ as desired.

Proof for Case A.

Now let us consider the Case A transition probability for a single time step $\mathsf {P}_A(\lambda |\mu )$ at time i. We can write this as

(4.5) $$ \begin{align} \mathsf{P}_A(\lambda | \mu) = \pi_{j_1} \pi_{j_2} \cdots \pi_{j_k} x_i^k \prod_{j=1}^{\infty} (1 - \pi_j x_i) \end{align} $$

where k is the number of columns in $\lambda / \mu $ and $j_1 \geq j_2 \geq \cdots \geq j_k$ (which is necessarily unique). To match the notation in Theorem 1.1, we specialize $\boldsymbol {\beta } = \boldsymbol {\pi }^{-1}$ . Since we update the particles from largest-to-smallest, the above discussion yields that we can write our time evolution with $\boldsymbol {\pi } = 1$ as

$$\begin{align*}\mathcal{T}_A = \sum_{k=0}^{\infty} h_k(x_i \mathbf{u}) = \sum_{k=0}^{\infty} x_i^k h_k(\mathbf{u}), \end{align*}$$

where we have scaled all the $\mathbf {u}$ operators by $x_i$ to introduce our time-dependent parameters. Indeed, if we apply $h_k(x_i \mathbf {u})$ , then in terms of the particle dynamics, we are moving all particles a total number of k steps with probability (4.5). To see again the particle-dependent parameters for the time evolution, if we bring the $\mathcal {T}_A$ inside the matrix coefficient, we must factor out the total position change factors $\boldsymbol {\pi }^{\lambda /\mu }$ as this cancels with the $\pi _{j_1}^{-1} \cdots \pi _{j_k}^{-1}$ arising from applying the various $u_j$ to $\mu $ until we get $\lambda $ (see (3.5)). As an example, if applying $u_j$ pushes s particles, then $u_j \cdot \mu = \pi _{j-s}^{-1} \cdots \pi _{j-1}^{-1} \cdot \lambda $ , and multiplying by $\boldsymbol {\pi }^{\lambda /\mu } = \pi _{j-s} \cdots \pi _{j-1} \pi _j$ leaves only the single factor $\pi _j$ (cf. Lemma 4.2). Therefore, we can write the transition probability (4.5) as

(4.6) $$ \begin{align} \begin{aligned} \mathsf{P}_A(\lambda|\mu) & = \boldsymbol{\pi}^{\lambda/\mu} \prod_{j=1}^{\infty} (1 - \pi_j x_i) \cdot {}_{[\boldsymbol{\pi}^{-1}]} \langle \mathcal{T}_A \cdot \mu | \lambda \rangle_{[\boldsymbol{\pi}^{-1}]} \\ & = \boldsymbol{\pi}^{\lambda/\mu} \prod_{j=1}^{\infty} (1 - \pi_j x_i) \sum_{k=0}^{\infty} x_i^k \cdot {}_{[\boldsymbol{\pi}^{-1}]} \langle h_k(\mathbf{u}) \cdot \mu | \lambda \rangle_{[\boldsymbol{\pi}^{-1}]}. \end{aligned} \end{align} $$

Alternatively we can see (4.6) by noting in the second line, only the term $u_{j_1} u_{j_2} \cdots u_{j_k}$ is nonzero in the pairing by (2.6) and taking this together with Lemma 4.2. Next, we apply Theorem 3.11, (2.5), and (2.7) to rewrite Equation (4.6) as

$$ \begin{align*} \mathsf{P}_A(\lambda|\mu) & = \boldsymbol{\pi}^{\lambda/\mu} \prod_{j=1}^{\infty} (1 - \pi_j x_i) \sum_{k=0}^{\infty} x_i^k \cdot {}_{[\boldsymbol{\pi}^{-1}]} \langle \mu \rvert H_k(a_1, a_2, \dotsc) \lvert \lambda \rangle_{[\boldsymbol{\pi}^{-1}]} \\ & = \boldsymbol{\pi}^{\lambda/\mu} \prod_{j=1}^{\infty} (1 - \pi_j x_i) \cdot {}_{[\boldsymbol{\pi}^{-1}]} \langle \mu \rvert e^{H(x_i)} \lvert \lambda \rangle_{[\boldsymbol{\pi}^{-1}]} = \boldsymbol{\pi}^{\lambda/\mu} \prod_{j=1}^{\infty} (1 - \pi_j x_i) g_{\lambda/\mu}({\mathbf{x}}; \boldsymbol{\pi}^{-1}). \end{align*} $$

This is precisely the claim of Theorem 1.1 for a single time step.
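
As a numerical sanity check (not part of the argument; the helper names are ours), one can compare the one-step probability (4.5) with a direct simulation of the update rule (2.9a). In the sketch below, particle j contributes the factor $(\pi _j x_i)^{\lambda _j - \max (\mu _j, \lambda _{j+1})}$ , the exponent being its number of voluntary (unpushed) moves; summed over j, these exponents give the number of columns of $\lambda /\mu $ , in agreement with the description of the indices $j_1 \geq \cdots \geq j_k$ in (4.5).

```python
import random
from collections import Counter

def prob_A_one_step(lam, mu, pi, x):
    """One-step Case A probability for an ell-particle system: particle j picks up
    (pi_j x)^(lambda_j - max(mu_j, lambda_{j+1})), its voluntary moves, times the
    geometric normalisation prod_j (1 - pi_j x).  Returns 0 if lam is unreachable."""
    ell = len(pi)
    lam = list(lam) + [0] * (ell - len(lam))
    mu = list(mu) + [0] * (ell - len(mu))
    p = 1.0
    for j in range(ell):
        start = max(mu[j], lam[j + 1] if j + 1 < ell else 0)
        if lam[j] < start:
            return 0.0
        p *= (1 - pi[j] * x) * (pi[j] * x) ** (lam[j] - start)
    return p

def step_A(Y, pi, x):
    """One random step of the recursion (2.9a) with geometric jumps."""
    Y = list(Y)
    for k in range(len(Y) - 1, -1, -1):
        xi = 0
        while random.random() < pi[k] * x:
            xi += 1
        Y[k] = max(Y[k], Y[k + 1] if k + 1 < len(Y) else 0) + xi
    return tuple(Y)

pi, x, mu, N = [0.5, 0.4, 0.3], 0.6, (2, 1, 0), 200000
counts = Counter(step_A(mu, pi, x) for _ in range(N))
for lam in [(2, 1, 0), (3, 1, 0), (3, 3, 1), (4, 2, 2)]:
    print(lam, counts[lam] / N, prob_A_one_step(lam, mu, pi, x))
```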

Proof for Case D.

For Case D, we do the analogous proof using $\mathsf {P}_D(\lambda |\mu )$ starting with the time evolution at $\boldsymbol {\rho } = 1$

$$\begin{align*}\mathcal{T}_D = \sum_{k=0}^{\infty} e_k(x_i \mathbf{u}) = \sum_{k=0}^{\infty} x_i^k e_k(\mathbf{u}), \end{align*}$$

where here we specialize $\boldsymbol {\beta } = \rho ^{-1}$ . Indeed, after adding back in the particle-dependent parameters like before, we compute

$$ \begin{align*} \mathsf{P}_D(\lambda|\mu) & = \frac{\boldsymbol{\rho}^{\lambda/\mu} \cdot {}_{[\boldsymbol{\rho}^{-1}]} \langle \mathcal{T}_D \cdot \mu | \lambda \rangle_{[\boldsymbol{\rho}^{-1}]}}{\prod_{j=1}^{\infty} (1 + \rho_j x_i)} = \frac{\boldsymbol{\rho}^{\lambda/\mu}}{\prod_{j=1}^{\infty} (1 + \rho_j x_i)} \sum_{k=0}^{\infty} x_i^k \cdot {}_{[\boldsymbol{\rho}^{-1}]} \langle \mu \rvert E_k(a_1, a_2, \dotsc) \lvert \lambda \rangle_{[\boldsymbol{\rho}^{-1}]} \\ & = \frac{\boldsymbol{\rho}^{\lambda/\mu}}{\prod_{j=1}^{\infty} (1 + \rho_j x_i)} \cdot {}_{[\boldsymbol{\rho}^{-1}]} \langle \mu \rvert e^{J(x_i)} \lvert \lambda \rangle_{[\boldsymbol{\rho}^{-1}]} = \frac{\boldsymbol{\rho}^{\lambda/\mu}}{\prod_{j=1}^{\infty} (1 + \rho_j x_i)} j_{\lambda'/\mu'}({\mathbf{x}}; \boldsymbol{\rho}^{-1}). \end{align*} $$

Note that the operators coming from $e_k(\mathbf {u})$ are applied in the order smallest-to-largest, which matches the update rule. We can also see the first equality by using Lemma 4.2.

4.2 Blocking operators

Suppose the particles are at positions given by the partition $\mu $ . Then it is easy to see that the action $U_j \cdot \mu $ corresponds to the j-th particle trying to move one step to the right at time i and keeping the other particles fixed. If the move is blocked by the $(j-1)$ -th particle (being at the same position), then we obtain the scalar $\beta _{j-1}$ . Otherwise the particle moves and we simply obtain the resulting partition.

Example 4.3. Consider $4$ particles. The action

(which also equals $U_4 U_2 \cdot (4, 2, 1, 1)$ ) is identified with the particle motion

Note that the fourth particle is blocked but the second particle moves.

We rewrite the noncommutative operator action to match Theorem 1.1.

Lemma 4.4. Let $\beta _j = \rho _{j+1}$ and $U_j = U_j^{(0,\boldsymbol {\beta })}$ . Then for any $(j_1, j_2, \dotsc , j_k)$ , we have

$$\begin{align*}U_{j_1} U_{j_2} \cdots U_{j_k} \cdot \mu = \frac{\rho_{j_1} \rho_{j_2} \cdots \rho_{j_k}}{\boldsymbol{\rho}^{\lambda / \mu}} \cdot \lambda. \end{align*}$$

Proof. Like the proof of Lemma 4.2, we can reduce the proof to the case $k = 1$ . By definition, we either have (i) $U_j \cdot \mu = \beta _{j-1} \cdot \mu $ and $\lambda = \mu $ or (ii) $U_j \cdot \mu = \kappa _j \cdot \mu $ and $\lambda = \mu + \epsilon _j$ . In each case, it is clear the claim holds.

Proof for Case B.

Let us consider the transition probability for a single time step $\mathsf {P}_B(\lambda ' | \mu ')$ in Case B starting at time i. This proof is largely analogous to that for Case A. Here, we use $U_i := U_i^{(0,\boldsymbol {\alpha })}$ and specialize $\alpha _j = \rho _{j+1}$ . Since we update particles from smallest-to-largest, the above description means one time evolution is given by

$$\begin{align*}\mathcal{T}_B = \sum_{k=0}^{\infty} e_k(x_i \mathbf{U}) = \sum_{k=0}^{\infty} x_i^k e_k(\mathbf{U}). \end{align*}$$

By arguments analogous to Case A (or from the above discussion), we need to multiply by $\boldsymbol {\rho }^{\lambda / \mu }$ to move the time evolution inside the matrix coefficient and account for the movement of all of the particles. Additionally, we scale each $U_j$ by $x_i$ to introduce the time parameters. Therefore, we have

(4.7) $$ \begin{align} \mathsf{P}_B(\lambda' | \mu') = \frac{\boldsymbol{\rho}^{\lambda/\mu} \cdot {}^{[\boldsymbol{\alpha}]} \langle \mathcal{T}_B \cdot \mu | \lambda \rangle^{[\boldsymbol{\alpha}]}}{\prod_{j=1}^{\infty} (1 + \rho_j x_i)} = \sum_{k=0}^{\infty} \frac{\boldsymbol{\rho}^{\lambda/\mu} x_i^k \cdot {}^{[\boldsymbol{\alpha}]} \langle e_k(\mathbf{U}) \cdot \mu | \lambda \rangle^{[\boldsymbol{\alpha}]}}{\prod_{j=1}^{\infty} (1 + \rho_j x_i)}. \end{align} $$

We can also see Equation (4.7) by using Lemma 4.4 with the fact that for any $\ell \geq \ell (\lambda )$ , we have

$$\begin{align*}\mathsf{P}_B(\lambda | \mu) = \prod_{j=1}^{\infty} (1 + \rho_j x_i)^{-1} \sum_{k=0}^{\infty} \sum_{\substack{j_1 < \cdots < j_k \\ U_{j_k} \cdots U_{j_1} \cdot \mu' = \ast \cdot \lambda'}} \rho_{j_k} \cdots \rho_{j_1} \cdot x_i^k, \end{align*}$$

where $\ast $ represents any nonzero constant; note that $U_{j_k} \cdots U_{j_1} \cdot \mu ' = \ast \cdot \lambda '$ is simply saying the movement of the particles from $\mu $ to $\lambda $ is given by moving (with blocking) the $j_1, j_2, \dotsc , j_k$ particles (in that order). Next, we use (2.5) and Theorem 3.2 to compute

$$ \begin{align*} J_{\lambda'/\!\!/\mu'}({\mathbf{x}}; \boldsymbol{\alpha}) = {}^{[\boldsymbol{\alpha}]} \langle \mu \rvert e^{J(x_i)} \lvert \lambda \rangle^{[\boldsymbol{\alpha}]} & = \sum_{m=0}^{\infty} x_i^m \cdot {}^{[\boldsymbol{\alpha}]} \langle \mu \rvert E_m(a_1, a_2, \ldots) \lvert \lambda \rangle^{[\boldsymbol{\alpha}]} \\ & = \sum_{m=0}^{\infty} x_i^m \cdot {}^{[\boldsymbol{\alpha}]} \langle e_m(\mathbf{U} / \boldsymbol{\alpha}) \cdot \mu | \lambda \rangle^{[\boldsymbol{\alpha}]} \\ & = \sum_{k=0}^{\infty} \sum_{m=k}^{\infty} (-1)^{k-m} x_i^m h_{m-k}(\boldsymbol{\alpha}) \cdot {}^{[\boldsymbol{\alpha}]} \langle e_k(\mathbf{U}) \cdot \mu | \lambda \rangle^{[\boldsymbol{\alpha}]} \\ & = \sum_{k=0}^{\infty} x_i^k \prod_{j=1}^{\infty} (1 + \alpha_j x_i)^{-1} \cdot {}^{[\boldsymbol{\alpha}]} \langle e_k(\mathbf{U}) \cdot \mu | \lambda \rangle^{[\boldsymbol{\alpha}]}. \end{align*} $$

Therefore, comparing this with Equation (4.7) (recall that $\alpha _j = \rho _{j+1}$ ), we have Theorem 1.1,

$$\begin{align*}\mathsf{P}_B(\lambda | \mu) = \prod_{i=1}^n (1 + \rho_1 x_i)^{-1} \boldsymbol{\rho}^{\lambda/\mu} J_{\lambda'/\!\!/\mu'}({\mathbf{x}};\boldsymbol{\alpha}), \end{align*}$$

for one time step.

Alternatively, let us examine Equation (4.7). Necessarily we must have $\lambda / \mu $ being a vertical strip (equivalently $\lambda ' / \mu '$ being a horizontal strip) as otherwise both sides are $0$ , so we now assume $\lambda / \mu $ is a vertical strip. Suppose $m = \left \lvert \lambda \right \rvert - \left \lvert \mu \right \rvert $ particles move, and so each term in the sum is $0$ unless $k \geq m$ . Furthermore, when $k>m$ , we only get a nonzero contribution from the j such that $\lambda _{j-1} = \mu _j$ , where necessarily $j> 1$ . Let $J = \{ j \in \mathbb {Z}_{>1} \mid \lambda _{j-1} = \mu _j \}$ be the (infinite) set of all such indices, and so we have

$$ \begin{align*} \sum_{k=0}^{\infty} x_i^k \cdot {}^{[\boldsymbol{\alpha}]} \langle e_k(\mathbf{U}) \cdot \mu | \lambda \rangle^{[\boldsymbol{\alpha}]} & = \sum_{k=m}^{\infty} x_i^k \sum_{\substack{X \subseteq J \\ \left\lvert X \right\rvert = k-m}} \prod_{j \in X} \rho_j = \sum_{k=m}^{\infty} x_i^k e_{k-m}(\boldsymbol{\rho}_J) \\ & = x_i^m \sum_{k=0}^{\infty} x_i^k e_k(\boldsymbol{\rho}_J) = x_i^m \prod_{j \in J} (1 + \rho_j x_i), \end{align*} $$

where $\boldsymbol {\rho }_J = \{\rho _j \mid j \in J\}$ (since $e_k$ is a symmetric function, we do not need to worry about the order). Therefore, we have

$$\begin{align*}\mathsf{P}_B(\lambda|\mu) = \frac{\boldsymbol{\rho}^{\lambda/\mu} x_i^m}{1 + \rho_1 x_i} \prod_{j \in \widetilde{J}} (1 + \alpha_j x_i)^{-1}, \end{align*}$$

where $\widetilde {J} = \{ j \in \mathbb {Z}_{>0} \mid \lambda _j \neq \mu _{j+1} \}$ (note that we have also shifted the indices). From the combinatorial description of $J_{\lambda ' /\!\!/ \mu '}({\mathbf {x}}_1; \boldsymbol {\alpha })$ , we have obtained Theorem 1.1 for a single time step.
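
The closed product formula just obtained is easy to evaluate; the following sketch (ours) transcribes it with $\alpha _j = \rho _{j+1}$ and pads $\lambda $ and $\mu $ with enough zero rows so that every factor coming from an index with $\lambda _j \neq \mu _{j+1}$ is included. It can be checked against a simulation of the update rule (2.9b) in the same way as the Case A sketch above.

```python
def prob_B_one_step(lam, mu, rho, x):
    """One-step Case B probability, transcribing the product formula above
    (with alpha_j = rho_{j+1}): rho^{lambda/mu} x^m / (1 + rho_1 x) times
    (1 + rho_{j+1} x)^{-1} over all j with lambda_j != mu_{j+1}.
    Nonzero only when lambda/mu is a vertical strip.  rho is 0-indexed."""
    L = max(len(lam), len(mu)) + 1          # enough rows to see every blocking factor
    assert len(rho) >= L
    lam = list(lam) + [0] * (L - len(lam))
    mu = list(mu) + [0] * (L - len(mu))
    if any(not (0 <= lam[j] - mu[j] <= 1) for j in range(L)):
        return 0.0
    p = 1.0 / (1 + rho[0] * x)
    for j in range(L):
        p *= (rho[j] * x) ** (lam[j] - mu[j])
    for j in range(L - 1):                   # the factor for 1-indexed j with lambda_j != mu_{j+1}
        if lam[j] != mu[j + 1]:
            p /= 1 + rho[j + 1] * x
    return p

# Example with mu = (2, 1); the parameters only need rho_j x > 0.
rho, x = [1.2, 0.9, 0.7, 0.5, 0.4], 0.8
for lam in [(2, 1), (3, 2, 1), (3, 1, 1), (2, 2, 1, 1)]:
    print(lam, prob_B_one_step(lam, (2, 1), rho, x))
```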

Proof for Case C.

Now let us look at the dynamics for the geometric distribution given by Case C. The proof is similar to the above except we instead replace $e_k(\mathbf {U}) \mapsto h_k(\mathbf {U})$ and $h_k(\boldsymbol {\alpha }) \mapsto e_k(\boldsymbol {\beta })$ , as well as specialize the operators to $\mathbf {U} := \mathbf {U}^{(0,\boldsymbol {\beta })}$ with $\beta _j = \pi _{j+1}$ . So our time evolution operator is $\mathcal {T}_C = \sum _{k=0}^{\infty } h_k(x_i \mathbf {U})$ . Note that the operators are applied in reverse order for $h_k(\mathbf {U})$ , encoding that we are now going from largest-to-smallest in the update order.

For the alternative proof using the combinatorial description of $G_{\lambda /\!\!/\mu }({\mathbf {x}}; \boldsymbol {\beta })$ , some slightly more detailed analysis about the motion of the particle is needed. This is discussed in Section 5.3. Note that it only depends on $\lambda $ and $\mu $ , not on the motion of any other particles.

5 Bijective description

In this section, we provide a bijective proof of Theorem 1.1. For Cases A and D, this is essentially translating the description of the noncommutative operators $\mathbf {u}$ into tableaux through how the particles evolve. For Cases B and C, a little more care is needed as the number of tableaux is not in bijection with the number of intermediate states. However, as in Section 4, we reduce the general case to matching the behavior under one time step evolution.

5.1 Case A: Geometric pushing

This case was discussed in [Reference Motegi and ScrimshawMS25] by going through the last passage percolation (LPP) model. To make this combinatorially explicit, we simply note that the resulting matrix $[G(k,n)]_{k,n}$ is the analog of a Gelfand–Tsetlin pattern for the reverse plane partition. More precisely, the n-th column gives the shape of the entries at most n. From this description, we essentially have a combinatorial proof of Case A of Theorem 1.1. The only other ingredient needed is to note that in $\boldsymbol {\pi }^{\lambda /\mu } g_{\lambda /\mu }({\mathbf {x}}; \boldsymbol {\pi }^{-1})$ , we get a contribution of $\pi _j x_i$ for a box with an i in row j; recalling that we align the entries at the bottom of merged cells. In terms of the more classical description of reverse plane partitions, we are only counting boxes with an i that is not above another box with label i. The remaining factor of $\prod _{j=1}^{\ell } \prod _{i=1}^n (1 - \pi _j x_i)$ comes from the normalization factor in the geometric distribution.

We can make this very precise with a direct correspondence between the movement of particles and entries in the reverse plane partition. An entry i in row j (that is not a merged box) corresponds to the j-th particle moving a step at time i. Thus (ignoring the common normalization factor), if we condition on the j-th particle moving k steps at time i, then the reverse plane partition has k entries with value i in row j (with no i directly below it). The general case of multiple particles moving follows from basic facts of conditional probability. Hence, we have shown the one step transition probability at time i is given by

$$\begin{align*}\mathsf{P}_{A,1}(\lambda | \mu) = \prod_{j=1}^{\ell} (1 - \pi_j x_i) \boldsymbol{\pi}^{\lambda/\mu} \prod_{j=1}^{\ell-1} \pi_j^{-r_j} \operatorname{\mathrm{wt}}(T) =\prod_{j=1}^{\ell} (1 - \pi_j x_i) \boldsymbol{\pi}^{\lambda/\mu} g_{\lambda/\mu}(x_i;\boldsymbol{\pi}^{-1}) , \end{align*}$$

where T is the unique reverse plane partition of skew shape $\lambda / \mu $ with all boxes filled with i. In more detail, recall $r_j$ is the number of boxes of T in row j that have been merged with the box below, and those boxes correspond to the moves of the j-th particle that are automatically pushed by the particle behind it. This means that the number $r_j$ corresponds to the distance coming from the push and has to be subtracted from the actual total distance $\lambda _j-\mu _j$ of the j-th particle’s move in the power of $\pi _j$ . This gives the factor $ \prod _{j=1}^{\ell -1} \pi _j^{\lambda _j-\mu _j-r_j} \times \pi _\ell ^{\lambda _\ell -\mu _\ell }=\boldsymbol {\pi }^{\lambda /\mu } \prod _{j=1}^{\ell -1} \pi _j^{-r_j} $ . We also have a factor which is a power of $x_i$ , and the power is equal to the total degree of $\boldsymbol {\pi }^{\lambda /\mu } \prod _{j=1}^{\ell -1} \pi _j^{-r_j} $ . We conclude the precise factor is $x_i^{|\lambda /\mu |-\sum _{j=1}^{\ell -1} r_j}$ . Since all boxes of the reverse plane partition are filled with the same number, we note $|\lambda /\mu |-\sum _{j=1}^{\ell -1} r_j$ is equal to the total number of columns of the skew shape $\lambda /\mu $ , and $x_i^{|\lambda /\mu |-\sum _{j=1}^{\ell -1} r_j}$ is exactly $\operatorname {\mathrm {wt}}(T)$ in the one-variable case (recall we have fused boxes for the weight).

Hence, we have that the reverse plane partition exactly encodes the movement of all of the particles at time $i = 0, 1, \dotsc , n$ by the branching rules (Proposition 2.3) and the Markov property as described at the beginning of Section 4 (or basic facts of conditional probability).

Example 5.1. We consider Case A with $\lambda = 31$ and $n = 2$ . Hence, we are considering $\ell $ particles that move over two time steps. Since all but the first two particles are fixed, we can ignore them. We have the following reverse plane partitions and states, where we have drawn the merged boxes in gray. An arrow denotes that a particle was pushed.

Below each configuration, we have written the conditional probability except for the normalization constant in the geometric distribution $(1 - \pi _j x_i)$ . Note that these are precisely the terms appearing in $\boldsymbol {\pi }^{\lambda } g_{\lambda }({\mathbf {x}}; \boldsymbol {\pi }^{-1})$ .

5.2 Case D: Bernoulli pushing

Algebraically, this case is simply applying the involution $\omega $ defined by $\omega s_{\lambda } = s_{\lambda '}$ . As such, we should expect the movement of the j-th particle to correspond to the entries in the j-th column, as opposed to the j-th row in Case A. Indeed, this is the case, but we need to reformulate the parameters for the particle process. Recalling from [Reference Patrias and PylyavskyyPP16, Thm. 5.11] (see also [Reference YeliussizovYel17, Prop. 3.4]) that we can write $J_{\lambda /\mu }({\mathbf {x}}; \alpha ) = G_{\lambda /\mu }\big ({\mathbf {x}}; \alpha / (1 + \alpha {\mathbf {x}}) \big )$ , it becomes natural to instead consider $\pi _j x_i$ as a rate at which the particle moves rather than the success probability, which becomes $\frac {\pi _j x_i}{1 + \pi _j x_i}$ . We also see this in the normalization factor as

$$\begin{align*}\omega \left( \prod_{j=1}^{\ell} \prod_{i=1}^n (1 - \pi_j x_i) \right) = \prod_{j=1}^{\ell} \prod_{i=1}^n (1 + \pi_j x_i)^{-1}. \end{align*}$$

Since we are considering it as a rate, we use a different set of parameters $\boldsymbol {\rho }$ , as we only require $\rho _j x_i> 0$ for all i and j, as opposed to $\pi _j x_i \in (0, 1)$ . For the remainder of the proof, after we factor out the denominators, we make the same observation as in Case A: an entry i in column j corresponds to the j-th particle moving one step at time i and contributes $\rho _j x_i$ to the (unnormalized) probability.

Example 5.2. Let us consider Case D for $\lambda = 31$ with $n = 2$ . Thus, three particles (all others are fixed) move over two time steps. The possible terms, states, and valued-set tableaux are

where $\pi _{ji} = \frac {\rho _j x_i}{1 + \rho _j x_i}$ and $\sigma _{ji} = 1 - \pi _{ji} = \frac {1}{1 + \rho _j x_i}$ . Once we factor out the denominators, we have $\boldsymbol {\rho }^{\lambda '/\mu '} j_{\lambda /\mu }({\mathbf {x}}; \boldsymbol {\rho }^{-1})$ . For the factors written below each state, if we form a $\{0,1\}$ -matrix by setting $\pi _{ji} = 1$ and $\sigma _{ji} = 0$ , then we obtain the transposed LPP $\{0,1\}$ -matrix.

Furthermore, the above description of the weight contributions makes a clear connection with $01$ -matrices in [Reference Dieker and WarrenDW08]. Indeed, we can equate each state with a $01$ -matrix $[M_{ij}]_{i,j}$ by setting $M_{ij} = 1$ if and only if we use $\pi _{ij}$ (and necessarily $M_{ij} = 0$ corresponds to $\sigma _{ij}$ ). Since all entries of the matrix and particle motions are done independently, the aforementioned correspondence means the transition probabilities are equal.

5.3 Case C: Geometric blocking

Now we consider the TASEP with the more classical blocking behavior, and as before, we will proceed by conditioning on the motion of particles. As we have the blocking behavior, we expect the first particle to behave differently than all of the subsequent particles. This justifies why the normalization factor only involves $\pi _1$ rather than all $\pi _j$ . Like in Case A, we will identify entries i in row j corresponding to the j-th particle moving at time i. As the first particle will never be blocked, it simply moves at times $i_1 \leq i_2 \leq \cdots \leq i_{\lambda _1}$ .

Next, let us consider the movement of the j-th particle for $j> 1$ . Suppose the $(j-1)$ -th (resp. j-th) particle at time $i-1$ is at position $p_{j-1}$ (resp. $p_j$ ). Let p be the position of the j-th particle at time i. Recall that we update the position of the j-th particle before the $(j-1)$ -th particle. Therefore if $p_j = p_{j-1}$ , then the j-th particle does not move, which can be phrased as saying the probability the j-th particle is at position $p = p_j$ at time i is $ 1 = \sum _{k=0}^{\infty } \mathsf{P}_{Ge}(w_{ji} = k). $ Now suppose $p_j < p_{j-1}$ . If $p < p_{j-1}$ , then this occurs with probability $(1 - \pi _j x_i)(\pi _j x_i)^{p-p_j}$ as there is no blocking behavior. Lastly, if $p = p_{j-1}$ , then the probability is

$$\begin{align*}(1 - \pi_j x_i) \sum_{k=p_{j-1}-p_j}^{\infty} (\pi_j x_i)^k = (\pi_j x_i)^{p_{j-1}-p_j}. \end{align*}$$

Therefore, we want to identify the motion of the particles with semistandard tableaux like for the classical TASEP (with geometric jumping). Yet, whenever the j-th particle does not move the maximal possible distance it can, there are two terms contributing to the probability. We split this into two separate terms that we encode into tableaux, and we do so by having the $-(\pi _j x_i)^{p_{j-1} - p_j + 1}$ term correspond to adding an extra i to a box in row $j-1$ (since we take $\beta _{j-1} = \pi _j$ , recalling $j> 1$ ). As such, the motion of the particles is constructed from a set-valued tableau T by using the semistandard tableau $\min (T)$ built from the smallest entries in each box of T. Indeed, we can only add an i to a box in the j-th row whenever $\Lambda ^{(i)}_{j-1}> \Lambda ^{(i)}_j$ , where $\Lambda ^{(i)}$ denotes the positions of the particles at time i, which is equivalent to $(\Lambda ^{(i)})_{i=1}^n$ being the Gelfand–Tsetlin pattern corresponding to $\min (T)$ . Hence, we still have the normalization factor $\prod _{i=1}^n (1 - \pi _1 x_i)$ , which completes the proof of Theorem 1.1 in this case.

As another way to see this, consider the evolution of the particles from time $t=i-1$ with positions $(\mu_1,\dots,\mu_\ell)$ to time $t=i$ with positions $(\lambda_1,\dots,\lambda_\ell)$. The preceding discussion means that when the $j$-th particle ($j=2,\dots,\ell$) is blocked by the $(j-1)$-th particle, which corresponds to the case $\lambda_j=\mu_{j-1}$, summing up conditional probabilities gives $(\pi_j x_i)^{\lambda_j-\mu_j}$. If it is not blocked ($\lambda_j \neq \mu_{j-1}$), the conditional probability is $(\pi_j x_i)^{\lambda_j-\mu_j}(1-\pi_j x_i)$. The unified expression for the factor coming from the move of the $j$-th particle is $(\pi_j x_i)^{\lambda_j-\mu_j}(1-(1-\delta_{\lambda_j, \mu_{j-1}}) \pi_j x_i)$, where $\delta_{ij}$ is the Kronecker delta: $\delta_{ij}=1$ if $i = j$ and 0 otherwise. As for the first particle, there is nothing to block its motion and we have the factor $(1-\pi_1 x_i)(\pi_1 x_i)^{\lambda_1-\mu_1}$. Hence we have

$$ \begin{align*} \mathsf{P}_{C,1}(\lambda | \mu)&= (1-\pi_1 x_i)(\pi_1 x_i)^{\lambda_1-\mu_1} \prod_{j=2}^\ell (\pi_j x_i)^{\lambda_j-\mu_j}(1-(1-\delta_{\lambda_j, \mu_{j-1}}) \pi_j x_i) \\ &=(1-\pi_1 x_i) \boldsymbol{\pi}^{\lambda/\mu} x_i^{|\lambda/\mu|} \prod_{j=2}^\ell (1-(1-\delta_{\lambda_j, \mu_{j-1}}) \pi_j x_i). \end{align*} $$
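The fact that these one-step weights form a probability distribution can be checked directly. The following is a minimal numerical sketch (ours, not part of the argument; the function name `P_C1` and all parameter values are illustrative) that sums $\mathsf{P}_{C,1}(\lambda|\mu)$ over all states reachable in one step from a small $\mu$, truncating the convergent geometric sum for the first particle.

```python
from itertools import product

def P_C1(lam, mu, pi, x):
    """One-step Case C weight: (1 - pi_1 x)(pi_1 x)^(lam_1 - mu_1)
    * prod_{j >= 2} (pi_j x)^(lam_j - mu_j) (1 - (1 - delta_{lam_j, mu_{j-1}}) pi_j x)."""
    w = (1 - pi[0] * x) * (pi[0] * x) ** (lam[0] - mu[0])
    for j in range(1, len(mu)):
        q = pi[j] * x
        blocked = (lam[j] == mu[j - 1])
        w *= q ** (lam[j] - mu[j]) * (1.0 if blocked else 1.0 - q)
    return w

mu = (5, 3, 3, 0)            # initial positions (a partition)
pi = (0.6, 0.5, 0.4, 0.3)    # particle rates pi_j
x = 0.7                      # the parameter x_i for this time step
N = 400                      # truncation for the first particle's geometric sum

# reachable states: mu_1 <= lam_1 (truncated at mu_1 + N) and mu_j <= lam_j <= mu_{j-1}
# for j >= 2; note lam_j <= mu_{j-1} <= lam_{j-1}, so lam is automatically a partition
ranges = [range(mu[0], mu[0] + N + 1)]
ranges += [range(mu[j], mu[j - 1] + 1) for j in range(1, len(mu))]

total = sum(P_C1(lam, mu, pi, x) for lam in product(*ranges))
print(total)                 # ~ 1 up to the truncation error (pi_1 x)^(N + 1)
```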

We also have the refinement of [Reference YeliussizovYel17, Prop. 8.8]:

(5.1) $$ \begin{align} G_{\lambda /\!\!/ \mu}(x_i;\boldsymbol{\pi})=x_i^{|\lambda/\mu|} \prod_{j=2}^\ell (1-(1-\delta_{\lambda_j, \mu_{j-1}}) \pi_j x_i). \end{align} $$

We can see (5.1) holds by the analogous refinement of the combinatorial description in [Reference YeliussizovYel19, Thm. 4.6] by using $\beta _j$ if a contribution to $\beta $ occurs on the j-th row. Alternatively, we can deduce (5.1) by applying (2.1) and the combinatorial description of $G_{\lambda /\mu }(x_i; \boldsymbol {\pi })$ (where there is a unique semistandard tableau).

Example 5.3. Consider $\lambda = 631$ with $n = 5$ . A minimal tableau for particle motions is

The set-valued tableau $T$ of largest degree having the above tableau as $\min(T)$ is

Every other set-valued tableau $T'$ with $\min (T') = \min (T)$ is formed by removing some of the extra (bold) entries in T; in other words, each of these bold entries can be chosen to be added independently to yield a valid set-valued tableau $T'$ with $\min (T') = \min (T)$. Note that each of the bold entries corresponds to a case where a particle does not move its maximal possible distance. Note that at time $i = 4$, the third particle has moved its maximum possible distance since the update order is left-to-right and it becomes blocked by the second particle. Thus, we do not have a bold $\mathbf {\color {darkred}4}$ appearing in the second row. We can also consider infinitely many other particles, which are all blocked, and so they do not contribute to the probability.

Remark 5.4. This can also be compared with the solvable vertex model from [Reference Motegi and SakaiMS13, Reference Motegi and SakaiMS14] with $\boldsymbol {\beta } = \beta $ to see this choice. We remark that the refined $\boldsymbol {\beta }$ version can be constructed from a “trivially” colored lattice model where the colored paths do not cross (which is different than the one used in [Reference Buciumas, Scrimshaw and WeberBSW20, Thm. 3.6]) with $\beta _i$ corresponding to color i.

5.4 Case B: Bernoulli blocking

Essentially, this modifies Case C in the same way as we modified Case A to obtain Case D. While things in this case seem very different since $J_{\lambda}({\mathbf{x}}_n; \boldsymbol{\alpha})$ is always a formal power series, we see this on the probability side by considering the power series expansion of the rate

$$\begin{align*}\frac{1}{1 + \rho_j x_i} = \sum_{k=0}^{\infty} (-\rho_j x_i)^k. \end{align*}$$

Thus, if there are $k$ extra entries of $i$ in column $j-1$, then this corresponds to choosing $(-\rho_j x_i)^k$ in the above expansion. As in Case C, the movement of the particles for a multiset-valued tableau $T$ is described by the semistandard tableau $\min(T)$ formed by taking the smallest entries in each box. In contrast to Case C, we can always repeat any entry $i$ in $\min(T)$ (within its box) an arbitrary number of times and the result remains a multiset-valued tableau; this reflects the fact that every entry is multiplied by $(1 + \rho_j x_i)^{-1}$.

Example 5.5. Let us take $\lambda = 4311$ with $n =5$ . Then an example of the correspondence between a semistandard tableau $\min (T)$ and the particle motions is

Any multiset-valued tableau $T'$ with $\min(T') = \min(T)$ has an underlying set-valued tableau (which does not change $\min(T')$) whose entries in each box form a subset of the corresponding box of

We have written every entry in bold to indicate (and emphasize) that we can repeat every entry as many times as we desire.

6 Multipoint distributions

In this section, we will compute certain multipoint distributions at a single time associated with the TASEPs we consider here. We will specifically focus on Case A and Case C as the other cases can be shown by applying the $\omega $ involution. We give determinant formulas for the multipoint distributions for the general cases. When starting from the step initial condition, we show our formulas reduce to specializations of (dual) Grothendieck polynomials.

6.1 Pushing

We begin by stating a straightforward extension of [Reference Motegi and ScrimshawMS25, Cor. 3.14], which can be proven using the lattice model given therein. However, we will sketch a proof using our free fermion presentation. To state the claim, we need the flagged Schur function. Let $\lambda$ be a partition and $\phi$ a flagging (a sequence of integers). The flagged Schur function $s_{\lambda,\phi}({\mathbf{x}})$ is defined by

$$\begin{align*}s_{\lambda,\phi}({\mathbf{x}}) = \sum_T \operatorname{\mathrm{wt}}(T), \end{align*}$$

where the sum is over all semistandard Young tableaux T of shape $\lambda $ such that the entries in row i are at most $\phi _i$ .

Proposition 6.1. Let $\ell = \ell (\lambda )$ . We have

(6.1) $$ \begin{align} s_{\lambda,\phi}({\mathbf{x}}, \boldsymbol{\beta}_{\ell}) = \sum_{\mu \subseteq \lambda} \boldsymbol{\beta}^{\lambda-\mu} g_{\mu}({\mathbf{x}}; \boldsymbol{\beta}), \end{align} $$

where $\phi$ is the flagging such that the entries in the $i$-th row are at most $\beta_i$ (with respect to the ordered alphabet $x_1 < \cdots < x_n < \beta_1 < \cdots < \beta_{\ell}$).

Proof. From [Reference IwaoIwa23, Ex. 2.6], the flagged Schur function can be written as

$$\begin{align*}s_{\lambda,\phi}({\mathbf{x}}_n, \boldsymbol{\beta}_{\ell}) = \langle \emptyset \rvert e^{H({\mathbf{x}}_n)} \lvert \lambda \rangle_{(\boldsymbol{\beta})}, \qquad\qquad \text{ where } \lvert \lambda \rangle_{(\boldsymbol{\beta})} := \prod^{\rightarrow}_{1 \leq i \leq \ell} \left( e^{H(\beta_i)} \psi_{\lambda_i-i} \right) \lvert -\ell \rangle. \end{align*}$$

We want to show that

(6.2) $$ \begin{align} \lvert \lambda \rangle_{(\boldsymbol{\beta})} = e^{H(\beta_1)} \lvert \lambda \rangle_{[\vec{\boldsymbol{\beta}}]} = \sum_{\mu \subseteq \lambda} \boldsymbol{\beta}^{\lambda / \mu} \lvert \mu \rangle_{[\boldsymbol{\beta}]}, \end{align} $$

where $\vec{\boldsymbol{\beta}} = (\beta_2, \beta_3, \ldots)$ denotes the shifted parameters and the last equality is the branching rule [Reference Motegi and ScrimshawMS25, Cor. 3.19] (for which we will provide a free fermionic proof). The first equality is immediate from the definitions. We will prove the last equality in (6.2) by using (2.6), which reduces the claim to showing ${}_{[\boldsymbol{\beta}]} \langle \mu | \lambda \rangle_{(\boldsymbol{\beta})} = \boldsymbol{\beta}^{\lambda - \mu}$ for $\mu \subseteq \lambda$ and $0$ otherwise. Indeed, the claim follows from a similar proof to that in [Reference Iwao, Motegi and ScrimshawIMS24, Thm. 3.10] except we now have the $i$-th diagonal entry equal to $\beta_i^{\lambda_i - \mu_i}$ due to the shift. Hence, Equation (6.2) yields

(6.3) $$ \begin{align} s_{\lambda,\phi}({\mathbf{x}}_n, \boldsymbol{\beta}_{\ell}) = \langle \emptyset \rvert e^{H({\mathbf{x}}_n)} \lvert \lambda \rangle_{(\boldsymbol{\beta})} = \sum_{\mu \subseteq \lambda} \boldsymbol{\beta}^{\lambda - \mu} \langle \emptyset \rvert e^{H({\mathbf{x}}_n)} \lvert \mu \rangle_{[\boldsymbol{\beta}]} = \sum_{\mu \subseteq \lambda} \boldsymbol{\beta}^{\lambda-\mu} g_{\mu}({\mathbf{x}}_n; \boldsymbol{\beta}), \end{align} $$

where we used (2.7) with the fact ${}_{[\boldsymbol {\beta }]} \langle \emptyset \rvert = \langle \emptyset \rvert $ for the last equality.

Remark 6.2. Our proofs of Proposition 6.1 can be trivially generalized to give a formula for the expansion of $s_{\lambda,\phi}({\mathbf{x}}, \boldsymbol{\beta}_{\ell})$ into $g_{\mu}({\mathbf{x}}_n; \boldsymbol{\beta})$ for an arbitrary flag $\phi$, with the coefficients given both as a combinatorial formula (from [Reference Motegi and ScrimshawMS25]) and as a determinant (from Wick's theorem or the LGV lemma).

Next we compute formulas for the multipoint distribution. Using the fact that we must have $G(i, n) \geq G(i+1, n)$, for $\ell$ particles the $k$-point distribution is equivalent to the $\ell$-point distribution. In particular, we consider $1 = i_1 \leq i_2 \leq \cdots \leq i_k$, and we have

$$\begin{align*}\mathsf{P}(G(i_k,n) \leq \lambda_{i_k}, \dotsc, G(i_1,n) \leq \lambda_{i_1}) = \mathsf{P}(G(\ell,n) \leq \lambda_{\ell}, \dotsc, G(1,n) \leq \lambda_1), \end{align*}$$

where $\lambda_j = \lambda_{i_m}$ with $m$ maximal such that $i_m \leq j$. The ordering on $\{i_m\}$ does not lose any generality, but we do require knowledge about the maximum distance the first particle can move. As such, we will use the notation

$$\begin{align*}\mathsf{P}_{\leq,n}(\lambda | \nu) := \mathsf{P}(G(\ell,n) \leq \lambda_{\ell}, \dotsc, G(1,n) \leq \lambda_1 | \mathbf{G}(0) = \nu). \end{align*}$$
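For concreteness, the reduction from the chosen indices to the full constraint vector can be phrased as the following small Python sketch (ours; the function name and example values are purely illustrative).

```python
def full_constraints(ell, idx, bounds):
    """Extend constraints on the chosen particles idx (1 = idx[0] <= idx[1] <= ...)
    with bounds lambda_{idx[m]} to all ell particles: lambda_j = bounds[m] where m is
    maximal with idx[m] <= j."""
    return [bounds[max(m for m in range(len(idx)) if idx[m] <= j)]
            for j in range(1, ell + 1)]

# constraints G(1, n) <= 9, G(3, n) <= 6, G(4, n) <= 2 on ell = 5 particles:
print(full_constraints(5, [1, 3, 4], [9, 6, 2]))   # [9, 9, 6, 2, 2]
```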

Theorem 6.3. For Case A, the multipoint distribution with $\ell $ particles is given by

$$\begin{align*}\mathsf{P}_{\leq,n}(\lambda | \nu) = \boldsymbol{\pi}^{\lambda/\nu} \prod_{i=1}^{\ell} \prod_{j=1}^n (1 - \pi_i x_j) \cdot \det \big[ h_{\lambda_i - \nu_j + j - i}({\mathbf{x}} \sqcup \boldsymbol{\pi}_i^{-1} / \boldsymbol{\pi}_{j-1}^{-1}) \big]_{i,j=1}^{\ell}. \end{align*}$$

Proof. Using (6.2), we compute

$$ \begin{align*} \mathsf{P}_{\leq,n}(\lambda|\nu) & = \prod_{i=1}^{\ell} \prod_{j=1}^n (1 - \pi_i x_j) \sum_{\nu \subseteq \mu \subseteq \lambda} \boldsymbol{\pi}^{\mu / \nu} g_{\mu/\nu}({\mathbf{x}}_n; \boldsymbol{\pi}^{-1}) \\ & = \boldsymbol{\pi}^{\lambda/\nu} \prod_{i=1}^{\ell} \prod_{j=1}^n (1 - \pi_i x_j) \cdot {}_{[\boldsymbol{\pi}^{-1}]} \langle \nu \rvert e^{H({\mathbf{x}}_n)} \lvert \lambda \rangle_{(\boldsymbol{\pi}^{-1})}. \end{align*} $$

Next we apply Wick’s theorem to compute ${}_{[\boldsymbol {\pi }^{-1}]} \langle \nu \rvert e^{H({\mathbf {x}}_n)} \lvert \lambda \rangle _{(\boldsymbol {\pi }^{-1})} = \det [ \langle -\ell \rvert P_j Q_i \lvert -\ell \rangle ]_{i,j=1}^{\ell }$ , where

$$ \begin{align*} P_j & := e^{H(\boldsymbol{\pi}_{j-1}^{-1})} \psi^*_{\nu_j-j}e^{-H(\boldsymbol{\pi}_{j-1}^{-1})} = \sum_{m=0}^{\infty} h_m(\emptyset / \boldsymbol{\pi}_{j-1}^{-1}) \psi^*_{\nu_j-j+m}, \\ Q_i & := e^{H({\mathbf{x}} \sqcup \boldsymbol{\pi}_i^{-1})} \psi_{\lambda_i-i} e^{-H({\mathbf{x}} \sqcup \boldsymbol{\pi}_i^{-1})} = \sum_{k=0}^{\infty} h_k({\mathbf{x}} \sqcup \boldsymbol{\pi}_i^{-1}) \psi_{\lambda_i-i-k}. \end{align*} $$

Therefore, the entries in the determinant are (cf. [Reference IwaoIwa23, Prop. 2.4])

$$ \begin{align*} \langle -\ell \rvert P_j Q_i \lvert -\ell \rangle & = \sum_{m=0}^{\infty} \sum_{k=0}^{\infty} h_m(\emptyset / \boldsymbol{\pi}_{j-1}^{-1}) h_k({\mathbf{x}} \sqcup \boldsymbol{\pi}_i^{-1}) \cdot \langle -\ell \rvert \psi^*_{\nu_j-j+m} \psi_{\lambda_i-i-k} \lvert -\ell \rangle \\ & = \sum_{k=0}^{\infty} h_{\lambda_i - \nu_j + j - i - k}(\emptyset / \boldsymbol{\pi}_{j-1}^{-1}) h_k({\mathbf{x}} \sqcup \boldsymbol{\pi}_i^{-1}) \\ & = h_{\lambda_i - \nu_j + j - i}({\mathbf{x}} \sqcup \boldsymbol{\pi}_i^{-1} / \boldsymbol{\pi}_{j-1}^{-1}), \end{align*} $$

yielding the claim.

Alternative proof.

We give another proof of Theorem 6.3 using the skew Cauchy formula (Theorem 2.4). If we take $\mu = \emptyset $ and the specializations $\boldsymbol {\alpha } = 0$ and ${\mathbf {x}} = \boldsymbol {\pi }^{-1}$ , we obtain

$$\begin{align*}\sum_{\lambda \supseteq \nu} \boldsymbol{\pi}^{\lambda} g_{\lambda/\nu}({\mathbf{y}}; \boldsymbol{\pi}) = \prod_i \frac{1}{1 - \pi_1 y_i} g_{\nu}({\mathbf{x}}; \boldsymbol{\beta}) \boldsymbol{\pi}^{\nu} \end{align*}$$

since $G_{\lambda}(\boldsymbol{\pi}^{-1}; \boldsymbol{\pi}) = \boldsymbol{\pi}^{\lambda}$ by directly examining the bialternant formula [Reference Hwang, Jang, Soo Kim, Song and SongHJK+24, Reference Hwang, Jang, Soo Kim, Song and SongHJK+25] (see also [Reference Fujii, Nobukawa and ShimazakiFNS23] for a description of other ways to verify this identity). The claim follows from the Jacobi–Trudi formula (see, e.g., [Reference Hwang, Jang, Soo Kim, Song and SongHJK+24, Thm. 6.1] or [Reference Iwao, Motegi and ScrimshawIMS24, Thm. 4.1]).

Unless otherwise stated, we will henceforth use the flagging $\phi $ from Proposition 6.1 for our flagged Schur functions. As a special case of Theorem 6.3 using Proposition 6.1, we obtain

$$ \begin{align*} \mathsf{P}_{\leq,n}(\lambda | \emptyset) & = \prod_{i=1}^{\ell} \prod_{j=1}^n (1 - \pi_i x_j) \sum_{\mu \subseteq \lambda} \boldsymbol{\pi}^{\mu} g_{\mu}({\mathbf{x}}_n; \boldsymbol{\pi}^{-1}) = \boldsymbol{\pi}^{\lambda} \prod_{i=1}^{\ell} \prod_{j=1}^n (1 - \pi_i x_j) s_{\lambda,\phi}({\mathbf{x}}_n, \boldsymbol{\pi}_{\ell}^{-1}). \end{align*} $$

Furthermore, this recovers [Reference Motegi and ScrimshawMS25, Thm. 4.26] by taking $\lambda $ to be an $\ell \times m$ rectangle, which allows us to forget about the flagging $\phi $ .

Remark 6.4. We want to compare Theorem 6.3 to [Reference Johansson and RahmanJR22, Thm. 2]. We begin by following [Reference Iwao, Motegi and ScrimshawIMS24, Thm. 4.18] with the substitution $w \mapsto z^{-1}$ to write the entries of the determinant as the integral

$$\begin{align*}h_{\lambda_i-\nu_j-i+j}({\mathbf{x}}_n \sqcup \boldsymbol{\pi}_i^{-1} / \boldsymbol{\pi}_{j-1}^{-1}) = \frac{1}{2\pi\mathbf{i}} \oint_{\gamma_r} \frac{\prod_{k=1}^{j-1} (1 - \pi_k^{-1} z^{-1})}{\prod_{k=1}^i (1 - \pi_k^{-1} z^{-1})} \frac{z^{\lambda_i - \nu_j - i + j}}{\prod_{k=1}^n (1 - x_k z^{-1})} \, \frac{dz}{z}, \end{align*}$$

where $\gamma_r$ is a counterclockwise oriented circle of radius $r > \left\lvert x_1 \right\rvert, \left\lvert \pi_1^{-1} \right\rvert, \left\lvert x_2 \right\rvert, \left\lvert \pi_2^{-1} \right\rvert, \ldots$ and $\mathbf{i} = \sqrt{-1}$. Hence, we can express $\mathsf{P}(G(\ell,n) \leq \lambda_{\ell}, \dotsc, G(1,n) \leq \lambda_1 | \mathbf{G}(0) = \nu) = \det [F_{ij}]_{i,j=1}^{\ell}$, where

(6.4) $$ \begin{align} F_{ij} = \frac{1}{2\pi\mathbf{i}} \oint_{\gamma_r} \frac{(\pi_i z)^{\lambda_i}}{(\pi_j z)^{\nu_j}} \frac{\prod_{k=1}^{j-1} (z - \pi_k^{-1})}{\prod_{k=1}^i (z - \pi_k^{-1})} \prod_{k=1}^n \frac{1 - x_k \pi_i}{1 - x_k z^{-1}} \, dz. \end{align} $$

This is the transpose of the expression in [Reference Johansson and RahmanJR22, Thm. 2] after noting that their requirement is $G(i,n) < \lambda _i + 1$ (this is an equivalent condition since $\lambda _i \in \mathbb {Z}_{\geq 0}$ ) and they use weakly increasing sequences (i.e., the order of the particles is reversed). In particular, we must multiply the integrand by $\prod _{k=1}^{\ell } \frac {z - \pi _k^{-1}}{z - \pi _k^{-1}}$ and reindex the particles and matrix by $j \mapsto \ell + 1 - j$ (so $\pi _j \mapsto \pi _{\ell +1-j}$ as well). A similar transformation shows that [Reference Johansson and RahmanJR22, Thm. 1] is equivalent to the integral formula from [Reference Iwao, Motegi and ScrimshawIMS24, Thm. 4.18] with Theorem 1.1.
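The integral representation of the determinant entries above is easy to test numerically. The following is a short sketch (ours; the function names `coeff` and `contour` and all parameter values are illustrative) comparing a discretized contour integral on a circle of radius $r$ with the coefficient of $z^{-m}$ in the series expansion of the integrand.

```python
import numpy as np

def coeff(m, a, b):
    """Coefficient of t^m in prod_j (1 - b_j t) / prod_i (1 - a_i t), computed as a
    truncated power series; this is the expansion of the integrand in t = z^{-1}."""
    N = m + 1
    series = np.zeros(N)
    series[0] = 1.0
    for ai in a:        # multiply by 1 / (1 - ai t)
        series = np.array([sum(series[k - r] * ai**r for r in range(k + 1)) for k in range(N)])
    for bj in b:        # multiply by (1 - bj t)
        series = series - bj * np.concatenate(([0.0], series[:-1]))
    return series[m]

def contour(m, a, b, r, K=4000):
    """(1/(2 pi i)) oint_{|z|=r} prod_j (1 - b_j/z) / prod_i (1 - a_i/z) z^m dz/z,
    discretized with K equally spaced points on the circle."""
    z = r * np.exp(2j * np.pi * np.arange(K) / K)
    f = np.ones_like(z)
    for bj in b:
        f = f * (1 - bj / z)
    for ai in a:
        f = f / (1 - ai / z)
    return np.mean(f * z**m).real

x = [0.3, 0.2, 0.4]                    # x_1, ..., x_n
pinv = [1 / 2.0, 1 / 1.5, 1 / 1.2]     # pi_1^{-1}, pi_2^{-1}, pi_3^{-1}
i, j, m = 3, 2, 4                      # an entry with lambda_i - nu_j - i + j = m
a = x + pinv[:i]                       # the alphabet x_n sqcup pi_i^{-1}
b = pinv[:j - 1]                       # the alphabet pi_{j-1}^{-1}
print(coeff(m, a, b), contour(m, a, b, r=2.0))   # the two values agree
```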

Example 6.5. Let us consider $\lambda = (2,1)$ and $\nu = \emptyset$ for Case A. Thus we take $\ell = 2$, and we assume that $n \geq 2$. In this computation, we will essentially ignore the normalization factor $C = \prod_{i=1}^{\ell} \prod_{j=1}^n (1 - \pi_i x_j)$, as that factor is common to all of the expressions below. By the definition and Theorem 1.1,

$$ \begin{align*} C^{-1} \cdot \mathsf{P}_{\leq, n}(\lambda | \emptyset) & = C^{-1} \cdot \mathsf{P}(G(2,n) \leq \lambda_2, G(1,n) \leq \lambda_1) \\ & = C^{-1} \cdot \big( \mathsf{P}(\mathbf{G}(n) = (2,1)) + \mathsf{P}(\mathbf{G}(n) = (2)) + \mathsf{P}(\mathbf{G}(n) = (1,1)) \\ & \hspace{40pt} + \mathsf{P}(\mathbf{G}(n) = (1)) + \mathsf{P}(\mathbf{G}(n) = \emptyset) \big) \\ & = \pi_1^2 \pi_2 g_{21}({\mathbf{x}}_n; \boldsymbol{\pi}^{-1}) + \pi_1^2 g_{2}({\mathbf{x}}_n; \boldsymbol{\pi}^{-1}) + \pi_1 \pi_2 g_{11}({\mathbf{x}}_n; \boldsymbol{\pi}^{-1}) \\ & \hspace{20pt} + \pi_1 g_{1}({\mathbf{x}}_n; \boldsymbol{\pi}^{-1}) + g_{\emptyset}({\mathbf{x}}_n; \boldsymbol{\pi}^{-1}) \\ & = \pi_1^2 \pi_2 (s_{21} + \pi_1^{-1} s_2) + \pi_1^2 s_2 + \pi_1 \pi_2 (s_{11} + \pi_1^{-1} s_1) + \pi_1 s_1 + 1 \\ & = \pi_1^2 \pi_2 s_{21} + (\pi_1 \pi_2 + \pi_1^2) s_2 + \pi_1 \pi_2 s_{11} + (\pi_1 + \pi_2) s_1 + 1, \end{align*} $$

where we have written $s_{\lambda } = s_{\lambda }({\mathbf {x}}_n)$ for brevity. Next, by the branching rule for flagged Schur functions, we see that

$$ \begin{align*} \pi_1^2 \pi_2 s_{21,\phi}({\mathbf{x}}_n, \boldsymbol{\pi}_2^{-1}) & = \pi_1^2 \pi_2 \big( s_{21}({\mathbf{x}}_n) s_{21/21,\phi}(\boldsymbol{\pi}_2^{-1}) + s_{2}({\mathbf{x}}_n) s_{21/2,\phi}(\boldsymbol{\pi}_2^{-1}) + s_{11}({\mathbf{x}}_n) s_{21/11,\phi}(\boldsymbol{\pi}_2^{-1}) \\ & \hspace{40pt} + s_{1}({\mathbf{x}}_n) s_{21/1,\phi}(\boldsymbol{\pi}_2^{-1}) + s_{\emptyset}({\mathbf{x}}_n) s_{21/\emptyset,\phi}(\boldsymbol{\pi}_2^{-1}) \big) \\ & = \pi_1^2 \pi_2 \big( s_{21} + (\pi_1^{-1} + \pi_2^{-1}) s_{2} + \pi_1^{-1}s_{11} + (\pi_1^{-2} + \pi_1^{-1} \pi_2^{-1}) s_{1} + \pi_1^{-2} \pi_2^{-1} \big) \\ & = \pi_1^2 \pi_2 s_{21} + (\pi_1 \pi_2 + \pi_1^2) s_2 + \pi_1 \pi_2 s_{11} + (\pi_1 + \pi_2) s_1 + 1. \end{align*} $$

Finally, let us compute the determinant from Theorem 6.3:

$$ \begin{align*} \pi_1^2 \pi_2 \det \begin{bmatrix} h_2({\mathbf{x}}, \pi_1^{-1}) & h_3({\mathbf{x}}) \\ h_0({\mathbf{x}}, \pi_1^{-1}, \pi_2^{-1}) & h_1({\mathbf{x}}, \pi_2^{-1}) \end{bmatrix} & = \pi_1^2 \pi_2 ( h_2 + \pi_1^{-1} h_1 + \pi_1^{-2}) \cdot (h_1 + \pi_2^{-1}) - \pi_1^2 \pi_2 h_3 \cdot 1 \\ & = \pi_1^2 \pi_2 (h_{21} - h_3) + \pi_1 \pi_2 h_{11} + (\pi_2 + \pi_1) h_1 + \pi_1^2 h_2 + 1 \\ & = \pi_1^2 \pi_2 s_{21} + (\pi_1 \pi_2 + \pi_1^2) s_2 + \pi_1 \pi_2 s_{11} + (\pi_2 + \pi_1) s_1 + 1. \end{align*} $$
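As an independent check of the above computation (our own illustration, not part of the original example), the following sympy sketch verifies the displayed identity with $n = 3$ variables; a brute-force complete homogeneous polynomial $h_k$ and the Jacobi–Trudi expressions for the Schur polynomials are the only ingredients.

```python
from itertools import combinations_with_replacement
from sympy import symbols, expand, simplify

x1, x2, x3, p1, p2 = symbols('x1 x2 x3 p1 p2')
xs = [x1, x2, x3]

def h(k, variables):
    """Complete homogeneous symmetric polynomial h_k in the given variables."""
    if k < 0:
        return 0
    total = 0
    for combo in combinations_with_replacement(variables, k):
        term = 1
        for v in combo:
            term *= v
        total += term
    return total

# Schur polynomials in x1, x2, x3 via Jacobi--Trudi
s1, s2 = h(1, xs), h(2, xs)
s11 = h(1, xs)**2 - h(2, xs)
s21 = h(2, xs) * h(1, xs) - h(3, xs)

# determinant from Theorem 6.3 for lambda = (2, 1) and nu = emptyset
det = h(2, xs + [1/p1]) * h(1, xs + [1/p2]) - h(3, xs) * h(0, xs + [1/p1, 1/p2])
lhs = p1**2 * p2 * det

# the Schur expansion obtained in the example
rhs = p1**2*p2*s21 + (p1*p2 + p1**2)*s2 + p1*p2*s11 + (p1 + p2)*s1 + 1

print(simplify(expand(lhs - rhs)))   # 0
```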

Theorem 6.6. For Case D, the multipoint distribution with $\ell $ particles is given by

$$\begin{align*}\mathsf{P}_{\leq,n}(\lambda | \nu) = \boldsymbol{\rho}^{\lambda/\nu} \prod_{i=1}^{\ell} \prod_{j=1}^n (1 - \rho_i x_j)^{-1} \cdot \det \big[ e_{\lambda_i - \nu_j + j - i}({\mathbf{x}} \sqcup -\boldsymbol{\rho}_{j-1}^{-1} / {-\boldsymbol{\rho}_i^{-1}}) \big]_{i,j=1}^{\ell}. \end{align*}$$

Proof. Analogous to the proof of Theorem 6.3 except we are computing ${}_{[\boldsymbol {\rho }^{-1}]} \langle \nu \rvert e^{J({\mathbf {x}}_n)} \lvert \lambda \rangle _{(\boldsymbol {\rho }^{-1})}$ and use (2.8) to obtain $j_{\lambda /\nu }({\mathbf {x}}; \boldsymbol {\rho }^{-1})$ .

Example 6.7. For this example, we consider the Case D TASEP. Consider $\lambda = (2, 1, 1)$ and $\nu = (1)$, and so we take $\ell = 3$ and $n \geq 3$. Note that $\lambda' = (3,1)$ and $\nu' = (1)$. We will (again) ignore the normalization factor $C = \prod_{i=1}^{\ell} \prod_{j=1}^n (1 - \rho_i x_j)^{-1}$ as it is clearly present in all of the formulas. By the definition and Theorem 1.1, we have

$$ \begin{align*} C^{-1} \cdot \mathsf{P}_{\leq, n}(\lambda | \nu) & = \rho_1 \rho_2 \rho_3 j_{31/1}({\mathbf{x}}; \boldsymbol{\rho}^{-1}) + \rho_1 \rho_2 j_{21/1}({\mathbf{x}}; \boldsymbol{\rho}^{-1}) + \rho_2 \rho_3 j_{3/1}({\mathbf{x}}; \boldsymbol{\rho}^{-1}) \\ & \hspace{20pt} + \rho_2 j_{2/1}({\mathbf{x}}; \boldsymbol{\rho}^{-1}) + \rho_1 j_{11/1}({\mathbf{x}}; \boldsymbol{\rho}^{-1}) + j_{1/1}({\mathbf{x}}; \boldsymbol{\rho}^{-1}) \\ & = \rho_1 \rho_2 \rho_3 (s_3 + s_{21} + \rho_2^{-1} s_2 + \rho_2^{-1} s_{11}) + \rho_1 \rho_2 (s_2 + s_{11}) \\ & \hspace{20pt} + \rho_2 \rho_3 (s_2 + \rho_2^{-1} s_1) + \rho_2 s_1 + \rho_1 s_1 + 1 \\ & = \rho_1 \rho_2 \rho_3 s_3 + \rho_1 \rho_2 \rho_3 s_{21} + (\rho_1 \rho_2 + \rho_1 \rho_3 + \rho_2 \rho_3) s_2 \\ & \hspace{20pt} + (\rho_1 \rho_2 + \rho_1 \rho_3) s_{11} + (\rho_1 + \rho_2 + \rho_3) s_1 + 1. \end{align*} $$

The determinant formula from Theorem 6.6 yields

$$ \begin{align*} & \rho_1 \rho_2 \rho_3 \det \begin{bmatrix} e_1({\mathbf{x}} / (-\rho_1^{-1})) & e_3({\mathbf{x}}) & e_4({\mathbf{x}}, -\rho_2^{-1}) \\ e_{-1}({\mathbf{x}} / {-\boldsymbol{\rho}_2^{-1}}) & e_1({\mathbf{x}} / (-\rho_2^{-1})) & e_2({\mathbf{x}}) \\ e_{-2}({\mathbf{x}} / {-\boldsymbol{\rho}_3^{-1}}) & e_0({\mathbf{x}} / (-\rho_2^{-1}, -\rho_3^{-1})) & e_1({\mathbf{x}} / (-\rho_3^{-1})) \\ \end{bmatrix} \\ & \qquad = \rho_1 \rho_2 \rho_3 \det \begin{bmatrix} e_1 + \rho_1^{-1} & e_3 & e_4 - \rho_2^{-1} e_3 \\ 0 & e_1 + \rho_2^{-1} & e_2 \\ 0 & 1 & e_1 + \rho_3^{-1} \\ \end{bmatrix} \\ & \qquad = \rho_1 \rho_2 \rho_3 (e_1 + \rho_1^{-1}) (e_1 + \rho_2^{-1}) (e_1 + \rho_3^{-1}) - \rho_1 \rho_2 \rho_3 e_2 (e_1 + \rho_1^{-1}) \cdot 1 \\ & \qquad = \rho_1 \rho_2 \rho_3 e_{111} + (\rho_1 \rho_2 + \rho_1 \rho_3 + \rho_2 \rho_3) e_{11} + (\rho_1 + \rho_2 + \rho_3) e_1 + 1 - \rho_1 \rho_2 \rho_3 e_{21} - \rho_2 \rho_3 e_2 \\ & \qquad = \rho_1 \rho_2 \rho_3 (s_3 + 2s_{21} + s_{111}) + (\rho_1 \rho_2 + \rho_1 \rho_3 + \rho_2 \rho_3) (s_2 + s_{11}) + (\rho_1 + \rho_2 + \rho_3) s_1 + 1 \\ & \qquad \hspace{20pt} - \rho_1 \rho_2 \rho_3 (s_{21} + s_{111}) - \rho_2 \rho_3 s_{11} \\ & \qquad = \rho_1 \rho_2 \rho_3 s_3 + \rho_1 \rho_2 \rho_3 s_{21} + (\rho_1 \rho_2 + \rho_1 \rho_3 + \rho_2 \rho_3) s_2 \\ & \qquad \hspace{20pt} + (\rho_1 \rho_2 + \rho_1 \rho_3) s_{11} + (\rho_1 + \rho_2 + \rho_3) s_1 + 1. \end{align*} $$

Analogously to Equation (6.4), in Case D we have $\mathsf {P}_{\leq ,n}(\lambda | \nu ) = \det [F_{ij}]_{i,j=1}^{\ell }$ with

(6.5) $$ \begin{align} F_{ij} = \frac{1}{2\pi\mathbf{i}} \oint_{\gamma_r} \frac{(\rho_i z)^{\lambda_i}}{(\rho_j z)^{\nu_j}} \frac{\prod_{k=1}^{j-1} (z - \rho_k^{-1})}{\prod_{k=1}^i (z - \rho_k^{-1})} \prod_{k=1}^n \frac{1 - x_k z^{-1}}{1 - x_k \rho_i} \, dz. \end{align} $$

6.2 Blocking

Next, let us consider Case C, and recall that $\beta _i = \pi _{i+1}$ . We begin with some preparatory formulas.

Note that $g_{\lambda }(\pi _1; \boldsymbol {\beta }) = \boldsymbol {\pi }^{\lambda }$ from the combinatorial description. Taking the skew Cauchy formula (Theorem 2.4) with the specializations $\boldsymbol {\alpha } = 0$ and ${\mathbf {y}} = \boldsymbol {\pi }_1$ , we obtain

(6.6) $$ \begin{align} \sum_{\lambda \supseteq \nu, \mu} G_{\lambda/\!\!/\mu}({\mathbf{x}}; \boldsymbol{\beta}) \boldsymbol{\pi}^{\lambda/\nu} = \prod_i \frac{1}{1 - \pi_1 x_i} \sum_{\eta \subseteq \nu \cap \mu} G_{\nu/\!\!/\eta}({\mathbf{x}}; \boldsymbol{\beta}) \boldsymbol{\pi}^{\mu/\eta}. \end{align} $$

We will use the notation

$$\begin{align*}\mathsf{P}_{\geq,n}(\nu | \mu) := \mathsf{P}(G(\ell,n) \geq \nu_{\ell}, \dotsc, G(1,n) \geq \nu_1 | \mathbf{G}(0) = \mu). \end{align*}$$

Using Equation (6.6), we obtain an expression for the multipoint distribution for Case C as

(6.7a) $$ \begin{align} \mathsf{P}_{\geq,n}(\nu | \mu) & = \prod_{j=1}^n (1 - \pi_1 x_j) \sum _{\lambda \supseteq \nu,\mu} \boldsymbol{\pi}^{\lambda/\mu} G_{\lambda /\!\!/ \mu}({\mathbf{x}}_n; \boldsymbol{\beta}) \end{align} $$
(6.7b) $$ \begin{align} & = \sum _{\eta \subseteq \nu \cap \mu} \boldsymbol{\pi}^{\nu/\eta} G_{\nu /\!\!/ \eta}({\mathbf{x}}_n; \boldsymbol{\beta}). \end{align} $$

Note that if $\nu = \mu $ , then we obtain a Cauchy–Littlewood type identity with Grothendieck polynomials from the first equality (6.7a) since the total probability is $1$ . That is, we have

(6.8) $$ \begin{align} \sum_{\lambda} G_{\lambda /\!\!/ \mu}({\mathbf{x}}_n; \boldsymbol{\beta}) \boldsymbol{\pi}^{\lambda} = \prod_{i=1}^n \frac{1}{1 - \pi_1 x_i} \boldsymbol{\pi}^{\mu}, \end{align} $$

which is also the skew Pieri rule [Reference Iwao, Motegi and ScrimshawIMS24, Eq. (4.7a)] (which refines [Reference YeliussizovYel19, Thm. 7.10]) with $\nu = \emptyset$ and the same specializations $\boldsymbol{\alpha} = 0$ and ${\mathbf{y}} = \boldsymbol{\pi}_1$ that we used for the skew Cauchy formula. Another special case is when $\mu = \emptyset$, where we obtain a single $\boldsymbol{\pi}^{\nu} G_{\nu}({\mathbf{x}}_n; \boldsymbol{\beta})$ in (6.7b). This can be expressed as a determinant by the Jacobi–Trudi formula [Reference Iwao, Motegi and ScrimshawIMS24, Thm. 4.1]. Moreover, this is just the specialization $\boldsymbol{\alpha} = 0$ and ${\mathbf{y}} = \boldsymbol{\pi}_1$ of the skew Pieri formula [Reference Iwao, Motegi and ScrimshawIMS24, Eq. (4.7b)].

Noting that we have used the skew Cauchy identity in computing (6.7b), we want to evaluate

(6.9) $$ \begin{align} {}^{[\boldsymbol{\beta}]} \langle \mu \rvert e^{H^*(\pi_1)} e^{H({\mathbf{x}}_n)} \lvert \nu \rangle^{[\boldsymbol{\beta}]} = \prod_{i=1}^{\ell} \prod_{j=1}^n \frac{1}{1-\beta_i x_j} \cdot {}^{[\boldsymbol{\beta}]} \langle \mu \rvert e^{H^*(\pi_1 / \boldsymbol{\beta}_{\ell})} e^{H({\mathbf{x}}_n)} e^{H^*(\boldsymbol{\beta}_{\ell})} \lvert \nu \rangle^{[\boldsymbol{\beta}]}. \end{align} $$

By applying Wick’s theorem similar to the proof of Theorem 6.3 (cf. the proof of [Reference Iwao, Motegi and ScrimshawIMS24, Thm. 4.1]), we obtain the following formula for the general case.

Theorem 6.8. For Case C, the multipoint distribution with $\ell $ particles is given by

$$\begin{align*}\mathsf{P}_{\geq,n}(\nu|\mu) = \prod_{j=2}^{\ell} \prod_{i=1}^n (1-\pi_j x_i)^{-1} \det \big[ h_{\nu_i - \mu_j - i + j}\big({\mathbf{x}} /\!\!/ (\boldsymbol{\pi}_i / \boldsymbol{\beta}_j) \big) \big]_{i,j=1}^{\ell}. \end{align*}$$

We remark that the left-hand side of (6.9) is using the vector obtained by applying the $*$ anti-involution to (6.2):

$$\begin{align*}{}^{(\boldsymbol{\pi})} \langle \nu \rvert := \sum_{\lambda \subseteq \nu} \boldsymbol{\pi}^{\nu/\lambda} \cdot {}^{[\boldsymbol{\pi}]} \langle \lambda \rvert. \end{align*}$$

However, our computation for Theorem 6.8 is not simply the $*$ version of Proposition 6.1 as we need to take the pairing with $\lvert \mu \rangle ^{[\boldsymbol {\beta }]}$ , not $\lvert \mu \rangle ^{[\boldsymbol {\pi }]}$ .

We also provide another determinant formula for the multipoint distribution by using a standard probability-theoretic computation that sums determinants whose matrix elements are given by contour integrals.

Theorem 6.9. For Case C, the multipoint distribution with $\ell $ particles is given by

$$ \begin{gather*} \mathsf{P}_{\ge,n}(\nu|\mu) = \prod_{i=1}^\ell \prod_{j=1}^n (1-\pi_i x_j) \boldsymbol{\pi}^{\nu/\mu} \det [ C_{ij} ]_{i,j=1}^\ell, \\ \text{where } C_{ij} = \begin{cases} \displaystyle \oint_{\widetilde{\gamma}_r} \frac{1 }{ (1-\beta_1 w^{-1}) \prod_{k=1}^{j-1} (1-\beta_k w^{-1}) \prod_{m=1}^n (1-x_m w) w^{\nu_1-\mu_j-1+j} } \frac{dw}{2 \pi \mathbf{i} w} & \text{if } i = 1, \\[15pt] \displaystyle \oint_{\widetilde{\gamma}_r} \frac{\prod_{k=1}^{i-2} (1-\beta_k w^{-1}) }{ \prod_{k=1}^{j-1} (1-\beta_k w^{-1}) \prod_{m=1}^n (1-x_m w) w^{\nu_i-\mu_j-i+j} } \frac{dw}{2 \pi \mathbf{i} w} & \text{if } i \ge 2, \end{cases} \end{gather*} $$

with the contour $\widetilde {\gamma }_r$ being a circle centered at the origin with radius r satisfying $0 < r < \left \lvert x_m^{-1} \right \rvert $ for $m = 1, \dotsc , n$ and $r> \left \lvert \beta _1 \right \rvert , \left \lvert \beta _2 \right \rvert , \ldots $ .

Proof. Using Theorem 1.1 and the integral formula [Reference Iwao, Motegi and ScrimshawIMS24, Thm. 4.19], we write the multipoint distribution as

$$ \begin{align*} \mathsf{P}_{\ge,n}(\nu|\mu) & = \prod_{i=1}^\ell \prod_{j=1}^n (1-\pi_i x_j) \sum_{\lambda_1=\nu_1}^\infty \sum_{\lambda_2=\nu_2}^{\lambda_1} \cdots \sum_{\lambda_\ell=\nu_\ell}^{\lambda_{\ell-1}} \boldsymbol{\pi}^{-\mu} \\ & \hspace{20pt} \times \det \Bigg[ \oint_{\widetilde{\gamma}_r} \frac{\pi_i^{\lambda_i} \prod_{k=1}^{i-1} (1-\beta_k w^{-1}) }{ \prod_{k=1}^{j-1} (1-\beta_k w^{-1}) \prod_{m=1}^n (1-x_m w) w^{\lambda_i-\mu_j-i+j} } \frac{dw}{2 \pi \mathbf{i} w} \Bigg]_{i,j=1}^\ell. \end{align*} $$

Inserting the sum $\displaystyle \sum _{\lambda _\ell =\nu _\ell }^{\lambda _{\ell -1}}$ into the $\ell $ -th row, the matrix element in the j-th column becomes

(6.10) $$ \begin{align} \sum_{\lambda_\ell=\nu_\ell}^{\lambda_{\ell-1}} \oint_{\widetilde{\gamma}_r} & \frac{\pi_\ell^{\lambda_\ell} \prod_{k=1}^{\ell-1} (1-\beta_k w^{-1}) }{ \prod_{k=1}^{j-1} (1-\beta_k w^{-1}) \prod_{m=1}^n (1-x_m w) w^{\lambda_\ell-\mu_j-\ell+j} } \frac{dw}{2 \pi \mathbf{i} w} \nonumber \\ = & \oint_{\widetilde{\gamma}_r} \frac{\pi_\ell^{\nu_\ell} \prod_{k=1}^{\ell-2} (1-\beta_k w^{-1}) }{ \prod_{k=1}^{j-1} (1-\beta_k w^{-1}) \prod_{m=1}^n (1-x_m w) w^{\nu_\ell-\mu_j-\ell+j} } \frac{dw}{2 \pi \mathbf{i} w} \nonumber \\ & \hspace{20pt} - \oint_{\widetilde{\gamma}_r} \frac{\pi_\ell^{\lambda_{\ell-1}+1} \prod_{k=1}^{\ell-2} (1-\beta_k w^{-1}) }{ \prod_{k=1}^{j-1} (1-\beta_k w^{-1}) \prod_{m=1}^n (1-x_m w) w^{\lambda_{\ell-1}-\mu_j-(\ell-1)+j} } \frac{dw}{2 \pi \mathbf{i} w}. \end{align} $$

The second term in (6.10) can be eliminated using the $(\ell -1)$ -th row; hence, the matrix elements in the $\ell $ -th row after performing the first sum can be written as

$$\begin{align*}\oint_{\widetilde{\gamma}_r} \frac{\pi_\ell^{\nu_\ell} \prod_{k=1}^{\ell-2} (1-\beta_k w^{-1}) }{ \prod_{k=1}^{j-1} (1-\beta_k w^{-1}) \prod_{m=1}^n (1-x_m w) w^{\nu_\ell-\mu_j-\ell+j} } \frac{dw}{2 \pi \mathbf{i} w}. \end{align*}$$

Iterating this process, we obtain our claim.

For Case B, we again apply the $\omega $ involution, which replaces $e^{H({\mathbf {x}}_n)}$ with $e^{J({\mathbf {x}}_n)}$ , but otherwise the proof is similar (compare the proofs of Theorem 6.3 and Theorem 6.6). Therefore, we obtain the following.

Theorem 6.10. For Case B, the multipoint distribution with $\ell $ particles is given by

$$\begin{align*}\mathsf{P}_{\geq,n}(\nu|\mu) = \prod_{i=2}^{\ell} \prod_{j=1}^n (1-\rho_i x_j) \det \big[ e_{\nu_i - \mu_j - i + j}({\mathbf{x}}_n /\!\!/ ({-\boldsymbol{\rho}_j}/{-\boldsymbol{\rho}_i})) \big]_{i,j=1}^{\ell}. \end{align*}$$

In both of these cases, we can obtain contour integral formulas for the matrix entries of the multipoint distribution determinants by following [Reference Iwao, Motegi and ScrimshawIMS24, Thm. 4.19] (see also Remark 6.4).

7 Continuous time limit

In this section, we will examine the continuous time limit of these processes. In order to do this, we will take the geometric jumping with ${\mathbf{x}}_{\lfloor t/p \rfloor} = p$ as $p \to 0$, which takes the geometric distribution with rate $\pi_j x_i$ to an exponential distribution with rate $\pi_j$. As an example, compare Figure 3 with Figure 2. From the other integral formulas in [Reference Iwao, Motegi and ScrimshawIMS24, Sec. 4.8], we can see that the Bernoulli jumping results in the same limit as the geometric jumping by Theorem 1.1, noting that the positions of the particles (in the bosonic form) are given by the conjugate shapes. Therefore, we only consider the geometric jumping cases; that is, we only consider Case A and Case C.

Figure 2 Samples of the continuous time limit TASEP with $\ell = 500$ particles at time $t = 500$ with rate $\boldsymbol {\pi } = 1$ under the blocking (left) and pushing (right) behavior.

Figure 3 Samples of TASEP with $\ell = 500$ particles with $\boldsymbol {\pi } = 1$ and ${\mathbf {x}} = p = 0.01$ with $n = \lfloor 500 / p \rfloor $ under the blocking behavior (left) and pushing behavior (right).
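For readers who wish to reproduce pictures like Figure 3, the following is a minimal simulation sketch (ours, not code from the paper) for the Case C blocking behavior with the parameters of the caption of Figure 3; since the blocking bound uses the positions from the start of the time step, the particle update order plays no role in the code.

```python
import numpy as np

rng = np.random.default_rng(0)

ell, p, t = 500, 0.01, 500.0          # particles, x = p, target (continuous) time
n = int(t / p)                        # number of discrete time steps
pi = np.ones(ell)                     # particle rates pi_j (all equal to 1 here)

# step initial condition: the j-th particle starts at site -j
pos = -np.arange(1, ell + 1)

for _ in range(n):
    prev = pos.copy()                             # positions at the start of the step
    w = rng.geometric(1 - pi * p) - 1             # P(w = k) = (1 - pi_j p)(pi_j p)^k
    pos = pos + w
    # blocking: the j-th particle cannot pass the previous position of the (j-1)-th
    pos[1:] = np.minimum(pos[1:], prev[:-1] - 1)

print(pos[:10])                       # the rightmost particles at time ~ t
```

The pushing behavior (Case A) would only change the last update so that smaller-indexed particles get pushed forward instead of blocking.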

7.1 Blocking

We will consider a continuous-time Markov process where the particles have independent exponential clocks such that the $j$-th particle's clock has rate $\pi_j$, and when the clock rings, if the $(j-1)$-th particle is not at the same site, then the $j$-th particle jumps one step to the right. We take the limit $p \to 0$ in the integral formula [Reference Iwao, Motegi and ScrimshawIMS24, Thm. 4.19], and using the classical formula $e^x = \lim_{n\to\infty} (1 + x/n)^n$, we obtain the following.

Corollary 7.1. The continuous time limit of Case C, for $t \in (0, \infty )$ , is

$$\begin{align*}\mathsf{P}_C(\mathbf{G}(t) = \lambda| \mathbf{G}(0) = \mu) = \prod_{j=1}^{\ell} e^{-\pi_j t} \boldsymbol{\pi}^{\lambda/\mu} \det\left[ \oint_{\widetilde{\gamma}_r} \frac{\prod_{k=1}^{i-1} (1 - \beta_k w^{-1}) e^{tw}}{\prod_{k=1}^{j-1} (1 - \beta_k w^{-1}) w^{(\lambda_i - i) - (\mu_j - j)}} \frac{dw}{2\pi\mathbf{i} w} \right]_{i,j=1}^{\ell}, \end{align*}$$

where the contour $\widetilde {\gamma }_r$ is a circle centered at the origin with radius r satisfying $r> \left \lvert \beta _1 \right \rvert , \left \lvert \beta _2 \right \rvert , \ldots $ .
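As a quick sanity check (ours, not part of the original text): for a single particle ($\ell = 1$) the determinant in Corollary 7.1 is a single contour integral, and the formula reduces to the Poisson law $e^{-\pi_1 t}(\pi_1 t)^m / m!$ for the displacement $m = \lambda_1 - \mu_1$, as expected for a free particle with an exponential clock of rate $\pi_1$. The sketch below (function name ours) evaluates the contour integral numerically.

```python
import math
import numpy as np

def single_particle_prob(m, pi1, t, r=1.0, K=2000):
    """The ell = 1 case of Corollary 7.1:
    e^{-pi_1 t} pi_1^m (1/(2 pi i)) oint e^{tw} w^{-m} dw/w over |w| = r."""
    w = r * np.exp(2j * np.pi * np.arange(K) / K)
    integral = np.mean(np.exp(t * w) / w**m).real
    return math.exp(-pi1 * t) * pi1**m * integral

pi1, t = 0.7, 3.0
for m in range(6):
    print(m, single_particle_prob(m, pi1, t),
          math.exp(-pi1 * t) * (pi1 * t)**m / math.factorial(m))   # Poisson comparison
```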

In Corollary 7.1, if we substitute $w^{-1} = z$, note that the $j$-th particle in the fermionic positioning is at $\nu_j - j$ for the bosonic positions $\nu$, reindex the particles $j \mapsto \ell + 1 - j$ (as in Remark 6.4), and take the transpose of the determinant, then we recover [Reference Rákos and SchützRS06, Thm. 1]. Furthermore, by taking the same limit of the multipoint distribution from Theorem 6.9, we recover [Reference Rákos and SchützRS06, Cor.]. On the other hand, these satisfy the corresponding TASEP Kolmogorov equation [Reference Rákos and SchützRS06, Eq. (3)] from Theorem 1.1 and the boundary conditions [Reference Rákos and SchützRS06, Eq. (2)] as expected:

(7.1) $$ \begin{align} \frac{d}{dt} \mathsf{P}_C(\mathbf{G}(t)=\lambda | \mathbf{G}(0)=\mu) &= \sum_{s=1}^{\ell} \pi_s \mathsf{P}_C(\mathbf{G}(t)=\lambda-\epsilon_s | \mathbf{G}(0)=\mu) \nonumber\\ &\quad - \sum_{s=1}^{\ell} \pi_s \mathsf{P}_C(\mathbf{G}(t)=\lambda | \mathbf{G}(0)=\mu), \end{align} $$
(7.2) $$ \begin{align} \pi_s \mathsf{P}_C(\mathbf{G}(t)=\lambda-\epsilon_s | \mathbf{G}(0)=\mu) &= \pi_{s+1} \mathsf{P}_C(\mathbf{G}(t)=\lambda | \mathbf{G}(0)=\mu) \qquad \text{if } \lambda_s=\lambda_{s+1}. \end{align} $$

See the arXiv version [Reference Iwao, Motegi and ScrimshawIMS23, Thm. 7.2] for more details.

7.2 Pushing

We will consider a continuous-time Markov process where the particles have independent exponential clocks such that the $j$-th particle's clock has rate $\pi_j$, and when the clock rings, all particles $j' \leq j$ at the site of the $j$-th particle move one step to the right. We can similarly describe the transition probability as a determinant of contour integrals for the pushing behavior.

Corollary 7.2. The continuous time limit of Case A, for $t \in (0, \infty )$ , is

$$\begin{align*}\mathsf{P}_A(\mathbf{G}(t) = \lambda| \mathbf{G}(0) = \mu) = \prod_{j=1}^{\ell} e^{-\pi_j t} \boldsymbol{\pi}^{\lambda/\mu} \det\left[ \oint_{\gamma_r} \frac{\prod_{k=1}^{j-1} (1 - \pi_k^{-1} w) e^{tw}}{\prod_{k=1}^{i-1} (1 - \pi_k^{-1} w) w^{(\lambda_i - i) - (\mu_j - j)}} \frac{dw}{2\pi\mathbf{i} w} \right]_{i,j=1}^{\ell}. \end{align*}$$

By the same computations as in the blocking behavior case, we can show that the continuous time limit of Case A also satisfies the Kolmogorov equation [Reference Rákos and SchützRS06, Eq. (3)], but the boundary conditions are different due to the pushing behavior:

$$ \begin{align*} \pi_s \mathsf{P}_A(\mathbf{G}(t) = \lambda + \epsilon_{s+1} | \mathbf{G}(0) = \mu) = \pi_{s+1} \mathsf{P}_A(\mathbf{G}(t) = \lambda | \mathbf{G}(0) = \mu) \qquad \text{ if } \lambda_s = \lambda_{s+1}. \end{align*} $$

8 Canonical particle process

The goal of this section is to describe the particle process whose transition kernel naturally uses the canonical Grothendieck polynomials. We will start by explicitly defining the stochastic process, and then we will show how to interpret it using the noncommutative operators $\mathbf {U}^{(\boldsymbol {\alpha },\boldsymbol {\beta })}$ (in contrast to Section 4).

Recall that $G(j, i)$ denotes the position of the $j$-th particle at time $i$. The positions of the particles are defined recursively by the formula

(8.1) $$ \begin{align}G(j,i) = \min\big( G(j,i-1) + w_{ji}, G(j-1,i-1) \big), \end{align} $$

where by convention $G(0,i-1) := \infty$, and where the random variable $w_{ji}$, which now depends on $G(j, i-1)$, is distributed according to

(8.2) $$ \begin{align} \mathsf{P}_{\mathcal{G}}(w_{ji} = m' \mid G(j,i-1) = m) := \frac{1 - \pi_j x_i}{1 + \alpha_{m+m'} x_i} \prod_{k=m}^{m+m'-1} \frac{(\alpha_k + \pi_j) x_i}{1 + \alpha_k x_i}. \end{align} $$

In other words, the j-th particle at time i attempts to jump $w_{ji}$ steps, but can be blocked by the $(j-1)$ -th particle, which updates its position after the j-th particle moves.

Let us digress slightly on why (8.2) is called an inhomogeneous geometric distribution. We can realize it as the waiting time for a failure in a sequence of Bernoulli variables (i.e., weighted coin flips), where the $k$-th trial has success probability $(\alpha_k + \pi_j) x_i (1 + \alpha_k x_i)^{-1}$. Indeed, we note that the probability of a failure is

$$\begin{align*}1 - \frac{\alpha_k x_i + \pi_j x_i}{1 + \alpha_k x_i} = \frac{1 - \pi_j x_i}{1 + \alpha_k x_i}. \end{align*}$$

Hence, this gives us a sampling algorithm for the distribution $\mathsf {P}_{\mathcal {G}}$ . We illustrate the effectiveness of this sampling in Figure 4. This perspective also allows us to easily see that we have a probability measure on $\mathbb {Z}_{\geq m}$ for any fixed m. The case when $\boldsymbol {\pi } = 0$ can also be seen as a projection of the Warren–Windridge dynamics [Reference Warren and WindridgeWW09]; see also [Reference AssiotisAss23, Sec. 2.2].

Figure 4 A sampling using $10000$ samples of the modified geometric distribution $\mathsf{P}_{\mathcal{G}}$ for $x_i = 1$, $\pi_j = 0.5$, and $\alpha_k = 1 - k e^{-k/2}$ (blue), together with the exact distribution (red) and the usual geometric distribution (green).
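To make the sampling algorithm completely explicit, the following is a minimal sketch (ours; the function names are illustrative) that draws from $\mathsf{P}_{\mathcal{G}}$ by running the position-dependent Bernoulli trials described above with the parameters of Figure 4, compares the empirical frequencies with the exact probabilities (8.2), and also performs the one-step update (8.1).

```python
import math
import random
from collections import Counter

x, pi = 1.0, 0.5
alpha = lambda k: 1 - k * math.exp(-k / 2)       # position-dependent parameters of Figure 4

def sample_w(m):
    """Sample w from P_G given the current position m: run Bernoulli trials at positions
    m, m+1, ... with success probability (alpha_k + pi) x / (1 + alpha_k x); return the
    number of successes before the first failure."""
    w = 0
    while random.random() < (alpha(m + w) + pi) * x / (1 + alpha(m + w) * x):
        w += 1
    return w

def exact_prob(m, w):
    """The probability (8.2) of jumping w steps from position m."""
    p = (1 - pi * x) / (1 + alpha(m + w) * x)
    for k in range(m, m + w):
        p *= (alpha(k) + pi) * x / (1 + alpha(k) * x)
    return p

def blocked_update(pos_j, pos_prev):
    """One step of (8.1): attempt the jump, then get blocked by the (j-1)-th particle."""
    return min(pos_j + sample_w(pos_j), pos_prev)

random.seed(0)
m = 0                                            # starting position
counts = Counter(sample_w(m) for _ in range(10000))
for w in range(8):
    print(w, counts[w] / 10000, exact_prob(m, w))
```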

We will give some remarks on the meaning of the $\boldsymbol{\alpha}$ parameters. From the behavior of the operators $\mathbf{U}^{(\boldsymbol{\alpha},\boldsymbol{\beta})}$, it would be tempting to consider the $\boldsymbol{\alpha}$ parameters as viscosity, but for $\boldsymbol{\alpha} > 0$, we have $\mathsf{P}_{\mathcal{G}}(w_{ji} \geq k) > \mathsf{P}_{Ge}(w_{ji} \geq k)$ for all $k \geq 1$. Thus, in this case, the $\boldsymbol{\alpha}$ parameters act as a current being applied to the system, the strength (and direction) of which can vary at each position. On the other hand, when $\boldsymbol{\alpha} < 0$, we have $\mathsf{P}_{\mathcal{G}}(w_{ji} \geq k) < \mathsf{P}_{Ge}(w_{ji} \geq k)$, and so indeed $\boldsymbol{\alpha}$ then acts as a (position-based) viscosity. See Figure 5 and compare with Figure 3 (left). We can also introduce locations where certain particles must stop by taking $-\alpha_k = \pi_j$ since this would force $\mathsf{P}_{\mathcal{G}}(w_{ji} = k') = 0$ for all $k'$ that would move the $j$-th particle past position $k$.

Figure 5 Samples of blocking TASEP with $\ell = 500$ particles after $n = 50000$ time steps with (left) $\boldsymbol{\pi} = 1$, ${\mathbf{x}} = 0.01$, and $\boldsymbol{\alpha} = -0.5$; (right) $\boldsymbol{\pi} = 0.5$, ${\mathbf{x}} = 0.2$, and $\alpha_k = 0.5 \sin(k/50)^6$.

To see how to obtain this process using the noncommutative operators $\mathbf{U}^{(\boldsymbol{\alpha},\boldsymbol{\beta})}$, we begin by taking the skew Cauchy formula (Theorem 2.4) with $\nu = \emptyset$ and with the specializations ${\mathbf{y}} = \boldsymbol{\pi}_1$ and $\beta_j = \pi_{j+1}$, yielding

(8.3) $$ \begin{align} \sum_{\lambda} G_{\lambda/\!\!/\mu}({\mathbf{x}}_n; \boldsymbol{\alpha}, \boldsymbol{\beta}) g_{\lambda}(\boldsymbol{\pi}_1; \boldsymbol{\alpha}, \boldsymbol{\beta}) = \prod_i (1 - \pi_1 x_i)^{-1} g_{\mu}(\boldsymbol{\pi}_1; \boldsymbol{\alpha}, \boldsymbol{\beta}). \end{align} $$

In particular, if we let $\widehat {\lambda }_i = \lambda _i - 1$ for all $1 \leq i \leq \ell (\lambda )$ , then from the combinatorial description of [Reference Hwang, Jang, Soo Kim, Song and SongHJK+25, Thm. 4.2], we have

$$\begin{align*}g_{\lambda}(\boldsymbol{\pi}_1; \boldsymbol{\alpha}, \boldsymbol{\beta}) = \boldsymbol{\pi}^{1^{\ell(\lambda)}} \prod_{(i,j) \in \widehat{\lambda}} (\alpha_i + \pi_j). \end{align*}$$

Hence, Equation (8.3) can be considered a Littlewood-type identity for canonical Grothendieck polynomials. Dividing this by the factor on the right-hand side and taking the term corresponding to $\lambda$, we obtain a probability distribution for an $n$-step random growth process (since we must have $\mu \subseteq \lambda$ and currently the interpretation we have described is only on partitions) given by

(8.4) $$ \begin{align} \mathsf{P}_{\mathcal{C},n}(\lambda | \mu) = \prod_{i=1}^n (1 - \pi_1 x_i) \boldsymbol{\pi}^{1^{\ell(\lambda)} / 1^{\ell(\mu)}} \prod_{(i,j) \in \widehat{\lambda} / \widehat{\mu}} (\alpha_i + \pi_j) G_{\lambda/\!\!/\mu}({\mathbf{x}}_n; \boldsymbol{\alpha}, \boldsymbol{\beta}). \end{align} $$

Note that Equation (8.3) is equivalent to $\sum _{\lambda } \mathsf {P}_{\mathcal {C},n}(\lambda | \mu ) = 1$ for any fixed $\mu $ and n.

Rephrasing Equation (8.4) and adding an $\alpha _0 = 0$ parameter in order to simplify the product in $g_{\lambda }(\boldsymbol {\pi }_1; \boldsymbol {\alpha }, \boldsymbol {\beta })$ , what we have computed are coefficients

$$\begin{align*}C_{\lambda\mu} = \prod_{i=1}^n (1 - \pi_1 x_i) (\vec{\boldsymbol{\alpha}} + \boldsymbol{\pi})^{\lambda/\mu}, \qquad \text{where } (\vec{\boldsymbol{\alpha}} + \boldsymbol{\pi})^{\lambda/\mu} := \prod_{(i,j) \in \lambda / \mu} (\alpha_{i-1} + \pi_j) \end{align*}$$

that is defined to be $0$ if $\lambda \not \supseteq \mu $ , such that

(8.5) $$ \begin{align} C_{\lambda\mu} \cdot {}^{[\boldsymbol{\alpha},\boldsymbol{\beta}]} \langle \mu \rvert e^{H({\mathbf{x}}_n)} \lvert \lambda \rangle^{[\boldsymbol{\alpha},\boldsymbol{\beta}]} = \mathsf{P}_{\mathcal{C},n}(\lambda|\mu) \quad \Longleftrightarrow \quad {}^{[\boldsymbol{\alpha},\boldsymbol{\beta}]}\langle \mu \rvert e^{H({\mathbf{x}}_n)} = \sum_{\lambda \supseteq \mu} \frac{\mathsf{P}_{\mathcal{C},n}(\lambda|\mu)}{C_{\lambda\mu}} \cdot {}^{[\boldsymbol{\alpha},\boldsymbol{\beta}]} \langle \lambda \rvert, \end{align} $$

where the equivalence of the two formulas is given by the orthonormality (2.6).

We now restrict ourselves to a single timestep at time i in order to encode the growth process as a particle process by using the operators $\mathbf {U}^{(\boldsymbol {\alpha },\boldsymbol {\beta })}$ . This incurs no loss of generality as $\mathsf {P}_{\mathcal {C},n+n'}(\lambda |\mu ) = \sum _{\nu } \mathsf {P}_{\mathcal {C},n}(\lambda |\nu ) \mathsf {P}_{\mathcal {C},n'}(\nu |\mu )$ by the branching rules (Proposition 2.3) and we have a Markov process. Recall the operator $\mathcal {T}_C$ from Section 4.2, and we define $\mathcal {T}_{\mathcal {C}}$ as $\mathcal {T}_C$ except using the operators $\mathbf {U}^{(\boldsymbol {\alpha },\boldsymbol {\beta })}$ . Since Theorem 3.2 holds for $\mathbf {U}^{(\boldsymbol {\alpha },\boldsymbol {\beta })}$ , we have

$$\begin{align*}{}^{[\boldsymbol{\alpha},\boldsymbol{\beta}]}\langle \mu \rvert e^{H(x_i)} = \prod_{j=2}^{\infty} \frac{1}{1 - \pi_j x_i} \cdot {}^{[\boldsymbol{\alpha},\boldsymbol{\beta}]} \langle \mathcal{T}_{\mathcal{C}} \cdot \mu \rvert \end{align*}$$

by the same argument in Section 4.2. Thus, if we consider the expansion

$$ \begin{align*} {}^{[\boldsymbol{\alpha},\boldsymbol{\beta}]} \langle \mathcal{T}_{\mathcal{C}} \cdot \mu \rvert = \sum_{\lambda} B_{\lambda\mu} \cdot {}^{[\boldsymbol{\alpha},\boldsymbol{\beta}]} \langle \lambda \rvert, \end{align*} $$

and matching coefficients in (8.5) (equivalently, pairing with $\lvert \lambda \rangle ^{[\boldsymbol {\alpha },\boldsymbol {\beta }]}$ ), we obtain

$$\begin{align*}\mathsf{P}_{\mathcal{C}}(\lambda|\mu) = \frac{B_{\lambda\mu}}{(\vec{\boldsymbol{\alpha}} + \boldsymbol{\pi})^{\lambda/\mu}} \prod_{j=1}^{\infty} (1 - \pi_j x_i)^{-1}. \end{align*}$$

Like at $\boldsymbol {\alpha } = 0$ , which is the Case C blocking behavior with geometric jumps, any individual (free) particle motion is (up to changing $\pi _j \mapsto \pi _1$ ) equivalent to the first particle’s motion. Thus, let us consider $\lambda $ with $\ell (\lambda ) = 1$ , and a straightforward computation (say, at time i) using either the operators $\mathbf {U}^{(\boldsymbol {\alpha },\boldsymbol {\beta })}$ or the combinatorial description of $G_{\lambda /\!\!/\mu }(x_i; \boldsymbol {\alpha }, \boldsymbol {\beta })$ yields

$$\begin{align*}\mathsf{P}_{\mathcal{C}}\big( m'| m \big) = \frac{1 - \pi_j x_i}{1 + \alpha_{m'+m} x_i} \prod_{k=m}^{m+m'-1} \frac{(\alpha_k + \pi_j) x_i}{1 + \alpha_k x_i}, \end{align*}$$

which is precisely the measure specified in (8.2). By (8.4), for any fixed $m$ this is a probability measure for all $\alpha_k + \pi_j \geq 0$ with the natural assumptions $0 \leq \pi_j x_i < 1$ and $\alpha_k x_i \geq -1$. This can also be extended to include generic parameters $(\alpha_k)_{k \in \mathbb{Z}}$ by shifting the parameters $\alpha_k \mapsto \alpha_{k\pm 1}$. Therefore, we can perform the same analysis as in Section 4.2 to show the following.

Theorem 8.1. Suppose $\ell (\lambda ) \leq \ell $ , $\pi _j x_i \in (0, 1)$ , $\alpha _k x_i> -1$ , and $\alpha _k + \pi _j \geq 0$ for all $i, j, k$ . Set $\beta _j = \pi _{j+1}$ . Let $\mathsf {P}_{\mathcal {C},n}(\lambda |\mu )$ denote the n-step transition probability for the Case C particle system except using the distribution (8.2) for the jump probability of the particles, as given by (8.1). Then the n-step transition probability is given by

$$\begin{align*}\mathsf{P}_{\mathcal{C},n}(\lambda|\mu) = \prod_{i=1}^n (1 - \pi_1 x_i) (\vec{\boldsymbol{\alpha}} + \boldsymbol{\pi})^{\lambda/\mu} G_{\lambda/\!\!/\mu}({\mathbf{x}}_n; \boldsymbol{\alpha}, \boldsymbol{\beta}). \end{align*}$$

Remark 8.2. Since the $\boldsymbol {\alpha }$ parameters used, and hence the probabilities, now depend on the positions of the particles, we can only work with the bosonic model. Indeed, switching to the fermionic model will require us to introduce additional parameters $\alpha _k$ for $k < 0$ , in which case Theorem 8.1 no longer holds, or to account for the shifting of positions by replacing $\alpha _k \mapsto \alpha _{k+j}$ for the j-th particle distribution $\mathsf {P}_{\mathcal {G}}$ .

We could also prove Theorem 8.1 by using the combinatorics of hook-valued tableaux [Reference Hwang, Jang, Soo Kim, Song and SongHJK+24, Reference Hwang, Jang, Soo Kim, Song and SongHJK+25, Reference YeliussizovYel17] as in Section 5.3. The key observation is that we have a factor $x_i (1 - \alpha _k x_i)^{-1}$ for every box in the k-th column that would normally contain an i in the set-valued tableaux (over all k). In more detail, we take the minimal entries of each hook (the corner entry) in the tableau to describe the basic motion of the particles. The leg (the column part except for the corner) corresponds to the choice between $1$ and $-\pi _i x_j$ in the numerator of the normalization constant as before. The arm (the row part except for the corner) comes from waiting at that particular position and contributes an $-\alpha x_i$ , which contributes a factor of $(1 + \alpha x_i)^{-1}$ as in the Case B combinatorial proof. The associated combinatorics when $\boldsymbol {\beta } = \beta = -\alpha = -\boldsymbol {\alpha }$ , where no particles will move, was studied in [Reference YeliussizovYel17, Sec. 13.4].

From [Reference Iwao, Motegi and ScrimshawIMS24, Thm. 4.1], we obtain determinant formulas for $\mathsf {P}_{\mathcal {C},n}(\lambda |\mu )$ , where we can write the entries of the matrix as contour integrals [Reference Iwao, Motegi and ScrimshawIMS24, Thm. 4.19]. We can also redo the computation in Theorem 6.8 at this level of generality to obtain a multipoint distribution for this process.

Theorem 8.3. The multipoint distribution for the Case C inhomogeneous process with $\ell$ particles is given by

$$\begin{align*}\mathsf{P}_{\geq,n}(\nu|\mu) = \prod_{j=2}^{\ell} \prod_{i=1}^n (1-\pi_j x_i)^{-1} \det \big[ h_{\nu_i - \mu_j - i + j}\big({\mathbf{x}} /\!\!/ (A_{(\mu_j, \nu_i]} \sqcup \boldsymbol{\pi}_i / \boldsymbol{\beta}_j) \big) \big]_{i,j=1}^{\ell}. \end{align*}$$

We can give another, simpler proof for the case when $\boldsymbol{\alpha} = \alpha$. This will follow from a straightforward generalization of the unrefined case [Reference YeliussizovYel17, Prop. 3.4], noting that our sign convention means we need to substitute $-\alpha$.

Proposition 8.4. We have

(8.6) $$ \begin{align} G_{\lambda}({\mathbf{x}}; \alpha, \boldsymbol{\beta}) = G_{\lambda}({\mathbf{x}}/(1 + \alpha {\mathbf{x}}); 0, \alpha + \boldsymbol{\beta}), \end{align} $$

where we substitute $x_i \mapsto x_i / (1 + \alpha x_i)$ and $\beta _i \mapsto \alpha + \beta _i$ .

Indeed, under this substitution, we have

(8.7) $$ \begin{align} \pi_j x_i \longmapsto \frac{(\alpha + \pi_j) x_i}{1 + \alpha x_i}. \end{align} $$

Hence, the geometric distribution $\mathsf {P}_{Ge}$ transforms to the distribution $\mathsf {P}_{\mathcal {G}}$ in (8.2) with $\boldsymbol {\alpha } = \alpha $ . Moreover, in our formula for $\mathsf {P}_{C,n}$ from Theorem 1.1, the total ${\mathbf {x}}$ degree and total $\boldsymbol {\pi }$ degree in each term of $\boldsymbol {\pi }^{\lambda /\mu } G_{\lambda /\!\!/ \mu }({\mathbf {x}}; \boldsymbol {\beta })$ are equal, and so we can perform the substitution (8.7). Thus, we obtain Theorem 8.1 in the case $\boldsymbol {\alpha } = \alpha $ .
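The transformation of $\mathsf{P}_{Ge}$ into $\mathsf{P}_{\mathcal{G}}$ under (8.7) is a short algebraic identity; the following sympy sketch (ours, purely illustrative) verifies it symbolically for a fixed jump length.

```python
from sympy import symbols, simplify

x, a, p = symbols('x alpha pi', positive=True)
k = 5                                   # a fixed jump length (any nonnegative integer works)

q = (a + p) * x / (1 + a * x)           # the substituted rate from (8.7)
lhs = (1 - q) * q**k                    # P_Ge(w = k) after the substitution
rhs = (1 - p * x) / (1 + a * x) * ((a + p) * x / (1 + a * x))**k   # (8.2) with constant alpha

print(simplify(lhs - rhs))              # 0
```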

Remark 8.5. Let us discuss the relationship between this model and the doubly geometric inhomogeneous corner growth model defined in [Reference Knizel, Petrov and SaenzKPS19]. In their corresponding TASEP model, there is an additional set of position-dependent parameters $\boldsymbol {\nu }$ that are only involved after the initial movement of the particle (akin to static friction). Yet, if we set $\boldsymbol {\nu } = 0$ , then the model in [Reference Knizel, Petrov and SaenzKPS19] is the fermionic realization of our model (cf. Remark 8.2) at $\boldsymbol {\beta } = 0$ with their parameters $(\mathbf {a}, \boldsymbol {\beta })$ equaling our parameters $(\boldsymbol {\alpha }, {\mathbf {x}})$ . Hence, we end up with another TASEP version that is equivalent to Case B. It would be interesting to see if the model in [Reference Knizel, Petrov and SaenzKPS19] can be recovered from the free fermionic description such as by using a specialization of the skew Cauchy identity.

We also remark that our model with $\boldsymbol {\pi } = 0$ was studied in [Reference AssiotisAss23], but using very different techniques based on Toeplitz matrices and Markov semigroups. Therefore, from the specialization of the canonical Grothendieck polynomials, it is essentially Case B as before, with a more probabilistic link being made by [Reference AssiotisAss23, Thm. 2.43].

We can similarly define a Bernoulli process extending Case B with the Bernoulli probability depending on the positions as

(8.8) $$ \begin{align} \mathsf{P}_{\mathcal{B}}(w_{ji} = 1 \mid G(j, i-1) = m) := \frac{(\rho_j + \beta_m) x_i}{1 + \rho_j x_i}. \end{align} $$

Analogously to Theorem 8.1 (including its proof), we have the following.

Theorem 8.6. Suppose $\lambda _1 \leq \ell $ , $\beta _k x_i \in (0, 1)$ , $\rho _j x_i> -1$ , and $\rho _j + \beta _k \geq 0$ for all $i, j, k$ . Set $\alpha _j = \rho _{j+1}$ . Let $\mathsf {P}_{\mathcal {B},n}(\lambda |\mu )$ denote the n-step transition probability for the Case B particle system except using the distribution (8.8) for the jump probability of the particles. Then the n-step transition probability is given by

$$\begin{align*}\mathsf{P}_{\mathcal{B},n}(\lambda|\mu) = \frac{(\vec{\beta} + \boldsymbol{\rho})^{\lambda/\mu}}{\prod_{i=1}^n (1 + \rho_1 x_i)} G_{\lambda'/\!\!/\mu'}({\mathbf{x}}_n; \boldsymbol{\alpha}, \boldsymbol{\beta}). \end{align*}$$

If we set $\boldsymbol {\alpha } = 0$ in this position-dependent version of Case B, then we end up with a Bernoulli random variable version of [Reference Knizel, Petrov and SaenzKPS19] at $\boldsymbol {\nu } = 0$ .

Next, we consider the analogous particle processes with pushing behavior, but we will only consider the geometric distribution case (the analog of Case A) as the Bernoulli case is entirely parallel. In this case, the transition probabilities are not given by the dual canonical Grothendieck polynomials as one would expect; this essentially comes from the fact that $(\alpha_k + \pi_j)^{-1} \neq \alpha_k^{-1} + \pi_j^{-1}$ (in general). Despite this, the combinatorial description of Case A in Section 5.1 defines a function $\overline{g}_{\lambda/\mu}({\mathbf{x}}; \boldsymbol{\alpha}, \boldsymbol{\pi})$ as a sum over reverse plane partitions such that the $n$-step transition probability satisfies

$$\begin{align*}\mathsf{P}_{\mathcal{A},n}(\lambda | \mu) = \prod_{j=1}^{\ell} \prod_{i=1}^n (1 - \pi_j x_i) (\vec{\boldsymbol{\alpha}} + \boldsymbol{\pi})^{\lambda/\mu} \overline{g}_{\lambda/\mu}({\mathbf{x}}_n; \boldsymbol{\alpha}, \boldsymbol{\pi}). \end{align*}$$

We note that $g_{\lambda/\mu}({\mathbf{x}}; \boldsymbol{\alpha}, \boldsymbol{\pi})$ can be defined as a sum over reverse plane partitions but with the weights now depending also on the $\boldsymbol{\alpha}$ parameters similarly to [Reference YeliussizovYel17] (contrast this with [Reference Hwang, Jang, Soo Kim, Song and SongHJK+24, Reference Hwang, Jang, Soo Kim, Song and SongHJK+25]). For example, the weight $(\alpha_k + \pi_j)$ in $g_{\lambda/\mu}({\mathbf{x}}; \boldsymbol{\alpha}, \boldsymbol{\pi})$ would be replaced by $(\alpha_k + \pi_j)^{-1}$ in $\overline{g}_{\lambda/\mu}({\mathbf{x}}_n; \boldsymbol{\alpha}, \boldsymbol{\pi})$. Therefore, we end up with new functions, but studying these functions is outside the scope of this paper. We also remark that the fact that $g_{\lambda/\mu}({\mathbf{x}}_n;\boldsymbol{\alpha}, \boldsymbol{\pi})$ does not appear in these transition probabilities is likely tied to the failure of $\mathbf{u}^{(\boldsymbol{\alpha},\boldsymbol{\beta})}$ to satisfy the Knuth relations.

The continuous limit version of this has also been studied in [Reference AssiotisAss20, Reference PetrovPet20], but as mentioned in the introduction, it is an open problem to go from our results to theirs.

9 Concluding remarks

Let us consider what would happen if we swapped the update rules. We consider the geometric distribution with smallest-to-largest updating first. In this case, for the blocking behavior, the analysis is more subtle as the number of steps that the $j$-th particle can take depends not only on the position of the $(j-1)$-th particle, but also on how many steps the $(j-1)$-th particle takes. (Contrast this last part with the Bernoulli case, where we never have to consider this because each particle can move at most one step.) The pushing behavior has the same difficulty added to the computations. As a result, we do not expect any nice formulas.

On the other hand, for the Bernoulli distribution with largest-to-smallest updating, the analysis is the same as for the geometric case except the particles can only move one step. If we consider the blocking behavior case, this agrees with the classical simultaneous update for discrete TASEP (which can be encoded by the LPP for Case A; see, e.g., [Reference JohanssonJoh01, Reference Motegi and ScrimshawMS25]). However, this might cause some slight complications for the blocking behavior as we have to now consider when particles can freely move, in contrast to the geometric case where they are always constrained (except for the largest particle). Despite this, a natural guess for the combinatorics would be to use suitably modified increasing tableaux and consider their generating functions.

Another construction to consider based on [Reference IwaoIwa23] is replacing $\lvert \lambda \rangle _{[\boldsymbol {\beta }]}$ and $\lvert \lambda \rangle ^{[\boldsymbol {\beta }]}$ by the vectors

$$ \begin{align*} \lvert \lambda \rangle_{\langle \boldsymbol{\beta} \rangle} & := \prod^{\rightarrow}_{1 \leq i \leq \ell} \left( \psi_{\lambda_i-i} e^{J(\beta_i)} \right) \lvert -\ell \rangle, & \lvert \lambda \rangle^{\langle \boldsymbol{\beta} \rangle} & := \prod^{\rightarrow}_{1 \leq i \leq \ell} \left( \psi_{\lambda_i-i} e^{-J^*(\beta_i)} \right) \lvert -\ell \rangle, \end{align*} $$

respectively. One would then use these vectors (and their $*$ versions) to encode the dynamics of a particle system in the same manner. It would be interesting to see what properties the resulting functions

$$\begin{align*}k_{\lambda/\mu}({\mathbf{x}}_n; \boldsymbol{\beta}) := {}_{\langle \boldsymbol{\beta} \rangle} \langle \mu \rvert e^{H({\mathbf{x}}_n)} \lvert \lambda \rangle_{\langle \boldsymbol{\beta} \rangle}, \qquad\qquad K_{\lambda /\!\!/ \mu}({\mathbf{x}}_n; \boldsymbol{\beta}) := {}^{\langle \boldsymbol{\beta} \rangle} \langle \mu \rvert e^{H({\mathbf{x}}_n)} \lvert \lambda \rangle^{\langle \boldsymbol{\beta} \rangle}, \end{align*}$$

have in comparison with (dual) Grothendieck polynomials. Note that at $\boldsymbol {\beta } = 0$, these reduce to a (skew) Schur function, so they cannot encode the Bernoulli distribution with largest-to-smallest updating since the first particle can move only one step from the step initial condition $\mu = \emptyset $.

Next, we compute an alternative form of our vector $\lvert \lambda \rangle ^{[\boldsymbol {\beta }]}$. From [MS25, Thm. 5.3], we can write a Grothendieck polynomial as a multiSchur function of [Las03]

(9.1) $$ \begin{align} G_{\lambda}({\mathbf{x}}_n; \boldsymbol{\beta}) = (-1)^{\binom{n}{2}} \boldsymbol{\beta}^{\rho_n} s_{\widetilde{\lambda}}({\mathbf{x}}_n, {\mathbf{x}}_n/\boldsymbol{\beta}_1^{-1}, \dotsc, {\mathbf{x}}_n/\boldsymbol{\beta}_{n-1}^{-1}), \end{align} $$

where $\rho _n = (n-1, n-2, \dotsc , 1, 0)$ and $\widetilde {\lambda } = (\lambda _1, \lambda _2 + 1, \dotsc , \lambda _n + n - 1)$. The precise definition of a multiSchur function is not needed, as we immediately use [Iwa23] to write Equation (9.1) in terms of free fermions:

$$ \begin{align*} G_{\lambda}({\mathbf{x}}_n; \boldsymbol{\beta}) & = (-1)^{\binom{n}{2}} \boldsymbol{\beta}^{\rho_n} \langle \emptyset \rvert e^{H({\mathbf{x}}_n)} \psi_{\lambda_1-1} e^{-H(\beta_1^{-1})} \psi_{\lambda_2-1} e^{-H(\beta_2^{-1})} \dotsm \psi_{\lambda_n-1} e^{-H(\beta_{n-1}^{-1})} \lvert -n \rangle \\ & = (-1)^{\binom{n}{2}} \boldsymbol{\beta}^{\rho_n} \langle \emptyset \rvert e^{H({\mathbf{x}}_n)} \lvert \widetilde{\lambda} \rangle_{[\emptyset/\boldsymbol{\beta}^{-1}]}. \end{align*} $$

Therefore, the orthonormality (2.6) implies

(9.2) $$ \begin{align} \lvert \lambda \rangle^{[\boldsymbol{\beta}]} = (-1)^{\binom{n}{2}} \boldsymbol{\beta}^{\rho_n} \lvert \widetilde{\lambda} \rangle_{[\emptyset/\boldsymbol{\beta}^{-1}]} = (-\boldsymbol{\beta})^{\rho_n} \lvert \widetilde{\lambda} \rangle_{(-\boldsymbol{\beta}^{-1})}. \end{align} $$
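For concreteness, here is a direct instantiation of Equations (9.1) and (9.2) (nothing beyond substituting into the displayed formulas): take $n = 2$ and $\lambda = (1)$, so that $\widetilde {\lambda } = (1,1)$, $\rho _2 = (1,0)$, and $(-1)^{\binom {2}{2}} \boldsymbol {\beta }^{\rho _2} = -\beta _1$, giving

$$ \begin{align*} G_{(1)}({\mathbf{x}}_2; \boldsymbol{\beta}) = -\beta_1\, s_{(1,1)}({\mathbf{x}}_2, {\mathbf{x}}_2/\beta_1^{-1}), \qquad\qquad \lvert (1) \rangle^{[\boldsymbol{\beta}]} = -\beta_1 \lvert (1,1) \rangle_{[\emptyset/\boldsymbol{\beta}^{-1}]} = -\beta_1 \lvert (1,1) \rangle_{(-\boldsymbol{\beta}^{-1})}. \end{align*} $$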

By applying Wick’s theorem, we recover [MS25, Thm. 5.12], which refines [Kir16, Thm. 1.10]. Using the expressions in (9.2), it could be possible to derive some additional formulas involving $G_{\lambda /\!\!/ \mu }({\mathbf {x}}_n; \boldsymbol {\beta })$ and $G_{\lambda / \mu }({\mathbf {x}}_n; \boldsymbol {\beta })$; for example, one could obtain a multipoint distribution formula for partitions contained in $\lambda $ (as opposed to containing $\lambda $, as in Theorem 6.8) by using a modification of Proposition 6.1.

Acknowledgments

The authors thank Guillaume Barraquand, Dan Betea, A. B. Dieker, Darij Grinberg, Yuchen Liao, Jang Soo Kim, Leonid Petrov, Mustazee Rahman, Tomohiro Sasamoto, Jon Warren, Damir Yeliussizov, Paul Zinn-Justin, and Nikos Zygouras for valuable conversations. The authors thank Theo Assiotis for letting us know of his recent preprint and explanations of his results. The authors thank the referee for their comments.

This work benefited from computations using SageMath [Sag22, SCc08]. This work was partly supported by Osaka City University Advanced Mathematical Institute (MEXT Joint Usage/Research Center on Mathematics and Theoretical Physics JPMXP0619217849). This work was supported by the Research Institute for Mathematical Sciences, an International Joint Usage/Research Center located in Kyoto University.

Competing interests

The authors have no competing interests to declare.

Financial support

S.I. was partially supported by Grant-in-Aid for Scientific Research (C) 19K03605, 22K03239, 23K03056. K.M. was partially supported by Grant-in-Aid for Scientific Research (C) 21K03176, 20K03793. T.S. was partially supported by Grant-in-Aid for JSPS Fellows 21F51028 and for Scientific Research for Early-Career Scientists 23K12983.

Data availability statement

This manuscript has no associated datasets.

Footnotes

1 We henceforth drop the word “symmetric” for simplicity as we will not consider the “nonsymmetric” Grothendieck polynomials coming from the K-theory of the complete flag variety, which are analogous to Schubert polynomials.

2 These should be called the refined canonical Grothendieck polynomials following [HJK+24, HJK+25], as they are a refinement of those introduced by Yeliussizov [Yel17], but we dropped the word "refined" to simplify our nomenclature.

3 Typically these are called the symmetric Grothendieck functions to distinguish them from the functions $\mathfrak {G}_w$ that arise from the (connective) K-theory of the flag variety, which depend on a permutation w. However, since we do not use $\mathfrak {G}_w$ here, we omit the word "symmetric" from our terminology.

4 We have omitted the normal ordering as we will not consider the current operator $a_0$ in this text. See, for example, [AZ13, Sec. 2] and [MJD00, Sec. 5.2] for more details.

References

Ayyer, A., Goldstein, S., Lebowitz, J. L., and Speer, E. R., ‘Stationary states of the one-dimensional facilitated asymmetric exclusion process’, Ann. Inst. Henri Poincaré Probab. Stat. 59(2) (2023), 726–742.
Ayyer, A., Mandelshtam, O., and Martin, J. B., ‘Modified Macdonald polynomials and the multispecies zero-range process: I’, Algebr. Comb. 6(1) (2023), 243–284.
Ayyer, A., Mandelshtam, O., and Martin, J. B., ‘Modified Macdonald polynomials and the multispecies zero range process: II’, Math. Z. 308(2) (2024), Paper No. 31, 45.
Ayyer, A. and Nadeau, P., ‘Combinatorics of a disordered two-species ASEP on a torus’, European J. Combin. 103 (2022), Paper No. 103511, 20.
Assiotis, T., ‘Determinantal structures in space-inhomogeneous dynamics on interlacing arrays’, Ann. Henri Poincaré 21(3) (2020), 909–940.
Assiotis, T., ‘On some integrable models in inhomogeneous space’, Preprint, 2023, arXiv:2310.18055.
Amanov, A. and Yeliussizov, D., ‘Determinantal formulas for dual Grothendieck polynomials’, Proc. Amer. Math. Soc. 150(10) (2022), 4113–4128.
Alexandrov, A. and Zabrodin, A., ‘Free fermions and tau-functions’, J. Geom. Phys. 67 (2013), 37–80.
Borodin, A. and Bufetov, A., ‘Color-position symmetry in interacting particle systems’, Ann. Probab. 49(4) (2021), 1607–1632.
Borodin, A. and Ferrari, P. L., ‘Anisotropic growth of random surfaces in 2+1 dimensions’, Comm. Math. Phys. 325 (2014), 603–684.
Borodin, A., Ferrari, P. L., Prähofer, M., and Sasamoto, T., ‘Fluctuation properties of the TASEP with periodic initial configuration’, J. Stat. Phys. 129(5–6) (2007), 1055–1080.
Borodin, A., Ferrari, P. L., and Sasamoto, T., ‘Large time asymptotics of growth models on space-like paths. II. PNG and parallel TASEP’, Comm. Math. Phys. 283(2) (2008), 417–449.
Bisi, E., Liao, Y., Saenz, A., and Zygouras, N., ‘Non-intersecting path constructions for TASEP with inhomogeneous rates and the KPZ fixed point’, Comm. Math. Phys. 402(1) (2023), 285–333.
Borel, A., ‘Sur la cohomologie des espaces fibrés principaux et des espaces homogènes de groupes de Lie compacts’, Ann. of Math. (2) 57(1) (1953), 115–207.
Buciumas, V., Scrimshaw, T., and Weber, K., ‘Colored five-vertex models and Lascoux polynomials and atoms’, J. Lond. Math. Soc. 102(3) (2020), 1047–1066.
Buch, A. S., ‘A Littlewood–Richardson rule for the $K$-theory of Grassmannians’, Acta Math. 189(1) (2002), 37–78.
Chou, T. and Lohse, D., ‘Entropy-driven pumping in zeolites and biological channels’, Phys. Rev. Lett. 82(17) (1999), 3552–3555.
Corwin, I., Matveev, K., and Petrov, L., ‘The $q$-Hahn PushTASEP’, Int. Math. Res. Not. IMRN 2021(3) (2021), 2210–2249.
Corteel, S., Mandelshtam, O., and Williams, L., ‘From multiline queues to Macdonald polynomials via the exclusion process’, Amer. J. Math. 144(2) (2022), 395–436.
Chan, M. and Pflueger, N., ‘Combinatorial relations on skew Schur and skew stable Grothendieck polynomials’, Algebraic Combin. 4(1) (2021), 175–188.
Chowdhury, D., Santen, L., and Schadschneider, A., ‘Statistical physics of vehicular traffic and some related systems’, Phys. Rep. 329(4–6) (2000), 199–329.
Cantini, L. and Zahra, A., ‘Hydrodynamic behavior of the two-TASEP’, J. Phys. A 55(30) (2022), Paper No. 305201, 20.
Dieker, A. B. and Warren, J., ‘Determinantal transition kernels for some interacting particles on the line’, Ann. Inst. Henri Poincaré Probab. Stat. 44(6) (2008), 1162–1172.
Fomin, S. and Greene, C., ‘Noncommutative Schur functions and their applications’, Discrete Math. 193(1–3) (1998), 179–200. Selected papers in honor of Adriano Garsia (Taormina, 1994).
Fujii, T., Nobukawa, T., and Shimazaki, T., ‘The number of set-valued tableaux is odd’, Preprint, 2023, arXiv:2305.06740.
Fomin, S., ‘Schur operators and Knuth correspondences’, J. Combin. Theory Ser. A 72(2) (1995), 277–292.
Galashin, P., Grinberg, D., and Liu, G., ‘Refined dual stable Grothendieck polynomials and generalized Bender–Knuth involutions’, Electron. J. Combin. 23(3) (2016), Paper 3.14, 28.
Gorbounov, V. and Korff, C., ‘Quantum integrability and generalised quantum Schubert calculus’, Adv. Math. 313 (2017), 282–356.
Gavrilova, S. and Petrov, L., ‘Tilted biorthogonal ensembles, Grothendieck random partitions, and determinantal tests’, Selecta Math. (N.S.) 30(3) (2024), Paper No. 56, 51.
Hwang, B.-H., Jang, J., Kim, J. S., Song, M., and Song, U.-K., ‘Refined canonical stable Grothendieck polynomials and their duals, Part 1’, Adv. Math. 446 (2024), Paper No. 109670, 42.
Hwang, B.-H., Jang, J., Kim, J. S., Song, M., and Song, U.-K., ‘Refined canonical stable Grothendieck polynomials and their duals, Part 2’, European J. Combin. 127 (2025), Paper No. 104166, 34.
Hawkes, G. and Scrimshaw, T., ‘Crystal structures for canonical Grothendieck functions’, Algebraic Combin. 3(3) (2020), 727–755.
Iwao, S., Motegi, K., and Scrimshaw, T., ‘Free fermionic probability theory and K-theoretic Schubert calculus’, Preprint, 2023, arXiv:2311.01116.
Iwao, S., Motegi, K., and Scrimshaw, T., ‘Free fermions and canonical Grothendieck polynomials’, Algebr. Comb. 7(1) (2024), 245–274.
Iwao, S., ‘Grothendieck polynomials and the boson-fermion correspondence’, Algebraic Combin. 3(5) (2020), 1023–1040.
Iwao, S., ‘Free-fermions and skew stable Grothendieck polynomials’, J. Algebraic Combin. 56(2) (2022), 493–526.
Iwao, S., ‘Free fermions and Schur expansions of multi-Schur functions’, J. Combin. Theory Ser. A 198 (2023), Paper No. 105767, 23.
Johansson, K., ‘Shape fluctuations and random matrices’, Comm. Math. Phys. 209(2) (2000), 437–476.
Johansson, K., ‘Discrete orthogonal polynomial ensembles and the Plancherel measure’, Ann. of Math. (2) 153(1) (2001), 259–296.
Johansson, K., ‘A multi-dimensional Markov chain and the Meixner ensemble’, Ark. Mat. 48(1) (2010), 79–95.
Johansson, K. and Rahman, M., ‘On inhomogeneous polynuclear growth’, Ann. Probab. 50(2) (2022), 559–590.
Kac, V. G., Infinite-Dimensional Lie Algebras, third edition (Cambridge University Press, Cambridge, 1990).
Kim, J. S., ‘Jacobi–Trudi formula for refined dual stable Grothendieck polynomials’, J. Combin. Theory Ser. A 180 (2021), Paper No. 105415, 33.
Kim, J. S., ‘Jacobi–Trudi formulas for flagged refined dual stable Grothendieck polynomials’, Algebr. Comb. 5(1) (2022), 121–148.
Kirillov, A. N., ‘On some quadratic algebras I $\frac{1}{2}$: combinatorics of Dunkl and Gaudin elements, Schubert, Grothendieck, Fuss-Catalan, universal Tutte and reduced polynomials’, SIGMA Symmetry Integr. Geom. Methods Appl. 12 (2016), Paper No. 002, 172.
Knizel, A., Petrov, L., and Saenz, A., ‘Generalizations of TASEP in discrete and continuous inhomogeneous space’, Comm. Math. Phys. 372(3) (2019), 797–864.
Kac, V. G., Raina, A. K., and Rozhkovskaya, N., Bombay Lectures on Highest Weight Representations of Infinite Dimensional Lie Algebras, vol. 29 of Advanced Series in Mathematical Physics, second edition (World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2013).
Kim, D. and Williams, L. K., ‘Schubert polynomials, the inhomogeneous TASEP, and evil-avoiding permutations’, Int. Math. Res. Not. IMRN 2023(10) (2023), 8143–8211.
Lascoux, A., Symmetric Functions and Combinatorial Operators on Polynomials, vol. 99 of CBMS Regional Conference Series in Mathematics (Published for the Conference Board of the Mathematical Sciences, Washington, DC, by the American Mathematical Society, Providence, RI, 2003).
Lenart, C., ‘Combinatorial aspects of the $K$-theory of Grassmannians’, Ann. Comb. 4(1) (2000), 67–82.
Lam, T. and Pylyavskyy, P., ‘Combinatorial Hopf algebras and $K$-homology of Grassmannians’, Int. Math. Res. Not. IMRN 2007(24) (2007), Art. ID rnm125, 48.
Loehr, N. A. and Remmel, J. B., ‘A computational and combinatorial exposé of plethystic calculus’, J. Algebraic Combin. 33(2) (2011), 163–198.
Lascoux, A. and Schützenberger, M.-P., ‘Structure de Hopf de l’anneau de cohomologie et de l’anneau de Grothendieck d’une variété de drapeaux’, C. R. Acad. Sci. Paris Sér. I Math. 295(11) (1982), 629–633.
Lascoux, A. and Schützenberger, M.-P., ‘Symmetry and flag manifolds’, in Invariant Theory (Montecatini, 1982), vol. 996 of Lecture Notes in Mathematics (Springer, Berlin, 1983), 118–144.
Macdonald, I. G., Symmetric Functions and Hall Polynomials, Oxford Classic Texts in the Physical Sciences, second edition (The Clarendon Press, Oxford University Press, New York, 2015). With a contribution by A. V. Zelevinsky and a foreword by Richard Stanley; reprint of the 2008 paperback edition.
MacDonald, C. T., Gibbs, J. H., and Pipkin, A. C., ‘Kinetics of biopolymerization on nucleic acid templates’, Biopolymers 6 (1968), 1–25.
Miwa, T., Jimbo, M., and Date, E., Solitons, vol. 135 of Cambridge Tracts in Mathematics (Cambridge University Press, Cambridge, 2000). Differential equations, symmetries and infinite-dimensional algebras; translated from the 1993 Japanese original by Miles Reid.
Matetski, K. and Remenik, D., ‘Exact solution of TASEP and variants with inhomogeneous speeds and memory lengths’, Preprint, 2023, arXiv:2301.13739.
Matetski, K. and Remenik, D., ‘TASEP and generalizations: method for exact solution’, Probab. Theory Related Fields 185(1–2) (2023), 615–698.
Motegi, K. and Sakai, K., ‘Vertex models, TASEP and Grothendieck polynomials’, J. Phys. A 46(35) (2013), 355201, 26.
Motegi, K. and Sakai, K., ‘$K$-theoretic boson-fermion correspondence and melting crystals’, J. Phys. A 47(44) (2014), 445202.
Motegi, K. and Scrimshaw, T., ‘Refined dual Grothendieck polynomials, integrability, and the Schur measure’, Selecta Math. (N.S.) 31(3) (2025), Paper No. 43, 70.
Okounkov, A., ‘Infinite wedge and random partitions’, Selecta Math. (N.S.) 7(1) (2001), 57–81.
Okounkov, A. and Reshetikhin, N., ‘Correlation function of Schur process with application to local geometry of a random 3-dimensional Young diagram’, J. Amer. Math. Soc. 16(3) (2003), 581–603.
Petrov, L., ‘PushTASEP in inhomogeneous space’, Electron. J. Probab. 25 (2020), Paper No. 114, 25.
Patrias, R. and Pylyavskyy, P., ‘Combinatorics of $K$-theory via a $K$-theoretic Poirier–Reutenauer bialgebra’, Discrete Math. 339(3) (2016), 1095–1115.
Pan, J., Pappe, J., Poh, W., and Schilling, A., ‘Uncrowding algorithm for hook-valued tableaux’, Ann. Comb. 26(1) (2022), 261–301.
Petrov, L. and Saenz, A., ‘Mapping TASEP back in time’, Probab. Theory Relat. Fields 182(1–2) (2022), 481–530.
Quastel, J. and Sarkar, S., ‘Convergence of exclusion processes and the KPZ equation to the KPZ fixed point’, J. Amer. Math. Soc. 36(1) (2023), 251–289.
Rákos, A. and Schütz, G. M., ‘Bethe ansatz and current distribution for the TASEP with particle-dependent hopping rates’, Markov Process. Relat. Fields 12(2) (2006), 323–334.
The Sage Developers, Sage Mathematics Software (Version 9.7), 2022, https://www.sagemath.org.
The Sage-Combinat community, Sage-Combinat: enhancing Sage as a toolbox for computer exploration in algebraic combinatorics, 2008, https://combinat.sagemath.org.
Stanley, R. P., Enumerative Combinatorics. Vol. 2, vol. 62 of Cambridge Studies in Advanced Mathematics (Cambridge University Press, Cambridge, 1999). With a foreword by Gian-Carlo Rota and appendix 1 by Sergey Fomin.
Takigiku, M., ‘Automorphisms on the ring of symmetric functions and stable and dual stable Grothendieck polynomials’, Preprint, 2018, arXiv:1808.02251.
Takigiku, M., ‘On the Pieri rules of stable and dual stable Grothendieck polynomials’, Preprint, 2018, arXiv:1806.06369.
Takigiku, M., ‘A Pieri formula and a factorization formula for sums of $K$-theoretic $k$-Schur functions’, Algebr. Comb. 2(4) (2019), 447–480.
Warren, J. and Windridge, P., ‘Some examples of dynamics for Gelfand-Tsetlin patterns’, Electron. J. Probab. 14(59) (2009), 1745–1769.
Wheeler, M. and Zinn-Justin, P., ‘Littlewood–Richardson coefficients for Grothendieck polynomials from integrability’, J. Reine Angew. Math. 757 (2019), 159–195.
Yeliussizov, D., ‘Duality and deformations of stable Grothendieck polynomials’, J. Algebraic Combin. 45(1) (2017), 295–344.
Yeliussizov, D., ‘Symmetric Grothendieck polynomials, skew Cauchy identities, and dual filtered Young graphs’, J. Combin. Theory Ser. A 161 (2019), 453–485.
Yeliussizov, D., ‘Dual Grothendieck polynomials via last-passage percolation’, C. R. Math. Acad. Sci. Paris 358(4) (2020), 497–503.
Figures and tables

Table 1: The relationship between our sign choices and some other papers in the literature.

Table 2: Summary of the four cases of discrete TASEP that we consider in this paper.

Figure 1: Examples of the third particle making a jump of $6$ steps with the pushing (left) and blocking (right) behaviors.

Figure 2: Samples of the continuous time limit TASEP with $\ell = 500$ particles at time $t = 500$ with rate $\boldsymbol {\pi } = 1$ under the blocking (left) and pushing (right) behavior.

Figure 3: Samples of TASEP with $\ell = 500$ particles with $\boldsymbol {\pi } = 1$ and ${\mathbf {x}} = p = 0.01$ with $n = \lfloor 500 / p \rfloor $ under the blocking behavior (left) and pushing behavior (right).

Figure 4: A sampling using $10000$ samples of the modified geometric distribution $\mathsf {P}_{\mathcal {G}}$ for $x_i = 1$, $\pi _j = 0.5$, and $\alpha _k = 1 - k e^{-k/2}$ (blue) under the exact distribution (red), which is under the geometric distribution (green).

Figure 5: Samples of blocking TASEP with $\ell = 500$ particles after $n = 50000$ time steps with (left) $\boldsymbol {\pi } = 1$, ${\mathbf {x}} = 0.01$, and $\boldsymbol {\alpha } = -0.5$; (right) $\boldsymbol {\pi } = 0.5$, ${\mathbf {x}} = 0.2$, and $\alpha _k = 0.5 \sin (k/50)^6$.