
The algebraic structure of Dyson–Schwinger equations with multiple insertion places

Published online by Cambridge University Press:  29 August 2025

Nicholas Olson-Harris
Affiliation:
Department of Combinatorics and Optimization, University of Waterloo, Waterloo, ON N2L 3G1, Canada e-mail: nsolsonharris@uwaterloo.ca
Karen Yeats*
Affiliation:
Department of Combinatorics and Optimization, University of Waterloo, Waterloo, ON N2L 3G1, Canada

Abstract

We give combinatorially controlled series solutions to Dyson–Schwinger equations with multiple insertion places using tubings of rooted trees and investigate the algebraic relation between such solutions and the renormalization group equation.

Information

Type
Research Article
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Canadian Mathematical Society

1 Introduction

We explain the algebraic structure of Dyson–Schwinger equations (DSEs) with multiple insertion places, giving nice, combinatorially controlled expansions for the solutions of multiple insertion place DSEs in terms of tubings of rooted trees with both vertex and edge decorations. Along the way, we give an algebraic formulation of the renormalization group equation and resolve a conjecture from [Reference Nabergall26]. The results are purely algebraic and combinatorial, some of independent interest for researchers in combinatorial Hopf algebras and related areas. Many of these results first appeared in the Ph.D. thesis of one of us [Reference Olson-Harris27].

The reader who is not concerned with the physics motivation for these results can skip the rest of this section as well as the first half of Section 3.1.

To calculate physical amplitudes in perturbative quantum field theory it suffices to understand the renormalized one-particle irreducible (1PI) Green functions in the theory. As series, one can obtain the 1PI Green functions by applying renormalized Feynman rules to sums of Feynman graphs. Choosing an external scale parameter L we can think of the Green functions as multivariate series $G(x, L, \theta ),$ where x is the perturbative expansion parameter, for us the coupling constant, and the $\theta $ are dimensionless parameters capturing the remaining kinematic dependence. DSEs are coupled integral equations relating the Green functions, the quantum analogs of the equations of motion.

One of us has a long-standing program to find simple combinatorial understandings of the series solutions to DSEs. These first took the form of chord diagram expansions [Reference Courtiel and Yeats14, Reference Courtiel, Yeats and Zeilberger16, Reference Hihn and Yeats22, Reference Marie and Yeats25] and have been recently improved with the new language of tubing expansions by both of us with other authors [Reference Balduf, Cantwell, Ebrahimi-Fard, Nabergall, Olson-Harris and Yeats4].

These combinatorial solutions to DSEs stand in contrast to the Feynman diagram expansions of the Green functions because rather than each Feynman diagram contributing a difficult and algebraically opaque object (its renormalized Feynman integral) each chord diagram or tubing contributes some monomials in the coefficients of the Mellin transforms of the primitive diagrams which drive the DSEs and these monomials can be readily read off combinatorially from the chord diagram or tubing. The tubing expansion has additional benefits: Its origin is algebraic and insightful, coming from the renormalization Hopf algebra and explaining how the original Feynman diagrams map to tubings. It readily generalizes beyond the cases previously solved by chord diagrams [Reference Balduf, Cantwell, Ebrahimi-Fard, Nabergall, Olson-Harris and Yeats4]. It provides insightful structure for exploring resurgence and non-perturbative properties [Reference Borinsky, Dunne and Yeats8].

All the work so far in this direction has been in the single scale case, which is best exemplified by Green functions for propagator corrections. In this case, we take $L = \log q^2/\mu ^2,$ where q is the momentum flowing through and $\mu $ is the reference scale for renormalization. In this case, there are no further parameters $\theta $ .

Prior to the present work, all the work in this direction had the further restriction that only one internal scale of the diagram was considered, that is, we were only working with a single insertion place. This restriction could be circumvented in the past, so long as all insertions were treated symmetrically, by working with a symmetric insertion place (see [Reference Yeats35, Section 2.3.3]), but this is not a satisfying resolution either physically or mathematically since it does not let us understand the interplay of the different insertion places or give us any control over them. Herein, we resolve this by solving all single scale DSEs, allowing multiple distinguished insertion places, using generalized tubing expansions.

Throughout, $\mathbb {K}$ is the underlying field, which we take to be of characteristic 0 since this is the case of physical interest.

Specifically, we will be looking to find series solutions $G(x,L)$ to DSEs. For $\mu \in \mathbb {K}$ , in the simplest case our DSEs look like

(1.1) $$ \begin{align} G(x, L) = 1 + \left.xG\left(x, \frac{\partial}{\partial \rho}\right)^\mu (e^{L\rho}-1)F(\rho)\right|_{\rho=0}, \end{align} $$

which determines $G(x,L)$ recursively in terms of the coefficients of $F(\rho ) = \sum _{i\geq 0}a_i\rho ^{i-1}$ . Equivalently, and in the form we’ll prefer here

$$\begin{align*}G(x, L) = 1 + x \Lambda(G(x, L)^{\mu}), \end{align*}$$

where $\Lambda $ is a 1-cocycle given by the $a_i$ , as will be explained below. This simple case was solved by a chord diagram expansion [Reference Marie and Yeats25] for negative integer $\mu $ ; from there, more general forms of DSE were solved [Reference Hihn and Yeats22], and further generalizations, along with the more conceptual formulation in terms of tubings, were given in [Reference Balduf, Cantwell, Ebrahimi-Fard, Nabergall, Olson-Harris and Yeats4]. Distinguished insertion places, however, remained open until the present work. The most general system of DSEs that we solve is given in (3.3) and our solution to it in Theorem 3.17.

The work to give this solution will be principally Hopf algebraic, working on the Connes–Kreimer Hopf algebra of rooted trees and decorated generalizations thereof. In particular, the tubing expansion of (3.3) will be constructed in two steps, first understanding and solving similar but simpler equations purely at the level of trees (these are often called combinatorial DSEs) and explicitly understanding the form of the map from the Connes–Kreimer Hopf algebra to the polynomial Hopf algebra which comes from the universal property of the Connes–Kreimer Hopf algebra.

The article proceeds as follows. We will begin by giving background, defining the Connes–Kreimer Hopf algebra and the combinatorial analogs of our DSEs in Section 2.1, then defining the Faà di Bruno Hopf algebra and looking in more detail at 1-cocycles in Section 2.2. In Section 2.3, we look at the close relationship between the renormalization group equation and the Riordan group, a relationship which has been underrecognized up to this point. We finally define the DSEs in Section 2.4 and give a clean, conceptual way of understanding how the invariant charge relates to the renormalization group equation via our work on the Riordan group. The background is then rounded out by Section 2.5, where we consider the anomalous dimension, and Section 2.6, where we define the tubings that we use for our solutions to DSEs and state the previous results in that direction from [Reference Balduf, Cantwell, Ebrahimi-Fard, Nabergall, Olson-Harris and Yeats4].

We then move to the new work on multiple insertion places, giving the physical context and mathematical setup in Section 3.1, extending our discussion of 1-cocycles to tensor powers of a bialgebra in Section 3.2, and generalizing the results regarding the invariant charge and the renormalization group equation in Section 3.3, including proving a conjecture of Nabergall [Reference Nabergall26]. In Section 3.4, we consider situations where DSEs with multiple insertion places can be transformed into single insertion place DSEs, first disproving a different conjecture of Nabergall [Reference Nabergall26] and then focusing on the special case in which they are, in an appropriate sense, linear. Our discussion of multiple insertion places culminates in Section 3.5, where we show how to give combinatorial series solutions to multiple insertion place DSEs via tubings, hence giving combinatorial solutions to all single scale DSEs. We conclude in Section 4.

2 Background

2.1 The Connes–Kreimer Hopf algebra of rooted trees

The central algebraic structure for us is the Connes–Kreimer Hopf algebra and its variants.

It will be convenient to think of rooted trees first of all as posets, so define a rooted forest to be a finite poset such that each element is covered by at most one element, and define a rooted tree to be a connected rooted forest. Since we only work with rooted trees and forests, we will simply say tree and forest whenever convenient.

A rooted tree can be decomposed as a unique maximal element, the root, together with a forest. For a rooted tree t we denote its root by $\operatorname {rt} t$ . On the other hand, a forest uniquely decomposes as a disjoint union of trees. A disjoint union of forests is a forest and any upset or downset in a forest is a forest.

When convenient, we will also use graph theoretic or tree-specific terminology for forests and trees. In particular, we often refer to elements of trees as vertices and covering relations as edges. The unique vertex covering a non-root vertex is its parent, vertices covered by a vertex are its children. We will think of a tree as oriented downwards (opposite the order) so that the number of children of a vertex is the outdegree and is denoted $\operatorname {od}(v)$ .

We define the (undecorated) Connes–Kreimer Hopf algebra, which we denote $\mathcal H$ , as follows. As an algebra $\mathcal H$ is the vector space freely generated by isomorphism classes of forests with multiplication given by disjoint union. (Equivalently, $\mathcal H$ is the commutative algebra freely generated by isomorphism classes of (nonempty) trees). We make $\mathcal H$ into a bialgebra by the coproduct

$$\begin{align*}\Delta(F) = \sum_{D \text{ downset of } F}D \otimes (F\setminus D) \end{align*}$$

for a forest F and extended linearly. (This is equivalent to other ways of presenting this coproduct using admissible cuts [Reference Connes and Kreimer13] or antichains of vertices [Reference Yeats36] which have appeared in the literature.) We grade $\mathcal H$ by the number of vertices making $\mathcal H$ into a graded connected bialgebra and hence a Hopf algebra.
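For example, write $\ell_1$ for the one-vertex tree and $\ell_2$ for the two-vertex tree (ladder notation that will reappear below), and let t be the three-vertex tree whose root covers two leaves. The downsets of t are the empty set, either single leaf, the pair of leaves, and all of t, so

$$\begin{align*}\Delta(t) = 1 \otimes t + 2\,\ell_1 \otimes \ell_2 + \ell_1^2 \otimes \ell_1 + t \otimes 1.\end{align*}$$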

Note that following the same construction with all posets in place of forests, we obtain the standard downset/upset Hopf algebra of posets, $\mathcal {P}$ .

We can also characterize $\mathcal H$ in a more algebraic way. Consider the linear operator $B_{+}$ on $\mathcal {P}$ which sends each poset P to the poset obtained by adjoining a new element larger than all elements of P. Then, $\mathcal H$ is the unique minimal subalgebra of $\mathcal {P}$ which is mapped to itself by $B_{+}$ . From this perspective, it is not immediately obvious that $\mathcal H$ should be a Hopf subalgebra. We can understand this by considering the relationship between $B_{+}$ and the coproduct. Note that the only downset of $B_{+}P$ which contains the new element is the entirety of $B_{+}P$ . The other downsets coincide with the downsets of P, and if D is such a downset we have $(B_{+}P) \setminus D \cong B_{+}(P \setminus D)$ . It follows that

$$\begin{align*}\Delta B_{+}P = B_{+}P \otimes 1 + \sum_{D \text{ downset of } P}D \otimes B_{+}(P\setminus D) \end{align*}$$

or in other words

(2.1) $$ \begin{align} \Delta B_{+} = B_{+} \otimes 1 + (\mathrm{id} \otimes B_{+})\Delta. \end{align} $$

An operator satisfying (2.1) is a 1-cocycle in the cohomology theory that we will discuss in more detail in Section 2.2.

It will be useful to work in a more general setting. Given a set I, we define the decorated Connes–Kreimer Hopf algebra $\mathcal H_I$ as follows. By an I-tree (resp. I-forest), we mean a tree (resp. forest) with each vertex decorated by an element of I. We will write $\mathcal T(I)$ for the set of I-trees and $\mathcal F(I)$ for the set of I-forests. Then, $\mathcal H_I$ is the free vector space on $\mathcal F(I)$ , made into a bialgebra with disjoint union as multiplication and the same coproduct as in $\mathcal H$ but preserving the decorations on all vertices. As usual, we can grade by the number of vertices and find that $\mathcal H_I$ is a connected graded bialgebra and hence a Hopf algebra.

Remark 2.1 We could instead choose some weight function $w\colon I \to \mathbb {Z}_{\geq 0}$ and grade $\mathcal H_I$ by total weight. If w takes only positive values then this grading will also make $\mathcal H_I$ connected. In the application to DSEs, we will have such a weight function already given by the loop order of the primitive Feynman diagram corresponding to the 1-cocycle and so it is natural to grade the algebra this way, but it won’t really matter for anything we do.

For each $i \in I$ , we have an operator $B_{+}^{(i)}$ on $\mathcal H_I$ that sends an I-forest to the I-tree obtained by adding a root with decoration i. For the same combinatorial reasons as the usual $B_{+}$ operator on $\mathcal H$ , all of these are 1-cocycles.

The key significance of the Connes–Kreimer Hopf algebra is that it possesses a universal property with respect to 1-cocycles ([Reference Connes and Kreimer13, Theorem 2]). The decorated Connes–Kreimer Hopf algebra likewise has a universal property as follows.

Theorem 2.2 [Reference Foissy20, Propositions 2 and 3]

Let A be a commutative algebra and $\{\Lambda _i\}_{i \in I}$ be a family of linear operators on A. There exists a unique algebra morphism $\phi \colon \mathcal H_I \to A $ such that $\phi B_{+}^{(i)} = \Lambda _i \phi $ . Moreover, if A is a bialgebra and $\Lambda _i$ is a 1-cocycle for each i then $\phi $ is a bialgebra morphism.

Note that there is nothing mysterious about the map $\phi $ guaranteed by Theorem 2.2. Since any decorated tree t can be written uniquely (up to reordering) in the form $t = B_{+}^{(i)}(t_1 \cdots t_k)$ for some $t_1, \ldots , t_k$ and with i the decoration of the root we can and must define $\phi $ recursively by

(2.2) $$ \begin{align} \phi(t) = \Lambda_i(\phi(t_1) \cdots \phi(t_k)). \end{align} $$
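For instance, if $t = B_{+}^{(i)}\big(B_{+}^{(j)}(1)\,B_{+}^{(k)}(1)\big)$ is the tree whose root is decorated by i and has two leaves decorated by j and k, then (2.2) gives

$$\begin{align*}\phi(t) = \Lambda_i\big(\Lambda_j(1)\,\Lambda_k(1)\big),\end{align*}$$

so $\phi $ of any tree is the nesting of the operators $\Lambda_i$ dictated by the shape of the tree.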

A natural question is whether we can find an explicit, non-recursive formula for $\phi $ . Without knowing anything about A and the $\Lambda _i$ there is clearly nothing we can do, but the central insight of [Reference Balduf, Cantwell, Ebrahimi-Fard, Nabergall, Olson-Harris and Yeats4] and the present article is that in the cases that come up in DSEs we can solve this problem in a nice, combinatorial way.

Already in this context, we can build classes of trees using equations involving $B_{+}$ . These equations have the same recursive structure as the DSEs we will ultimately be working on but live strictly in the world of decorated Connes–Kreimer. They are often called combinatorial DSEs [Reference Yeats35].

Let P be a set (finite or infinite) and assign each $p \in P$ a weight $w_p \in \mathbb {Z}_{\geq 1}$ , such that there are only finitely many elements of each weight, and an insertion exponent $\mu _p \in \mathbb {K}$ . Then, we have the single equation combinatorial DSE

(2.3) $$ \begin{align} T(x) = 1 + \sum_{p \in P} x^{w_p} B_{+}^{(p)}(T(x)^{\mu_p}). \end{align} $$

The solution to this equation is essentially due to Bergbauer and Kreimer [Reference Bergbauer, Kreimer and Nyssen6] although our statement is somewhat more general. To state it, we need some more notation. For a vertex v with decoration p, we will write $w(v) = w_p$ and $\mu (v) = \mu _p$ . We will write

$$\begin{align*}w(t) = \sum_{v \in t} w(v). \end{align*}$$

Finally, by an automorphism of a decorated tree we mean an automorphism of the underlying tree (as a poset) which preserves the decorations, and as one would expect we denote the automorphism group of t by $\mathrm {Aut}(t)$ .

Proposition 2.3 [Reference Bergbauer, Kreimer and Nyssen6, Lemma 4]

The unique solution to (2.3) is

(2.4) $$ \begin{align} T(x) = 1 + \sum_{t \in \mathcal T(P)} \left(\prod_{v \in t} \mu(v)^{\underline{\operatorname{od}(v)}} \right) \frac{tx^{w(t)}}{|\mathrm{Aut}(t)|}. \end{align} $$

Here, we use the underline notation for falling factorial powers

$$\begin{align*}a^{\underline{k}} = \prod_{j=0}^{k-1}(a-j). \end{align*}$$

Proof Let $T(x)$ be given by (2.4); we will show that $T(x)$ satisfies (2.3).

We introduce some notation. For a forest $f \in \mathcal F(P),$ write $\kappa (f)$ for the number of connected components. Write $\mathcal F_k(P)$ for the set of forests $f \in \mathcal F(P)$ with $\kappa (f) = k$ . Write $\tilde T(x) = T(x) - 1$ , so $\tilde T(x)$ is a kind of exponential generating function for P-trees. By the compositional formula (see, e.g., [Reference Stanley31, Theorem 5.5.4]), divided powers count forests:

$$\begin{align*}\frac{\tilde T(x)^k}{k!} = \sum_{f \in \mathcal F_k(P)} \left(\prod_{v \in f} \mu(v)^{\underline{\operatorname{od}(v)}} \right) \frac{fx^{w(f)}}{|\mathrm{Aut}(f)|}. \end{align*}$$

Then, by the binomial series expansion, for any $u \in \mathbb {K}$ we have

$$ \begin{align*} T(x)^u &= (1 + \tilde T(x))^u \\ &= \sum_{k \ge 0} \binom{u}{k} \tilde T(x)^k \\ &= \sum_{f \in \mathcal F(P)} u^{\underline{\kappa(f)}} \left(\prod_{v \in f} \mu(v)^{\underline{\operatorname{od}(v)}} \right) \frac{fx^{w(f)}}{|\mathrm{Aut}(f)|}. \end{align*} $$

Now, any tree $t \in \mathcal T(P)$ can be uniquely written as $t = B_{+}^{(p)}f$ for some $p \in P$ and $f \in \mathcal F(P)$ . In this case, we have $w(t) = w(f) + w_p$ . We have $\mathrm {Aut}(t) \cong \mathrm {Aut}(f)$ and the outdegrees of all non-root vertices are the same in t as in f. The outdegree of the root is $\kappa (f)$ . Using this bijection, we get

$$ \begin{align*} T(x) &= 1 + \sum_{p \in P} \sum_{f \in \mathcal F(P)} \mu_p^{\underline{\kappa(f)}} \left(\prod_{v \in f} \mu(v)^{\underline{\operatorname{od}(v)}} \right) \frac{(B_{+}^{(p)}f)x^{w(f) + w_p}}{|\mathrm{Aut}(f)|} \\ &= 1 + \sum_{p \in P} x^{w_p} B_{+}^{(p)}(T(x)^{\mu_p}) \end{align*} $$

as desired.

Example 2.4 Suppose we have just a single cocycle (so we are essentially in the undecorated Connes–Kreimer Hopf algebra $\mathcal H$ ) and a nonnegative integer insertion exponent k, so (2.3) becomes

$$\begin{align*}T(x) = 1 + xB_{+}(T(x)^k). \end{align*}$$

If we ignore the $B_{+}$ , this would simply give the ordinary generating function for k-ary trees (in the computer scientists’ sense, where the children of each vertex are totally ordered including the “missing” ones). With the $B_{+}$ included, it is, therefore, still a generating function for k-ary trees but one in which the contribution of each tree is the underlying tree itself as an element of $\mathcal H$ . It is not too hard to show that $\left (\prod _{v \in t} k^{\underline {\operatorname {od}(v)}}\right )/|\mathrm {Aut}(t)|$ counts the number of ways to make a tree t into a k-ary tree, so this agrees with (2.4).
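As a concrete low-order check, take $k = 2$ and iterate the equation:

$$\begin{align*}T(x) = 1 + xB_{+}(1) + 2x^2B_{+}\big(B_{+}(1)\big) + x^3\Big(4B_{+}\big(B_{+}(B_{+}(1))\big) + B_{+}\big(B_{+}(1)^2\big)\Big) + \cdots.\end{align*}$$

The coefficients 1, 2, 4, 1 are exactly the values of $\big(\prod_{v \in t} 2^{\underline{\operatorname{od}(v)}}\big)/|\mathrm{Aut}(t)|$ for the corresponding trees, that is, the numbers of binary trees with each underlying shape.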

Of particular note is the case $k = 1$ , the linear DSE. This produces unary trees, which in the context of quantum field theory are usually referred to as ladders; we write $\ell_n$ for the ladder with n vertices. (Thinking of trees as posets these are chains; thinking of them as graphs they are paths with the root as one of the endpoints.)

Example 2.5 Again consider a single cocycle, but now with insertion exponent $-1$ . The equation

$$\begin{align*}T(x) = 1 + xB_{+}(T(x)^{-1}) \end{align*}$$

can be rewritten in terms of $\tilde T(x) = T(x) - 1$ as

$$\begin{align*}\tilde T(x) = xB_{+}\left(\frac{1}{1 + \tilde T(x)}\right). \end{align*}$$

If the plus sign were replaced by a minus, this would give a generating function for plane trees. With the plus, it gives plane trees with a sign corresponding to the number of edges. On the other hand, noting that $(-1)^{\underline d} = (-1)^d d!$ , we have that the contribution of t to the right side of (2.4) is $(-1)^{|t|-1} \left (\prod _{v \in t} \operatorname {od}(v)!\right )/|\mathrm {Aut}(t)|$ which up to the sign is the number of ways to make t into a plane tree, so this also matches the formula.

A similar story holds for systems of combinatorial DSEs. The setup here is the same, but we partition our indexing set P into $\{P_i\}_{i \in I}$ for some finite set I which will index the equations in the system. Each $p \in P$ is still assigned a simple weight $w_p \in \mathbb {Z}_{\geq 1}$ but the insertion exponent is now an insertion exponent vector $\mu _p \in \mathbb {K}^I$ . The system of combinatorial DSEs is

(2.5) $$ \begin{align} T_i(x) = 1 + \sum_{p \in P_i} x^{w_p} B_{+}^{(p)}(\mathbf T(x)^{\mu_p}). \end{align} $$

As with the single-equation case, we first need to solve this combinatorial system. This solution first appears in [Reference Balduf, Cantwell, Ebrahimi-Fard, Nabergall, Olson-Harris and Yeats4], though it is a straightforward generalization of Proposition 2.3. Again, [Reference Balduf, Cantwell, Ebrahimi-Fard, Nabergall, Olson-Harris and Yeats4] only considers a special case where the insertion exponents are constrained, but the same proof works in the more general setup, and in fact allowing the insertion exponents to be arbitrary makes the formula much cleaner. We need some additional notation: let us write $\mathcal T_i(P)$ for the subset of $\mathcal T(P)$ for which the root has a decoration in $P_i$ ; clearly these are the trees that can contribute to $T_i(x)$ . For a vertex v let $\operatorname {od}_i(v)$ be the number of children which have their decoration in $P_i$ ; we collect all of these together to form the outdegree vector $\operatorname {\mathbf {od}}(v) \in \mathbb {Z}_{\geq 0}^I$ .

Theorem 2.6 [Reference Balduf, Cantwell, Ebrahimi-Fard, Nabergall, Olson-Harris and Yeats4, Theorem 4.7]

The unique solution to the system (2.5) is

$$\begin{align*}T_i(x) = 1 + \sum_{t \in \mathcal T_i(P)} \left(\prod_{v \in t} \mu(v)^{\underline{\operatorname{\mathbf{od}}(v)}} \right) \frac{tx^{w(t)}}{|\mathrm{Aut}(t)|}. \end{align*}$$

Proof Analogous to Proposition 2.3.

2.2 Hopf algebras of polynomials and power series

The next step to bring us closer to the DSEs of physics is to understand what will be the target space for our Feynman rules, where our Green functions will live, and relatedly, to what A we will apply the universal property Theorem 2.2.

We make the polynomial algebra $\mathbb {K}[L]$ into a (graded) bialgebra by taking the coproduct and counit as the unique algebra morphisms extending $\Delta L = 1 \otimes L + L \otimes 1$ and $\epsilon (L) = 0$ . This too is a Hopf algebra, with antipode $S(f (L)) = f (-L)$ . It is often profitable to think of this coproduct in a different way. We can identify $\mathbb {K}[L] \otimes \mathbb {K}[L]$ with $\mathbb {K}[L_1, L_2]$ (where $L^j \otimes L^k$ corresponds to $L_1^jL_2^k$ ). The coproduct then simply corresponds to the map $f (L) \mapsto f (L_1 + L_2)$ .
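For example,

$$\begin{align*}\Delta(L^2) = (1 \otimes L + L \otimes 1)^2 = 1 \otimes L^2 + 2\,L \otimes L + L^2 \otimes 1,\end{align*}$$

which under this identification is just the expansion of $(L_1 + L_2)^2$.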

We will need some language and standard results on infinitesimal characters. An infinitesimal character of a bialgebra H is a map $\sigma : H \to \mathbb {K}$ satisfying

$$\begin{align*}\sigma(ab) = \sigma(a)\epsilon(b) + \epsilon(a)\sigma(b), \end{align*}$$

or in other words a derivation of H into the trivial H-module $\mathbb {K}$ . We summarize some useful results for us on infinitesimal characters in the following theorem.

Theorem 2.7 Let H be a graded connected Hopf algebra and $\phi : H \rightarrow \mathbb {K}[L]$ be an algebra morphism. Then, $\operatorname {lin} \phi $ is an infinitesimal character if and only if $\phi |_{L=0} = \epsilon $ , where $\operatorname {lin}$ is the map extracting the linear coefficient. Moreover, the following are equivalent:

  (i) $\phi $ is a bialgebra morphism,

  (ii) $\phi = \exp _{*}(L \operatorname {lin} \phi )$ ,

  (iii) $\phi |_{L=0} = \epsilon $ and $\frac {d}{dL} \phi = (\operatorname {lin} \phi ) * \phi $ ,

  (iv) $\phi |_{L=0} = \epsilon $ and $\frac {d}{dL} \phi = \phi * (\operatorname {lin} \phi )$ .

The proof of this theorem consists of standard calculations and can also be found in [Reference Olson-Harris27, Section 2.2.4].

Another important Hopf algebra is the Faà di Bruno Hopf algebra.

Let $\widetilde {\mathfrak {D}} \subset \mathbb {K}[[x]]$ be the set of all series with zero constant term and nonzero linear term. These are also known as formal diffeomorphisms and form a group with respect to composition of series. Let ${\mathfrak {D}}$ be the subgroup consisting of series with linear term x; these are sometimes known as $\delta $ -series. It turns out that ${\mathfrak {D}}$ is essentially isomorphic to the character group of a graded Hopf algebra, the Faà di Bruno Hopf algebra $\mathsf {FdB}$ .

As an algebra, $\mathsf {FdB}$ should be thought of as the algebra of polynomial functions on ${\mathfrak {D}}$ . Explicitly, it is the polynomial algebra $\mathbb {K}[\pi _1, \pi _2, \dots ]$ in a $\mathbb {Z}_{\geq 1}$ -indexed set of variables. We organize these variables into a power series

$$\begin{align*}\Pi(x) = x + \sum_{n \ge 1} \pi_nx^{n+1}. \end{align*}$$

Then, the map from the character group $\operatorname {Ch}(\mathsf {FdB}) \to {\mathfrak {D}}$ given by $\zeta \mapsto \zeta (\Pi (x))$ is clearly a bijection. (Note that here and throughout notation like $\zeta (\Pi (x))$ implicitly means applying $\zeta $ coefficientwise.) We define a coproduct

(2.6) $$ \begin{align} \Delta \pi_n = \sum_{k=0}^n [x^{n+1}] \Pi(x)^{k+1} \otimes \pi_k. \end{align} $$

(where $\pi _0 = 1$ ). Observe that this makes $\mathsf {FdB}$ into a connected graded bialgebra if we define $\pi _n$ to have degree n; this is the reason for the off-by-one in the definition. The following proposition is essentially immediate from (2.6).

Proposition 2.8 Let A be a commutative algebra and $\phi , \psi \colon \mathsf {FdB} \to A$ be algebra morphisms. Let $\Phi (x) = \phi (\Pi (x))$ and $\Psi (x) = \psi (\Pi (x))$ . Then, $(\phi * \psi )(\Pi (x)) = \Psi (\Phi (x))$ .

In particular, this actually implies that the map $\operatorname {Ch}(\mathsf {FdB}) \to {\mathfrak {D}}$ described above is an anti-isomorphism of groups. Clearly, we could have defined the coproduct with the tensor factors flipped in order to make it an isomorphism, but the way we have defined it is both traditional and will turn out to be convenient for our purposes.
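In low degrees, (2.6) gives

$$\begin{align*}\Delta\pi_1 = \pi_1 \otimes 1 + 1 \otimes \pi_1, \qquad \Delta\pi_2 = \pi_2 \otimes 1 + 2\pi_1 \otimes \pi_1 + 1 \otimes \pi_2,\end{align*}$$

which are the familiar leading instances of the Faà di Bruno coproduct.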

Comodules over a coalgebra become modules over the dual. Let C be a coalgebra and M be a left C-comodule with coaction $\delta $ . For $\alpha $ an element in the dual ( $\alpha \in C^*$ ), define $m \mathbin {\leftharpoonup } \alpha = (\alpha \otimes \mathrm {id}_M )\delta (m)$ . We can compute

$$ \begin{align*} (m \mathbin{\leftharpoonup} \alpha) \mathbin{\leftharpoonup} \beta & = (\beta \otimes \mathrm{id}_M )\delta((\alpha \otimes id_M )\delta(m)) \\ & = (\beta \otimes \mathrm{id}_M )(\alpha \otimes \delta)\delta(m) \\ & = (\alpha \otimes \beta \otimes \mathrm{id}_M )(\mathrm{id}_C \otimes \delta)\delta(m) \\ & = (\alpha \otimes \beta \otimes \mathrm{id}_M )(\Delta_C \otimes \mathrm{id}_M )\delta(m) \\ & = ((\alpha * \beta) \otimes \mathrm{id}_M )\delta(m) \\ & = m \mathbin{\leftharpoonup} (\alpha * \beta) \end{align*} $$

so $\mathbin {\leftharpoonup }$ makes M into a right module over $C^*$ . Analogously, if M is a right C-comodule then we define $\alpha \mathbin {\rightharpoonup } m = (\mathrm {id}_M \otimes \alpha )\delta (m)$ and this makes M into a left $C^*$ -module. In particular, C is both a left and right comodule over itself and hence both a left and right module over $C^*$ .

Applying this to $\mathsf {FdB}^*$ and observing that it is sufficient to understand how an element of $\mathsf {FdB}^*$ acts on the generators, or equivalently on the series $\Pi (x)$ itself, we obtain the following result directly from (2.6).

Proposition 2.9 Suppose $\phi \in \mathsf {FdB}^*$ and let $\Phi (x) = \phi (\Pi (x))$ . Then:

  (i) $\phi \mathbin {\rightharpoonup } \Pi (x) = \Phi (\Pi (x))$ .

  (ii) If $\phi \in \operatorname {Ch}(\mathsf {FdB})$ then $\Pi (x) \mathbin {\leftharpoonup } \phi = \Pi (\Phi (x))$ .

  (iii) If $\phi \in \mathop {\mathfrak {ch}}(\mathsf {FdB})$ then $\Pi (x) \mathbin {\leftharpoonup } \phi = \Phi (x)\Pi '(x)$ , where $\mathop {\mathfrak {ch}}(\mathsf {FdB})$ is the Lie algebra of infinitesimal characters.

As a particular consequence of Proposition 2.9(iii), we get a nice description of the Lie algebra $\mathop {\mathfrak {ch}}(\mathsf {FdB})$ : the map $\phi \mapsto \phi (\Pi (x))\frac {d}{dx}$ gives a faithful representation by differential operators on $\mathbb {K}[[x]]$ . We can also combine this with Theorem 2.7 to characterize bialgebra morphisms $\mathsf {FdB} \to \mathbb {K}[L]$ .

Theorem 2.10 Let $\phi \colon \mathsf {FdB} \to \mathbb {K}[L]$ be an algebra morphism and let $\Phi (x, L) = \phi (\Pi (x))$ . Let $\beta (x)$ be the linear term in L of $\Phi (x, L)$ . Then, $\phi $ is a bialgebra morphism if and only if $\Phi (x, 0) = x$ and

(2.7) $$ \begin{align} \frac{\partial \Phi(x, L)}{\partial L} = \beta(x)\frac{\partial \Phi(x, L)}{\partial x}. \end{align} $$

Proof Recall the notation and results of Proposition 2.9(iii). We know that $\phi $ is a bialgebra morphism if and only if $\phi |_{L=0} = \varepsilon $ and $\frac {d}{dL}\phi = (\operatorname {lin} \phi ) * \phi $ . Since $\phi $ is an algebra morphism, its behaviour is determined by what it does to the generators, so these are, respectively, equivalent to $\Phi (x, 0) = \varepsilon (\Pi (x)) = x$ and

$$\begin{align*}\frac{\partial \Phi(x, L)}{\partial L} = ((\operatorname{lin} \phi) * \phi)(\Pi(x)) = \phi(\Pi(x) \mathbin{\leftharpoonup} \operatorname{lin}(\phi)). \end{align*}$$

Since $\operatorname {lin} \phi $ is an infinitesimal character, by Proposition 2.9(iii) the right-hand side is $\beta (x) \frac {\partial \Phi (x,L)}{\partial x}$ as wanted.

Example 2.11 For example, take $\phi \colon \mathsf {FdB} \to \mathbb {K}[L]$ defined on generators by $\phi (\pi _n) = L^n$ and extended as an algebra morphism. Then, $\Phi (x,L) = \sum _{n=0}^{\infty } L^nx^{n+1} = \frac {x}{1-Lx}$ and $\beta (x) = x^2$ . We can now check the conditions from Theorem 2.10: we have $\Phi (x,0) = x$ and $\frac {\partial \Phi (x,L)}{\partial L} = x^2\sum _{n=1}^\infty nL^{n-1}x^{n-1} = \beta (x)\frac {\partial \Phi (x,L)}{\partial x}$ , hence $\phi $ is a bialgebra morphism.

We mentioned in Section 2.1 that the $B_{+}$ and $B_{+}^{(i)}$ operators on rooted trees and decorated rooted trees are 1-cocycles. We will also need to understand 1-cocycles on other bialgebras, so let us now discuss them in more detail.

Let H be a bialgebra and M a left comodule over H, with coaction $\delta $ . For $k \ge 0$ , a k-cochain on M is a linear map $M \to H^{\otimes k}$ . Denote the vector space of k-cochains by $\mathrm {C}^k(H, M)$ . The coboundary map $d_k\colon \mathrm {C}^k(H, M) \to \mathrm {C}^{k+1}(H, M)$ is defined by

$$\begin{align*}d_k \Lambda = (\mathrm{id}_H \otimes \Lambda)\delta + \sum_{j=1}^k (-1)^j (\mathrm{id}_H^{\otimes(j-1)} \otimes \Delta \otimes \mathrm{id}_H^{\otimes(k-j)}) \Lambda + (-1)^{k+1} \Lambda \otimes 1. \end{align*}$$

The kernel and image of this map are the spaces of k-cocycles and $(k+1)$ -coboundaries, respectively. The space of k-cocycles is denoted $\mathrm {Z}^k(H, M)$ . A tedious but routine calculation shows that $d_{k+1}d_k = 0$ , so every coboundary is a cocycle. The quotient $\mathrm {H}^k(H, M) = \mathrm {Z}^k(H, M)/d_{k-1}\mathrm {C}^{k-1}(H, M)$ is the kth cohomology of the comodule M. Most often considered is the case $M = H$ , in which case we simply write $\mathrm {Z}^k(H)$ and $\mathrm {H}^k(H)$ .

Note that $\mathrm {H}^k(H, M)$ is $\mathrm {Ext}_H^k(M, H)$ in the category of comodules over H. The original notion of cohomology for coalgebras introduced by Doi [Reference Doi17] worked with a bicomodule with a left coaction and a right coaction. Our definition is the special case, where the right coaction is trivial.

We will only be interested in the case $k = 1$ . Moreover, for the remainder of this section we will focus on the case $M = H$ . (We will consider some other comodules in Section 3.2.) In this case, the cocycle condition $d_1\Lambda = 0$ can be written

$$\begin{align*}\Delta \Lambda = \Lambda \otimes 1 + (\mathrm{id}_H \otimes \Lambda)\Delta \end{align*}$$

which is the form we saw for $B_{+}$ . Here is another very natural example.

Example 2.12 Let $\mathcal {I}$ be the integration operator on $\mathbb {K}[L]$ :

$$\begin{align*}\mathcal{I} f(L) = \int_0^L f(u)\,du. \end{align*}$$

Recall that the coproduct on $\mathbb {K}[L]$ can be interpreted as substituting a sum $L_1 + L_2$ in place of the variable L. That $\mathcal {I}$ is a 1-cocycle then boils down to some familiar properties of integrals:

$$ \begin{align*} \int_0^{L_1 + L_2} f(u)\,du &= \int_0^{L_1} f(u)\,du + \int_{L_1}^{L_1 + L_2} f(u)\,du \\ &= \int_0^{L_1} f(u)\,du + \int_0^{L_2} f(L_1 + u)\,du. \end{align*} $$

By the universal property, $\mathcal {I}$ defines a morphism $\phi \colon \mathcal H \to \mathbb {K}[L]$ . An easy induction with the recurrence (2.2) gives

$$\begin{align*}\phi(t) = \frac{L^{|t|}}{\prod_{v \in t} |t_v|}, \end{align*}$$

where $t_v$ denotes the subtree (principal downset) rooted at v. The denominator is also known as the tree factorial. A formula of Knuth [Reference Knuth23, Section 5.1, Exercise 20] gives the number of linear extensions of a tree as

$$\begin{align*}e(t) = \frac{|t|!}{\prod_{v \in t} |t_v|} \end{align*}$$

and hence we can alternatively write

$$\begin{align*}\phi(t) = \frac{e(t)L^{|t|}}{|t|!}. \end{align*}$$

This latter formula is the simplest special case of the formula we will derive for arbitrary 1-cocycles on $\mathbb {K}[L]$ in the context of the tubing expansion.
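As a small check, let t be the three-vertex tree whose root has two children. The recursion (2.2) gives

$$\begin{align*}\phi(t) = \mathcal{I}\big(\mathcal{I}(1)\,\mathcal{I}(1)\big) = \mathcal{I}(L^2) = \frac{L^3}{3},\end{align*}$$

in agreement with both closed forms: the tree factorial of t is $3 \cdot 1 \cdot 1 = 3$, and t has $e(t) = 2$ linear extensions, so $e(t)L^{|t|}/|t|! = 2L^3/3! = L^3/3$.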

We state some basic properties of 1-cocycles. For stating these it is convenient to generalize the notion of convolution to maps defined on comodules: for an algebra A and maps $\alpha \colon H \to A$ and $\beta \colon M \to A$ , write

$$\begin{align*}\alpha *_\delta \beta = m_A(\alpha \otimes \beta)\delta, \end{align*}$$

where $m_A$ is the multiplication map on A.

Lemma 2.13 Let M be a comodule over H and $\Lambda \in \mathrm {Z}^1(H, M)$ . Then:

  (i) If $\alpha , \beta \colon H \to A$ for some algebra A then $(\alpha * \beta )\Lambda = \beta (1) \alpha \Lambda + \alpha *_\delta \beta \Lambda $ .

  (ii) $\varepsilon \Lambda = 0$ .

  (iii) If $\phi \colon N \to M$ is a homomorphism of comodules then $\Lambda \phi \in \mathrm {Z}^1(H, N)$ .

Proof

  (i) Immediate from the definition of 1-cocycles and convolution.

  (ii) Follows from (i) since $\varepsilon $ is the identity for convolution.

  (iii) Write $\delta $ and $\delta '$ for the coactions on M and $N,$ respectively. We have

    $$\begin{align*}\Delta \Lambda \phi = (\Lambda \otimes 1)\phi + (\mathrm{id} \otimes \Lambda)\delta \phi = \Lambda\phi \otimes 1 + (\mathrm{id} \otimes \Lambda \phi) \delta'. \end{align*}$$

Note, in particular, that if $\beta (1) = 0$ (e.g., if $\beta $ is an infinitesimal character), then (i) just says $(\alpha * \beta )\Lambda = \alpha * \beta \Lambda $ .

Suppose $\Lambda \in \mathrm {Z}^1(H)$ . We can use $\Lambda $ to build new cocycles on various comodules. Given a left comodule M with coaction $\delta $ and a linear map $\psi \colon M \to \mathbb {K}$ , we define $\Lambda \circledast \psi = (\Lambda \otimes \psi )\delta $ .

Lemma 2.14 Subject to the above assumptions, $\Lambda \circledast \psi \in \mathrm {Z}^1(H, M)$ .

Proof We compute

$$ \begin{align*} \Delta (\Lambda \circledast \psi) &= (\Delta\Lambda \otimes \psi) \delta \\ &= (\Lambda \otimes \psi)\delta \otimes 1 + ((\mathrm{id} \otimes \Lambda)\Delta \otimes \psi) \delta \\ &= (\Lambda \circledast \psi) \otimes 1 + (\mathrm{id} \otimes \Lambda \otimes \psi)(\Delta \otimes \mathrm{id})\delta \\ &= (\Lambda \circledast \psi) \otimes 1 + (\mathrm{id} \otimes \Lambda \otimes \psi)(\mathrm{id} \otimes \delta)\delta \\ &= (\Lambda \circledast \psi) \otimes 1 + (\mathrm{id} \otimes (\Lambda \circledast \psi))\delta.\\[-34pt] \end{align*} $$

As a special case of this, note that $d\varepsilon \circledast \psi = d\psi $ (dropping the subscript to the coboundary map by the usual abuse of notation). When $M = H$ , we can write $\Lambda \circledast \psi $ using the left action of $H^*$ on H described above:

$$\begin{align*}(\Lambda \circledast \psi)h = \Lambda(\psi \mathbin{\rightharpoonup} h). \end{align*}$$

Using this operation and the integral cocycle from Example 2.12, we can describe all 1-cocycles on $\mathbb {K}[L]$ .

Theorem 2.15 (Panzer [Reference Panzer28, Theorem 2.6.4])

For any series $A(L) \in \mathbb {K}[[L]]$ , the operator

(2.8) $$ \begin{align} f(L) \mapsto \int_0^L A(d/du) f(u) \,du \end{align} $$

is a 1-cocycle on $\mathbb {K}[L]$ . Moreover, all 1-cocycles on $\mathbb {K}[L]$ are of this form.

Corollary 2.16 The cohomology $\mathrm {H}^1(\mathbb {K}[L])$ is one-dimensional and generated by the class of the integral cocycle $\mathcal {I}$ .

Proof Note $\mathcal {I}(1) = L$ , while any coboundary $d\psi $ satisfies $d\psi (1) = \psi (1)1 - \psi (1)1 = 0$ , so $\mathcal {I}$ is not a coboundary. Now suppose $\Lambda $ is a 1-cocycle given by (2.8). Write $A(L) = a_0 + LB(L)$ for some series $B(L)$ . Then, we have

$$ \begin{align*} \Lambda f(L) &= \int_0^L A(d/du) f(u)\,du \\ &= a_0 \int_0^L f(u)\,du + \int_0^L \frac{d}{du}B(d/du)f(u)\,du \\ &= a_0 \mathcal{I} f(L) + B(d/dL)f(L) - B(d/dL)f(L)\big|_{L=0} \end{align*} $$

hence $\Lambda = a_0\mathcal {I} + d\beta ,$ where $\beta $ is the linear form $f(L) \mapsto B(d/dL)f(L)\big |_{L=0}$ , that is, $L^n \mapsto n!\,[L^n]B(L)$ .

Remark 2.17 We can also write 1-cocycles on $\mathbb {K}[L]$ in a different form, namely,

$$\begin{align*}f(L) \mapsto f(\partial/\partial \rho) \frac{e^{L\rho} - 1}{\rho}A(\rho)\big|_{\rho = 0}. \end{align*}$$

Checking on the basis of monomials quickly reveals that this is equivalent to the 1-cocycle that appears in the statement of Theorem 2.15. Operators of this form are often used by one of us in formulating DSEs (see for instance [Reference Hihn and Yeats22, Reference Marie and Yeats25, Reference Yeats35]).
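Concretely, writing $A(L) = \sum_{k \ge 0} a_kL^k$, both forms send the monomial $L^n$ to

$$\begin{align*}L^n \mapsto \sum_{k=0}^{n} a_k\,\frac{n!}{(n-k+1)!}\,L^{n-k+1},\end{align*}$$

which makes the equivalence explicit.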

In view of the comments above, it will turn out that the key to solving DSEs combinatorially will be in determining an explicit formula for the map $\mathcal H \to \mathbb {K}[L]$ induced by $\Lambda $ , in terms of the coefficients of the series $A(L)$ .

This setup generalizes immediately to the case of the decorated Connes–Kreimer Hopf algebra $\mathcal H_I$ and the 1-cocycles $B_{+}^{(i)}$ for $i\in I$ .

2.3 The renormalization group equation and the Riordan group

Let $\beta (x)$ and $\gamma (x)$ be formal power series, with $\beta (0) = 0$ . The renormalization group equation (RGE) (or Callan–Symanzik equation) is

(2.9) $$ \begin{align} \left(\frac{\partial}{\partial L} - \beta(x) \frac{\partial}{\partial x} - \gamma(x)\right)G(x, L) = 0. \end{align} $$

As suggested by the notation, we will ultimately want to think of this $G(x, L)$ as the same one which appears in the DSE, but for the purposes of this section, we can consider it to be simply notation for the (potential) solution to this PDE.

Remark 2.18 In the quantum field theory context, the renormalization group equation (2.9) is a very important differential equation because it describes how the Green function changes as the energy scale L changes.

The series $\beta (x)$ is the beta function of the quantum field theory. Thinking for a moment not in terms of formal power series but in the physical context with functions, $\beta $ should be the L derivative of the coupling x. Returning to the series context, this becomes essentially the linear term of what we will soon call the invariant charge, Q (see (2.19)). The beta function is important physically because it describes how the coupling (which determines the strength of interactions) changes with the energy. Zeros of the beta function are particularly important since the situation simplifies at such points.

The series $\gamma (x)$ is the anomalous dimension. It is a dimension in the sense of scaling dimension, and it is anomalous in the sense that in the classical setting one would have a constant integer in place of $\gamma (x)$ . A particularly easy case is when $\beta (x)=0$ in (2.9) as there the differential equation can be solved by $e^{\gamma (x)L}$ which is of a particularly simple form sometimes called a scaling solution.

Returning to our formal series context, taking the coefficient of $L^0$ in (2.9) we see that the anomalous dimension is nothing other than the linear term in L of G (in the cases of physical interest we will always have $g_0=1$ ). Furthermore, writing $G(x,L) = \sum _{i\geq 0}g_i(x)L^i$ (so $g_0(x) = 1$ and $\gamma (x) = g_1(x)$ ), and taking the coefficient of $L^{k-1}$ in (2.9), we obtain

$$\begin{align*}g_k(x) = \frac{1}{k}\left(\beta(x)\frac{d}{dx} + \gamma(x)\right)g_{k-1}(x) \end{align*}$$

so we see that knowing $\gamma (x)$ and $\beta (x)$ is sufficient to recursively determine all the $g_k(x)$ and hence to determine $G(x,L)$ . Furthermore, since $\beta (x)$ is essentially the linear term of the invariant charge, in the single equation case $\beta (x)$ is just a normalization away from anomalous dimension $\gamma (x)$ and in the system case is a linear combination of the anomalous dimensions of the different $G_i(x,L)$ in the system (see [Reference Yeats35]). Overall, then we conclude that knowing the anomalous dimension(s) suffices to determine the Green functions. Usually, we will nonetheless work at the level of the Green function, but sometimes it will be convenient to work only with the anomalous dimension, which we are free to do since this does not lose any information.
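For instance, the first two steps of this recursion read

$$\begin{align*}g_1(x) = \gamma(x), \qquad g_2(x) = \tfrac{1}{2}\big(\beta(x)\gamma'(x) + \gamma(x)^2\big),\end{align*}$$

and when $\beta(x) = 0$ it collapses to $g_k(x) = \gamma(x)^k/k!$, recovering the scaling solution $e^{\gamma(x)L}$ of Remark 2.18.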

The goal of this section is to explain how (2.9) is intimately related to a certain Hopf algebra. As a starting point, notice that if $\gamma (x) = 0$ we have already seen this equation: by Theorem 2.10, it describes a bialgebra morphism $\mathsf {FdB} \to \mathbb {K}[L]$ . We will show that something similar is true for (2.9).

Recall that $\widetilde {\mathfrak {D}}$ denotes the group of formal power series with zero constant term and nonzero linear term under composition and ${\mathfrak {D}}$ the subgroup of $\delta $ -series, and that ${\mathfrak {D}}^{\mathrm {op}}$ is isomorphic to the character group of $\mathsf {FdB}$ . Now, observe that for $\Phi (x) \in \widetilde {\mathfrak {D}}$ the map $F(x) \mapsto F(\Phi (x))$ is a ring automorphism of $\mathbb {K}[[x]]$ . Moreover, composing these automorphisms corresponds to composing the series in reverse, so $\widetilde {\mathfrak {D}}^{\mathrm {op}}$ (and hence also ${\mathfrak {D}}^{\mathrm {op}}$ ) acts by automorphisms on $\mathbb {K}[[x]]$ . Consequently, they also act on $\mathbb {K}[[x]]^\times $ , the multiplicative group of power series with nonzero constant term. Let $\mathbb {K}[[x]]^\times _1$ be the subgroup of $\mathbb {K}[[x]]^\times $ consisting of those series with constant term 1. The Riordan group is the semidirect product $\mathfrak {R} = \mathbb {K}[[x]]^\times _1 \rtimes {\mathfrak {D}}$ . Explicitly, the elements consist of pairs $(F(x), \Phi (x))$ of series with $F(x) \in \mathbb {K}[[x]]^\times _1$ and $\Phi (x) \in {\mathfrak {D}}$ , with the operation

$$\begin{align*}(F(x), \Phi(x)) * (G(x), \Psi(x)) = (F(x)G(\Phi(x)), \Psi(\Phi(x))). \end{align*}$$

Remark 2.19 The Riordan group was first introduced – at least under that name – by Shapiro, Getu, Woan, and Woodson [Reference Shapiro, Getu, Woan and Woodson30]. It is usually thought of as a group of infinite matrices, via the correspondence

$$\begin{align*}(F(x), \Phi(x)) \mapsto \Big[ [x^i] F(x)\Phi(x)^j \Big]_{i,j \in \mathbb{Z}_{\geq 0}} \end{align*}$$

sending a pair of series to their Riordan matrix. (This is simply a matrix representation of the natural action of $\mathfrak {R}$ on $\mathbb {K}[[x]]$ .) Conventions vary on whether or not to include the restrictions on coefficients; our choice matches the original definition in [Reference Shapiro, Getu, Woan and Woodson30] as well as being convenient for relating $\mathfrak {R}$ to a Hopf algebra.
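A standard example is the pair $\big(\tfrac{1}{1-x}, \tfrac{x}{1-x}\big)$, whose Riordan matrix is Pascal's triangle:

$$\begin{align*}[x^i]\,\frac{1}{1-x}\left(\frac{x}{1-x}\right)^{j} = [x^{i-j}]\,(1-x)^{-(j+1)} = \binom{i}{j}.\end{align*}$$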

We now wish to define a Hopf algebra with $\mathfrak {R}$ as its character group, similar to the Faà di Bruno Hopf algebra. We will call it the Riordan Hopf algebra and denote it by $\mathsf {Rio}$ . As an algebra, $\mathsf {Rio}$ is a free commutative algebra in two sets of generators $\{\pi _1, \pi _2, \dots \}$ and $\{y_1, y_2, \dots \}$ . The $\pi $ ’s will generate a copy of $\mathsf {FdB}$ ; in particular, their coproduct is still given by (2.6). (This inclusion $\mathsf {FdB} \to \mathsf {Rio}$ is dual to the quotient map $\mathfrak {R} \to {\mathfrak {D}}^{\mathrm {op}}$ coming from the semidirect product.) We assemble the y’s into a power series as well, this time in the more obvious way:

$$\begin{align*}Y(x) = 1 + \sum_{n \ge 1} y_n x^n. \end{align*}$$

Then, the coproduct is given by

(2.10) $$ \begin{align} \Delta y_n = \sum_{j = 0}^n [x^n] Y(x)\Pi(x)^j \otimes y_j. \end{align} $$
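In low degrees, this gives

$$\begin{align*}\Delta y_1 = y_1 \otimes 1 + 1 \otimes y_1, \qquad \Delta y_2 = y_2 \otimes 1 + (y_1 + \pi_1) \otimes y_1 + 1 \otimes y_2,\end{align*}$$

already showing how the $\pi $ ’s enter the coproducts of the y’s.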

Analogously to Proposition 2.8, we easily get the following result.

Proposition 2.20 Let A be a commutative algebra and $\phi , \psi \colon \mathsf {Rio} \to A$ be algebra morphisms. Let $F(x) = \phi (Y(x))$ , $\Phi (x) = \phi (\Pi (x))$ , $G(x) = \psi (Y(x))$ , and $\Psi (x) = \psi (\Pi (x))$ . Then,

$$\begin{align*}(\phi * \psi)(Y(x)) = F(x)G(\Phi(x)) \end{align*}$$

and

$$\begin{align*}(\phi * \psi)(\Pi(x)) = \Psi(\Phi(x)). \end{align*}$$

Consequently, $\operatorname {Ch}(\mathsf {Rio}) \cong \mathfrak {R}$ .

We also have an analog of Proposition 2.9. Note that since the $\pi $ ’s generate a copy of $\mathsf {FdB}$ we can simply apply Proposition 2.9 itself to see how elements of the dual act on them. Thus, we only need the actions on $Y(x)$ .

Proposition 2.21 Suppose $\phi \in \mathsf {Rio}^*$ and let $F(x) = \phi (Y(x))$ and $\Phi (x) = \phi (\Pi (x))$ . Then,

  (i) $\phi \mathbin {\rightharpoonup } Y(x) = F(\Pi (x))Y(x)$ .

  (ii) If $\phi \in \operatorname {Ch}(\mathsf {Rio})$ then $Y(x) \mathbin {\leftharpoonup } \phi = F(x)Y(\Phi (x))$ .

  (iii) If $\phi \in \mathop {\mathfrak {ch}}(\mathsf {Rio})$ then $Y(x) \mathbin {\leftharpoonup } \phi = \Phi (x)Y'(x) + F(x)Y(x)$ .

Finally, we reach the main result of this section.

Theorem 2.22 Let $\phi \colon \mathsf {Rio} \to \mathbb {K}[L]$ be an algebra morphism and let $F(x, L) = \phi (Y(x))$ and $\Phi (x, L) = \phi (\Pi (x))$ . Let $\beta (x)$ be the linear term in L of $\Phi (x, L)$ and $\gamma (x)$ the linear term in L of $F(x, L)$ . Suppose $\phi $ is a bialgebra morphism when restricted to the subalgebra $\mathsf {FdB}$ . Then, $\phi $ is a bialgebra morphism on all of $\mathsf {Rio}$ if and only if $F(x, L)$ satisfies the renormalization group equation

(2.11) $$ \begin{align} \left(\frac{\partial}{\partial L} - \beta(x) \frac{\partial}{\partial x} - \gamma(x)\right)F(x, L) = 0. \end{align} $$

Proof By the same argument as Theorem 2.10, it is necessary and sufficient to have $\frac {d}{dL}\phi = (\operatorname {lin} \phi ) * \phi $ . Applying Proposition 2.21(iii) gives the result.

Obviously, if we assume that $\phi $ is merely an algebra map $\mathsf {Rio} \to \mathbb {K}[L]$ then it is a bialgebra morphism if and only if it satisfies the conditions of both Theorems 2.10 and 2.22.

Remark 2.23 A result equivalent to Theorem 2.22 was proved by Bacher [Reference Bacher1, Proposition 7.1]. He does not take a Hopf algebra perspective but instead essentially works with the Lie algebra $\mathop {\mathfrak {ch}}(\mathsf {Rio})$ in a matrix representation and for an element $\sigma \in \mathop {\mathfrak {ch}}(\mathsf {Rio})$ corresponding to the pair $(\gamma (x), \beta (x))$ characterizes $\exp _{*}(L\sigma )$ as (the Riordan matrix of) the solution to (2.11) and (2.7), which is equivalent to our result by Theorem 2.7. That the PDE in question is in fact the renormalization group equation seems not to have been noticed.

The key insight here is that working with $\mathsf {Rio}$ lets us separate out Y and $\Pi $ in a way that’s ideally suited for understanding the renormalization group equation, and which we will use to understand the role of the invariant charge in the following.

2.4 Dyson–Schwinger equations

Now we are ready to give the honest DSEs, not just their combinatorial avatars, in the form in which we will use them.

As in the combinatorial setup, let P be a set (finite or infinite) and assign each $p \in P$ a weight $w_p \in \mathbb {Z}_{\geq 1}$ , such that there are only finitely many elements of each weight, and an insertion exponent $\mu _p \in \mathbb {K}$ . To each $p \in P,$ we also associate a 1-cocycle $\Lambda _p$ on the polynomial Hopf algebra $\mathbb {K}[L]$ . The DSE defined by these data is

(2.12) $$ \begin{align} G(x, L) = 1 + \sum_{p \in P} x^{w_p} \Lambda_p(G(x, L)^{\mu_p}). \end{align} $$

(Note that here and throughout, expressions such as $\Lambda _p(G(x, L)^{\mu _p})$ are to be interpreted as meaning that we expand the argument as a series in x and apply the operator coefficientwise.) Using the same data but with $B_{+}^{(p)}$ in place of $\Lambda _p$ we see that the corresponding combinatorial DSE is the one in (2.3).

By Theorem 2.15 we can write

(2.13) $$ \begin{align} \Lambda_p f(L) = \int_0^L A_p(d/du) f(u)\,du \end{align} $$

for some series $A_p(L) \in \mathbb {K}[[L]]$ which physically is more or less the Mellin transform of p.

A particularly nice case, which covers most of the physically reasonable examples, is when there is a linear relationship between the weights and the insertion exponents: $\mu _p = 1 + sw_p$ for some $s \in \mathbb {K}$ . In this case, we can combine terms to get

(2.14) $$ \begin{align} G(x, L) = 1 + \sum_{k \ge 1} x^k \Lambda_k(G(x, L)^{1+sk}). \end{align} $$

Previous work on combinatorial solutions to DSEs has focused on this case, and indeed equations of this form have some special properties which we discuss below. However, the tubing solutions apply in the more general form (2.12).

Remark 2.24 These DSEs are not yet quite in the form one would usually see in the physics literature. As a first step, using Remark 2.17, we obtain the DSEs in the form usually presented by one of us [Reference Hihn and Yeats22, Reference Marie and Yeats25, Reference Yeats35]. Using (2.13) brings us closer to the integral equation form that perturbative DSEs are typically written in. See [Reference Yeats35] or [Reference Olson-Harris27] for a derivation relating them. The textbook presentation of DSEs is often one step further distant. Taking a perturbative or diagrammatic expansion (along with the usual techniques to reduce to one particle irreducible Green functions) bridges this last gap (see, e.g., [Reference Swanson32]).

The diagrammatic form of the DSEs mentioned above is perhaps the easiest perspective to get an intuition for what these equations are telling us – they are telling us how to build all Feynman diagrams contributing to a given process by inserting simpler primitive Feynman diagrams into themselves. This explains some otherwise mysterious aspects of the nomenclature. The indexing set I for systems typically corresponds to the external edge structures of the diagrams. The weight is the loop order (dimension of the cycle space) of the primitive diagram. The insertion exponent counts how the number of places a Feynman diagram of the given external edge structure can be inserted grows as the loop order grows. The function $A(\rho )/\rho $ is the Mellin transform of the Feynman integral for the primitive diagram regularized at the insertion place.

As before, we are interested not only in single equations but also systems. In that case, we partition our indexing set P into $\{P_i\}_{i \in I}$ for some finite set I which will index the equations in the system. Each $p \in P$ is still assigned a simple weight $w_p \in \mathbb {Z}_{\geq 1}$ but the insertion exponent is now an insertion exponent vector $\mu _p \in \mathbb {K}^I$ . We are now solving for a vector of series $\mathbf G(x, L) = (G_i(x, L))_{i \in I}$ . The system of equations we consider is

(2.15) $$ \begin{align} G_i(x, L) = 1 + \sum_{p \in P_i} x^{w_p} \Lambda_p(\mathbf G(x, L)^{\mu_p}). \end{align} $$

The corresponding combinatorial DSEs we saw in (2.5).

The analog of the special case (2.14) is the existence of a so-called invariant charge for the system, which we define in the following and which is closely related to $G(x,L)$ satisfying a renormalization group equation.

We will begin with the simplest case (2.12). By Theorem 2.22, we see that $G(x, L)$ satisfies a renormalization group equation if there exists a bialgebra morphism $\mathsf {Rio} \to \mathbb {K}[L]$ that sends $Y(x)$ to $G(x, L)$ . It is natural to lift to the combinatorial equation (2.3) and ask instead for a bialgebra morphism $\mathsf {Rio} \to \mathcal H_P$ that sends $Y(x)$ to $T(x)$ . The question then is where $\Pi (x)$ should be mapped. We wish to construct from $T(x)$ an auxiliary series $Q(x) \in \mathcal H_P[[x]]$ – the invariant charge – such that the map sending $\Pi (x)$ to $Q(x)$ and $Y(x)$ to $T(x)$ is a bialgebra morphism. Note that this is unique if it exists since the coproduct formula (2.10) allows us to recover it from the coproducts of coefficients of $T(x)$ . It turns out that the case in which we can ensure this exists is exactly the special case (2.14).

Proposition 2.25 Let $T(x) \in \mathcal H_{\mathbb {Z}_{\geq 1}}[[x]]$ be the solution of the combinatorial DSE

$$\begin{align*}T(x) = 1 + \sum_{k \ge 1} x^kB_{+}^{(k)}(T(x)^{1+sk}). \end{align*}$$

Then, the algebra morphism $\phi \colon \mathsf {Rio} \to \mathcal H_{\mathbb {Z}_{\geq 1}}$ defined by $\phi (Y(x)) = T(x)$ and $\phi (\Pi (x)) = xT(x)^s$ is a bialgebra morphism. As a consequence, the solution $G(x, L)$ to the corresponding DSE (2.14) satisfies the renormalization group equation

$$\begin{align*}\left(\frac{\partial}{\partial L} - sx\gamma(x)\frac{\partial}{\partial x} - \gamma(x)\right)G(x, L) = 0, \end{align*}$$

where $\gamma (x)$ is the linear term in L of $G(x, L)$ .
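The specific form of the beta function here can be seen directly: composing $\phi $ with the bialgebra morphism $\mathcal H_{\mathbb{Z}_{\geq 1}} \to \mathbb{K}[L]$ given by Theorem 2.2 (which sends $T(x)$ to $G(x, L)$) sends $\Pi(x)$ to $xG(x, L)^s$, and expanding $G(x, L) = 1 + \gamma(x)L + O(L^2)$ gives

$$\begin{align*}xG(x, L)^s = x + s\,x\,\gamma(x)L + O(L^2),\end{align*}$$

so the beta function supplied by Theorem 2.22 is $sx\gamma(x)$.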

Remark 2.26 While the phrasing of Proposition 2.25 seems to be new, its content is not: the coproduct formula for $T(x)$ implied by combining this result with (2.10) is well-known. (See [Reference Yeats35, Lemma 4.6] for exactly this formula and for instance [Reference Borinsky7, Theorem 1], [Reference Prinz29, Proposition 4.2], and [Reference van Suijlekom, Fauser, Tolksdorf and Zeidler33, Proposition 7] for essentially equivalent formulas appearing in slightly different contexts.) Our proof is in some sense also the same as what had appeared before, but we believe this presentation is more conceptually clear. The generalization to distinguished insertion places in Section 3.3 is new.

We now work towards proving Proposition 2.25. It will be convenient to abuse notation here by neglecting to notate the obvious (but non-injective!) map from tensor products of power series to power series with tensor coefficients. In effect, we want to treat x as though it were a scalar, in line with our policy of always applying operators coefficientwise. With this in mind, we can rewrite (2.6) and (2.10) simply as

$$\begin{align*}\Delta \Pi(x) = \sum_{j \ge 0} \Pi(x)^{j+1} \otimes \pi_j \end{align*}$$

and

$$\begin{align*}\Delta Y(x) = \sum_{j \ge 0} Y(x)\Pi(x)^j \otimes y_j. \end{align*}$$

Our first lemma is a common generalization of both formulas.

Lemma 2.27 For any $s \in \mathbb {K}$ and $k \in \mathbb {Z}_{\geq 0}$ ,

$$\begin{align*}\Delta\big(Y(x)^s\Pi(x)^k\big) = \sum_{j \ge 0} Y(x)^s\Pi(x)^j \otimes [x^j]Y(x)^s\Pi(x)^k. \end{align*}$$

(Note that since $\Pi (x)$ has zero constant term, we can raise it only to nonnegative integer powers if we want to stay in the realm of power series.)

Proof Both sides are power series with coefficients that are polynomials in s, so it is sufficient to prove the case $s \in \mathbb {Z}_{\geq 0}$ . Then, by the coproduct formulas we can write

$$ \begin{align*} \Delta\big(Y(x)^s\Pi(x)^k\big) &= \sum_{i_1, \dots, i_s, j_1, \dots, j_k} Y(x)^s \Pi(x)^{i_1 + \dots + i_s + j_1 + \dots + j_k + k} \otimes y_{i_1} \cdots y_{i_s} \pi_{j_1} \cdots \pi_{j_k} \\ &= \sum_{j \ge 0} Y(x)^s \Pi(x)^j \otimes \sum_{i_1 + \dots + i_s + j_1 + \dots + j_k + k = j} y_{i_1} \cdots y_{i_s} \pi_{j_1} \cdots \pi_{j_k} \\ &= \sum_{j \ge 0} Y(x)^s \Pi(x)^j \otimes [x^j]Y(x)^s \Pi(x)^k \end{align*} $$

as desired.

For $n \ge 0$ , let $\mathsf {FdB}^{(n)}$ denote the subalgebra of $\mathsf {FdB}$ generated by $\pi _1, \dots , \pi _{n-1}$ (this should not be confused with the graded piece $\mathsf {FdB}_n$ ) and let $\mathsf {Rio}^{(n)}$ denote the subalgebra of $\mathsf {Rio}$ generated by $\pi _1, \dots , \pi _{n-1}$ and $y_1, \dots , y_n$ . From the coproduct formulas it is clear that are in fact sub-bialgebras. The following result is new as stated but encapsulates the main calculation used in standard proofs of Proposition 2.25.

Lemma 2.28 Suppose H is a bialgebra, $\phi \colon \mathsf {Rio} \to H$ is an algebra morphism, and $\{\Lambda _k\}_{k \in \mathbb {Z}_{\geq 1}}$ is a family of 1-cocycles on H. Let $\Phi (x) = \phi (\Pi (x))$ and suppose $\phi (Y(x)) = F(x),$ where $F(x)$ is the unique solution to

(2.16) $$ \begin{align} F(x) = 1 + \sum_{k \ge 1} \Lambda_k(F(x)\Phi(x)^k). \end{align} $$

Then, for $n \ge 0$ , if $\phi $ is a bialgebra morphism when restricted to $\mathsf {FdB}^{(n)}$ , it is also a bialgebra morphism when restricted to $\mathsf {Rio}^{(n)}$ .

Recall that by definition $\Pi (x)$ and hence also $\Phi (x)$ has zero constant term, so only the terms with $k \le n$ on the right side of (2.16) can contribute to the coefficient of $x^n$ . Thus, the equation really does have a unique solution.

Proof Since we are given that $\phi $ is an algebra morphism we must only prove it preserves the coproducts of the generators. We prove this by induction on n. In the base case, $\mathsf {FdB}^{(0)} = \mathsf {Rio}^{(0)} = \mathbb {K}$ so there is nothing to prove. Now, suppose that $n> 0$ and that $\phi $ is a bialgebra morphism when restricted to $\mathsf {Rio}^{(n-1)}$ and also preserves the coproduct of $\pi _{n-1}$ . Then, we must show it preserves the coproduct of $y_n$ . Note that when $k> 1$ , the coefficient $[x^n]Y(x)\Pi (x)^k$ does not contain $y_n$ , so its coproduct agrees with the formula from Lemma 2.27. Thus,

$$ \begin{align*} \Delta \phi(y_n) &= \Delta([x^n]F(x)) \\ &= \Delta\left(\sum_{k \ge 1} \Lambda_k\left([x^n]F(x)\Phi(x)^k\right)\right) \\ &= \left(\Lambda_k \otimes 1 + (\mathrm{id} \otimes \Lambda_k)\Delta\right)\left(\sum_{k \ge 1} [x^n]F(x)\Phi(x)^k\right) \\ &= [x^n]F(x) \otimes 1 + \sum_{k \ge 1} \sum_{j=0}^{n-k} [x^n] F(x)\Phi(x)^j \otimes [x^j] \Lambda_k(F(x)\Phi(x)^k) \\ &= [x^n]F(x) \otimes 1 + \sum_{j=0}^{n-1} [x^n]F(x)\Phi(x)^j \otimes [x^j] \left(\sum_{k \ge 1} \Lambda_k(F(x)\Phi(x)^k)\right) \\ &= \sum_{j=0}^{n} [x^n]F(x)\Phi(x)^j \otimes [x^j]F(x) \\ &= (\phi \otimes \phi)(\Delta y_n).\\[-34pt] \end{align*} $$

Remark 2.29 An obvious consequence of Lemma 2.28 is that if $\phi $ is already known to be a bialgebra morphism when restricted to $\mathsf {FdB}$ then it is a bialgebra morphism on all of $\mathsf {Rio}$ . This is not quite the right version of the statement for the application to DSEs, but it does give some interesting examples of series satisfying renormalization group equations. For instance, consider the map $\mathsf {FdB} \to \mathcal H$ given by $\pi _n \mapsto \ell _1^n$ . (Recall that $\ell _n$ is the n-vertex ladder, so, in particular, $\ell _1$ is the unique one-vertex tree.) It is a straightforward exercise to show that this is in fact a bialgebra morphism. Thus, we can extend this map to $\mathsf {Rio}$ by sending $Y(x)$ to the series $T(x)$ defined by

$$\begin{align*}T(x) = 1 + xB_{+}\left(\frac{T(x)}{1-\ell_1x}\right), \end{align*}$$

an example due to Dugan [Reference Dugan18] of a series not coming from a DSEs which nonetheless satisfies a renormalization group equation after applying a bialgebra morphism $\mathcal H \to \mathbb {K}[L]$ . In the spirit of Theorems 2.4 and 2.5, we can think of $T(x)$ as a generating function for plane trees with the property that one obtains a ladder after deleting all leaves.

We can now prove Proposition 2.25.

Proof of Proposition 2.25

We prove by induction on n that $\phi $ is a bialgebra morphism on $\mathsf {Rio}^{(n)}$ . For $n = 0$ this is trivial. Now, suppose $n> 0$ and that $\phi $ is a bialgebra morphism on $\mathsf {Rio}^{(n-1)}$ . In particular, $\phi $ is a bialgebra morphism on $\mathsf {FdB}^{(n-1)}$ , and we observe that since $[x^{n-1}]T(x)^s \in \phi (\mathsf {Rio}^{(n-1)})$ , by Lemma 2.27 we have

$$ \begin{align*} \Delta \phi(\pi_{n-1}) &= \Delta([x^{n-1}]T(x)^s) \\ &= \sum_{j \ge 0} [x^{n-1}] T(x)^s (xT(x)^s)^j \otimes [x^j]T(x)^s \\ &= \sum_{j \ge 0} [x^{n-1}] (xT(x)^s)^{j+1} \otimes [x^j]T(x)^s \\ &= \sum_{j \ge 0} [x^{n-1}] \phi(\Pi(x))^{j+1} \otimes \phi(\pi_j) \\ &= (\phi \otimes \phi)(\Delta \pi_{n-1}) \end{align*} $$

so $\phi $ is a bialgebra morphism on $\mathsf {FdB}^{(n)}$ . By Lemma 2.28, $\phi $ is thus a bialgebra morphism on $\mathsf {Rio}^{(n)}$ as wanted. The renormalization group equation then follows from Theorem 2.22.

Now we consider systems. The idea is the same, that we would like to write each equation of the system in a form that looks like (2.16). In general, this will only work if we have the same invariant charge for each equation. In terms of the setup, for $p \in P_i$ we want a linear relation

$$\begin{align*}\mu_p = 1_i + w_p\mathbf s \end{align*}$$

for some $\mathbf s = (s_i)_{i \in I} \in \mathbb {K}^I$ . As in the single-equation case, we may as well combine terms together to write the system in the form

(2.17) $$ \begin{align} G_i(x, L) = 1 + \sum_{k \ge 1} x^k\Lambda_{i,k}\left(G_i(x,L)\prod_j G_j(x, L)^{s_jk}\right). \end{align} $$

The corresponding combinatorial system then looks like

(2.18) $$ \begin{align} T_i(x) = 1 + \sum_{k \ge 1} B_{+}^{(i, k)}(T_i(x)Q(x)^k), \end{align} $$

where

(2.19) $$ \begin{align} Q(x) = x \prod_{i \in I} T_i(x)^{s_i}. \end{align} $$

We then have the following generalization of Proposition 2.25. (Note that most of the papers referenced in Remark 2.26 are actually for this version already.)

Theorem 2.30 Let $\mathbf T(x) \in \mathcal H_{I \times \mathbb {Z}_{\geq 1}}[[x]]^I$ be the solution to the combinatorial Dyson–Schwinger system (2.18). Then, for any $i \in I$ , the map $\phi _i\colon \mathsf {Rio} \to \mathcal H_{I \times \mathbb {Z}_{\geq 1}}$ defined by $\phi _i(Y(x)) = T_i(x)$ and $\phi _i(\Pi (x)) = Q(x)$ is a bialgebra morphism. As a consequence, the solution $\mathbf G(x, L)$ to the corresponding Dyson–Schwinger system (2.17) satisfies the renormalization group equations

$$\begin{align*}\left(\frac{\partial}{\partial L} - \beta(x) \frac{\partial}{\partial x} - \gamma_i(x)\right) G_i(x, L) = 0, \end{align*}$$

where $\gamma _i(x)$ is the linear term in L of $G_i(x, L)$ and

$$\begin{align*}\beta(x) = \sum_{i \in I} s_ix\gamma(x). \end{align*}$$

Proof We prove by induction on n that $\phi _i$ is a bialgebra morphism on $\mathsf {Rio}^{(n)}$ for all i. Supposing they are all bialgebra morphisms on $\mathsf {Rio}^{(n-1)}$ . Then, as in the proof of Proposition 2.25, we have

$$ \begin{align*} \Delta \phi_i(\pi_{n-1}) &= \Delta([x^{n-1}] Q(x)) \\ &= [x^{n-1}] \prod_{i \in I} \Delta(T_i(x)^{s_i}) \\ &= [x^{n-1}] \sum_{\alpha \in \mathbb{Z}_{\geq 0}^I} \prod_{i \in I} \left(T_i(x)^{s_i}Q(x)^{\alpha_i} \otimes [x^{\alpha_i}] T_i(x)^{s_i}\right) \\ &= \sum_{\alpha \in \mathbb{Z}_{\geq 0}^I} [x^n] Q(x)^{|\alpha| + 1} \otimes \prod_{i \in I} [x^{\alpha_i}]T_i(x)^{s_i} \\ &= \sum_{j \ge 0} [x^n] Q(x)^{j+1} \otimes [x^{j+1}]Q(x) \\ &= (\phi \otimes \phi)(\Delta \pi_{n-1}) \end{align*} $$

and thus $\phi _i$ is a bialgebra morphism on $\mathsf {FdB}^{(n)}$ and hence on $\mathsf {Rio}^{(n)}$ by Lemma 2.28. The renormalization group equation then follows from Theorem 2.22.

2.5 The anomalous dimension revisited

Consider the Dyson–Schwinger system (2.17). Write the 1-cocycles in the form given by Theorem 2.15:

$$\begin{align*}\Lambda_{i,k} f(L) = \int_0^L A_{i,k}(d/du)f(u)\,du. \end{align*}$$

We can then differentiate both sides of (2.17) with respect to L:

(2.20) $$ \begin{align} \frac{\partial}{\partial L} G_i(x, L) = \sum_{k \ge 1} x^k A_{i,k}(\partial/\partial L) G_i(x, L)\left(\prod_j G_j(x, L)^{s_j}\right)^k. \end{align} $$

For the moment writing $\Psi (x,L) =\prod _j G_j(x,L)^{s_j}$ and using Theorems 2.10, 2.22, and 2.30, we have that

$$\begin{align*}\frac{\partial \Psi(x,L)}{\partial L} = \beta(x)\frac{\partial \Psi(x,L)}{\partial x} \end{align*}$$

and hence

$$ \begin{align*} & \frac{\partial}{\partial L} \left(G_i(x, L) \Psi(x,L)^k\right) \\ & = \left( \left(\beta(x)\frac{\partial}{\partial x} + \gamma_i(x)\right)G_i(x,L)\right)\Psi(x,L)^k + \beta(x)G_i(x,L)\Psi(x,L)^{k-1}\left(\frac{\partial}{\partial x}\Psi(x,L)\right) \\ & = \left(\beta(x)\frac{\partial}{\partial x} + \gamma_i(x)\right)G_i(x,L)\Psi(x,L). \end{align*} $$

Putting this back into (2.20), we obtain

(2.21) $$ \begin{align} \frac{\partial}{\partial L} G_i(x, L) = \sum_{k \ge 1} A_{i,k}\left(\beta(x)\frac{\partial}{\partial x} + \gamma_i(x)\right) G_i(x, L)\left(\prod_j G_j(x, L)^{s_j}\right)^k. \end{align} $$

If we then set $L = 0$ , we obtain the following somewhat strange equation satisfied by the anomalous dimension:

(2.22) $$ \begin{align} \gamma_i(x) = \sum_{k \ge 1} A_{i,k}\left(\beta(x)\frac{\partial}{\partial x} + \gamma_i(x)\right) x^k. \end{align} $$

Two special cases are interesting. First, in the case of only a single term $k = 1$ , (2.22) can be rewritten as a pseudo-differential equation

(2.23) $$ \begin{align} A_i\left(\beta(x)\frac{\partial}{\partial x} + \gamma_i(x)\right)^{-1} \gamma_i(x) = x. \end{align} $$

This formulation is due to Balduf [Reference Balduf2, Equation 2.19] but special cases have been known for longer [Reference Bellon5, Reference Broadhurst and Kreimer9, Reference Yeats35]. If $A_i(x)$ is the reciprocal of a polynomial, which occurs in some physically relevant cases, (2.23) becomes an honest differential equation. These equations have been analyzed by Borinsky, Dunne, and the second author [Reference Borinsky, Dunne and Yeats8] using the combinatorics of tubings.

The other special case is that of a linear system, i.e., $\beta (x) = 0$ . In this case, (2.22) becomes a functional equation

(2.24) $$ \begin{align} \gamma_i(x) = \sum_{k \ge 1} x^k A_{i,k}(\gamma_i(x)). \end{align} $$

In the intersection of these two special cases, a linear equation with a single term, (2.24) can be solved explicitly by Lagrange inversion (see [Reference Balduf, Cantwell, Ebrahimi-Fard, Nabergall, Olson-Harris and Yeats4, Lemma 2.32].

2.6 Tubings

We now introduce the combinatorial objects that we will use to understand 1-cocycles. We will only be interested in trees, but the basic definitions can be given in the context of an arbitrary finite poset P. A tube is a connected convex subset of P. For tubes $X, Y$ write $X \to Y$ if $X \cap Y = \emptyset $ and there exist $x \in X$ and $y \in Y$ such that $x < y$ . A collection $\tau $ of tubes is called a tubing if it satisfies the following conditions:

  • (Laminarity) If $X, Y \in \tau $ then either $X \cap Y = \emptyset $ , $X \subseteq Y$ , or $X \supseteq Y$ .

  • (Acyclicity) There do not exist tubes $X_1, \dots , X_k \in \tau $ with $X_1 \to X_2 \to \dots \to X_k \to X_1$ .

Tubings of posets (also called pipings) were introduced by Galashin [Reference Galashin21] to index the vertices of a certain polytope associated with P, the P-associahedron. They were rediscovered (in the case of trees) by the authors of [Reference Balduf, Cantwell, Ebrahimi-Fard, Nabergall, Olson-Harris and Yeats4] in the present context. Note that for trees the acyclicity condition is trivial.

Remark 2.31 Galashin defines a proper tube to be one which is neither a singleton nor the entirety of P, and a proper tubing to be one consisting only of proper tubes. Only the proper tubes and tubings play a role in the definition of the poset associahedron, but for us it will be sensible to include the improper ones. Note that if one restricts attention to maximal tubings (which we largely will do) then this makes no combinatorial difference, as a maximal tubing contains all of the improper tubes and removing them maps the set of maximal tubings bijectively to the set of maximal proper tubings.

Remark 2.32 Tubings of posets are only loosely related to the better-known notion of tubings of graphs introduced by Carr and Devadoss [Reference Carr and Devadoss11]. For graphs, a tube is defined to be a set of vertices which induces a connected subgraph and a tubing is a set of tubes satisfying the laminarity condition along with a certain non-adjacency condition which is entirely different from the acyclicity condition for poset tubings. Thus, the notions should not be confused. However, in the case of trees there is a close connection: tubings of a rooted tree (as a poset) are in bijection with tubings of the line graph of the tree. (This is essentially a special case of a result of Ward [Reference Ward34, Lemma 3.17] which is stated in terms of related objects called nestings. See [Reference Balduf, Cantwell, Ebrahimi-Fard, Nabergall, Olson-Harris and Yeats4, Section 6] for a discussion of this in our language.)

Remark 2.33 A subset of a rooted tree is convex and connected (in the order-theoretic sense) if and only if it is connected in the graph-theoretic sense. Thus, the set of tubings of a rooted tree is really an invariant of the underlying unrooted tree. However, the statistics on tubings in which we will be interested do depend on the root and are best thought of in terms of the partial ordering.

The laminarity condition implies that if $\tau $ is a tubing of P, the poset of tubes ordered by inclusion is a forest. In the case of a maximal tubing of a connected poset, there is a unique maximal tube (namely, P itself) and each non-singleton tube has exactly two maximal tubes properly contained within it. Relative to X, one of these tubes is a downset and one is an upset. Taking the downset as the left child and the upset as the right child, the tubes of a maximal tubing thus have the structure of a binary plane tree. For this reason (and to avoid confusion with graph tubings), maximal tubings of rooted trees were referred to as binary tubings in [Reference Balduf, Cantwell, Ebrahimi-Fard, Nabergall, Olson-Harris and Yeats4] and we will also use this language.

Henceforth, we shall restrict our attention exclusively to binary tubings of rooted trees. We will write $\mathrm {Tub}(t)$ for the set of binary tubings of t. We will call a tube a lower tube (resp. upper tube) if it is a downset (resp. upset) in its parent in the tree of tubes. We will also consider t itself to be a lower tube; this ensures that each vertex is the root of exactly one lower tube. Given a vertex v, define the rank $\operatorname {rk}(\tau , v)$ of v in $\tau $ to be the number of upper tubes rooted at v.Footnote 5 We will write $b(\tau )$ for the number of tubes of $\tau $ containing the root of t; note that we clearly have $b(\tau ) = \operatorname {rk}(\tau , \operatorname {rt} t) + 1$ .

Remark 2.34 Our definition of the rank refers only to the upper tubes, but it can be equivalently defined in terms of lower tubes only: for each upper tube rooted at v there is a corresponding lower tube, with the property that there is no lower tube of $\tau $ lying strictly between it and the unique lower tube rooted at v. Conversely, any such lower tube corresponds to an upper tube rooted at v. In other words, considering the lower tubes of $\tau $ as a poset ordered by containment, $\operatorname {rk}(\tau , v)$ is the number of lower tubes that are covered by the unique lower tube rooted at v.

Binary tubings of rooted trees have a recursive structure which we will make essential use of.

Proposition 2.35 Let t be a rooted tree with $|t|> 1$ . There is a bijection between binary tubings of t and triples $(t', \tau ', \tau "),$ where $t'$ is a proper subtree (principal downset) of t, $\tau ' \in \mathrm {Tub}(t')$ , and $\tau " \in \mathrm {Tub}(t \setminus t')$ . Moreover, this bijection satisfies

$$\begin{align*}\operatorname{rk}(\tau, v) = \begin{cases} \operatorname{rk}(\tau', v) & v \in t' \\ \operatorname{rk}(\tau", v) + 1 & v = \operatorname{rt} t \\ \operatorname{rk}(\tau", v) & \text{otherwise} \end{cases} \end{align*}$$

and $b(\tau ) = b(\tau ") + 1$ .

Proof By the discussion above, there are two maximal tubes $t', t"$ properly contained in the largest tube t, where $t'$ is a downset and $t" = t \setminus t'$ is an upset and both are connected. A connected downset in a rooted tree is a subtree; since the complement is nonempty it must be that $t'$ is a proper subtree. Let

$$ \begin{align*} \tau' &= \{u \in \tau\colon u \subseteq t'\} \end{align*} $$

and

$$ \begin{align*} \tau" &= \{u \in \tau\colon u \cap t' = \emptyset\}. \end{align*} $$

By the laminarity condition, we have $\tau = \{t\} \cup \tau ' \cup \tau "$ . Since each tube in $\tau '$ and $\tau "$ still contains two maximal tubes within it, these are still maximal tubings of $t'$ and $t",$ respectively, i.e., $\tau ' \in \mathrm {Tub}(t')$ and $\tau " \in \mathrm {Tub}(t \setminus t')$ . Note that the upper tubes of $\tau '$ and $\tau "$ are also upper tubes of $\tau $ , whereas $t"$ is a lower tube in $\tau "$ and an upper tube in $\tau $ . Thus, the statement about ranks follows. Since $b(\tau ) = \operatorname {rk}(\tau , \operatorname {rt} t) + 1$ and $\operatorname {rt} t \in t"$ the statement about the b-statistic also follows.

Remark 2.36 Observe that in a binary tubing we split the tree into an upper and lower part just as in the definition of the coproduct, but with the key difference that they are both required to be trees. To make this more algebraic, let $P_{\mathrm {lin}}\colon \mathcal H \to \mathcal H$ be the projection onto the subspace spanned by trees. Then, the linearized coproduct is $\Delta _{\mathrm {lin}} = (P_{\mathrm {lin}} \otimes P_{\mathrm {lin}})\Delta $ . In effect, $\Delta _{\mathrm {lin}}$ looks the same as the usual coproduct but only includes terms where both tensor factors are trees rather than arbitrary forests. Unlike the coproduct, the linearized coproduct fails to be coassociative: there are multiple different maps $\mathcal H \to \mathcal H^{\otimes k}$ that can be built by iterating it. For instance, in the case $k = 3$ there are distinct maps $(\Delta _{\mathrm {lin}} \otimes \mathrm {id})\Delta _{\mathrm {lin}}$ and $(\mathrm {id} \otimes \Delta _{\mathrm {lin}})\Delta _{\mathrm {lin}}$ . It is not hard to see that if we iterate all the way to $k = |t|$ , the terms that we can get from all of these maps taken collectively correspond exactly to the binary tubings.

The linearized coproduct is itself a nice algebraic object as it is co-pre-Lie and specifically is dual to the pre-Lie product given by insertion of rooted trees. See [Reference Chapoton and Livernet12] for more on the structure of this product.

We are now ready to give the tubing expression for maps $\mathcal H_I \to \mathbb {K}[L]$ induced by the universal property Theorem 2.2 which was the main result of [Reference Balduf, Cantwell, Ebrahimi-Fard, Nabergall, Olson-Harris and Yeats4]. Let us fix a set I of decorations and a family of 1-cocycles

$$\begin{align*}\Lambda_i f(L) = \int_0^L A_i(d/du)f(u)\,du, \end{align*}$$

where

$$\begin{align*}A_i(L) = \sum_{n \ge 0} a_{i,n} L^n. \end{align*}$$

Given an I-tree t, let us write $d(t)$ for the decoration of the root vertex, and $d(v)$ for the decoration of a vertex v. For a tubing $\tau $ of t, we define the Mellin monomial

$$\begin{align*}\operatorname{mel}(\tau) = \prod_{\substack{v \in t \\ v \ne \operatorname{rt} t}} a_{d(v), \operatorname{rk}(\tau, v)}. \end{align*}$$

We call this the Mellin monomial since it is a monomial made of coefficients from the Mellin transforms of the primitives driving the DSE. With these definitions in hand, we can state the formula.

Theorem 2.37 [Reference Balduf, Cantwell, Ebrahimi-Fard, Nabergall, Olson-Harris and Yeats4, Theorem 4.2]

With the above setup, the unique map $\phi \colon \mathcal H_I \to \mathbb {K}[L]$ satisfying $\phi B_{+}^{(i)} = \Lambda _i \phi $ is given on trees by the formula

$$\begin{align*}\phi(t) = \sum_{\tau \in \mathrm{Tub}(t)} \operatorname{mel}(\tau) \sum_{k=1}^{b(\tau)} a_{d(t), b(\tau) - k} \frac{L^k}{k!}. \end{align*}$$

Example 2.38 Let t be the tree that appears in Figure 1a. Computing the contributions of the five tubings and summing them up, we get

$$ \begin{align*} \phi(t) ={} &a_0^3\left(a_3L + a_2 \frac{L^2}{2!} + a_1 \frac{L^3}{3!} + a_0 \frac{L^4}{4!}\right) + 2a_0^2a_1\left(a_2L + a_1 \frac{L^2}{2!} + a_0 \frac{L^3}{3!}\right) \\ &+ 2a_0^3\left(a_2L + a_1 \frac{L^2}{2!} + a_0 \frac{L^3}{3!}\right), \end{align*} $$

where the second and third tubing in the figure give the same contribution, as do the fourth and fifth. These coincidences can be explained combinatorially by the fact that in both cases the offending pair of tubings differ in the upper tubes but have the exact same set of lower tubes, which in light of Remark 2.34 is sufficient to determine the Mellin monomial and b-statistic.

Figure 1 Examples of binary tubings. Upper and lower tubes highlighted in different colours.

Combining Theorems 2.6 and 2.37 we can solve the single insertion place DSEs and systems thereof. First, we will solve the combinatorial DSE giving a series in $T(x) \in \mathcal H_P[[x]]$ which encodes the recursive structure of the DSE. We then apply the unique map $\phi \colon \mathcal H_P \to \mathbb {K}[L]$ satisfying $\phi B_{+}^{(p)} = \Lambda _p \phi $ which exists by Theorem 2.2 to get $G(x, L) = \phi (T(x))$ which will solve the DSE. The combinatorial expansion for $\phi $ (Theorem 2.37) then gives a combinatorial expansion for the solution of the DSE.

Specifically, we obtain the following theorem.

Theorem 2.39 [Reference Balduf, Cantwell, Ebrahimi-Fard, Nabergall, Olson-Harris and Yeats4, Theorem 2.12]

The unique solution to (2.12) is

$$\begin{align*}G(x, L) = 1 + \sum_{t \in \mathcal T(P)}\left(\prod_{v \in t} \mu(v)^{\underline{\operatorname{od}(v)}} \right) \sum_{\tau \in \mathrm{Tub}(t)} \operatorname{mel}(\tau) \sum_{k=1}^{b(\tau)} a_{d(t), b(\tau) - k} \frac{x^{w(t)}L^k}{|\mathrm{Aut}(t)|k!}. \end{align*}$$

In particular, the solution to the special case (2.14) is

$$\begin{align*}G(x, L) = 1 + \sum_{t \in \mathcal T(\mathbb{Z}_{\geq 1})}\left(\prod_{v \in t} (1 + sw(v))^{\underline{\operatorname{od}(v)}} \right) \sum_{\tau \in \mathrm{Tub}(t)} \operatorname{mel}(\tau) \sum_{k=1}^{b(\tau)} a_{w(\operatorname{rt} t), b(\tau) - k} \frac{x^{w(t)}L^k}{|\mathrm{Aut}(t)|k!}. \end{align*}$$

Similarly, for systems of DSEs we have the following result.

Theorem 2.40 [Reference Balduf, Cantwell, Ebrahimi-Fard, Nabergall, Olson-Harris and Yeats4, Theorem 4.8]

The unique solution to the system (2.15) is

$$\begin{align*}G_i(x, L) = 1 + \sum_{t \in \mathcal T_i(P)} \left(\prod_{v \in t} \mu(v)^{\underline{\operatorname{\mathbf{od}}(v)}} \right) \sum_{\tau \in \mathrm{Tub}(t)} \operatorname{mel}(\tau) \sum_{k=1}^{b(\tau)} a_{d(t), b(\tau) - k} \frac{x^{w(t)}L^k}{|\mathrm{Aut}(t)|}. \end{align*}$$

Remark 2.41 The tubing expansions can be related to the usual Feynman diagram expansions of DSEs by taking the trees that appear in these expansions as insertion trees which encode the way a Feynman diagram is built from primitive diagrams. One can in principle recover the contribution of an individual Feynman diagram by an appropriately weighted sum over (tubings of) insertion trees for that diagram. (See for instance [Reference Kreimer24]). A more detailed discussion of how the tubing expansion relates to the Feynman diagrams can be found in [Reference Balduf, Cantwell, Ebrahimi-Fard, Nabergall, Olson-Harris and Yeats4, Remark 5.25].

3 Multiple insertion places

3.1 Context

With the background in hand, we are ready to move to considering multiple insertion places. Before proving the results, we should return to the set up to understand what multiple insertion places means.

For the moment, we want to view the DSEs diagrammatically as equations telling us how to insert primitive Feynman diagrams into each other to obtain series of Feynman diagrams (then applying Feynman rules will bring us to the $G(x,L)$ we’ve been working with). To keep things simple for the intuition (though the general picture is much the same) let’s consider the case with just one 1-cocyle $\Lambda $ in (2.12) and with $w = 1, \mu = -1$ . Consider the way the Mellin transform $A(\rho )$ comes into the DSEs when the 1-cocycle is rewritten according to Remark 2.17. $A(\rho )$ is essentially the Feynman integral of the primitive with $\rho $ acting as a regulator on the edge into which we are inserting (see for instance [Reference Yeats35] for a derivation given in close to this language). In particular, this implies that all the insertions are into the same edge. This is, in fact, easiest to see with the DSEs in its integral equation form. For instance, a particular instantiation of the case presently at hand in a massless Yukawa theory, and phrased in a compatible language to what we’re using here can be found at the end of [Reference Yeats35, Section 3.2]. In the DSE, in this form, the way we see that the insertion is in a single propagator is by the scale variable of the inserted $G(x,\log k^2)$ s inside the integral being the log of the momentum squared of one particular propagator – the one into which we are inserting.

Let us return to the case with multiple 1-cocycles, hence with multiple primitive Feynman diagrams. If we are willing to insert symmetrically into all insertion places, we can get around the fact that our single variable Mellin transforms only accounts for a single insertion place (see [Reference Yeats35, Section 2.3.3]), but for a finer understanding, we would like to be able to work with multiple insertions places more honestly. In particular, this means that our Mellin transforms of our primitives should be regularized on all the edges with different variables on each, giving $A(\rho _1, \rho _2, \ldots , \rho _m)$ . Then, the recursive appearances of $G(x, L)$ or of the $G_i(x,L)$ should correspond to different $\rho _i$ according to the type of edge in the Feynman diagram. With the 1-cocycles written in the form of Remark 2.17, this means we are interested in equations of the form

(3.1) $$ \begin{align} &G(x, L) = 1 + x G(x, \partial/\partial \rho_1)^{-1} \cdots G(x, \partial/\partial \rho_m)^{-1} (e^{L(\rho_1 + \dots + \rho_m)} - 1) F(\rho_1, \dots, \rho_m)\big|_{\rho=0}, \end{align} $$

where each edge has its own variable in the Mellin transform, as well as similar systems of equations. Such equations have been set up in language similar to this by one of us [Reference Yeats35] and considered further by Nabergall [Reference Nabergall26, Section 4.2] but until now we had no combinatorial handle of the sort given by the tubing expansions above. The goal of this article is to give tubing expansion solutions to such equations, thus giving combinatorial solutions to all single scale DSEs.

Mathematically, the set up is as follows. We again have a set P which will index our cocycles, but to each $p \in P$ we associate a finite set $E_p$ of insertion places. Each insertion place e has its own insertion exponent $\mu _{e}$ ; sometimes it will still be convenient to refer to the overall insertion exponent

$$\begin{align*}\mu_p = \sum_{e \in E_p} \mu_{e}. \end{align*}$$

Finally, to each p we associate a vector of indeterminates ${\mathbf L}_p = (L_e)_{e \in E_p}$ and a 1-cocycle $\Lambda _p \in \mathrm {Z}^1(\mathbb {K}[L], \mathbb {K}[{\mathbf L}_p])$ . The Dyson–Schwinger associated with these data is

(3.2) $$ \begin{align} G(x, L) = 1 + \sum_{p \in P} x^{w_p} \Lambda_p\left(\prod_{e \in E_p} G(x, L_e)^{\mu_e}\right). \end{align} $$

We will also consider systems. As before, we partition our index set P into $\{P_i\}_{i \in I}$ and replace the insertion exponents with insertion exponent vectors. Our system is then

(3.3) $$ \begin{align} G_i(x, L) = 1 + \sum_{p \in P_i} x^{w_p} \Lambda_p\left(\prod_{e \in E_p} \mathbf G(x, L_e)^{\mu_{e}}\right). \end{align} $$

3.2 1-cocycles and tensor powers

We next need to upgrade our results on 1-cocycles to tensor powers of a bialgebra H as this will be the correct algebraic structure to account for multiple insertion places.

Note that there is a canonical comodule homomorphism $\mu \colon H^{\otimes r} \to H$ , namely, the multiplication map $\mu (h_1 \cdots h_r) = h_1 \cdots h_r$ . By Item 2.13 (iii), we can build various 1-cocycles $\Lambda \mu \in \mathrm {Z}^1(H, H^{\otimes r})$ for various $\Lambda \in \mathrm {Z}^1(H)$ . We will call such cocycles boring; as we will see, they are generally the trivial case for our results.

As in Section 2.2, we focus on the case of $H = \mathbb {K}[L]$ . We identify $\mathbb {K}[L]^{\otimes r}$ with $\mathbb {K}[L_1, \dots , L_r]$ made into a comodule with coaction

$$\begin{align*}\delta {\mathbf L}^\alpha = \sum_{\beta \le \alpha} \binom{\alpha}{\beta} L^{|\alpha| - |\beta|} \otimes {\mathbf L}^\beta. \end{align*}$$

We now give a generalization of Theorem 2.15 to tensor powers.

Theorem 3.1 For any series $A({\mathbf L}) \in \mathbb {K}[[L_1, \dots , L_r]]$ , the map $\mathbb {K}[L_1, \dots , L_r] \to \mathbb {K}[L]$ given by

(3.4) $$ \begin{align} f({\mathbf L}) \mapsto \int_0^L A(\partial/\partial u_1, \dots, \partial/\partial u_r) f(u_1, \dots, u_r) \big|_{u_1 = \dots = u_r = u}\,du \end{align} $$

is a 1-cocycle. Moreover, all 1-cocycles $\mathbb {K}[L_1, \dots , L_r] \to \mathbb {K}[L]$ are of this form.

Proof Let $\psi \colon \mathbb {K}[L_1, \dots , L_r] \to \mathbb {K}$ be given by

$$\begin{align*}\psi({\mathbf L}^\alpha) = [{\mathbf L}^\alpha] A({\mathbf L}). \end{align*}$$

Then, the operator defined by (3.4) is simply $\mathcal {I} \circledast \psi ,$ where $\mathcal {I}$ is the usual integral cocycle on $\mathbb {K}[L]$ (see, e.g., 2.12) and the $\circledast $ notation was defined in Section 2.2. Thus, by Item 2.13 (iii) and Lemma 2.14, this operator is indeed a 1-cocycle.

Conversely, suppose $\Lambda $ is a 1-cocycle. We wish to find a series $A({\mathbf L})$ such that $\Lambda $ has the form (3.4). For $\alpha \in \mathbb {Z}_{\geq 0}^r$ take $a_\alpha = \operatorname {lin} \Lambda ({\mathbf L}^\alpha )$ and let $A({\mathbf L})$ be the exponential generating function for these:

$$\begin{align*}A({\mathbf L}) = \sum_{\alpha \in \mathbb{Z}_{\geq 0}^r} \frac{a_\alpha {\mathbf L}^\alpha}{\alpha_1! \cdots \alpha_r!}. \end{align*}$$

Now, observe that for a polynomial $f(L)$ we have $\frac {df(L)}{dL} = \operatorname {lin} \mathbin {\rightharpoonup } f(L)$ . With this in mind,

$$ \begin{align*} \frac{d \Lambda({\mathbf L}^\alpha)}{dL} &= \operatorname{lin} \mathbin{\rightharpoonup} \Lambda({\mathbf L}^\alpha) \\ &= (\mathrm{id} \otimes \operatorname{lin})(\Lambda \otimes 1 + (\mathrm{id} \otimes \Lambda)\delta){\mathbf L}^\alpha \\ &= (\mathrm{id} \otimes \operatorname{lin} \Lambda)\delta {\mathbf L}^\alpha \\ &= \sum_{\beta \le \alpha} \binom{\alpha}{\beta} a_\beta L^{|\alpha| - |\beta|} \\ &= \sum_{\beta \le \alpha} \frac{a_\beta}{\beta_1! \cdots \beta_r!} \prod_{i=1}^r \frac{d^{\beta_i}L^{\alpha_i}}{dL^{\beta_i}} \\ &= \sum_{\beta \le \alpha} \frac{a_\beta}{\beta_1! \cdots \beta_r!} \frac{\partial^{|\beta|} \mathbf u^\alpha}{\partial u_1^{\beta_1} \cdots \partial u_r^{\beta_r}} \bigg|_{u_1 = \dots = u_r = L}\\ &= A\left(\frac{\partial}{\partial u_1}, \dots, \frac{\partial}{\partial u_r}\right) \mathbf u^\alpha \bigg|_{u_1 = \dots = u_r = L} \end{align*} $$

and hence by linearity, for any polynomial $f({\mathbf L})$ we have

$$\begin{align*}\frac{d \Lambda f({\mathbf L})}{dL} = A\left(\frac{\partial}{\partial u_1}, \dots, \frac{\partial}{\partial u_r}\right) f(\mathbf u) \bigg|_{u_1 = \dots = u_r = L} \end{align*}$$

which is also the derivative of the right-hand side of (3.4). But since $\Lambda f({\mathbf L})$ must have zero constant term by Item 2.13 (ii), it is exactly given by (3.4), as wanted.

Remark 3.2 The multiplication map $\mathbb {K}[L]^{\otimes r} \to \mathbb {K}[L]$ corresponds to the map $\mathbb {K}[{\mathbf L}] \to \mathbb {K}[L]$ that substitutes L for all of the variables. The adjoint map $\mathbb {K}[[L]] \to \mathbb {K}[[{\mathbf L}]]$ is the substitution $L \mapsto L_1 + \dots + L_r$ . Thus, the boring 1-cocycles correspond to series that expand in powers of the sum $L_1 + \dots + L_r$ .

Next, we construct the analog in this setting of the Connes–Kreimer Hopf algebra. Let $\widetilde {\mathcal T}_r$ be the set of unlabelled rooted trees with edges decorated by elements of $\{1, \dots , r\}$ and $\widetilde {\mathcal F}_r$ the corresponding set of forests. Define $\widetilde {\mathcal H}_r$ to be the free vector space on $\widetilde {\mathcal F}_r$ , made into an algebra with disjoint union as multiplication and a downset/upset coproduct exactly as in $\mathcal H$ but preserving the decorations on the (remaining) edges. Now, define $\tilde B_{+}\colon \widetilde {\mathcal H}_r^{\otimes r} \to \widetilde {\mathcal H}_r$ as follows: for $f_1, \dots , f_r \in \widetilde {\mathcal F}_r$ , $\tilde B_{+}(f_1 \otimes \dots \otimes f_r)$ is the tree obtained from the forest $f_1 \cdots f_r$ by adding a new root with an edge to the root of each component, where the edges to $f_i$ have decoration i. Note that clearly $\widetilde {\mathcal H}_1 \cong \mathcal H$ and in this case $\tilde B_{+}$ is just the usual $B_{+}$ .

Very similar combinatorial and algebraic structures are used in the study of regularity structures, see [Reference Bruned, Hairer and Zambotti10].

Proposition 3.3 $\tilde B_{+} \in \mathrm {Z}^1(\widetilde {\mathcal H}_r, \widetilde {\mathcal H}_r^{\otimes r})$ .

Proof Let $t = \tilde B_{+}(f_1 \otimes \dots \otimes f_r)$ . The only downset in t that contains the root is all of t, and each other downset is the union of a downset in $f_i$ for each i. In the complementary upset, all edges on the root have the same decoration as they do in t, so

$$ \begin{align*} \Delta t &= \sum_{f \in J(t)} f \otimes (t \setminus f) \\ &= 1 \otimes t + \sum_{f_1' \in J(f_1)} \cdots \sum_{f_r' \in J(f_r)} f_1' \cdots f_r' \otimes \tilde B_{+}((f_1 \setminus f_1') \otimes \dots \otimes (f_r \setminus f_r')). \end{align*} $$

On the other hand, the left coaction $\delta $ of $\widetilde {\mathcal H}_r$ on $\widetilde {\mathcal H}_r^{\otimes r}$ is, by definition,

$$\begin{align*}\delta(f_1 \otimes \dots \otimes f_r) = \sum_{f_r' \in J(f_r)} f_1' \cdots f_r' \otimes (f_1 \setminus f_1') \otimes \dots \otimes (f_r \setminus f_r'). \end{align*}$$

Comparing these, we see that indeed

$$\begin{align*}\Delta \tilde B_{+} = 1 \otimes \tilde B_{+} + (\mathrm{id} \otimes \tilde B_{+})\delta.\\[-34pt] \end{align*}$$

The universal property of $\mathcal H$ naturally extends to $\widetilde {\mathcal H}_r$ . The proof is essentially the same as the original Connes–Kreimer universal property result.

Theorem 3.4 Let A be a commutative algebra and $\Lambda \colon A^{\otimes r} \to A$ a linear map. There exists a unique map $\phi \colon \widetilde {\mathcal H}_r \to A$ such that $\phi \tilde B_{+} = \Lambda \phi ^{\otimes r}$ . Moreover, if A is a bialgebra and $\Lambda $ is a 1-cocycle then $\phi $ is a bialgebra morphism.

Proof Suppose $t \in \widetilde {\mathcal T}_r$ . We can uniquely write t in the form $t = \tilde B_{+}(f_1 \otimes \dots \otimes f_r)$ . Then, we can recursively set

$$\begin{align*}\phi(t) = \Lambda(\phi(f_1) \otimes \dots \otimes \phi(f_r)), \end{align*}$$

where for a forest, $\phi (f)$ is the product over the components. Clearly, this is a well-defined algebra map and the unique one satisfying the desired identity.

If A is a bialgebra and $\Lambda $ is a 1-cocycle, we compute

$$ \begin{align*} \Delta \phi(t) &= \Delta \Lambda(\phi(f_1) \otimes \dots \otimes \phi(f_r)) \\ &= \phi(t) \otimes 1 + (\mathrm{id} \otimes \Lambda) \delta(\phi(f_1) \otimes \dots \otimes \phi(f_r)), \end{align*} $$

where $\delta $ is the coaction of A on $A^{\otimes r}$ . Now, suppose that $\phi $ preserves coproducts for each of the forests, i.e.,

$$\begin{align*}\Delta \phi(f_i) = \sum_{f_i' \in J(f_i)} \phi(f_i') \otimes \phi(f_i \setminus f_i'). \end{align*}$$

Then

$$ \begin{align*} \delta(\phi(f_1) \otimes \dots \otimes \phi(f_r)) &= \sum_{f_1' \in J(f_1)} \cdots \sum_{f_r' \in J(f_r)} \phi(f_1' \cdots f_r') \otimes \phi(f_1 \setminus f_1') \otimes \dots \otimes \phi(f_r \setminus f_r') \end{align*} $$

and hence

$$ \begin{align*} \Delta \phi(t) &= \phi(t) \otimes 1 + \sum_{f_1' \in J(f_1)} \cdots \sum_{f_r' \in J(f_r)} \phi(f_1' \cdots f_r') \otimes \Lambda(\phi(f_1 \setminus f_1') \otimes \dots \otimes \phi(f_r \setminus f_r')) \\ &= \phi(t) \otimes 1 + \sum_{f_1' \in J(f_1)} \cdots \sum_{f_r' \in J(f_r)} \phi(f_1' \cdots f_r') \otimes \phi(\tilde B_{+}((f_1 \setminus f_1') \otimes \dots \otimes (f_r \setminus f_r')) \\ &= \phi(t) \otimes 1 + \sum_{f \in J(t)} \phi(f) \otimes \phi(t \setminus f) \\ &= (\phi \otimes \phi)(\Delta t) \end{align*} $$

as desired.

Example 3.5 Consider the (boring) 1-cocycle $B_{+}\mu \in \mathrm {Z}^1(\mathcal H, \mathcal H^{\otimes r})$ . By Theorem 3.4, there is a unique $\beta \colon \widetilde {\mathcal H}_r \to \mathcal H$ such that $\beta \tilde B_{+} = B_{+}\mu \beta ^{\otimes r} = B_{+}\beta \mu $ . It is easily seen that such a map is given by simply forgetting the edge decorations. More generally, consider any boring 1-cocycle $\Lambda \mu $ on any bialgebra H. Let $\psi \colon \mathcal H \to H$ be the unique map such that $\psi B_{+} = \Lambda \psi $ and let $\beta \colon \widetilde {\mathcal H}_r \to \mathcal H$ continue to denote the edge-undecorating map. Now, we observe that

$$\begin{align*}\psi \beta \tilde B_{+} = \psi B_{+} \mu\beta^{\otimes r} = \Lambda \psi^{\otimes r} \mu \beta^{\otimes r} = \Lambda \mu (\psi \beta)^{\otimes r} \end{align*}$$

so the universal map is simply $\phi = \psi \beta $ and the boring case is indeed boring.

In the next section, we will generalize the tubing expansion to the map from Theorem 3.4. As before, we will want a more general version including several 1-cocycles at once. For this it is convenient to allow tensor products indexed by arbitrary finite sets rather than just ordinals. Note that everything we did with $\mathbb {K}[L]$ still works here: we can identify $\mathbb {K}[L]^{\otimes E}$ with $\mathbb {K}[L_e\colon e \in E]$ , and all 1-cocycles are integro-differential operators as in Theorem 3.1, but slightly more notationally challenging.

Let I be a set and $\mathcal E = \{E_i\}_{i \in I}$ be a family of finite sets. Define a $(I, \mathcal E)$ -tree to be a rooted tree where each vertex is decorated by an element of I and each edge from a parent of type i is decorated by an element of $E_i$ . Denote the set of $(I, \mathcal E)$ -trees by $\widetilde {\mathcal T}(I, \mathcal E)$ . Analogously, we have $(I, \mathcal E)$ -forests and we denote the set of these by $\widetilde {\mathcal F}(I, \mathcal E)$ . Let $\widetilde {\mathcal H}_{I,\mathcal E}$ denote the free vector space on $\widetilde {\mathcal F}(I, \mathcal E)$ . As usual, we make this into a bialgebra with disjoint union as the product and a downset/upset coproduct preserving the decorations on the vertices and (remaining) edges. For $i \in I$ , let $\tilde B_{+}^{(i)}\colon \widetilde {\mathcal H}_{I, \mathcal E}^{\otimes E_i} \to \widetilde {\mathcal H}_{I, \mathcal E}$ be the operator that adds a new root joined to each component with the appropriate decoration. These are 1-cocycles by the same argument as in Proposition 3.3.

Theorem 3.6 Let A be a commutative algebra and $\{\Lambda _i\}_{i \in I}$ a family of linear maps, $\Lambda _i\colon A^{\otimes E_i} \to A$ . There exists a unique algebra morphism $\phi \colon \widetilde {\mathcal H}_{I, \mathcal E} \to A$ such that $\phi B_{+}^{(i)} = \Lambda _i \phi ^{\otimes E_i}$ . Moreover, if $\Lambda _i$ is a 1-cocycle for each i then $\phi $ is a bialgebra morphism.

Proof Analogous to Theorem 3.4.

Remark 3.7 Analogously to Example 3.5, if all of the 1-cocycles are boring there will be a factorization of $\phi $ through the map $\widetilde {\mathcal H}_{I,\mathcal E} \to \mathcal H_I$ that forgets edge decorations. However, we can make a more refined statement: if $\Lambda _i$ is boring, then $\phi $ is independent of the edge decorations for vertices of type i. Proving this is left as an exercise (though in the case that the target algebra is $\mathbb {K}[L]$ it will follow from the results of the next section).

3.3 The renormalization group equation and the invariant charge

The relationship between the renormalization group equation, the invariant charge, and the Riordan group generalizes to the case of distinguished insertion places. Naturally, this will come from a generalization of Lemma 2.28 to allow cocycles on tensor powers of the target bialgebra H. In this case, the statement gets more complicated but the proof is much the same. First, we need a generalization of Lemma 2.27.

Lemma 3.8 Let $\delta $ be the left coaction of $\mathsf {Rio}$ on $\mathsf {Rio}^{\otimes E}$ for E a finite set. Then, for any $\mathbf u \in \mathbb {K}^r$ and any exponent vector $\alpha \in \mathbb {Z}_{\geq 0}^E$ ,

$$\begin{align*}\delta\left(\bigotimes_{e \in E} Y(x)^{u_e} \Pi(x)^{\alpha_e} \right) = \sum_{j \ge 0} Y(x)^{|\mathbf u|} \Pi(x)^{j} \otimes [x^j] \bigotimes_{e \in E} Y(x)^{u_e} \Pi(x)^{\alpha_e}. \end{align*}$$

Proof Immediate from Lemma 2.27 by the definition of the coaction.

With this we can prove the following.

Lemma 3.9 Let H be a bialgebra, P be a set, and $\{E_p\}_{p \in P}$ a family of finite sets. For $p \in P$ let $\Lambda _p\colon H^{\otimes E_p} \to H$ be a 1-cocycle, let $\mathbf u_p = (u_e)_{e \in E_p}$ be a vector with $|\mathbf u| = 1$ , and let $\mathbf w_p = (w_e)_{e \in E_p}$ be a nonzero exponent vector. Suppose $\phi \colon \mathsf {Rio} \to H$ is an algebra morphism. Let $\Phi (x) = \phi (\Pi (x))$ and suppose $\phi (Y(x)) = F(x),$ where $F(x)$ is the unique solution to

$$\begin{align*}F(x) = 1 + \sum_{p \in P} \Lambda_p\left(\bigotimes_{e \in E_p} F(x)^{u_{e}} \Phi(x)^{\alpha_{e}}\right). \end{align*}$$

Then, for $n \ge 0$ , if $\phi $ is a bialgebra morphism when restricted to $\mathsf {FdB}^{(n)}$ , it is also a bialgebra morphism when restricted to $\mathsf {Rio}^{(n)}$ .

Proof The setup for our induction argument is identical to that in the proof of Lemma 2.28. Thus, supposing that $\phi $ is a bialgebra morphism on $\mathsf {Rio}^{(n-1)}$ for some $n \ge 1$ , we set out to show that $\phi $ preserves the coproduct of $y_n$ . Note that we have no $y_n$ in $[x^n] \bigotimes _{e \in E_p} Y(x)^{u_e} \Pi (x)^{w_e}$ since $|\mathbf w_p|> 0$ . Thus, by Lemma 3.8 we have

$$ \begin{align*} \Delta \phi(y_n) &= \Delta([x^n]F(x)) \\ &= [x^n] F(x) \otimes 1 + \sum_{p \in P} (\mathrm{id} \otimes \Lambda_p) [x^n]\delta\left( \bigotimes_{e \in E_p} F(x)^{u_{e}} \Phi(x)^{w_{e}}\right) \\ &= [x^n] F(x) \otimes 1 + \sum_{p \in P} \sum_{j \ge 0} [x^n] F(x)\Phi(x)^j \otimes \Lambda_p \left([x^j] \bigotimes_{e \in E_p} F(x)^{u_{e}} \Phi(x)^{w_{e}}\right) \\ &= \sum_{j \ge 0} [x^n] F(x)\Phi(x)^j \otimes [x^j] F(x) \end{align*} $$

as desired.

With Lemma 3.9 in mind we can formulate an appropriate notion of invariant charge for DSEs with distinguished insertion places. We want a series $Q(x)$ to play the role of $\Phi (x)$ in the statement of the lemma. In the case of a single equation (3.2), we see that the condition we want is that for some $s \in \mathbb {K}$ , the insertion exponents can be written in the form

(3.5) $$ \begin{align} \mu_{e} = u_e + sw_e, \end{align} $$

where $u_e \in \mathbb {K}$ and $\alpha _e \in \mathbb {Z}_{\geq 0}$ are such that

(3.6) $$ \begin{align} \sum_{e \in E_p} u_e = 1 \end{align} $$

and

(3.7) $$ \begin{align} \sum_{e \in E_p} w_e = w_p \end{align} $$

for each p. In this case, we take $Q(x) = xT(x)^s$ as in the case of a single ordinary DSE. We can then write our (combinatorial) equation as

(3.8) $$ \begin{align} T(x) = \sum_{p \in P} \tilde B_{+}\left(\bigotimes_{e \in E_p} T(x)^{u_e} Q(x)^{\alpha_e}\right). \end{align} $$

Remark 3.10 The choice of exactly how to write the insertion exponents in the form (3.5) is not unique in general. However, the value of s and hence of $Q(x)$ does not depend on this choice, because the overall insertion exponents satisfy $\mu _p = 1 + sw_p$ regardless.

Recall our original motivating example was equation (3.1) which has m edge types and all insertion exponents equal to $-$ 1. Thus, we here have $s = -(1+m)$ . We can write it in the form (3.5) by choosing one edge type $e_0$ to have $u_{e_0} = m$ and $w_{e_0} = 1$ , and all others to have $u_e = -1$ and $w_e = 0$ . Thus, while writing this equations in the form (3.8) is convenient for our purposes in this section, it does involve arbitrarily breaking the symmetry of the original equation.

For systems the situation is similar. For $e \in E_p,$ where $p \in P_i$ , we want the insertion exponent vector to satisfy a relation

$$\begin{align*}\mu_e = u_e1_i + \alpha_e \mathbf s, \end{align*}$$

where $u_e$ and $\alpha _e$ still satisfy (3.6) and (3.7). Thus, the form of our system is

(3.9) $$ \begin{align} T_i(x) = \sum_{p \in P_i} \tilde B_{+}^{(p)}\left(\bigotimes_{e \in E_p} T_i(x)^{u_e} Q(x)^{\alpha_e}\right), \end{align} $$

where as before

$$\begin{align*}Q(x) = x\prod_{i \in I} T_i(x)^{s_i}. \end{align*}$$

With the setup done, we can state the main result of this section. This proves a conjecture of Nabergall [Reference Nabergall26, Conjecture 4.2.3] which corresponds to the case all insertion exponents equal $-$ 1.

Theorem 3.11 Let $\mathbf T(x) \in \widetilde {\mathcal H}_{P,\mathcal E}[[x]]^I$ be the solution to the combinatorial Dyson–Schwinger system (3.9). Then, for any $i \in I$ , the map $\phi _i\colon \mathsf {Rio} \to \widetilde {\mathcal H}_{P,\mathcal E}$ defined by $\phi _i(Y(x)) = T_i(x)$ and $\phi _i(\Pi (x)) = Q(x)$ is a bialgebra morphism. As a consequence, the solution $\mathbf G(x, L)$ to the corresponding Dyson–Schwinger system satisfies the renormalization group equations

$$\begin{align*}\left(\frac{\partial}{\partial L} - \beta(x) \frac{\partial}{\partial x} - \gamma_i(x)\right) G_i(x, L) = 0, \end{align*}$$

where $\gamma _i(x)$ is the linear term in L of $G_i(x, L)$ and

$$\begin{align*}\beta(x) = \sum_{i \in I} s_ix\gamma(x). \end{align*}$$

Proof This follows by an identical proof to Theorem 2.30 but using Lemma 3.9 in place of Lemma 2.28.

3.4 Reducing to ordinary DSEs

Before we embark on generalizing the tubing expansion to Dyson–Schwinger systems with distinguished insertion places, we will pause to consider an alternative approach, namely, transforming such systems into ordinary (single insertion place) Dyson–Schwinger systems. This is motivated by a conjecture of Nabergall [Reference Nabergall26, Conjecture 4.2.2] that the solution to (3.1) can also be obtained by an explicit linear variable substitution from an also explicit single insertion place DSE. This turns out to be false as demonstrated by the following.

We will take the case $m=2$ in (3.1) and index the coefficients of the Mellin transform as

$$\begin{align*}(\rho_1+\rho_2)F(\rho_1, \rho_2) = \sum_{i,j\geq 0}b_{i,j}\rho_1^i\rho_2^j. \end{align*}$$

Then, iterating (3.1) and calling the Green function in this case $\hat {G,}$ we obtain

$$ \begin{align*} \hat{G}(x,L) = \,& 1 + Lb_{0,0}x - (L^2b_{0,0}^2 + Lb_{0,0}(b_{0,1} + b_{1,0}))x^2 \\ & + \bigg(\frac{5}{3}L^3b_{0,0}^3 + \frac{7}{2}L^2b_{0,0}^2(b_{0,1} + b_{1,0}) \\ & \qquad + Lb_{0,0}(b_{0,1}^2 + 4b_{0,0}b_{0,2} + 2b_{0,1}b_{1,0} + b_{1,0}^2 + b_{0,0}b_{1,1} + 4b_{0,0}b_{2,0})\bigg)x^3 \\ & - \bigg(\frac{10}{3}L^4b_{0,0}^4 + 11L^3b_{0,0}^3(b_{0,1} + b_{1,0}) \\ & \qquad + L^2b_{0,0}^2\bigg(\frac{15}{2}b_{0,1}^2 + 20b_{0,0}b_{0,2} + 15b_{0,1}b_{1,0} + \frac{15}{2}b_{1,0}^2 + 5b_{0,0}b_{1,1} + 20b_{0,0}b_{2,0}\bigg) \\ & \qquad + Lb_{0,0}(b_{0,1}^3 + 15b_{0,0}b_{0,1}b_{0,2} + 28b_{0,0}^2b_{0,3} + 3b_{0,1}^2b_{1,0} + 15b_{0,0}b_{0,2}b_{1,0} \\ & \qquad \qquad + 3b_{0,1}b_{1,0}^2 + b_{1,0}^3 + 3b_{0,0}b_{0,1}b_{1,1} + 3b_{0,0}b_{1,0}b_{1,1} + 4b_{0,0}^2b_{1,2} \\ & \qquad \qquad + 15b_{0,0}b_{0,1}b_{2,0} + 15b_{0,0}b_{1,0}b_{2,0} + 4b_{0,0}^2b_{2,1} + 28b_{0,0}^2b_{3,0})\bigg)x^4 + O(x^5). \end{align*} $$

These calculations were done with SageMath, but at the cost of some tedium are also doable by hand. The single insertion place DSE would need to be (1.1) with $\mu =-2$ so that the total exponent agrees. Indexing the Mellin transform as after (1.1) and calculating likewise we obtain

$$ \begin{align*} G(x,L) = \,& 1 + La_0x - (L^2a_0^2 + 2La_0a_1)x^2 + (\frac{5}{3}L^3a_0^3 + 7L^2a_0^2a_1 + La_0(4a_1^2 + 10a_0a_2))x^3 \\ & - \bigg(\frac{10}{3}L^4a_0^4 + 22L^3a_0^3a_1 + L^2a_0^2(30a_1^2 + 50a_0a_2) \\ & \qquad + La_0(8a_1^3 + 72a_0a_1a_2 + 80a_0^2a_3)\bigg)x^4 + O(x^5). \end{align*} $$

From the coefficients of x we see that $a_0= b_{0,0}$ . Using that, from the coefficient of $x^2$ we see that $a_1= (b_{1,0}+b_{0,1})/2$ , and likewise from the coefficient of $x^3$ we get $a_2 = (4b_{0,2}+b_{1,1}+4b_{2,0})/10$ .

The problem comes with the coefficient of $x^4$ . Making the already established substitutions and taking the difference of the coefficient of $x^4$ in $\hat {G}$ and G we obtain

$$ \begin{align*} 0 = \,& \frac{1}{5}(3b_{0,1}b_{0,2} + 140b_{0,0}b_{0,3} + 3b_{0,2}b_{1,0} - 3b_{0,1}b_{1,1} - 3b_{1,0}b_{1,1} + 20b_{0,0}b_{1,2} \\ & \quad + 3b_{0,1}b_{2,0} + 3b_{1,0}b_{2,0} + 20b_{0,0}b_{2,1} + 140b_{0,0}b_{3,0} - 400b_{0,0}a_3)Lb_{0,0}^2. \end{align*} $$

So we see that solving for $a_3$ would involve inverting $b_{0,0,}$ and so no linear substitution into $G(x,L)$ can give $\hat {G}(x,L),$ and so, in particular, no substitution of the form in [Reference Nabergall26, Conjecture 4.2.2] can do so. This disproves the conjecture.

Despite this negative result, there are some cases in which it is possible to do this. One example is when all the cocycles that appear in the system are boring: Remark 3.7 essentially says that this is possible even at the level of combinatorial Dyson–Schwinger systems, see the discussion at the beginning of Section 3.5 for further details.

There is one more case we know of in which such a transformation exists, namely, when the overall insertion exponents are all equal to 1, or equivalently $\beta (x) = 0$ . In the case of undistinguished insertion places, this would be a linear equation. For distinguished insertion places, we call such an equation quasi-linear. Explicitly, quasi-linear DSEs are simply the special case of (3.2) in which

$$\begin{align*}\sum_{e \in E_p} \mu_e = 1 \end{align*}$$

for each p.

Quasi-linear DSEs turn out to inherit some of the special properties of linear DSEs. To see why, we first consider the multiple-insertion-place analog of (2.20):

$$\begin{align*}\frac{\partial}{\partial L} G(x, L) = \sum_{p \in P} x^{w_p} A_p\left(\frac{\partial}{\partial L_e} \colon e \in E_p\right) \prod_{e \in E_p} G(x, L_e)^{\mu_e}\bigg|_{\text{all } L_e = L}. \end{align*}$$

For a general DSE in multiple insertion places, any attempt to follow the template of Section 2.5 stops here. The RGE allows us to replace $\partial /\partial L_e$ with a differential operator in x acting on the factor for e on the right, but to get an expression like (2.21) we would need an operator acting on the whole product. However, in the quasi-linear case, we only get multiplication by a series with no differential part, so we can press on to get

$$\begin{align*}\frac{\partial}{\partial L} G(x, L) = \sum_{p \in P} x^{w_p} A_p\left(\mu_e\gamma(x) \colon e \in E_p\right) \prod_{e \in E_p} G(x, L)^{\mu_e} \end{align*}$$

and then set $L = 0$ to get the analog of (2.24):

(3.10) $$ \begin{align} \gamma(x) = \sum_{p \in P} x^{w_p} A_p(\mu_e \gamma(x)\colon e \in E_p). \end{align} $$

But note that this functional equation is actually of the same form as (2.24) and so in fact also arises from an ordinary linear DSE!

Theorem 3.12 If $\sum _{e \in E_p} \mu _e = 1$ for each p, the solution to (3.2) is the same as the solution to the ordinary linear DSE

$$\begin{align*}G(x, L) = 1 + \sum_{p \in P} x^{w_p} \int_0^L \tilde A_p(\partial/\partial u) G(x, u)\,du, \end{align*}$$

where

$$\begin{align*}\tilde A_p(L) = A_p(\mu_e L\colon e \in E). \end{align*}$$

Proof By (2.24) and (3.10) they have the same anomalous dimension, which is sufficient because of Remark 2.18.

3.5 Tubing solutions to Dyson–Schwinger equations with multiple insertion places

Finally, our main result is tubing expansions of solutions to DSEs with multiple insertion places. Fix a family $\mathcal E = \{E_i\}_{i \in I}$ of finite sets. For each i, introduce indeterminates ${\mathbf L}_i = (L_e\colon e \in E_i)$ and choose a 1-cocycle $\Lambda _i \in \mathrm {Z}^1(\mathbb {K}[L], \mathbb {K}[{\mathbf L}_i])$ . Let $A_i({\mathbf L}_i)$ be the corresponding power series given by Theorem 3.1. We will choose to expand the series using multinomial coefficients:

$$\begin{align*}A_i({\mathbf L}_i) = \sum_{\alpha \in \mathbb{Z}_{\geq 0}^{E_i}} a_{i,\alpha} \binom{|\alpha|}{\alpha} \mathbf L_i^\alpha. \end{align*}$$

This curious-looking convention can be justified by the observation that if $\Lambda _i$ is boring then, by Remark 3.2, we can write

$$\begin{align*}A_i({\mathbf L}_i) = B\left(\sum_{e \in E_i} L_e\right) \end{align*}$$

for some series $B(L)$ . Our convention is such that in this case, we have $a_{i,\alpha } = [L^{|\alpha |}] B(L)$ . In particular, this will make our expansion manifestly identical to Theorem 2.37 in the case that all of the cocycles are boring, and more generally make it obviously independent of the edge decorations for those vertex types with boring 1-cocycles as suggested by Remark 3.7.

To generalize the tubing expansion to this case, we need the appropriate generalizations of the statistics that appear in Theorem 2.37. Let t be an $(I, \mathcal E)$ -tree and $\tau $ be a binary tubing of t. Suppose $t' \in \tau $ is an upper tube and $t"$ the corresponding lower tube (i.e., the unique lower tube such that $t' \cup t" \in \tau $ ). We define the type of $t'$ to be the decoration of the first edge on the unique path from $\operatorname {rt} t'$ to $\operatorname {rt} t"$ (see Figure 2). By construction, the type is an element of $E_i,$ where $i = d(t')$ is the decoration of the root vertex of $t'$ . Note we only assign types to upper tubes, not lower tubes. For each vertex $v \in t$ and edge type $e \in E_{d(v)}$ define the e-rank $\operatorname {rk}_e(\tau , v)$ to be the number of upper tubes of type e rooted at v. Collect these together to get the rank vector $\operatorname {\mathbf {rk}}(\tau , v) \in \mathbb {Z}_{\geq 0}^{E_{d(v)}}$ . Clearly, $|\operatorname {\mathbf {rk}}(\tau , v)| = \operatorname {rk}(\tau , v)$ . Then, we define the Mellin monomial of $\tau $ to be

$$\begin{align*}\operatorname{mel}(\tau) = \prod_{\substack{v \in t \\ v \ne \operatorname{rt} t}} a_{d(v), \operatorname{\mathbf{rk}}(\tau, v)}. \end{align*}$$

Figure 2 An upper tube and its corresponding lower tube. The type of the upper tube is the decoration of the highlighted edge.

The analog of the b-statistic is slightly more complicated. For $1 \le k \le b(\tau ),$ write $\beta _i^k(\tau )$ the number of upper tubes of type i containing (hence rooted at) $\operatorname {rt} t$ , excluding the outermost $k - 1$ upper tubes. Collect these into a vector $\beta ^k(\tau )$ (which for lack of a better name we simply term the kth $\beta $ -vector of $\tau $ ). Thus, $\beta ^1(\tau ) = \operatorname {\mathbf {rk}}(\tau , \operatorname {rt} t)$ and $|\beta ^k(\tau )| = b(\tau ) - k$ . We are now ready to state our expansion.

Theorem 3.13 With the above setup, the unique map $\phi \colon \widetilde {\mathcal H}_{I, \mathcal E} \to \mathbb {K}[L]$ satisfying $\phi \tilde B_{+}^{(i)} = \Lambda _i \phi $ is given on trees by the formula

(3.11) $$ \begin{align} \phi(t) = \sum_{\tau \in \mathrm{Tub}(t)} \operatorname{mel}(\tau) \sum_{k=1}^{b(\tau)} a_{d(t),\beta^k(\tau)} \frac{L^k}{k!}. \end{align} $$

To prove this, we follow the structure of the argument in [Reference Balduf, Cantwell, Ebrahimi-Fard, Nabergall, Olson-Harris and Yeats4, Section 4], with some added complexity. We will temporarily write $\psi$ for the algebra morphism defined on trees by the right side of (3.11). Let $\sigma$ be the linear term (the coefficient of $L$) of $\psi$, i.e., on trees

$$\begin{align*}\sigma(t) = \sum_{\tau \in \mathrm{Tub}(t)}a_{d(t), \beta^1(\tau)} \operatorname{mel}(\tau), \end{align*}$$

and $\sigma$ vanishes on the empty forest and on disconnected forests, so that $\sigma$ is an infinitesimal character.

Lemma 3.14 For any tree t and $k \ge 1$ ,

$$\begin{align*}\sigma^{*k}(t) = \sum_{\substack{\tau \in \mathrm{Tub}(t) \\ b(\tau) \ge k}} a_{d(t), \beta^k(\tau)} \operatorname{mel}(\tau). \end{align*}$$

(Note that $\sigma^{*k}$ can be nonzero on disconnected forests, but we are claiming this equality only for trees; see also the remark after [Reference Balduf, Cantwell, Ebrahimi-Fard, Nabergall, Olson-Harris and Yeats4, Lemma 4.3].)

Proof By induction on k. The base case is true by definition. Then,

$$\begin{align*}\sigma^{*k+1}(t) = \sum_{f \in J(t)} \sigma(f) \sigma^{*k}(t \setminus f) \end{align*}$$

but $\sigma $ is an infinitesimal character so $\sigma (f) \ne 0$ only if f is actually a tree. Moreover, clearly $\sigma ^{*k}(1) = 0$ , so we can restrict the sum to proper subtrees $t'$ . Then, inductively we have

$$ \begin{align*} \sigma^{*k+1}(t) &= \sum_{t'} \sigma(t') \sigma^{*k}(t \setminus t') \\ &= \sum_{t'} \sum_{\tau' \in \mathrm{Tub}(t')} \sum_{\substack{\tau'' \in \mathrm{Tub}(t \setminus t') \\ b(\tau'') \ge k}} a_{d(t'), \beta^1(\tau')} \operatorname{mel}(\tau') a_{d(t), \beta^k(\tau'')} \operatorname{mel}(\tau''). \end{align*} $$

By the recursive construction of tubings (Proposition 2.35), $\tau'$ and $\tau''$ uniquely determine a tubing $\tau \in \mathrm{Tub}(t)$. Note that for any vertex $v \in t'$, the upper tubes of $\tau$ rooted at $v$ are the same as those of $\tau'$, so $\operatorname{\mathbf{rk}}(\tau, v) = \operatorname{\mathbf{rk}}(\tau', v)$. The same is true, with $\tau''$ in place of $\tau'$, for the vertices of $t \setminus t'$ other than $\operatorname{rt} t$. Thus, we have

$$\begin{align*}\operatorname{mel}(\tau) = \operatorname{mel}(\tau')a_{d(t'), \operatorname{\mathbf{rk}}(\tau, \operatorname{rt} t')} \operatorname{mel}(\tau'') = \operatorname{mel}(\tau') a_{d(t'), \beta^1(\tau')} \operatorname{mel}(\tau''). \end{align*}$$

Moreover, since $\beta^k$ by definition ignores the outermost $k - 1$ upper tubes containing $\operatorname{rt} t$, we have $\beta^{k+1}(\tau) = \beta^k(\tau'')$. Finally, there is one additional tube in $\tau$ containing the root, so $b(\tau) = b(\tau'') + 1$. Hence, the triple sum simplifies to

$$\begin{align*}\sigma^{*k+1}(t) = \sum_{\substack{\tau \in \mathrm{Tub}(t) \\ b(\tau) \ge k+1}} a_{d(t), \beta^{k+1}(\tau)} \operatorname{mel}(\tau) \end{align*}$$

as wanted.

For notational convenience, for $i \in I$ and $\alpha \in \mathbb{Z}_{\geq 0}^{E_i}$ let us write $\sigma^{[\alpha]}$ for the linear form on $\widetilde{\mathcal H}_{I, \mathcal E}^{\otimes E_i}$ given by

$$\begin{align*}\sigma^{[\alpha]} = \bigotimes_{e \in E_i} \sigma^{*\alpha_e}. \end{align*}$$

Note that the coalgebra structure on $\widetilde{\mathcal H}_{I, \mathcal E}^{\otimes E_i}$ gives a convolution product on linear forms; one easily checks that $\sigma^{[\alpha + \beta]} = \sigma^{[\alpha]} * \sigma^{[\beta]}$.
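Indeed, the convolution on the tensor power is computed factorwise, so

$$\begin{align*}\sigma^{[\alpha]} * \sigma^{[\beta]} = \bigotimes_{e \in E_i} \left(\sigma^{*\alpha_e} * \sigma^{*\beta_e}\right) = \bigotimes_{e \in E_i} \sigma^{*(\alpha_e + \beta_e)} = \sigma^{[\alpha + \beta]}. \end{align*}$$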

Lemma 3.15 For $i \in I$ ,

$$\begin{align*}\sigma \tilde B_{+}^{(i)} = \sum_{\alpha \in \mathbb{Z}_{\geq 0}^{E_i}} \binom{|\alpha|}{\alpha} a_{i, \alpha} \sigma^{[\alpha]}. \end{align*}$$

Proof For each $i \in I$ and $\alpha \in \mathbb {Z}_{\geq 0}^{E_i}$ let $\sigma _{i, \alpha }$ be the infinitesimal character defined by

$$\begin{align*}\sigma_{i, \alpha}(t) = \begin{cases} \sum_{\tau \in \mathrm{Tub}(t), \beta^1(\tau) = \alpha} \operatorname{mel}(\tau) & d(t) = i \\ 0 & \text{otherwise} \end{cases} \end{align*}$$

so that we have

$$\begin{align*}\sigma = \sum_{i \in I} \sum_{\alpha \in \mathbb{Z}_{\geq 0}^{E_i}} a_{i, \alpha} \sigma_{i, \alpha}. \end{align*}$$

Note that we have $\sigma _{i, \alpha }\tilde B_{+}^{(j)} = 0$ when $i \ne j$ , so to get the desired formula it suffices to show that $\sigma _{i, \alpha }\tilde B_{+}^{(i)} = \binom {|\alpha |}{\alpha } \sigma ^{[\alpha ]}$ for $i \in I$ and $\alpha \in \mathbb {Z}_{\geq 0}^{E_i}$ . We do this by induction on $m = |\alpha |$ .

Now, for $\alpha = 0$ we have that $\sigma _{i,\alpha }(t) = 1$ if t is the one-vertex tree with decoration i and 0 otherwise. Of course, $\sigma ^{[0]}$ is 1 when each forest is empty and 0 otherwise, so we do indeed see that $\sigma _{i,0}\tilde B_{+}^{(i)} = \sigma ^{[0]}$ as wanted.

Suppose now that $m > 0$ and that the desired identity holds for smaller values. Then, $\sigma_{i, \alpha}$ vanishes on one-vertex trees, so all tubings of interest have at least one upper tube. For $e \in E_i$, let $\sigma_{i,\alpha}^{e}(t)$ be the same sum restricted to those tubings whose outermost upper tube has type $e$. Thus,

$$\begin{align*}\sigma_{i, \alpha} = \sum_{e \in E_i} \sigma_{i,\alpha}^{e}. \end{align*}$$

Note that $\sigma^e_{i, \alpha} = 0$ when $\alpha_e = 0$, as in this case there can be no upper tube of type $e$ containing the root. Suppose now that $\alpha_e \ne 0$ and $t = \tilde B_{+}^{(i)}\left(\bigotimes_{e' \in E_i} f_{e'}\right)$. Let $\tau$ be a binary tubing of $t$ which recursively corresponds to $(\tau', \tau'')$. Then, the outermost upper tube of $\tau$ has type $e$ precisely when the lower subtree $t'$ is contained in $f_e$. Moreover, in this case we have $\beta^1(\tau) = \beta^1(\tau'') + 1_e$, where $1_e$ is the indicator vector of $e$. Then, we have

$$\begin{align*}\sigma^e_{i, \alpha}(t) = \sum_{\substack{t' \subseteq f_e \\ \text{subtree}}} \sigma(t') \sigma_{i, \alpha - 1_e}(t \setminus t'). \end{align*}$$

Now $t \setminus t' = \tilde B_{+}^{(i)}\left (\bigotimes _{e'} f^{\prime }_{e'}\right )$ where $f^{\prime }_e = f_e \setminus t'$ and $f^{\prime }_{e'} = f_{e'}$ for $e \ne e'$ . It follows that

$$ \begin{align*} \sigma^e_{i, \alpha}\tilde B_{+}^{(i)} &= \sigma^{[1_e]} * \sigma_{i, \alpha - 1_e} \tilde B_{+}^{(i)} \\ &= \sigma^{[1_e]} * \binom{m-1}{\alpha - 1_e} \sigma^{[\alpha - 1_e]} \\ &= \binom{m-1}{\alpha - 1_e} \sigma^{[\alpha]}. \end{align*} $$

Finally, by summing over the values of e we get

$$\begin{align*}\sigma_{i, \alpha} \tilde B_{+}^{(i)} = \sum_{e \in E_i} \binom{m - 1}{\alpha - 1_e} \sigma^{[\alpha]} = \binom{m}{\alpha} \sigma^{[\alpha]} \end{align*}$$

by the multinomial analog of the Pascal recurrence.
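Explicitly, the identity used in the last step is the following: for $|\alpha| = m$,

$$\begin{align*}\sum_{e \in E_i} \binom{m-1}{\alpha - 1_e} = \sum_{e \in E_i} \frac{(m-1)!\,\alpha_e}{\prod_{e' \in E_i} \alpha_{e'}!} = \frac{(m-1)!\,m}{\prod_{e' \in E_i} \alpha_{e'}!} = \binom{m}{\alpha}, \end{align*}$$

where the terms with $\alpha_e = 0$ vanish in both sums.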

Proof of Theorem 3.13

Let $\Psi _i\colon \widetilde {\mathcal H}_{I, \mathcal E}^{\otimes E_i} \to \mathbb {K}[{\mathbf L}_i]$ be given by

$$\begin{align*}\Psi_i\left(\bigotimes_{e \in E_i} f_e\right) = \prod_{e \in E_i} \left(\psi(f_e)\big|_{L = L_e}\right). \end{align*}$$

This is simply the map $\psi ^{\otimes E_i}$ carried through the identification of $\mathbb {K}[L]^{\otimes E_i}$ with $\mathbb {K}[{\mathbf L}_i]$ . Thus, our goal is to show $\psi \tilde B_{+}^{(i)} = \Lambda _i \Psi _i$ ; by uniqueness this shows $\phi = \psi $ . Note that since $\psi $ is an algebra morphism, we have

$$\begin{align*}\Psi_i\left(\bigotimes_{e \in E_i} f_e\right)\bigg|_{L_e = L\,\forall e \in E_i} = \prod_{e \in E_i} \psi(f_e) = \psi\left(\prod_{e \in E_i} f_e\right). \end{align*}$$

By Lemma 3.14 we have $\psi = \exp_{*}(L\sigma)$: indeed, $\exp_{*}(L\sigma)(t) = \sum_{k \geq 0} \frac{L^k}{k!}\sigma^{*k}(t)$, the $k = 0$ term vanishes on nonempty trees, and the lemma identifies the remaining terms with the right side of (3.11). Thus,

$$\begin{align*}\frac{d}{d L} \psi \tilde B_{+}^{(i)} = (\psi * \sigma) \tilde B_{+}^{(i)} = \psi *_\delta \sigma \tilde B_{+}^{(i)}, \end{align*}$$

where $\delta $ is the coaction and the second equality is by Item 2.13 (i). But observe that

$$ \begin{align*} \left(\psi *_\delta \sigma\tilde B_{+}^{(i)}\right)\left(\bigotimes_{e \in E_i} f_e\right) &= \sum_{\substack{f_e^{\prime} \in J(f_e) \\ \forall e \in E_i}} \psi\left(\prod_{e \in E_i} f^{\prime}_e\right) \sigma\left(\tilde B_{+}^{(i)}\left(\bigotimes_{e \in E_i} (f_e \setminus f^{\prime}_e) \right)\right) \\ &= \sum_{\substack{f_e^{\prime} \in J(f_e) \\ \forall e \in E_i}} \Psi_i\left(\bigotimes_{e \in E_i} f^{\prime}_e\right) \sigma\left(\tilde B_{+}^{(i)}\left(\bigotimes_{e \in E_i} (f_e \setminus f^{\prime}_e) \right)\right) \bigg|_{L_e = L\,\forall e \in E_i} \\ &= \left(\Psi_i * \sigma \tilde B_{+}^{(i)}\right)\left(\bigotimes_{e \in E_i} f_e\right)\bigg|_{L_e = L\,\forall e \in E_i}. \end{align*} $$

Thus, we have

$$ \begin{align*} \frac{d}{d L} \psi \tilde B_{+}^{(i)} &= \Psi_i * \sigma \tilde B_{+}^{(i)} \big|_{L_e = L\,\forall e \in E_i} \\ &= \left(\Psi_i * \sum_{\alpha \in \mathbb{Z}_{\geq 0}^{E_i}} \binom{|\alpha|}{\alpha} a_{i,\alpha} \sigma^{[\alpha]}\right)\bigg|_{L_e = L\,\forall e \in E_i} & \text{by Lemma}~{{3.15}} \\ &= A_i(\partial/\partial L_e\colon e \in E_i) \Psi_i\big|_{L_e = L\,\forall e \in E_i} \\ &= \frac{d}{dL} \Lambda_i\Psi_i. \end{align*} $$
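The third equality holds because $\psi = \exp_{*}(L\sigma)$ gives $\psi * \sigma = \frac{d}{dL}\psi$, so convolving $\Psi_i$ with $\sigma$ in the tensor factor indexed by $e$ acts as $\partial/\partial L_e$, and hence

$$\begin{align*}\Psi_i * \sigma^{[\alpha]} = \left(\prod_{e \in E_i} \left(\frac{\partial}{\partial L_e}\right)^{\alpha_e}\right) \Psi_i; \end{align*}$$

summing these against $\binom{|\alpha|}{\alpha} a_{i,\alpha}$ is precisely applying $A_i$ in the partial derivatives.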

Since we also have

$$\begin{align*}\psi(\tilde B_{+}^{(i)} 1) = a_{i,0}L = \Lambda_i(1) \end{align*}$$

this implies $\psi \tilde B_{+}^{(i)} = \Lambda _i \Psi _i$ , as wanted.

Finally, to solve (3.2) and (3.3), we need to lift them to combinatorial versions on the Hopf algebra $\widetilde {\mathcal H}_{P, \mathcal E}$ we introduced above, solve those, and then apply Theorem 3.13 to get a solution to the original equations. This time we will work in the full generality of systems from the start. The combinatorial version of the system (3.3) is

(3.12) $$ \begin{align} T_i(x) = 1 + \sum_{p \in P_i} x^{w_p} \tilde B_{+}^{(p)}\left(\bigotimes_{e \in E_p} \mathbf T(x)^{\mu_e}\right). \end{align} $$

As in Section 2.4, we are slightly abusing notation here by neglecting to notate the obvious (but non-injective) map $\widetilde{\mathcal H}_{P,\mathcal E}[[x]]^{\otimes E_p} \to \widetilde{\mathcal H}_{P,\mathcal E}^{\otimes E_p}[[x]]$. The formula generalizes that of Theorem 2.6. Each vertex will now have several outdegree vectors, one for each edge type: we write $\operatorname{od}_i(v, e)$ for the number of children of $v$ which have decorations lying in $P_i$ and such that the edge connecting them to $v$ has decoration $e$. These are collected together into the outdegree vector $\operatorname{\mathbf{od}}(v, e) \in \mathbb{Z}_{\geq 0}^I$.
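For example, if $e, e' \in E_{d(v)}$ are distinct edge types and $v$ has three children, two joined to $v$ by edges decorated $e$ and carrying decorations in $P_i$, and one joined by an edge decorated $e'$ and carrying a decoration in $P_j$ (with $i \ne j$), then $\operatorname{od}_i(v, e) = 2$, $\operatorname{od}_j(v, e') = 1$, and all other outdegrees at $v$ vanish.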

Theorem 3.16 The unique solution to (3.12) is

$$\begin{align*}T_i(x) = 1 + \sum_{t \in \mathcal T(P_i)} \left(\prod_{v \in t} \prod_{e \in E_{d(v)}} \mu_e^{\underline{\operatorname{\mathbf{od}}(v, e)}}\right) \frac{tx^{w(t)}}{|\mathrm{Aut}(t)|}. \end{align*}$$

Proof Analogous to Proposition 2.3.

As before, we can get a combinatorial expansion for the solution of the DSE by applying our results on 1-cocycles.

Theorem 3.17 The unique solution to (3.3) is

$$\begin{align*}G_i(x, L) = 1 + \sum_{t \in \mathcal T(P_i)} \left(\prod_{v \in t} \prod_{e \in E_{d(v)}} \mu_e^{\underline{\operatorname{\mathbf{od}}(v, e)}}\right)\sum_{\tau \in \mathrm{Tub}(t)} \operatorname{mel}(\tau) \sum_{k=1}^{b(\tau)} a_{d(t),\beta^k(\tau)} \frac{x^{w(t)}L^k}{|\mathrm{Aut}(t)|k!}. \end{align*}$$

Proof Immediate from Theorems 3.16 and 3.13.

4 Conclusion

We have given a method for producing combinatorially indexed and understandable series solutions to DSEs, and systems thereof, with distinguished insertion places. This concludes the single-scale portion of the long-standing program of one of us to solve DSEs combinatorially. These solutions are powerful in that they give us combinatorial control over the expansions. So far, this control has led to physically interesting results on the leading log hierarchy [Reference Courtiel and Yeats14, Reference Courtiel and Yeats15] and on resurgence [Reference Borinsky, Dunne and Yeats8], both of which would be interesting to investigate in the multiple insertion place case.

Paul Balduf has done some numerical computations of solutions to DSEs with two insertion places. See [Reference Balduf3, pp. 44, 45], where he plots the two parameters controlling the exponential growth rate of the coefficients of the anomalous dimension (what he calls $\mu$ and $\lambda$, appearing in the form $(-\lambda)^{-n}\Gamma(n-\mu)$) as a function of the powers of the insertions (in our notation $\mu_1 - 1$ and $\mu_2 - 1$). Interestingly, when the two insertion places degenerate into one, by setting the power of one of the insertions to $-1$ (that is, setting one of the $\mu_i$ to $0$), the numerically computed function does not appear to be differentiable. It is not clear why this should be, and we certainly have no combinatorial understanding in this direction at present.

The program of solving DSEs with combinatorial expansions has also been interesting for pure combinatorics, first contributing to a renaissance in chord diagram combinatorics, and now suggesting connections to other notions of tubings in combinatorics (e.g., [Reference Carr and Devadoss11, Reference Galashin21]), which deserve further investigation.

The way in which trees with edge decorations and their Hopf algebraic structures appear in the study of regularity structures, as described in [Reference Bruned, Hairer and Zambotti10], has a very similar flavour to what we see in our work on DSEs. It would be very interesting to investigate a more precise connection between these areas.

The main case of DSEs still outstanding is that of non-single scale insertions, as occur for general vertex insertions. Non-single scale insertions can be addressed in the present framework by choosing one scale and capturing the remaining kinematic dependence in dimensionless parameters; however, this is inelegant, as it requires a choice, and it gives no combinatorial or algebraic insight into the interplay of the different scales.

We have also used the Riordan group to explain more clearly the connection between DSEs and the renormalization group equation, clarifying the known results in the single insertion place case and generalizing them to the multiple insertion place case; in the process, we proved and disproved conjectures of Nabergall [Reference Nabergall26].

Footnotes

K.Y. is supported by an NSERC Discovery grant and by the Canada Research Chairs program. K.Y. thanks Paul Balduf for useful conversations. N.O.-H. thanks Lukas Nabergall for sharing code and for many useful discussions in the early stages of this work. Thanks also to the referee for their suggestions.

1 A graded vector space is connected if the $0$ th graded piece is isomorphic to the underlying field. The reader is cautioned not to confuse this with the combinatorial notion of connectedness for trees.

2 In the physical application, P is the set of primitive Feynman diagrams, $w_p$ is the dimension of the cycle space of p, and $\mu _p$ is related to the number of ways to insert P into itself, but note that we are not even requiring $\mu _p$ to be an integer. In our approach, we essentially treat the insertion exponents as indeterminates.

3 primitive in a renormalization Hopf algebra, or from a physics perspective, subdivergence-free

4 We will only show sufficiency; for necessity, see [Reference Foissy19, Proposition 10], although note that the setup there is somewhat different from ours.

5 In [Reference Balduf, Cantwell, Ebrahimi-Fard, Nabergall, Olson-Harris and Yeats4], the equivalent statistic $b(\tau, v) = \operatorname{rk}(\tau, v) + 1$, counting the total number of tubes rooted at $v$, was used instead. It has since become clear that the rank is really the fundamental quantity.

References

Bacher, R., Sur le groupe d'interpolation. Preprint, 2006, arXiv:math/0609736.
Balduf, P.-H., Dyson–Schwinger equations in minimal subtraction. Annales de l'Institut Henri Poincaré—Combin., Phys. Interact. (2023). Published online first.
Balduf, P.-H., Variations of single-kernel Dyson-Schwinger equations. 2024. https://paulbalduf.com/wp-content/uploads/2024/08/2024_08_Balduf_Bonn.pdf.
Balduf, P.-H., Cantwell, A., Ebrahimi-Fard, K., Nabergall, L., Olson-Harris, N., and Yeats, K., Tubings, chord diagrams, and Dyson–Schwinger equations. J. London Math. Soc. 110(2024), e70006.
Bellon, M. P., An efficient method for the solution of Schwinger–Dyson equations for propagators. Nucl. Phys. B 826(2010), 522–531.
Bergbauer, C. and Kreimer, D., Hopf algebras in renormalization theory: Locality and Dyson–Schwinger equations from Hochschild cohomology. In Nyssen, L. (ed.), Physics and number theory, 10 in IRMA Lectures in Mathematics and Theoretical Physics, EMS Press, Strasbourg, 2006, pp. 133–164.
Borinsky, M., Feynman graph generation and calculations in the Hopf algebra of Feynman graphs. Comput. Phys. Commun. 185(2014), 3317–3330.
Borinsky, M., Dunne, G. V., and Yeats, K., Tree-tubings and the combinatorics of resurgent Dyson-Schwinger equations. Preprint, 2024. arXiv:2408.15883.
Broadhurst, D. J. and Kreimer, D., Exact solutions of Dyson–Schwinger equations for iterated one-loop integrals and propagator-coupling duality. Nucl. Phys. B 600(2001), 403–422.
Bruned, Y., Hairer, M., and Zambotti, L., Algebraic renormalisation of regularity structures. Invent. Math. 215(2019), 1039–1156.
Carr, M. and Devadoss, S. L., Coxeter complexes and graph-associahedra. Topol. Appl. 153(2006), 2155–2168.
Chapoton, F. and Livernet, M., Pre-Lie algebras and the rooted trees operad. Int. Math. Res. Notices 2001(2001), no. 8, 395–408.
Connes, A. and Kreimer, D., Hopf algebras, renormalization and noncommutative geometry. Commun. Math. Phys. 199(1998), 203–242.
Courtiel, J. and Yeats, K., Terminal chords in connected chord diagrams. Annales de l'Institut Henri Poincaré—Combin., Phys. Interact. 4(2017), 417–452.
Courtiel, J. and Yeats, K., Next-to${}^k$ leading log expansions by chord diagrams. Commun. Math. Phys. 377(2020), 469–501.
Courtiel, J., Yeats, K., and Zeilberger, N., Connected chord diagrams and bridgeless maps. Electron. J. Combin. 26(2019), P4.37, 1–56.
Doi, Y., Homological coalgebra. J. Math. Soc. Japan 33(1981), 31–50.
Dugan, W. T., Sequences of trees and higher-order renormalization group equations. MMath thesis, University of Waterloo, 2019.
Foissy, L., General Dyson–Schwinger equations and systems. Commun. Math. Phys. 327(2014), 151–179.
Foissy, L., Multigraded Dyson-Schwinger systems. J. Math. Phys. 61(2020), 51703.
Galashin, P., $p$-associahedra. Sel. Math. 30(2023), 1–63.
Hihn, M. and Yeats, K., Generalized chord diagram expansions of Dyson–Schwinger equations. Annales de l'Institut Henri Poincaré—Combin., Phys. Interact. 6(2019), 573–605.
Knuth, D., Searching and sorting. Vol. 3. 2nd ed., Addison-Wesley, Reading, MA, 1998.
Kreimer, D., On overlapping divergences. Commun. Math. Phys. 204(1999), 669–689.
Marie, N. and Yeats, K., A chord diagram expansion coming from some Dyson-Schwinger equations. Commun. Number Theory Phys. 7(2014), 251–291.
Nabergall, L., Enumerative perspectives on chord diagrams. Ph.D. thesis, University of Waterloo, 2022.
Olson-Harris, N., Some applications of combinatorial Hopf algebras to integro-differential equations and symmetric function identities. Ph.D. thesis, University of Waterloo, 2024.
Panzer, E., Hopf-algebraic renormalization of Kreimer's toy model. Master's thesis, Humboldt University of Berlin, 2011.
Prinz, D., Gauge symmetries and renormalization. Math. Phys., Anal. Geom. 25(2022), Article no. 20.
Shapiro, L. W., Getu, S., Woan, W.-J., and Woodson, L. C., The Riordan group. Discrete Appl. Math. 34(1991), 229–239.
Stanley, R. P., Enumerative combinatorics. Vol. 2, Cambridge Studies in Advanced Mathematics, Cambridge University Press, Cambridge, 1999.
Swanson, E., A primer on functional methods and the Schwinger-Dyson equations. AIP Conf. Proc. 1296(2010), 8.
van Suijlekom, W. D., Renormalization of gauge fields using Hopf algebras. In Fauser, B., Tolksdorf, J., and Zeidler, E. (eds.), Quantum field theory, Springer, Basel, 2009, pp. 135–154.
Ward, B. C., Massey products for graph homology. Int. Math. Res. Notices 2022(2021), 8086–8161.
Yeats, K., Growth estimates for Dyson-Schwinger equations. Ph.D. thesis, Boston University, 2008.
Yeats, K., A combinatorial perspective on quantum field theory. Number 15 in Springer Briefs in Mathematical Physics, Springer, Cham, 2017.