We present a general construction of two types of differential forms, based on any


0 \quad | \quad 2\leq i,j \leq n{-}1\}\,, \end{align} and the intersection gives $P(C^{\rm S}_n)$ as the permutohedron polytope~\cite{Arkani-Hamed-ml-2017mur}. We will prove the claim for general Cayley polytopes in~\ref{appb} by studying the geometric factorization: any codimension-$k$ boundary of $P(C_n)$ corresponds to a set of $k$ compatible poles $s_{A_1},\ldots ,s_{A_k}$, where each $A_i$ is a connected subset in $C_n$. Moreover, the canonical form of $P(C_n)$ coincides with the pullback of $\Omega_{H(C_n)}$ on $H(C_n)$, which follows naturally from our construction. \paragraph{\boldmath Regions in ${\cal M}_{0,n}$ and relations to graph associahedron.} We have seen that a convex polytope can be constructed beautifully in the subspace of ${\cal K}_n$ for each Cayley case; now we show how to obtain the same combinatorial polytope as a region in ${\cal M}_{0,n}$. The Cayley worldsheet form $\omega_{H(C_n)}$ is the canonical form of this region, and it can be pushed forward to yield the canonical form of the Cayley polytope. The region can be understood as the union of copies of ${\cal M}_{0,n}^+$ with different orderings in a natural way, following the results of~\cite{Gao-ml-2017dek}. To do this, we need to regard the spanning tree as a directed graph, which also fixes the sign convention for $J_{H(C_n)}$. We pick e.g.\ $n$ as the root and define $C_n$ as a directed graph with all arrows pointing towards $n$. Now the sign convention in $J_{H(C_n)}$ (which we have not been careful about so far) is that we have a factor $\sigma_j-\sigma_i$ for every edge from $i$ to $j$. Interestingly, there is a nice region that goes with this directed graph: \begin{equation}\label{cayre} R(C_n):=\bigcup_{\substack{\pi \in S_{n{-}2}\\ \pi^{-1} (i)<\pi^{-1}(j)}}{\cal M}_{0,n}^+(1, \pi(2), \cdots, \pi(n{-}1), n)\,. \end{equation} It is the union of associahedra with orderings $(1, \pi(2), \cdots, \pi(n{-}1), n)$ such that $i$ precedes $j$ in $\pi$ for each directed edge from $i$ to $j$.
For instance, $R(C_{n}^{\mathrm{H}})$ is just the positive part $\mathcal{M}_{0,n}^{+}(1,2,\cdots,n)$, since the directed edges in the Hamilton graph are those from $2$ to $3$, $3$ to $4$, and so on. Another example is $C^{\rm S}_n$: all $\pi \in S_{n{-}2}$ contribute in this case, since the only directed edges are those from $i$ to $n$ for any $i$; thus $R(C^{\rm S}_n)$ is the union of $(n{-}2)!$ associahedra. It is the following non-trivial identity, derived in~\cite{Gao-ml-2017dek}, that guarantees that the canonical form of $R(C_n)$ is given by the worldsheet form $\omega_{H(C_n)}$: \begin{equation}\label{416eq} J(C_n)=\sum_{\substack{\pi \in S_{n{-}2}\\ \pi^{-1} (i)<\pi^{-1}(j)} } {\rm PT}(1, \pi(2), \cdots, \pi(n{-}1), n)\,. \end{equation} Of course we can choose another label as the root, which results in a different region, but all such regions have the same canonical form (up to a possible sign). In general these regions do not look like convex polytopes in ${\cal M}_{0,n}$, but $R(C_n)$ has exactly the same boundary structure as the corresponding Cayley polytope $P(C_n)$. For example, the boundaries of $R(C^{\rm S}_n)$ are exactly those of the permutohedron $P(C^{\rm S}_n)$. One can show this by noting that any codimension-$1$ boundary of $R(C_n)$ corresponds to a subset of $\sigma_i$ for $i\in I$ pinching together, where $I$ induces a connected subgraph in $C_n$, and so on. Furthermore, one can show that the scattering-equation map~\eqref{mapCayley} maps all boundaries of $R(C_n)$ to corresponding boundaries of $P(C_n)$. In particular, it is obvious from~\eqref{mapCayley} that the boundaries of $R(C_n)$ with $\sigma_i \to \sigma_j$ are mapped to those of $P(C_n)$ with $s_{i,j}\to 0$. However, unlike the associahedron case, for any $R(C_n)$ that consists of more than one associahedron, its interior is not mapped to the interior of $P(C_n)$ (let alone in a one-to-one way).
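As a minimal illustration of the identity~\eqref{416eq}, consider the star-graph case for $n=5$. Assuming the gauge $\sigma_1\to\infty$ (so that all factors involving $\sigma_1$ drop out of the Parke-Taylor factors; signs and gauge conventions may differ from the text), a quick symbolic check confirms that the sum over all $\pi\in S_3$ reproduces the product over the star's edges:

```python
# Minimal sympy check of the Parke-Taylor identity (416eq) for the
# star graph C_5^S (n = 5), in the assumed gauge sigma_1 -> infinity:
# every pi in S_3 contributes, and the sum of (gauge-fixed)
# Parke-Taylor factors reproduces the Cayley function of the star
# tree, which has one factor per edge (i, 5).
from itertools import permutations
import sympy as sp

s2, s3, s4, s5 = sp.symbols('sigma2:6')
sigma = {2: s2, 3: s3, 4: s4, 5: s5}

def sij(i, j):
    return sigma[i] - sigma[j]

# Sum over orderings (1, pi(2), pi(3), pi(4), 5) with sigma_1 factors dropped.
rhs = sum(
    1 / (sij(a, b) * sij(b, c) * sij(c, 5))
    for a, b, c in permutations((2, 3, 4))
)

# Cayley function of the star tree: one factor per edge (i, 5).
lhs = 1 / (sij(2, 5) * sij(3, 5) * sij(4, 5))

assert sp.simplify(rhs - lhs) == 0
```

The six terms on the right-hand side combine pairwise by partial fractions, leaving exactly the three edge factors of the star.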
We expect that instead the image of $R(C_n)$ is the exterior, i.e.\ the complement of $P(C_n)$ in the subspace $H(C_n)$. This of course explains why the form obtained from the pushforward of $\omega_{H(C_n)}$ gives $\Omega_{H(C_n)}$, which is the canonical form for the ``exterior'' as well! Last but not least, the combinatorial polytopes for $R(C_n)$ are special cases of the so-called graph associahedra~\cite{MichaelCoxeter}, which are natural generalizations of the associahedron and play an important role in Coxeter complexes etc. To see this, consider a graph $\Gamma(C_n)$ with $n{-}2$ vertices, one for each edge $(i,j)$ of $C_n$, where two vertices are connected iff the corresponding edges are adjacent in $C_n$ (i.e.\ they share a vertex). For example, $\Gamma(C_n^{\rm H})$ is a Hamilton graph and $\Gamma(C_n^{\rm S})$ a complete graph, each with $n{-}2$ vertices. Our $R(C_n)$ and $P(C_n)$ are combinatorially the same polytope as the graph associahedron obtained from $\Gamma(C_n)$. On the other hand, there are of course graphs that cannot be obtained from a spanning tree in this way. For example, we have seen that in rewriting the scattering equations, we encounter disconnected graphs that correspond to degenerate $H(C_n)$. They still give perfectly well-defined $\Gamma(C_n)$ and graph associahedra (for example, the cyclohedron for $n>5$ belongs to this case), but there is no Cayley polytope for such cases. Thus our construction singles out a special class of graph associahedra that have a nice realization in kinematic space and scattering-equation~maps.
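The construction of $\Gamma(C_n)$ is just the line graph of the tree $C_n$, so it is easy to check the two cases quoted above in code. The sketch below (labels and the choice $n=6$ are for illustration only) verifies that a Hamiltonian path gives a path on $n-2$ vertices, while a star gives a complete graph:

```python
# Sketch of Gamma(C_n): one vertex per edge of the tree C_n, two
# vertices adjacent iff the corresponding edges share an endpoint,
# i.e. Gamma is the line graph of the tree. For n = 6 we check that
# the Hamiltonian path gives a path and the star a complete graph,
# each on n - 2 = 4 vertices.
from itertools import combinations

def line_graph(edges):
    """Adjacency pairs of the line graph: edges sharing an endpoint."""
    return {
        (e, f)
        for e, f in combinations(edges, 2)
        if set(e) & set(f)
    }

# Trees on the labels {2, ..., 6} (four edges each).
path_edges = [(2, 3), (3, 4), (4, 5), (5, 6)]   # Hamiltonian case
star_edges = [(2, 6), (3, 6), (4, 6), (5, 6)]   # star case

path_adj = line_graph(path_edges)
star_adj = line_graph(star_edges)

# Line graph of a path with 4 edges is a path: 3 adjacencies.
assert len(path_adj) == 3
# Line graph of a star is complete: C(4, 2) = 6 adjacencies.
assert len(star_adj) == 6
```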

=0pt,draw=black,scale=.7, node distance = .65cm, neuron/.style = {circle, minimum size=3pt, inner sep=0pt, fill=black } ] \node[neuron] (1) {}; \node[ neuron,right of = 1] (2) {}; \node[ neuron,above of = 2] (3) {}; \node[ neuron,right of = 2,xshift=1.5cm] (4) {}; \node[right of = 4] (8) {}; \node[ neuron,right of = 8] (9) {}; \node[ neuron,above of = 4] (5) {}; \node[ neuron,above of = 5] (6) {}; \node[ neuron,right of = 5] (7) {}; \node at ($(9)+2*(9)-2*(8)$) {~}; \node at ($(1)-2*(9)+2*(8)$) {~}; \draw (1)--(2)--(3); \draw (6)--(4)--(9); \draw (5)--(7); \draw (2)node[below=0pt,xshift=1.05cm]{$s_{I}$}--(4); \draw[blue,dashed] (0.5,0.5) ellipse (.8 and 1.2); \node[blue] at (.5,1.1) {$L$}; \draw[blue,dashed] (4.9,1) ellipse (1.6 and 2.1); \node[blue] at (5.0,2.1) {${R}$}; \end{tikzpicture} } \subfloat[\label{ISwithn}]{ \begin{tikzpicture}[shorten >=0pt,draw=black,scale=0.7, node distance = .65cm, neuron/.style = {circle, minimum size=3pt, inner sep=0pt, fill=black } ] \node[neuron] (1) {}; \node[ neuron,right of = 1] (2) {}; \node[ neuron,above of = 2] (3) {}; \node[ neuron,right of = 2] (10) {}; \node[neuron, above of= 10,yshift=0.3cm] (11) {}; \node[neuron,right of = 10,xshift=0.5cm] (4) {}; \node[right of = 4] (8) {}; \node[neuron, right of = 8] (9) {}; \node[ neuron,above of = 4] (5) {}; \node[ neuron,above of = 5] (6) {}; \node[ neuron,right of = 5] (7) {}; \node at ($(9)+2*(9)-2*(8)$) {~}; \node at ($(1)-2*(9)+2*(8)$) {~}; \draw (1)--(2)--(3); \draw (6)--(4)--(9); \draw (5)--(7); \draw (10)--(11)node[above=0pt]{$n$}; \draw (2)node[below=0pt,xshift=0.45cm]{$s_L$}--(10); \draw (10)node[below=0pt,xshift=0.55cm]{$s_R$}--(4); \draw[blue,dashed] (0.5,0.5) ellipse (.8 and 1.2); \node[blue] at (.5,1.1) {$L$}; \draw[blue,dashed] (4.5,1) ellipse (1.6 and 2.1); \node[blue] at (4.4,2.1) {${R}$}; \end{tikzpicture} }

&\<624\> & \<423\> \\
\<531\> &\<435\> & \<132\> \\
\<645\> &\<342\> & \<246\>
\end{array}
\right| \,,
\end{equation}
where the abbreviation $\

&\<624\> & \<426\>& \<623\> \\
\<531\> &\<435\> & 0 & \<132\> \\
\<645\> &\<342\> & \<241\> & \<746\>\\
\<354\>& \<753\>& \<657\>& 0
\end{array}
\right|}
{(167)} =
\frac{ {\rm det}^2\left|
\begin{array}{ccccc}
\s_{23} &\s_{31} &\s_{12}& 0 & 0\\
0 & 0 & \s_{45}&\s_{53}& \s_{34}\\
\s_{56} & 0 & 0 & 0 & \s_{61}\\
0 &\s_{46}& 0 &\s_{62} & 0\\
\s_{47}& 0 & 0 &\s_{71} &0
\end{array}
\right| }
{\s_{67}^2(123)(345)(561)(246)(147)}\,.
\end{equation}
Now we are ready to present the general proposition for the hyperplane that corresponds to any irreducible LS case:
\begin{prop*}
One can choose the triplets for any irreducible LS $(\{i,j,k\})$ such that each label appears in more than one triplet, and each pair of labels appears in at most one triplet. There are exactly $3(n{-}2)$ edges from the $n{-}2$ triangles, so we need $n(n{-}1)/2-3(n{-}2)=(n{-}3)(n{-}4)/2$ dashed lines $\{(a,b)\}$ to make a complete graph. After setting the corresponding $s_{a,b}$'s to constants, we further choose \emph{any} $n{-}3$ of the $n{-}2$ $s_{i,j,k}$'s to be constants (which implies that the last one is also constant). This is our proposal for the $d\log$ hyperplane corresponding to any irreducible LS function.
\end{prop*}
Finally, to get a general LS hyperplane, we first find the hyperplane for its irreducible part, and then proceed by the IS construction~\eqref{IScondition}.
For example, the subspace of the IS-reducible function $\mathbf{LS}(\{(1,2,3),(3,4,5),(5,6,1),(2,4,6),\underline{(2,4,7)}\})$ is
\begin{equation}
H_7=\{
s_{14}, s_{25}, s_{36}, s_{123}, s_{345}, s_{561}, \underline{s_{17},s_{37},s_{57},s_{67}} \text{~are~constants}\}\,.
\end{equation}
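As a cross-check of the counting in the proposition, one can apply it to the irreducible $n=6$ part of the example above. The short script below (not from the paper) enumerates the pairs covered by the triangles and verifies that exactly $(n{-}3)(n{-}4)/2=3$ dashed lines remain, matching $s_{14},s_{25},s_{36}$:

```python
# Counting check for the proposition, using the irreducible n = 6
# triplet set {(1,2,3),(3,4,5),(5,6,1),(2,4,6)}: the pairs covered
# by the triangles plus the "dashed" pairs should make up the
# complete graph, with (n-3)(n-4)/2 dashed lines left over.
from itertools import combinations

n = 6
triplets = [(1, 2, 3), (3, 4, 5), (5, 6, 1), (2, 4, 6)]

covered = {frozenset(p) for t in triplets for p in combinations(t, 2)}
all_pairs = {frozenset(p) for p in combinations(range(1, n + 1), 2)}
dashed = all_pairs - covered

# Each pair of labels appears in at most one triplet, so the triangles
# cover exactly 3(n-2) distinct pairs:
assert len(covered) == 3 * (n - 2)
# The dashed lines completing the graph number (n-3)(n-4)/2 = 3:
assert len(dashed) == (n - 3) * (n - 4) // 2
assert dashed == {frozenset({1, 4}), frozenset({2, 5}), frozenset({3, 6})}
```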
Unlike the IS-constructible LS functions, it is not obvious how to
directly prove our proposition, i.e.\ to show that our subspaces yield
the correct $d\log$ forms for general LS\@. In the following, we use a
different strategy: given an LS function, or any $d\log$ form on the
worldsheet, we present an algorithm for finding a class of
subspaces that yield that LS function/$d\log$ form. We will see that the
answer to this question also provides further insights into the
relation between subspaces, leading singularities and general $d\log$
forms.
\paragraph{Constructing subspaces for general dlog forms.}
By the pullback to some hyperplane $H$, we can obtain an $(n-3)\times n$ matrix from the scattering equations. However, in the context of leading singularities, the starting point is an $(n-2)\times n$ matrix. To prove that they have the same reduced determinant, it is better to state this proposition in terms of differential forms. Since the leading singularity functions are originally defined in $\lambda$-space, we introduce here a $(2n{-}4)$-form for any LS function $\mathcal{LS}(T_n)$
\begin{equation}
\Omega_{\mathrm{LS}}(T_n) = \mathcal{LS}(T_n)\frac{d^{2n}\lambda}{\mathrm{GL}(2)} \:, \label{lsform}
\end{equation}
and it can be easily rewritten as
\begin{equation}
\Omega_{\mathrm{LS}}(T_n)= \bigwedge_{\tau\in T_n}
d\log\frac{\langle\tau_{1}\tau_{2}\rangle}{\langle\tau_{3}\tau_{1}\rangle}\wedge d\log\frac{\langle\tau_{2}\tau_{3}\rangle}{\langle\tau_{3}\tau_{1}\rangle} \:, \label{lsform2}
\end{equation}
as shown in~\cite{Arkani-Hamed-ml-2014bca}. Now if we make the variable substitution $\lambda_{i}=t_{i}(1,\sigma_{i})$ as above and decompose the $\mathrm{GL}(2)$ redundancy into $\mathrm{SL}(2)$ for the $\sigma$'s and $\mathrm{GL}(1)$ for the $t$'s, then this $(2n-4)$-form decomposes into an $(n-3)$-form for the $\sigma$'s and an $(n-1)$-form for the $t$'s. This decomposition is quite trivial in~(\ref{lsform}): it is simply
\begin{equation*}
\mathcal{LS}(T_n)\frac{d^{2n}\lambda}{\mathrm{GL}(2)}=\mathbf{LS}(T_n)\frac{d^n\sigma}{\mathrm{SL}(2)} \frac{d^{n}\log t}{\mathrm{GL}(1)} \:.
\end{equation*}
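To make the factorization of the measure explicit (a one-line derivation, filling in the step behind the equation above), note that with $\lambda_i=t_i(1,\sigma_i)$ each two-form factorizes as
\begin{equation*}
d^{2}\lambda_{i}=dt_{i}\wedge d(t_{i}\sigma_{i})=t_{i}\,dt_{i}\wedge d\sigma_{i}=t_{i}^{2}\,d\log t_{i}\wedge d\sigma_{i}\,,
\end{equation*}
so $d^{2n}\lambda=\prod_i t_i^{2}\,\bigwedge_i d\log t_i\wedge d\sigma_i$, and the prefactor $\prod_i t_i^{2}$ combines with the $t$-dependence of $\mathcal{LS}(T_n)$ and the group volumes to produce the $d\log t$ measure above.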
Such a decomposition, however, leads to a nontrivial $(n-3)\times n$ matrix representation for the LS function if we perform it in~(\ref{lsform2}).
The decomposition itself is still straightforward in~(\ref{lsform2}): since there is the trivial identity $d\log x\wedge d\log y=d\log (xy)\wedge d\log y= d\log (x/y)\wedge d\log y$, the $t$ factor in the first $d\log$ factor can be canceled by multiplying or dividing by the arguments of the other $d\log$ factors. This procedure can be repeated until $n-3$ such $d\log$'s appear, and the differential form consisting of these $n-3$ $d\log$'s is the desired one, since it is already a top form on $\sigma$-space. Here is an example for the triple set $\{(1,2,5),(1,3,5),(1,4,5)\}$:
\begin{align}
&\quad d\log\frac{\langle 1\,2 \rangle}{\langle 5\,1\rangle}d\log\frac{\langle 2\,5 \rangle}{\langle 5\,1\rangle}d\log\frac{\langle 1\,3 \rangle}{\langle 5\,1\rangle}d\log\frac{\langle 3\,5 \rangle}{\langle 5\,1\rangle}d\log\frac{\langle 1\,4 \rangle}{\langle 5\,1\rangle}d\log\frac{\langle 4\,5 \rangle}{\langle 5\,1\rangle} \nonumber \\
&=d\log\frac{t_{2}}{t_{5}}\frac{\sigma_{12}}{\sigma_{51}}d\log\frac{t_{2}}{t_{1}}\frac{\sigma_{25}}{\sigma_{51}}d\log\frac{t_{3}}{t_{5}}\frac{\sigma_{13}}{\sigma_{51}}d\log\frac{t_{3}}{t_{1}}\frac{\sigma_{35}}{\sigma_{51}}d\log\frac{t_{4}}{t_{5}}\frac{\sigma_{14}}{\sigma_{51}}d\log\frac{t_{4}}{t_{1}}\frac{\sigma_{45}}{\sigma_{51}} \nonumber \\
&=d\log\frac{\sigma_{12}\sigma_{35}}{\sigma_{25}\sigma_{13}}d\log\frac{\sigma_{13}\sigma_{45}}{\sigma_{14}\sigma_{35}}d\log\frac{t_{1}}{t_{2}}\frac{\sigma_{51}}{\sigma_{25}}d\log\frac{t_{3}}{t_{1}}\frac{\sigma_{35}}{\sigma_{51}}d\log\frac{t_{4}}{t_{5}}\frac{\sigma_{14}}{\sigma_{51}}d\log\frac{t_{4}}{t_{1}}\frac{\sigma_{45}}{\sigma_{51}} \nonumber \\
&=d\log\frac{\sigma_{12}\sigma_{35}}{\sigma_{25}\sigma_{13}}d\log\frac{\sigma_{13}\sigma_{45}}{\sigma_{14}\sigma_{35}}d\log\frac{t_{1}}{t_{2}}d\log\frac{t_{3}}{t_{1}}d\log\frac{t_{4}}{t_{5}}d\log\frac{t_{4}}{t_{1}}\:,
\end{align}
where we have omitted the wedge product symbols to save space. It is obvious that the variables of the $(n-3)$-form on $\sigma$-space are certain cross-ratios of the $\sigma$'s. In fact, they are face variables of the on-shell diagram (see~\cite{ArkaniHamed-ml-2012nw}). In the following we will denote these variables as $f$'s and the corresponding $d\log$ factors as $d\log f$'s.
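The ``trivial identity'' used in the cancellation above can be verified at the level of Jacobians: in two variables, two wedge products of $d\log$'s agree iff the determinants of the Jacobians of the logarithms of their arguments agree. A quick sympy sketch:

```python
# Check of d log x ^ d log y = d log(xy) ^ d log y = d log(x/y) ^ d log y,
# phrased as equality of Jacobian determinants of (log f, log g) in (x, y).
import sympy as sp

x, y = sp.symbols('x y', positive=True)

def wedge_det(f, g):
    """Determinant of the Jacobian of (log f, log g) w.r.t. (x, y)."""
    return sp.simplify(sp.Matrix(
        [[sp.diff(sp.log(h), v) for v in (x, y)] for h in (f, g)]
    ).det())

base = wedge_det(x, y)                                # d log x ^ d log y
assert sp.simplify(wedge_det(x * y, y) - base) == 0   # d log(xy) ^ d log y
assert sp.simplify(wedge_det(x / y, y) - base) == 0   # d log(x/y) ^ d log y
```

All three determinants equal $1/(xy)$, which is why multiplying or dividing by the argument of another $d\log$ factor never changes the top form.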
Thus we obtain another $(n-3)\times n$ matrix from these face variables by taking partial derivatives with respect to the $\sigma_{i}$'s. It is obvious that the reduced determinant of this matrix gives the corresponding LS function. For example, the three $f$'s for the triple set $\{(1,2,3),(3,4,5),(5,6,1),(2,4,6)\}$ can be chosen as $f_{1}=(\sigma_{13}\sigma_{26}\sigma_{45})/(\sigma_{12}\sigma_{46}\sigma_{35})$, $ f_{2}=(\sigma_{15}\sigma_{34}\sigma_{26})/(\sigma_{16}\sigma_{35}\sigma_{24})$ and $ f_{3}=(\sigma_{13}\sigma_{24}\sigma_{56})/(\sigma_{15}\sigma_{23}\sigma_{46})$, and the corresponding derivative matrix is
\begin{equation}
\left(\frac{\partial \log f_{\alpha}}{\partial \sigma_{a}}\right)_{\alpha a} =
\begin{pmatrix}
\langle 213\rangle & \langle 126\rangle & \langle 531\rangle & \langle645\rangle & \langle 354\rangle & \langle 462\rangle \\
\langle516\rangle & \langle624\rangle & \langle435\rangle & \langle342\rangle & \langle153\rangle & \langle261\rangle \\
\langle315\rangle & \langle423\rangle & \langle132\rangle & \langle246\rangle & \langle651\rangle & \langle564\rangle
\end{pmatrix} \:. \label{2.1}
\end{equation}
Remarkably, this matrix is the same as the matrix
\begin{equation}
\left( \frac{\partial E_{a}}{\partial X_{\alpha}}\right)\bigg\rvert_{H}\,,
\end{equation}
where $X_{\alpha}\in\{s_{12},s_{34},s_{56}\}$ and the subspace $H$ is given by~(\ref{6ptcon}). The reduced determinant then of course gives the desired LS function.
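The entries of the matrix~(\ref{2.1}) can be checked directly in sympy. Here we assume the abbreviation $\langle abc\rangle$ stands for $\sigma_{ac}/(\sigma_{ab}\sigma_{bc})$ (its definition appears earlier in the text), and compare each row only up to an overall sign, since row signs depend on orientation conventions and drop out of the squared determinant:

```python
# Symbolic check of the derivative matrix (2.1): each row of
# (d log f_alpha / d sigma_a) should match the quoted <abc> entries,
# ASSUMING <abc> = sigma_ac / (sigma_ab * sigma_bc). We test equality
# up to a constant sign per row, since row signs are convention-
# dependent and do not affect det^2.
import sympy as sp

s = sp.symbols('sigma1:7')          # sigma1, ..., sigma6

def sij(i, j):
    return s[i - 1] - s[j - 1]

def ang(a, b, c):                   # assumed meaning of <abc>
    return sij(a, c) / (sij(a, b) * sij(b, c))

f1 = sij(1, 3) * sij(2, 6) * sij(4, 5) / (sij(1, 2) * sij(4, 6) * sij(3, 5))
f2 = sij(1, 5) * sij(3, 4) * sij(2, 6) / (sij(1, 6) * sij(3, 5) * sij(2, 4))
f3 = sij(1, 3) * sij(2, 4) * sij(5, 6) / (sij(1, 5) * sij(2, 3) * sij(4, 6))

quoted = [
    [(2,1,3), (1,2,6), (5,3,1), (6,4,5), (3,5,4), (4,6,2)],
    [(5,1,6), (6,2,4), (4,3,5), (3,4,2), (1,5,3), (2,6,1)],
    [(3,1,5), (4,2,3), (1,3,2), (2,4,6), (6,5,1), (5,6,4)],
]

row_signs = []
for f, row in zip((f1, f2, f3), quoted):
    signs = {
        int(sp.simplify(ang(*abc) / sp.diff(sp.log(f), sa)))
        for abc, sa in zip(row, s)
    }
    assert signs in ({1}, {-1})     # constant sign along each row
    row_signs.append(signs.pop())
```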
For the general case, we know the scattering equation $E_{a}$ comes from the derivative of Koba-Nielsen factor $\mathcal{I}_{n}=\prod_{i