# No short polynomials vanish on bounded rank matrices

## Abstract

We show that the shortest non-zero polynomials vanishing on bounded-rank matrices and skew-symmetric matrices are the determinants and Pfaffians characterising the rank. Algebraically, this means that in the ideal generated by all $$t$$-minors or $$t$$-Pfaffians of a generic matrix or skew-symmetric matrix, one cannot find any polynomial with fewer terms than those determinants or Pfaffians, respectively, and that those determinants and Pfaffians are essentially the only polynomials in the ideal with that many terms. As a key tool of independent interest, we show that the ideal of a very general $$t$$-dimensional subspace of an affine $$n$$-space does not contain polynomials with fewer than $$t+1$$ terms.

## 1 INTRODUCTION

In many areas of computational mathematics, sparsity is an essential feature used for complexity reduction. Sparse mathematical objects often allow more compact data structures and more efficient algorithms. We are interested in sparsity as a complexity measure for polynomials, where, working in the monomial basis, it means having few terms. This augments the usual degree-based complexity measures such as the Castelnuovo–Mumford regularity.

Sparsity-based complexity applies to geometric objects, as well. If $$X\subset K^{n}$$ is a subset of affine $$K$$-space, one can ask for the shortest polynomial that vanishes on $$X$$. A *monomial* vanishes on $$X$$ if and only if $$X$$ is contained in the union of the coordinate hyperplanes. That $$X$$ is cut out by *binomials* can be characterised geometrically using the log-linear geometry of binomial varieties [4, Theorem 4.1]. Algorithmic tests for single binomials vanishing on $$X$$ are available both symbolically [9] and numerically [7]. We ask for the shortest polynomial vanishing on $$X$$, or algebraically, the shortest polynomial in an ideal of the polynomial ring. The shortest polynomials contained in (principal) ideals of a univariate polynomial ring have been considered in [6]. Computing the shortest polynomials of an ideal in a polynomial ring seems to be a hard problem with an arithmetic flavour. Consider Example 2 from [9]: For any positive integer $$n$$, let $$I_n=((x-z)^2,nx-y-(n-1)z)\subseteq {\mathbb {Q}}[x,y,z]$$. The ideals $$I_n$$ all have Castelnuovo–Mumford regularity 2 and are primary over $$(x-z,y-z)$$, so in a sense, they are all very similar. However, $$I_{n}$$ contains the binomial $$x^n-yz^{n-1}$$ and there is no binomial of degree less than $$n$$ in $$I_{n}$$. This means that the syzygies and also the primary decomposition carry no information about short polynomials. It is unknown to the authors whether a Turing machine can decide whether an ideal contains a polynomial with at most $$t$$ terms.
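The membership claims in this example are easy to verify with a computer algebra system; the following sympy sketch (the helper `In_contains` is ours, purely for illustration) checks them for small $$n$$ via Gröbner basis reduction.

```python
from sympy import symbols, groebner

x, y, z = symbols("x y z")

def In_contains(n, f):
    """Ideal-membership test for I_n = ((x - z)^2, n*x - y - (n - 1)*z),
    via reduction modulo a Groebner basis. (Helper name is ours.)"""
    G = groebner([(x - z)**2, n*x - y - (n - 1)*z], x, y, z, order="lex")
    return G.contains(f)

# I_n contains the binomial x^n - y*z^(n-1) for small n ...
for n in range(2, 6):
    assert In_contains(n, x**n - y*z**(n - 1))

# ... and a spot check consistent with the claim that no binomial of
# degree less than n lies in I_n: for n = 3, x^2 - y*z is not in I_3.
assert not In_contains(3, x**2 - y*z)
```

For $$n=3$$ one can also check by hand: modulo $$3x-y-2z$$, the binomial $$x^3-yz^2$$ reduces to $$(x-z)^2(x+2z)$$.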

In this paper, we show that determinants are the shortest non-zero polynomials that vanish on the set of fixed-rank matrices and that, moreover, they are essentially the only shortest polynomials in the determinantal ideal (Theorem 3.1). A variant of the proof yields a similar result (Theorem 4.1) for skew-symmetric matrices, where Pfaffians, the square roots of determinants, are the shortest vanishing non-zero polynomials. Their number of terms is the double factorial $$(r+1)!! \coloneqq (r+1)(r-1)\cdots$$. Both proofs rely on Proposition 2.1, a bound for the number of terms of polynomials vanishing on very general linear spaces. In Section 7, we briefly discuss the case of bounded rank symmetric matrices, which, however, remains mostly open.

Our proofs have geometric aspects, and for these, it is convenient to work with algebraically closed fields. However, Theorems 3.1 and 4.1 immediately imply that the corresponding ideals over arbitrary fields contain no shorter polynomials than determinants and Pfaffians, respectively; see Corollaries 3.3 and 4.2. In the determinant case, this improves a lower bound of $$(r+1)!/2$$ terms established by the last two authors via purely algebraic methods [11].

### 1.1 Notation and conventions

In everything that follows there are fixed bases with respect to which any sparsity is considered. We use the standard basis of $$K^{n}$$ and the monomial basis for polynomials. We write $$K[x_1,\ldots ,x_n]_d$$ for the space of homogeneous polynomials of degree $$d$$ in the variables $$x_1,\ldots ,x_n$$ with coefficients from the field $$K$$. Except in Corollaries 3.3 and 4.2, we assume that $$K$$ is algebraically closed. The characteristic of $$K$$ is arbitrary.

## 2 NO SHORT POLYNOMIALS VANISH ON VERY GENERAL SUBSPACES

If $$X$$ is an irreducible algebraic variety over $$K$$, we say that a *sufficiently general* $$x \in X$$ has a certain property if there exists a Zariski open and dense subset $$Y\subset X$$ such that all $$x \in Y$$ have that property. The open and dense subset $$Y$$ is typically not made explicit, and may moreover shrink finitely many times in the course of a proof as further assumptions are imposed on $$x$$. This notion of genericity is common in algebraic geometry.

Another common notion from algebraic geometry that we will need is the following. We say that a *very general* $$x \in X$$ has a certain property if there is a countable collection of proper, Zariski-closed subsets of $$X$$, defined over $$K$$, such that any $$x$$ outside their union satisfies the property. If the ground field $$K$$ is too small, then such very general $$x$$ may exist only over a field extension of $$K$$. This is no problem in our application to varieties of bounded-rank matrices and skew-symmetric matrices, where, to prove our results, we may always extend the field as desired. However, in our result on linear spaces, we will require that the space be very general.

Indeed, we consider properties of a sufficiently or even very general $$r$$-dimensional linear subspace $$U\subset K^{n}$$. In this case, $$X$$ is understood to be the Grassmannian $$\operatorname{Gr}\nolimits _r(K^n)$$, and $$U$$ is called sufficiently general if the point in $$\operatorname{Gr}\nolimits _r(K^n)$$ representing it is sufficiently general.

For example, when $$U\in \operatorname{Gr}\nolimits _r(K^n)$$ is sufficiently general, any $$r$$ coordinates are linearly independent on $$U$$, and hence, the shortest linear polynomials vanishing on $$U$$ have $$r+1$$ terms. For instance, $$c_1 x_1 + \cdots + c_{r+1} x_{r+1}=0$$ holds on $$U$$ for certain non-zero $$c_1,\ldots ,c_{r+1} \in K$$. Multiplying such $$(r+1)$$-term linear polynomials by monomials, or, if $$K$$ has positive characteristic $$p>0$$, raising them to $$p^{e}$$th powers yields short polynomials of higher degree also vanishing on $$U$$. A key step in our argument is to show that these are all shortest polynomials vanishing on $$U$$, at least for very general $$U$$.
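The count of terms in this linear relation is easy to illustrate numerically; here is a minimal numpy sketch over the reals, with a pseudo-random subspace standing in for a sufficiently general one:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 6, 3

# An r-dimensional subspace U of R^n, given as the column space of a random
# n x r matrix B; the coordinate x_i, restricted to U, is the i-th row of B.
B = rng.standard_normal((n, r))

# A linear form c_1 x_1 + ... + c_{r+1} x_{r+1} vanishes on U exactly when
# c lies in the null space of the transpose of the first r+1 rows of B.
rows = B[: r + 1]
c = np.linalg.svd(rows.T)[2][-1]   # spans the one-dimensional null space
assert np.allclose(c @ rows, 0)    # the linear form vanishes on U
assert np.all(np.abs(c) > 1e-8)    # and all r+1 coefficients are non-zero
```

Since any $$r$$ of the rows are linearly independent for a general subspace, the null space is one-dimensional and all $$r+1$$ coefficients are non-zero.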

To formulate and prove our results in a characteristic independent manner, let $$p$$ be the characteristic exponent of $$K$$, that is, $$p\coloneqq 1$$ if $$\operatorname{char}K=0$$ and $$p\coloneqq \operatorname{char}K$$ otherwise.

**Proposition 2.1.** Let $$n\geqslant r\geqslant 0$$ be integers and let $$U$$ be a very general $$r$$-dimensional subspace of $$K^n$$. Then a non-zero polynomial $$f \in K[x_1,\ldots ,x_n]$$ that vanishes identically on $$U$$ has at least $$r+1$$ terms. If $$r \ne 1$$, then equality holds if and only if $$f$$ has the form $$u \cdot ((c_{1} x_{i_1})^{p^e} + \cdots + (c_{r+1} x_{i_{r+1}})^{p^e})$$ for some monomial $$u$$, distinct indices $$i_1<\ldots <i_{r+1}$$, a non-negative integer $$e$$ and a linear form $$\sum _j c_j x_{i_j}$$ that vanishes on $$U$$.

**Remark 2.2.** If $$p=1$$ or $$e=0$$, the second factor is just a linear form. Furthermore, the requirement that $$r \ne 1$$ is necessary for the characterisation of the shortest polynomials. Indeed, if $$r=1$$, then some linear form $$c_1 x_1 + c_2 x_2$$ vanishes on $$U$$, and then so does the binomial $$c_1^2 x_1^2 - c_2^2 x_2^2$$, which is not of the shape in the proposition. Moreover, for $$r=1$$ the 1-dimensional torus $$K^*$$ acts, via scaling, on $$U$$ with a dense orbit, and thus, the ideal of $$U$$ is a binomial ideal. Binomial ideals are linearly spanned by the binomials they contain, which shows that they contain many binomials.

**Remark 2.3.** We do not know whether *very general* in Proposition 2.1 can be replaced by *sufficiently general*. In our proof below, we require that $$U$$ avoids countably many Zariski-closed subsets of the Grassmannian.

The *reverse lexicographic order* on the space $$K[x_1,\ldots ,x_n]_d$$ is defined by $$x^\alpha > x^\beta$$ if for the *largest* $$j$$ with $$\alpha _j \ne \beta _j$$, we have $$\alpha _j<\beta _j$$. Thus, the monomial basis of this space, in decreasing order, is $$x_1^d > x_1^{d-1}x_2 > \cdots > x_n^d.$$

The *generic initial space* $$\operatorname{gin}(V)$$ of a subspace $$V \subseteq K[x_1,\ldots ,x_n]_d$$ is the space spanned by the leading monomials of elements of $$g V$$, which for a sufficiently general element $$g\in \operatorname{GL}\nolimits _n$$ does not depend on $$g$$. This space has two important properties. Firstly, it is in the closure of the $$\operatorname{GL}\nolimits _n$$-orbit of $$V$$ in the Grassmannian of $$\dim (V)$$-dimensional subspaces of $$K[x_1,\ldots ,x_n]_d$$, and secondly, it is stable under the Borel subgroup of $$\operatorname{GL}\nolimits _n$$ that stabilises the chain of subspaces $$\langle x_1 \rangle \subset \langle x_1,x_2 \rangle \subset \cdots \subset \langle x_1,\ldots ,x_n \rangle$$ of linear forms.

**Lemma 2.4.** Let $$d \in {\mathbb {Z}}_{\geqslant 1}$$. Suppose that a linear space $$V \subseteq K[x_1,\ldots ,x_n]_d$$ has $$\operatorname{gin}(V)=x_1^{d-p^e} \cdot \langle x_1^{p^e},\ldots ,x_{s}^{p^e} \rangle$$ for some $$s$$ with $$3 \leqslant s \leqslant n$$ and some $$e \in {\mathbb {Z}}_{\geqslant 0}$$. Then $$V=f \cdot \langle \ell _1^{p^e},\ldots ,\ell _s^{p^e} \rangle$$ for some $$f \in K[x_1,\ldots ,x_n]_{d-p^e}$$ and linear forms $$\ell _1,\ldots ,\ell _s \in K[x_1,\ldots ,x_n]_1$$.

In characteristic zero, this is a special case of [5, Main Theorem]. Our proof follows the strategy of the proof there, but replaces algebraic arguments involving differentiation by geometric arguments that suffice in our setting.

**Proof.** The proof can be split as follows. If $$d=p^e$$, we show that $$V$$ consists of $$d$$th powers of linear forms; while if $$d>p^e$$, it suffices to show that $$V=\tilde{f} \cdot \tilde{V}$$ for some homogeneous $$\tilde{f}$$ of positive degree $$d-\tilde{d}>0$$. In this case, $$\operatorname{gin}(\tilde{V})=x_1^{\tilde{d}-p^e} \cdot \langle x_1^{p^e},\ldots ,x_s^{p^e} \rangle$$ and the argument applies to $$\tilde{V}$$. If $$d=1$$, then the first statement obviously holds, so we may assume that $$d>1$$.

We prove both statements first for $$s=n$$. For a sufficiently general $$g \in \operatorname{GL}\nolimits _n$$, the space $$g V$$ contains a polynomial $$f$$ with leading monomial $$x_1^{d-p^e} x_n^{p^e}$$. By definition of the reverse lexicographic order, $$f$$ is divisible by $$x_n^{p^e}$$. Consequently, $$V$$ itself contains a non-zero polynomial divisible by $$g^{-1} x_n^{p^e}$$, namely $$g^{-1} f$$. Since this holds for any sufficiently general $$g$$, $$V$$ contains a non-zero multiple of the $$p^e$$th power of any sufficiently general linear form.

Let $$L\coloneqq K[x_1,\ldots ,x_n]_1$$ be the space of linear forms. Consider the incidence variety

This implies two things. First, any sufficiently general fibre of $$Z \rightarrow {\mathbb {P}}L$$ has dimension zero; since these fibres are projective linear spaces, a sufficiently general fibre is then a single point. Second, $$Z \rightarrow {\mathbb {P}}V$$ is surjective, so every element of $$V$$ is divisible by the $$p^e$$th power of some linear form. If $$d=p^e$$, then we are done, so we may henceforth assume that $$d>p^e$$.

Now fix a basis $$f_1,\ldots ,f_n$$ of $$V$$ and let $$X \subseteq {\mathbb {P}}^{n-1}$$ be the affine open subset where $$f_1 \ne 0$$. Consider the morphism

Now assume $$n>s \geqslant 3$$. For any sufficiently general $$g \in \operatorname{GL}\nolimits _n$$, let $$\tilde{V} \subseteq K[x_1,\ldots ,x_s]$$ be the space obtained from $$gV$$ by setting the variables $$x_{s+1},\ldots ,x_n$$ to zero. Then $$\operatorname{gin}(\tilde{V})=x_1^{d-p^e} \cdot \langle x_1^{p^e},\ldots ,x_s^{p^e} \rangle$$ and hence, by the above, $$\tilde{V}=\tilde{f} \cdot \langle x_1^{p^e},\ldots ,x_s^{p^e} \rangle$$ for a non-zero homogeneous polynomial $$\tilde{f}$$.

We again distinguish two cases. If $$d=p^e$$ and some non-zero polynomial $$f$$ in $$V$$ is not a linear combination of $$d$$th powers of variables, then $$f$$ is not an additive polynomial, and hence not additive on sufficiently general $$s$$-dimensional subspaces of $$K^n$$ (here we only need that $$s \geqslant 2$$). This implies that $$gf$$ with the last $$n-s$$ variables set to zero is not a linear combination of $$d$$th powers of variables, contradicting the previous paragraph.

Now assume that $$d>p^e$$ and let $$Y \subseteq {\mathbb {P}}^{n-1}$$ be the variety defined by the polynomials in $$V$$. Then the penultimate paragraph implies that the intersection of $$Y$$ with a sufficiently general codimension-$$(n-s)$$ subspace contains a hypersurface in $${\mathbb {P}}^{s-1}$$ (defined by $$g^{-1}\tilde{f}$$, where $$g \in \operatorname{GL}\nolimits _n$$ maps the linear equations for the subspace to $$x_{s+1},\ldots ,x_{n}$$). But then $$Y$$ must itself have a component of dimension $$n-2$$, that is, a hypersurface. This shows that the elements in $$V$$ have a non-trivial gcd, and we are done.$$\Box$$

**Proof of Proposition 2.1.** Let $$U \subseteq K^n$$ be a very general $$r$$-dimensional subspace with $$r \geqslant 2$$. We want to show that polynomials vanishing on $$U$$ have at least $$r+1$$ terms, and characterise those where equality holds. The requirement that $$U$$ be *very* general comes from the fact that we have to exclude equations for $$U$$ with fewer than $$r+1$$ terms of *varying degrees*. In each fixed degree, *sufficiently* general suffices.

**Part 1: Proof of the lower bound** $$r+1$$. If some polynomial $$f$$ vanishes on $$U$$, then every homogeneous component of $$f$$ vanishes on $$U$$, so we may assume that $$f$$ is homogeneous of some degree $$d$$. Consider a space $$V$$ spanned by $$N$$ distinct degree-$$d$$ monomials $$x^{\alpha _i}, i=1,\ldots ,N$$ in $$n$$ variables. The set of $$U \in \operatorname{Gr}\nolimits _r(K^n)$$ for which there exists a point $$[f_1:\ldots :f_N] \in {\mathbb {P}}(K^N)$$ with $$\sum _i f_i x^{\alpha _i}$$ identically zero on $$U$$ is a closed subset of the Grassmannian $$\operatorname{Gr}\nolimits _r(K^n)$$. Since, for a fixed $$d$$, there are only finitely many subsets of the set of degree-$$d$$ monomials, we may assume that $$U$$ lies outside all of these closed subsets that are not the entire Grassmannian. It follows, then, that if such a point $$[f_1:\ldots :f_N]$$ *does* exist for $$U$$, then such a point exists for *every* $$r$$-dimensional subspace of $$K^n$$. We assume that this is the case and bound $$N$$ from below.

Write $$F\coloneqq K[x_1,\ldots ,x_n]_d$$ and consider the incidence variety $$C \coloneqq \lbrace W \in \operatorname{Gr}\nolimits _N(F) \mid \text{for every } U^{\prime } \in \operatorname{Gr}\nolimits _r(K^n) \text{, some non-zero element of } W \text{ vanishes identically on } U^{\prime }\rbrace .$$

By construction, $$C$$ is a $$\operatorname{GL}\nolimits _n$$-stable closed subset of the projective variety $$\operatorname{Gr}\nolimits _N(F)$$, and hence, by Borel's fixed point theorem [2, Theorem 10.4], $$C$$ contains a point $$W$$ that is stable under the Borel subgroup $$B \subseteq \operatorname{GL}\nolimits _n$$ that stabilises the flag $$\langle x_1 \rangle \subset \langle x_1,x_2 \rangle \subset \cdots \subset \langle x_1,\ldots ,x_n \rangle$$ of spaces of linear forms.

By construction, on every $$r$$-dimensional subspace of $$K^n$$ some non-zero element of $$W$$ vanishes identically. Since on the space $$K^r \times \lbrace 0\rbrace ^{n-r}$$ no non-zero polynomial in the first $$r$$ variables vanishes, $$W$$ contains a monomial $$x^\beta$$ with $$\beta _s>0$$ for some $$s>r$$. Writing $$\beta _s=p^e m$$ with $$m$$ not divisible by $$p$$, we find that $$W$$ also contains the $$s-1$$ monomials $$x^{\beta -p^e e_s + p^e e_i}$$ with $$i=1,\ldots ,s-1$$. Hence $$W$$ has dimension at least $$s \geqslant r+1$$, as desired.

**Part 2: Proof of the characterisation**. If $$\dim W=r+1$$ holds, then the previous paragraph shows that $$s=r+1$$, and that we have already listed all monomials in $$W$$. By similar arguments, we find that $$W=x_1^{d-p^e} \cdot \langle x_1^{p^e},\ldots ,x_s^{p^e} \rangle$$. We have thus established that every $$B$$-stable element in the $$\operatorname{GL}\nolimits _n$$-orbit closure of our original space $$V$$ is this particular space $$W$$. This applies, in particular, to $$W=\operatorname{gin}(V)$$. But then, by Lemma 2.4, $$V=f \cdot \langle \ell _1^{p^e}, \ldots , \ell _s^{p^e} \rangle$$ for some polynomial $$f$$ of degree $$d-p^e$$ and some linear forms $$\ell _1,\ldots ,\ell _s$$. Finally, since $$V$$ is spanned by monomials, $$f$$ is a monomial and the $$\ell _i$$ can be taken to be variables. This proves the proposition.$$\Box$$

## 3 NO SHORT POLYNOMIALS VANISH ON BOUNDED-RANK MATRICES

Using Proposition 2.1 inductively, we can characterise the shortest polynomials vanishing on fixed-rank matrices.

**Theorem 3.1.** Let $$m,n,r$$ be natural numbers with $$m,n \geqslant r$$. Then there exists no non-zero polynomial with fewer than $$(r+1)!$$ terms that vanishes on all rank-$$r$$ matrices in $$K^{m \times n}$$. Moreover, if $$r \geqslant 2$$, then every polynomial with exactly $$(r+1)!$$ terms that vanishes on all such matrices is a term times the $$p^e$$th power of some $$(r+1)$$-minor, for some non-negative integer $$e$$.
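For orientation, the smallest non-trivial case $$r=2$$, $$m=n=3$$ can be checked directly; a short sympy illustration (ours, not part of the proof):

```python
import sympy as sp

# The 3x3 determinant has 3! = 6 terms ...
X = sp.Matrix(3, 3, sp.symbols("x:3:3"))
det = sp.expand(X.det())
assert len(det.args) == 6

# ... and vanishes identically on matrices of rank at most 2, which we
# parametrise as products of a 3x2 matrix B and a 2x3 matrix C.
B = sp.Matrix(3, 2, sp.symbols("b:3:2"))
C = sp.Matrix(2, 3, sp.symbols("c:2:3"))
A = B * C
subs = {X[i, j]: A[i, j] for i in range(3) for j in range(3)}
assert sp.expand(det.subs(subs)) == 0
```

The theorem asserts that, conversely, no non-zero polynomial with fewer than six terms vanishes on this parametrised set.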

**Remark 3.2.** As in Proposition 2.1, and for the same reason, the case $$r=1$$ needs to be excluded in the second part of the theorem. Indeed, the variety of rank-1 matrices has a dense $$(K^*)^m \times (K^*)^n$$-orbit, and hence, its ideal is spanned by binomials. Most of these binomials are not of the form in the theorem. However, we know exactly what they are, namely (scalar multiples of) $$x^\alpha - x^\beta$$ where the $$m \times n$$-exponent matrices $$\alpha$$ and $$\beta$$ satisfy $$\sum _j \alpha _{ij}=\sum _j \beta _{ij}$$ for all $$i$$ and $$\sum _i \alpha _{ij}=\sum _i \beta _{ij}$$ for all $$j$$, and where $$x^\alpha$$ is shorthand for $$\prod _{i,j} x_{ij}^{\alpha _{ij}}$$. The proof of Theorem 3.1 proceeds by induction on $$r$$, and for the second part, we start with $$r=2$$, where this characterisation of binomials vanishing on rank-one matrices is used.
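The row- and column-sum criterion is easy to verify numerically; a small numpy sketch with exponent matrices of our own choosing:

```python
import numpy as np

# Exponent matrices alpha, beta with equal row sums and equal column sums.
alpha = np.array([[2, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 0, 1]])
beta = np.array([[1, 1, 1, 0],
                 [1, 0, 0, 1],
                 [1, 0, 0, 1]])
assert (alpha.sum(axis=1) == beta.sum(axis=1)).all()
assert (alpha.sum(axis=0) == beta.sum(axis=0)).all()

# On a rank-1 matrix A = u v^T, the monomial x^alpha evaluates to
# prod_i u_i^(i-th row sum) * prod_j v_j^(j-th column sum), so the
# binomial x^alpha - x^beta vanishes on all rank-1 matrices.
rng = np.random.default_rng(1)
u, v = rng.standard_normal(3), rng.standard_normal(4)
A = np.outer(u, v)
mono = lambda e: float(np.prod(A ** e))
assert np.isclose(mono(alpha), mono(beta))
```

This is exactly why matching row and column sums force the binomial to vanish on the rank-1 variety.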

Before proceeding with the proof, we record a corollary over arbitrary fields.

**Corollary 3.3.** Let $$m,n,r$$ be as in Theorem 3.1, and let $$L$$ be an arbitrary field. Then the ideal $$I \subseteq L[x_{ij} \mid (i,j) \in [m] \times [n]]$$ generated by the $$(r+1)$$-minors of the matrix $$x = (x_{ij})$$ contains no non-zero polynomials with fewer than $$(r+1)!$$ terms, and the only polynomials in $$I$$ with precisely $$(r+1)!$$ terms are those described in Theorem 3.1.

**Proof of the corollary.** Let $$K$$ be an algebraic closure of $$L$$. Then any non-zero polynomial $$f \in I$$ vanishes on all matrices in $$K^{m \times n}$$ of rank at most $$r$$, so by Theorem 3.1 it has at least $$(r+1)!$$ terms; and if it has exactly $$(r+1)!$$ terms, it is of the form described there.$$\Box$$

**Remark 3.4.** We do not know whether Corollary 3.3 still holds if one allows an arbitrary invertible linear change of the $$mn$$ coordinates. We suspect that this cannot reduce the minimal number of terms of a non-zero polynomial in the ideal $$I$$.

**Proof of Theorem 3.1.** **Part 1: Proof of the lower bound** $$(r+1)!$$. We proceed by induction on $$r$$. For $$r=0$$, the statement is evidently true. Now we suppose that $$r \geqslant 1$$ and that the statement is true for $$r-1$$.

Let $$f$$ be a non-zero polynomial that vanishes on all rank-$$r$$ matrices. Then $$m,n>r$$. Furthermore, since the matrices of rank at most $$r$$ form an affine cone, any homogeneous component of $$f$$ also vanishes on them; hence, we may assume that $$f$$ is homogeneous of positive degree.

Let $$x_m=(x_{m1},\ldots ,x_{mn})$$ be variables representing the last row of the matrix, and write $$f=\sum _{\alpha \in S} f_\alpha \, x_m^\alpha ,$$ where $$S$$ is the set of exponent vectors of the monomials in $$x_m$$ that appear in $$f$$, and where each $$f_\alpha$$ is a non-zero polynomial in the entries of the first $$m-1$$ rows.

Each $$f_\alpha$$ vanishes on every rank-$$(r-1)$$ matrix of size $$(m-1) \times n$$. Indeed, if $$A$$ is such a matrix, then $$f(A,x_m)$$ is the zero polynomial because appending any $$m$$th row to $$A$$ yields a matrix of rank at most $$r$$, on which $$f$$ was assumed to vanish. By the induction assumption, each $$f_\alpha$$ has at least $$r!$$ terms.

On the other hand, since no $$f_{\alpha }$$ vanishes on all rank-$$r$$ matrices, for any very general $$(m-1) \times n$$-matrix $$A$$ of rank $$r$$, we have $$f_\alpha (A) \ne 0$$ for all $$\alpha \in S$$. Now $$f(A,x_m)$$ vanishes identically on the $$r$$-dimensional row space of $$A$$. We may further assume that the row space $$U \subseteq K^n$$ of $$A$$ is very general in the sense of Proposition 2.1. Then, by that proposition, $$f(A,x_m)$$ has at least $$r+1$$ terms, and hence, $$f$$ has at least $$(r+1) \cdot r!=(r+1)!$$ terms.

**Part 2: Proof of the characterisation**. Now assume that equality holds. Then by Proposition 2.1, $$f(A,x_m)$$ is a monomial times a linear combination of $$p^a$$th powers of variables, for some $$a \in {\mathbb {Z}}_{\geqslant 0}$$. After dividing by that monomial, it is just a linear combination of $$p^a$$th powers of variables. Furthermore, the same argument applies to *any* row or column of the matrix, so (after discarding rows and columns on which $$f$$ does not depend, and dividing by suitable monomials) $$f$$ is a linear combination of $$p^a$$th powers of the variables in *every* row/column and involves precisely $$r+1$$ of them. A priori, the exponents $$p^a$$ depend on the row/column, though if the entry on position $$(i,j)$$ appears in $$f$$, then the exponent $$p^a$$ for the $$i$$th row and that for the $$j$$th column are the same.

This leads us to consider a bipartite graph $$\Gamma$$ on $$[m] \sqcup [n]$$ with an edge $$(i,j)$$ if the variable $$x_{i,j}$$ appears in $$f$$. The graph $$\Gamma$$ is regular of degree $$r+1$$, and this implies that $$n=m$$. If $$x_{i,j}$$ appears in $$f$$, then it does so with exponent $$p^a$$, and we give the edge $$(i,j)$$ the label $$a$$. The edge labels are constant on connected components of $$\Gamma$$. Let $$M_i \sqcup N_i,\ i=1,\ldots ,q$$ be the vertex sets of those connected components. So, both the $$M_i$$ and the $$N_i$$ form partitions of $$[m]=[n]$$; the $$M_i$$ label rows, and the $$N_i$$ label columns. Regularity of the graph implies that $$|M_i|=|N_i|$$. After reordering row indices and column indices, we may assume that $$M_1,M_2,M_3,\ldots ,M_q$$ are consecutive intervals, and that $$N_i=M_i$$. Then $$f$$ depends only on the variables in the blocks of a block diagonal matrix with square diagonal blocks labelled by $$M_1 \times N_1, M_2 \times N_2, \ldots ,M_q \times N_q$$.

Let $$a_i$$ be the common label of the edges between $$M_i$$ and $$N_i$$, that is, all variables $$x_{kl}$$ with $$k \in M_i$$ and $$l \in N_i$$ appear with exponent $$p^{a_i}$$ in $$f$$. By basic linear algebra (Lemma 3.7) any $$q$$-tuple of diagonal blocks $$A_i \in K^{M_i \times N_i}$$ for $$i=1,\ldots ,q$$ that are all of rank $$\leqslant r$$ can be extended to a matrix $$A \in K^{m \times n}$$ of rank at most $$r$$, and hence, $$f$$ vanishes on such a tuple $$(A_1,\ldots ,A_q)$$. Now applying the field automorphism $$\alpha \mapsto \alpha ^{p^{-a_i}}$$ to all entries in $$A_i$$ yields a matrix $$\tilde{A}_i$$ which is again of rank at most $$r$$, and hence $$f$$ vanishes on the $$q$$-tuple $$(\tilde{A}_1,\ldots ,\tilde{A}_q)$$. But this means that the polynomial $$\tilde{f}$$ obtained from $$f$$ by replacing each $$x_{kl}^{p^{a_i}}$$ (with $$(k,l) \in M_i \times N_i$$) by $$x_{kl}$$ vanishes on $$(A_1,\ldots ,A_q)$$. By construction, $$\tilde{f}$$ vanishes on all rank-$$r$$ matrices, has $$(r+1)!$$ terms and is now multi-linear in the $$m$$ rows and $$m$$ columns. We are done if we can show that $$q=1$$, $$M_1=N_1=[r+1]$$, and $$\tilde{f}$$ is a scalar multiple of the $$(r+1) \times (r+1)$$-determinant.

Without loss of generality, we have

We again proceed by induction on $$r$$. First consider the base case where $$r=2$$. By Remark 3.2, each $$\tilde{f}_j$$ is a constant times $$x^\alpha -x^\beta$$ where $$\alpha ,\beta \in {\mathbb {Z}}^{[m-1] \times ([m] \setminus \lbrace j\rbrace )}$$ are permutation matrices. Thus, $$\tilde{f}$$ itself has six monomials of the form $$x^\gamma$$, where $$\gamma \in {\mathbb {Z}}^{[m] \times [m]}$$ is a permutation matrix, and these terms have the property that for each $$\gamma$$, there is precisely one $$\gamma ^{\prime } \ne \gamma$$ whose last row agrees with that of $$\gamma$$. This argument applies to all rows. Furthermore, $$\tilde{f}$$ vanishes on all rank-2 matrices, hence in particular on the matrix $$(a_i+b_j)_{i,j}$$ where $$a$$ and $$b$$ are vectors of variables. Evaluating $$x^\gamma$$ on this matrix yields $$\prod _{i=1}^{m} (a_i + b_{\gamma (i)}),$$ where we read the permutation matrix $$\gamma$$ as a permutation of $$[m]$$.

If $$r \geqslant 3$$, then, by induction, each $$\tilde{f}_j$$ is a one-term multiple of an $$r$$-minor in the $$[m-1] \times ([m] \setminus \lbrace j\rbrace )$$-submatrix, and a similar expansion exists for all rows and columns. Then Lemma 3.6 below shows that $$m=r+1$$ and that $$\tilde{f}$$ is a scalar multiple of the $$(r+1) \times (r+1)$$-determinant, as desired.$$\Box$$

**Lemma 3.5.** Let $$n$$ be a natural number, $$S_n$$ the symmetric group, and $$P \subseteq S_n$$ a subset with $$|P|=6$$ such that for all $$I,J \subseteq [n]$$, the set $$\lbrace \pi \in P \mid \pi (I)=J\rbrace$$ has cardinality 0 or $$\geqslant 2$$, and cardinality equal to 0 or 2 if $$|I|=|J|=1$$. Then $$n=3$$ and $$P=S_3$$.

For the following proof, we thank Rob Eggermont.

**Proof.** The assumptions on $$P$$ are preserved under left and right multiplication, that is, replacing $$P$$ by $$\tau P \sigma ^{-1}$$ for any $$\tau ,\sigma \in S_n$$. Using left and right multiplication, we may assume that $$P$$ contains the identity element $$e$$. Under this additional assumption on $$P$$, we may not use left and right multiplication anymore, but we may still use conjugation. The set $$P$$ contains precisely one other element, which we dub $$\pi _{23}$$, that maps $$\lbrace 1\rbrace$$ to $$\lbrace 1\rbrace$$, and after conjugating we may assume that $$\pi _{23}(2)=3$$.

The set $$\lbrace \pi \in P \mid \pi (\lbrace 1,2\rbrace )=\lbrace 1,2\rbrace \rbrace$$ has cardinality at least 2, contains $$e$$ and hence contains at least one further element, which we dub $$\pi _{12} \ne e$$. This does not map $$\lbrace 1\rbrace$$ to $$\lbrace 1\rbrace$$, and hence $$\pi _{12}$$ interchanges 1 and 2. Similarly, $$P$$ contains an element $$\pi _{13}$$ which interchanges 1 and 3. Furthermore, since $$\pi _{12}(2)=1$$, $$P$$ contains a further element $$\pi _{132} \ne \pi _{12}$$ that maps $$\lbrace 2\rbrace$$ to $$\lbrace 1\rbrace$$, and since $$\pi _{13}(3)=1$$, $$P$$ contains one further element $$\pi _{123} \ne \pi _{13}$$ that maps $$\lbrace 3\rbrace$$ to $$\lbrace 1\rbrace$$.

Now $$\pi _{12},\pi _{13},\pi _{132},\pi _{123}$$ do not map $$\lbrace 2,3\rbrace$$ to itself, but $$e$$ does, hence so does $$\pi _{23}$$. The following summarises what we know about the permutations so far:

The set $$\lbrace 1,k\rbrace$$ for $$k > 3$$ is mapped to itself by $$e$$; hence, there is at least one other element of $$P$$ that does so, and $$\pi _{12},\pi _{13},\pi _{132},\pi _{123}$$ clearly do not, so $$\pi _{23}(k)=k$$ and $$\pi _{23}$$ is the transposition (2,3).

The set $$\lbrace 3\rbrace$$ can only be mapped to $$\lbrace 2\rbrace$$ by ($$\pi _{23}$$ and) $$\pi _{132}$$, so we find that $$\pi _{132}(3)=2$$. Then, apart from $$e$$, $$\pi _{12}$$ is the only element of $$P$$ that can map $$\lbrace 3\rbrace$$ to itself, so it must do so: $$\pi _{12}(3)=3$$. Now $$\lbrace 3,k\rbrace$$ for $$k>3$$ is mapped to itself by $$e$$ and the only other element that can potentially do so is $$\pi _{12}$$, so $$\pi _{12}=(1,2)$$. Using $$\lbrace 2,k\rbrace$$ instead, we find that $$\pi _{13}=(1,3)$$.

Now $$\pi _{12}$$ maps $$\lbrace 2,k\rbrace$$ for $$k>3$$ to $$\lbrace 1,k\rbrace$$, and the only other element that can do so is $$\pi _{132}$$, so we find that $$\pi _{132}=(1,3,2)$$. Similarly, $$\pi _{13}$$ maps $$\lbrace 3,k\rbrace$$ for $$k>3$$ to $$\lbrace 1,k\rbrace$$, and the only other element that can do so is $$\pi _{123}$$, hence $$\pi _{123}=(1,2,3)$$.

We have thus established that $$P \subseteq S_3 \subseteq S_n$$, but then $$\pi (k)=k$$ for all $$\pi \in P$$ and $$k>3$$, and this violates the assumption in the lemma that precisely zero or two permutations map $$\lbrace k\rbrace$$ to $$\lbrace k\rbrace$$. It follows that $$n=3$$ and $$P=S_3$$.$$\Box$$
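The hypothesis of Lemma 3.5 is finite and can be machine-checked in small cases. The following brute-force Python sketch (ours, not part of the proof) confirms that $$S_3$$ satisfies it and that the embedding of $$S_3$$ into $$S_4$$ fails, mirroring the final step of the proof:

```python
from itertools import combinations, permutations

def satisfies_hypothesis(P, n):
    """The hypothesis of Lemma 3.5: for all I, J subsets of [n], the number
    of pi in P with pi(I) = J is 0 or >= 2, and 0 or 2 when |I| = |J| = 1."""
    for k in range(1, n + 1):
        for I in combinations(range(n), k):
            for J in combinations(range(n), k):
                c = sum(1 for p in P if tuple(sorted(p[i] for i in I)) == J)
                if c == 1 or (k == 1 and c not in (0, 2)):
                    return False
    return True

# P = S_3 (with n = 3) satisfies the hypothesis, as the lemma concludes.
S3 = list(permutations(range(3)))
assert satisfies_hypothesis(S3, 3)

# Embedding S_3 into S_4 (fixing the letter 4) fails: all six elements map
# the singleton {4} to itself, and 6 is neither 0 nor 2.
assert not satisfies_hypothesis([p + (3,) for p in S3], 4)
```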

**Lemma 3.6.** Let $$r \geqslant 3$$ and let $$f \in K[x_{ij} \mid i,j \in [m]]$$ be a polynomial in the entries of a generic matrix $$x = (x_{ij})$$ with the following properties:

(1) $$f$$ vanishes on all matrices of rank $$r$$;

(2) for every row index $$i \in [m]$$, $$f$$ admits an expansion $$f=x_{i,j_1} f_1 + \cdots + x_{i,j_{r+1}} f_{r+1},$$ where $$j_1<\ldots <j_{r+1}$$ and where each $$f_l$$ is a polynomial in the entries of the $$([m] \setminus \lbrace i\rbrace ) \times ([m] \setminus \lbrace j_l\rbrace )$$-submatrix $$z$$ of $$x$$ of the following form: a scalar times a monomial times some $$r$$-minor of $$z$$;

(3) and similarly for column indices.

Then $$m=r+1$$ and $$f$$ is a scalar multiple of the $$(r+1) \times (r+1)$$-determinant of $$x$$.

**Proof.** From the expansion, we see that each variable in $$f$$ is contained in precisely $$r!$$ terms, and that each monomial in $$f$$ is of the form $$x^\gamma$$ with $$\gamma$$ an $$m \times m$$-permutation matrix.

Next we count variables. Since $$f$$ contains $$r+1$$ variables in each row, the total number of variables in $$f$$ equals $$m(r+1)$$. After permuting the columns of $$x$$, we may assume that the expansion along the first row looks as follows:

For a contradiction, assume that $$y$$ is *not* contained in the first $$r+1$$ columns. Then we can permute the first $$r+1$$ columns of $$x$$ so that $$\det _{r+1}$$ is not contained in the first $$r+1$$ columns and uses $$r$$ consecutive columns with labels in $$[m] \setminus \lbrace r+1\rbrace$$ with at least one label larger than $$r+1$$. Then we can further arrange the last $$m-1$$ rows of $$x$$ so that the rows in $$\det _{r+1}$$ are consecutive, and the variables in $$u_{r+1}$$ are arranged pointing in a down-right direction as do the black squares in Figure 1, with those in the first $$r$$ columns coming in rows before those of $$\det _{r+1}$$, and those beyond the first $$r+1$$ columns coming in rows after $$\det _{r+1}$$.

Now consider the leading monomial of $$f$$ in the lexicographic order. It is the product of the following factors: $$x_{1,r+1}$$, $$u_{r+1}$$ consisting of the black variables in Figure 1, and the darker grey variables on the anti-diagonal of the $$r \times r$$-determinant in $$f_{r+1}$$. But this is not divisible by the leading monomial of any $$(r+1)$$-minor, a contradiction showing that $$y$$ is contained in the first $$r+1$$ columns of $$x$$.

Then $$y$$ is, in fact, an $$r \times (r+1)$$ submatrix in the first $$r+1$$ columns of $$x$$; indeed, if it were an $$(r+1) \times r$$-submatrix, then for any column index $$j \in [r+1]$$ appearing in $$y$$, $$y$$ could not contain the $$r$$-minor $$\det _j$$ in $$f_j$$, simply because $$y$$ is too narrow.

We relabel the rows such that $$y$$ is the submatrix of $$x$$ labelled by $$\lbrace 2,\ldots ,r+1\rbrace \times [r+1]$$. Then each $$f_j$$ is the determinant $$\det _j$$ of the $$\lbrace 2,\ldots ,r+1\rbrace \times ([r+1]\setminus \lbrace j\rbrace )$$-submatrix of $$x$$ times a constant $$c_j$$ times a monomial $$u_j$$ with a variable from each of the last $$m-r-1$$ rows and the last $$m-r-1$$ columns. We claim that all $$u_j$$ are equal. Indeed, let $$g$$ be $$c_{r+1} u_{r+1}$$ times the $$[r+1] \times [r+1]$$-subdeterminant of $$x$$. Then

We conclude that $$h=0$$, and this implies that all $$u_j$$ are equal to $$u_{r+1}$$. But then $$f$$ involves only one variable from each of the last $$m-(r+1)$$ rows. Since it also contains $$r+1$$ variables from each of these, we conclude that $$m=r+1$$, and $$f$$ is a scalar multiple of the determinant.$$\Box$$

We conclude this section with the following simple matrix completion problem.

**Lemma 3.7.** Let $$m,m_1,m_2,r$$ be non-negative integers and suppose that $$m=m_1+m_2$$. Denote by $$X_m$$ the variety of $$m \times m$$-matrices of rank at most $$r$$. Then the projection $$X_m \rightarrow X_{m_1} \times X_{m_2}$$ that maps a matrix to its diagonal blocks is surjective.

In the proof of Theorem 3.1, the corresponding statement is used with $$q$$ factors, and this follows by induction from the case $$q=2$$.

**Proof.** Let $$(A_1,A_2) \in X_{m_1} \times X_{m_2}$$. Then $$A_i=B_i \cdot C_i$$ for certain $$B_i \in K^{m_i \times r}, C_i \in K^{r \times m_i}$$. But then$$$\begin{equation*} \begin{pmatrix} B_1 \\ B_2 \end{pmatrix} \begin{pmatrix} C_1 & C_2 \end{pmatrix} = \begin{pmatrix} B_1 C_1 & B_1 C_2 \\ B_2 C_1 & B_2 C_2 \end{pmatrix} \end{equation*}$$$is an $$m \times m$$-matrix of rank at most $$r$$ whose diagonal blocks are $$A_1$$ and $$A_2$$.$$\Box$$
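The completion in this proof is easy to test numerically. The following sketch (ours, not from the paper; it uses `numpy` and random integer factors) builds the two diagonal blocks from factorisations $$A_i=B_iC_i$$ and checks that the stacked block product has the prescribed diagonal blocks and rank at most $$r$$:

```python
import numpy as np

rng = np.random.default_rng(0)
m1, m2, r = 3, 4, 2

# A_i = B_i C_i is a random m_i x m_i matrix of rank at most r
B1 = rng.integers(-5, 6, (m1, r)); C1 = rng.integers(-5, 6, (r, m1))
B2 = rng.integers(-5, 6, (m2, r)); C2 = rng.integers(-5, 6, (r, m2))
A1, A2 = B1 @ C1, B2 @ C2

# completion from the proof of Lemma 3.7: stack the factors and multiply
M = np.vstack([B1, B2]) @ np.hstack([C1, C2])

assert np.array_equal(M[:m1, :m1], A1)  # diagonal blocks recovered
assert np.array_equal(M[m1:, m1:], A2)
print(M.shape, np.linalg.matrix_rank(M))
```

The off-diagonal blocks $$B_1C_2$$ and $$B_2C_1$$ are exactly the completion data; any choice of factorisations works.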

### 3.1 Relations to polynomial identity testing

After the first version of this paper was posted, Robert Andrews pointed out to us that Theorem 3.1 has a (modest) application to polynomial identity testing for sparse polynomials. Consider the subset $$P_{t,N}\subset {\mathbb {Q}}[y_1,\ldots ,y_N]$$ of non-zero polynomials with fewer than $$t$$ terms. For our restricted purpose, a *hitting set generator* for $$P_{t,N}$$ is a polynomial map $$\varphi \colon {\mathbb {Q}}^M\rightarrow {\mathbb {Q}}^N$$ such that $$f \circ \varphi$$ is non-zero for every $$f \in P_{t,N}$$. One typically wants $$M$$ to be much smaller than $$N$$ and the components of $$\varphi$$ to be easy-to-evaluate polynomials in $$M$$ variables. In this language, Theorem 3.1 says that the map $$\varphi \colon {\mathbb {Q}}^{m \times r} \times {\mathbb {Q}}^{r \times m} \rightarrow {\mathbb {Q}}^{m \times m}, (B,C) \mapsto BC$$ is a hitting set generator for $$P_{t,N}$$ with $$N=m^2$$ and $$M=2mr$$, whenever $$t \leqslant (r+1)!$$.
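To illustrate the generator concretely, here is a sketch (entirely ours; the sparse-polynomial encoding and helper names are hypothetical) for $$r=2$$, where Theorem 3.1 guarantees that no polynomial with fewer than $$(r+1)!=6$$ terms vanishes on all rank-$$\leqslant 2$$ matrices; composing a short polynomial with $$(B,C)\mapsto BC$$ therefore cannot give the zero polynomial, and a random substitution witnesses this:

```python
import random
from fractions import Fraction

def eval_sparse(terms, X):
    """Evaluate a sparse polynomial, given as [(coeff, {(i, j): exp})],
    at the matrix X (a list of lists), in exact rational arithmetic."""
    total = Fraction(0)
    for coeff, mono in terms:
        value = Fraction(coeff)
        for (i, j), e in mono.items():
            value *= Fraction(X[i][j]) ** e
        total += value
    return total

def matmul(B, C):
    rows, inner, cols = len(B), len(C), len(C[0])
    return [[sum(B[i][k] * C[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

m, r = 3, 2
# a 3-term polynomial in the 9 matrix entries: 3 < (r+1)! = 6
f = [(1, {(0, 0): 1, (1, 1): 1}),   # x00*x11
     (-1, {(0, 1): 1, (1, 0): 1}),  # -x01*x10
     (3, {(2, 2): 2})]              # 3*x22^2

random.seed(42)
witness_found = False
for _ in range(20):  # f(B*C) != 0, so a random point witnesses it whp
    B = [[random.randint(-9, 9) for _ in range(r)] for _ in range(m)]
    C = [[random.randint(-9, 9) for _ in range(m)] for _ in range(r)]
    if eval_sparse(f, matmul(B, C)) != 0:
        witness_found = True
        break
print(witness_found)
```

Here the test point lives in $$M=2mr=12$$ coordinates rather than $$N=m^2=9$$; the saving becomes substantial once $$m$$ is large compared with $$r$$.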

For another link of our work to polynomial identity testing, we refer to [1], where it is shown that any non-zero element $$f$$ in the ideal generated by $$(r+1) \times (r+1)$$-minors can be used as an oracle in the construction of a small circuit that approximately computes the $$s \times s$$-determinant, for $$s=\Theta (r^{1/3})$$. This can be understood as expressing that such a polynomial has high *border complexity*, a different measure of complexity than the number of terms considered in this paper.

## 4 NO SHORT POLYNOMIALS VANISH ON BOUNDED-RANK SKEW-SYMMETRIC MATRICES

We now focus on square and skew-symmetric matrices $$A$$; it is well known that these have even rank. The coordinates on the space of skew-symmetric $$n \times n$$-matrices are, say, the $$\binom{n}{2}$$ matrix entries strictly below the diagonal.

Let $$r$$ be a non-negative even integer. If $$A$$ has rank at most $$r$$, then in particular all principal $$(r+2)$$-Pfaffians vanish on $$A$$. These Pfaffians have $$(r+1)!!=(r+1) \cdot (r-1) \cdots 3 \cdot 1$$ terms, in bijection with the perfect matchings in the complete graph on $$r+2$$ vertices. This is fewer than the $$(r+1)!$$ from the previous section, except when $$r=0$$, when the two agree. The following theorem says that there are no shorter polynomials.
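The bijection between Pfaffian terms and perfect matchings can be checked by brute force. The following sketch (ours, for illustration) enumerates perfect matchings of the complete graph on $$r+2$$ vertices and confirms the count $$(r+1)!!$$, alongside the determinantal count $$(r+1)!$$ for comparison:

```python
from math import factorial

def double_factorial(n):
    # n!! = n * (n - 2) * (n - 4) * ... down to 1 (or 2)
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

def perfect_matchings(vertices):
    # count perfect matchings of the complete graph on `vertices`:
    # match the first vertex with each remaining one and recurse
    if not vertices:
        return 1
    rest = vertices[1:]
    return sum(perfect_matchings(rest[:i] + rest[i + 1:])
               for i in range(len(rest)))

for r in range(0, 9, 2):
    n_matchings = perfect_matchings(list(range(r + 2)))
    assert n_matchings == double_factorial(r + 1)
    print(r, n_matchings, factorial(r + 1))
```

For $$r=2$$ this gives the familiar 3 terms of a $$4 \times 4$$-Pfaffian versus the 6 terms of a $$3 \times 3$$-determinant.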

**Theorem 4.1.** Let $$r$$ be even and let $$m \geqslant r$$. There is no non-zero polynomial vanishing on all skew-symmetric $$m \times m$$-matrices of rank $$\leqslant r$$ that has fewer than $$(r+1)!!$$ terms. Furthermore, any polynomial with $$(r+1)!!$$ terms that vanishes on all skew-symmetric $$m \times m$$-matrices of rank $$r$$ is a one-term multiple of a $$p^e$$th power of some principal $$(r+2)$$-Pfaffian, for some $$e \in {\mathbb {Z}}_{\geqslant 0}$$.

Before proceeding with the proof, we record an immediate consequence of the theorem.

**Corollary 4.2.** Let $$r$$ be even and let $$m \geqslant r$$. For any field $$L$$, the ideal $$I$$ in the polynomial ring $$L[x_{ij} \mid 1 \leqslant i<j \leqslant m]$$ generated by the $$(r+2)$$-Pfaffians of the matrix $$x$$ does not contain polynomials with fewer than $$(r+1)!!$$ terms, and the only polynomials in $$I$$ with $$(r+1)!!$$ terms are those in Theorem 4.1.

**Proof.** Such a polynomial vanishes on all skew-symmetric matrices of rank at most $$r$$ in $$K^{m \times m}$$, where $$K$$ is an algebraic closure of $$L$$. Now apply Theorem 4.1.$$\Box$$

**Proof of Theorem 4.1.** The proof proceeds along the same lines as that of Theorem 3.1. Again, we proceed by induction on $$r$$. For $$r=0$$, the 2-Pfaffians are precisely the matrix entries, which, of course, are the shortest non-zero polynomials vanishing on the zero matrix.

**Part 1: Proof of the lower bound $$(r+1)!!$$.** Assume that $$r \geqslant 2$$ and decompose$$$\begin{equation*} f=\sum _\alpha x_m^\alpha f_\alpha , \end{equation*}$$$where $$x_m$$ denotes the vector of variables in the last row of $$x$$, the sum runs over exponent vectors $$\alpha$$, and each non-zero $$f_\alpha$$ is a polynomial in the variables of the first $$m-1$$ rows.

Now all $$f_\alpha$$ vanish on all skew-symmetric matrices $$A$$ of rank at most $$r-2$$. Indeed, for an arbitrary row vector $$u \in K^{m-1}$$, the skew-symmetric matrix$$$\begin{equation*} \begin{pmatrix} A & -u^T \\ u & 0 \end{pmatrix} \end{equation*}$$$has rank at most $$r$$, so $$f$$ vanishes on it; and since the monomials $$x_m^\alpha$$ are linearly independent as functions of $$u$$, it follows that $$f_\alpha (A)=0$$ for all $$\alpha$$.

We may further assume that no $$f_\alpha$$ vanishes identically on rank-$$r$$ skew-symmetric matrices; otherwise, we would replace $$f$$ by $$f_\alpha$$. Pick a very general skew-symmetric matrix $$A \in K^{(m-1) \times (m-1)}$$ of rank $$r$$. Then $$f_{\alpha }(A) \ne 0$$ for all $$\alpha$$, and we claim that $$f(A,x_m)$$ is a polynomial that vanishes identically on the row space of $$A$$. Indeed, if $$u$$ is in the row space of $$A$$, then appending it to $$A$$ as an $$m$$th row does not increase the rank of $$A$$, and then appending $$-u^T$$, along with a zero, as the last column, could only increase the rank by 1, but since a skew-symmetric matrix has even rank, it does not. Hence $$f$$ vanishes on the resulting matrix, and thus, $$f(A,x_m)$$ vanishes on the row space of $$A$$. By Proposition 2.1, at least $$r+1$$ of the $$f_\alpha$$ are non-zero. Each of these $$f_\alpha$$ vanishes on all skew-symmetric matrices of rank at most $$r-2$$ and hence, by the induction hypothesis, has at least $$(r-1)!!$$ terms. Therefore, $$f$$ has at least $$(r+1) \cdot ((r-1)!!)=(r+1)!!$$ terms, as desired.

**Part 2: Proof of the characterisation**. Assume that equality holds. By Proposition 2.1, after dividing $$f$$ by a monomial in the variables of the last row, discarding rows (and corresponding columns) on which $$f$$ does not depend, and rearranging columns if necessary, the $$x_m^\alpha$$ are equal to $$x_{m,i}^{p^a}$$ for some common exponent $$a$$. The same applies to all rows. As in the case of ordinary matrices, we construct an undirected graph $$\Gamma$$, now not necessarily bipartite, on $$[m]$$ in which $$\lbrace i,j\rbrace$$ is an edge if and only if $$x_{ij}$$ appears in $$f$$. The exponents $$a$$ are constant on the connected components of $$\Gamma$$, and by the same argument as in the proof of Theorem 3.1, now using Lemma 4.4 below for the matrix completion, we may replace $$f$$ by an $$\tilde{f}$$ that is linear in each row. By Lemma 4.3 below, $$m=r+2$$ and $$\tilde{f}$$ is a scalar multiple of a Pfaffian; in particular, $$\Gamma$$ is connected and $$f$$ is a $$p^a$$th power of $$\tilde{f}$$.$$\Box$$

**Lemma 4.3.** Let $$r \geqslant 2$$ be even. Assume that $$f$$ is a polynomial in the entries of a generic skew-symmetric $$m \times m$$-matrix $$x=(x_{ij})_{ij}=(-x_{ji})_{ij}$$ with the following properties:

*(1)* $$f$$ vanishes on all skew-symmetric matrices of rank $$r$$; and *(2)* for every row index $$i$$, $$f$$ admits an expansion$$$\begin{equation*} f=x_{i,j_1} f_1 + \cdots + x_{i,j_{r+1}} f_{r+1} \end{equation*}$$$where $$j_1<\ldots <j_{r+1}$$ are all distinct from $$i$$ and where each $$f_l$$ is a polynomial in the entries of the $$([m] \setminus \lbrace i,j_l\rbrace )^2$$-submatrix $$z$$ of $$x$$ with the following shape: a scalar times a monomial times the Pfaffian of a principal $$r \times r$$-submatrix of $$z$$.

Then $$m=r+2$$ and $$f$$ is a scalar multiple of the $$(r+2)$$-Pfaffian of $$x$$.

**Proof.** First, we count variables: $$f$$ contains precisely $$r+1$$ variables from each row, but every variable appears in two rows, so $$f$$ contains $$m(r+1)/2$$ variables in total. Thus, $$m$$ is even.

On the other hand, consider the expansion along the first row:$$$\begin{equation*} f=c_1 x_{1,j_1} u_1 \operatorname{Pf}_1 + \cdots + c_{r+1} x_{1,j_{r+1}} u_{r+1} \operatorname{Pf}_{r+1}, \end{equation*}$$$where each $$c_l$$ is a non-zero scalar, each $$u_l$$ is a monomial and each $$\operatorname{Pf}_l$$ is the Pfaffian of a principal $$r \times r$$-submatrix of the $$([m] \setminus \lbrace 1,j_l\rbrace )^2$$-submatrix of $$x$$. Since $$f$$ is linear in each row, the variables of $$u_l$$ pair up the $$m-(r+2)$$ indices outside $$\lbrace 1,j_l\rbrace$$ and the index set of $$\operatorname{Pf}_l$$, so each $$u_l$$ consists of $$(m-(r+2))/2$$ variables.

Hence, in total, we see $$(r+1)(m-(r+2))/2$$ distinct variables in $$u_1,\ldots ,u_{r+1}$$. Adding the $$r+1$$ variables $$x_{1,j_l}$$ to these, only $$\binom{r+1}{2}$$ of the $$m(r+1)/2$$ variables of $$f$$ remain for the $$r+1$$ Pfaffians $$\operatorname{Pf}_l$$, $$l=1,\ldots ,r+1$$. This is only possible if those Pfaffians are the sub-Pfaffians of a principal $$(r+1) \times (r+1)$$-submatrix $$y$$ of the $$([m]\setminus \lbrace 1\rbrace )^2$$-submatrix of $$x$$.

Let $$J \subseteq [m] \setminus \lbrace 1\rbrace$$ be the set of indices labelling the columns (and rows) of $$y$$. We have $$|J|=r+1$$, and we claim that $$J=\lbrace j_1,\ldots ,j_{r+1}\rbrace$$. Suppose not, and let $$\operatorname{Pf}_l$$ be a Pfaffian involving a column index $$j \in J \setminus \lbrace j_1,\ldots ,j_{r+1}\rbrace$$. After applying a permutation of $$[m] \setminus \lbrace 1\rbrace$$ to rows and columns, we may assume that $$l=r+1$$ and that $$j=m>j_{r+1}=m-1$$. After applying a further permutation of $$\lbrace 2,\ldots ,j_{r+1}-1\rbrace =\lbrace 2,\ldots ,m-2\rbrace$$, we may assume that $$\operatorname{Pf}_{r+1}$$ is the Pfaffian of the principal submatrix with columns $$m-r,m-r+1,\ldots ,m-2,m$$. The variables in $$u_{r+1}$$ encode a partition of $$\lbrace 2,\ldots ,m-r-1\rbrace$$ into pairs. After applying a permutation of this set to rows and columns, we may assume that these pairs are $$\lbrace 2,3\rbrace ,\lbrace 4,5\rbrace ,\ldots ,\lbrace m-r-2,m-r-1\rbrace$$. See Figure 2 for an illustration. Now the leading monomial of $$f$$ equals $$x_{1,j_{r+1}}=x_{1,m-1}$$ times $$u_{r+1}$$ times the leading monomial of $$\operatorname{Pf}_{r+1}$$; the latter is indicated by dark grey squares in Figure 2. But this monomial does not contain $$(r+2)/2$$ variables arranged in a down-left direction; hence, $$f$$ does not lie in the Pfaffian ideal, a contradiction showing that $$J = \lbrace j_1,\ldots ,j_{r+1}\rbrace$$.

After all, $$y$$ has rows and columns labelled by $$j_1,\ldots ,j_{r+1}$$. Applying a permutation of $$[m] \setminus \lbrace 1\rbrace$$ to rows and columns, we may assume that $$j_1=2,j_2=3,\ldots ,j_{r+1}=r+2$$. Then each $$f_l$$ equals a scalar times the Pfaffian $$\operatorname{Pf}_l$$ of the principal $$(\lbrace 2,\ldots ,r+2\rbrace \setminus \lbrace l+1\rbrace )^2$$-submatrix of $$x$$, times a monomial $$u_{l}$$ whose variables live in the last $$m-(r+2)$$ rows and columns of $$x$$. Now let $$g$$ be the unique scalar multiple of $$u_{r+1}$$ times the $$(r+2)$$-Pfaffian in the upper left corner of $$x$$ such that in $$h\coloneqq f-g$$, the terms involving $$x_{1,r+2}$$ cancel. As in the proof of Theorem 4.1 above, if $$h$$ is non-zero, then for a very general skew-symmetric $$([m] \setminus \lbrace 1\rbrace )^2$$-matrix $$A$$ of rank $$r$$, $$h(x_1,A)$$, where $$x_1$$ stands for the variables in the first row of $$x$$, is a linear polynomial with fewer than $$r+1$$ terms that vanishes on the row space of $$A$$. Again, this contradicts Proposition 2.1.

Hence $$h=0$$ and $$f$$ equals a scalar multiple of $$u_{r+1}$$ times a Pfaffian. But since $$f$$ contains $$r+1$$ variables from each of the last $$m-(r+2)$$ columns, we find that $$m=r+2$$ and $$f$$ is a scalar multiple of a Pfaffian, as desired.$$\Box$$

**Lemma 4.4.** Let $$m,m_1,m_2,r$$ be non-negative integers with $$r$$ even, and suppose that $$m=m_1+m_2$$. Denote by $$X_m$$ the variety of skew-symmetric $$m \times m$$-matrices of rank at most $$r$$. Then the projection $$X_m \rightarrow X_{m_1} \times X_{m_2}$$ that maps a matrix to its diagonal blocks is surjective.

**Proof.** Write $$r=2s$$ and let $$(A_1,A_2) \in X_{m_1} \times X_{m_2}$$. Then $$A_i=B_i \cdot C_i-C_i^T \cdot B_i^T$$ for certain $$B_i \in K^{m_i \times s}, C_i \in K^{s \times m_i}$$. But then$$$\begin{equation*} \begin{pmatrix} B_1 \\ B_2 \end{pmatrix} \begin{pmatrix} C_1 & C_2 \end{pmatrix} - \begin{pmatrix} C_1^T \\ C_2^T \end{pmatrix} \begin{pmatrix} B_1^T & B_2^T \end{pmatrix} \end{equation*}$$$is a skew-symmetric $$m \times m$$-matrix of rank at most $$2s=r$$ whose diagonal blocks are $$A_1$$ and $$A_2$$.$$\Box$$
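As with Lemma 3.7, this skew-symmetric completion can be verified numerically. The following sketch (ours; it uses `numpy` with random integer factors) builds the blocks as $$A_i=B_iC_i-C_i^TB_i^T$$ and checks skew-symmetry, the diagonal blocks, and the rank bound:

```python
import numpy as np

rng = np.random.default_rng(0)
m1, m2, s = 4, 5, 1  # r = 2s = 2
m = m1 + m2

def random_skew(mi):
    # a skew-symmetric mi x mi matrix of rank at most 2s,
    # together with its factors
    B = rng.integers(-5, 6, (mi, s))
    C = rng.integers(-5, 6, (s, mi))
    return B @ C - C.T @ B.T, B, C

A1, B1, C1 = random_skew(m1)
A2, B2, C2 = random_skew(m2)

# block completion from the proof of Lemma 4.4
B = np.vstack([B1, B2])
C = np.hstack([C1, C2])
M = B @ C - C.T @ B.T

assert np.array_equal(M, -M.T)          # skew-symmetric
assert np.array_equal(M[:m1, :m1], A1)  # diagonal blocks recovered
assert np.array_equal(M[m1:, m1:], A2)
print(np.linalg.matrix_rank(M))
```

Since $$M$$ is a difference of two rank-$$\leqslant s$$ products, its rank is at most $$2s=r$$, matching the printed value.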

## 5 SYMMETRIC MATRICES

An $$(r+1)$$-minor $$\det x[I,J]$$ of a symmetric matrix of variables can have various numbers of terms, depending on $$|I \cap J|$$: if $$I \cap J=\emptyset$$, then this determinant has $$(r+1)!$$ terms, while for the other extreme, where $$I=J$$, the number of terms equals the number of collections of necklaces that can be made with $$r+1$$ distinct beads, which is far fewer than $$(r+1)!$$. These counts assume that $$\operatorname{char}K \ne 2$$, since the coefficients in the determinant for $$I=J$$ are (plus or minus) powers of 2.
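The term counts for the principal case $$I=J$$ can be computed by brute force. The following sketch (ours, for illustration) expands the determinant of a generic symmetric matrix, identifying $$x_{ij}$$ with $$x_{ji}$$ by recording each factor as an unordered pair, and confirms that all surviving coefficients are plus or minus powers of 2:

```python
from collections import defaultdict
from itertools import permutations
from math import factorial

def parity(perm):
    # sign of a permutation via its inversion count
    inv = sum(1 for a in range(len(perm))
              for b in range(a + 1, len(perm)) if perm[a] > perm[b])
    return -1 if inv % 2 else 1

def symmetric_det_terms(n):
    # expand det of a generic symmetric n x n matrix: each factor
    # x_{i, perm(i)} is stored as the unordered pair {i, perm(i)}
    coeffs = defaultdict(int)
    for perm in permutations(range(n)):
        mono = tuple(sorted(tuple(sorted((i, perm[i])))
                            for i in range(n)))
        coeffs[mono] += parity(perm)
    return {mono: c for mono, c in coeffs.items() if c != 0}

for n in range(1, 6):
    terms = symmetric_det_terms(n)
    # every coefficient is plus or minus a power of 2
    assert all(abs(c) & (abs(c) - 1) == 0 for c in terms.values())
    print(n, len(terms), factorial(n))
```

For $$n=3$$, for instance, the two 3-cycles contribute the same monomial $$x_{12}x_{23}x_{13}$$ with coefficient 2, leaving 5 terms instead of $$3!=6$$.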

We guess that, if $$\operatorname{char}K \ne 2$$, then in the ideal generated by all $$(r+1)$$-minors, the shortest polynomials are those of the form $$\det (x[I,I])$$ with $$I \subseteq [n]$$ of size $$r+1$$. But to prove this, one would like to perform a Laplace expansion as in the proofs of Theorems 3.1 and 4.1. Such a Laplace expansion, in the symmetric case, naturally involves determinants of matrices $$x[I^{\prime },J^{\prime }]$$ with $$I^{\prime } \ne J^{\prime }$$, so to prove our guess, one would probably need to work with a stronger induction hypothesis. At present, we do not know how to approach this challenge.

## ACKNOWLEDGEMENTS

The authors thank Rob Eggermont for useful conversations and his proof of Lemma 3.5. JD was partly supported by Vici Grant 639.033.514 from the Netherlands Organisation for Scientific Research (NWO) and by project grant 200021_191981 from the Swiss National Science Foundation (SNSF). TK and FW were supported by the German Research Foundation DFG – 314838170, GRK 2297 MathCoRe. FW is supported by a Mathematical Institute Award at Oxford University.

Open access funding enabled and organized by Projekt DEAL.

## JOURNAL INFORMATION

The *Bulletin of the London Mathematical Society* is wholly owned and managed by the London Mathematical Society, a not-for-profit Charity registered with the UK Charity Commission. All surplus income from its publishing programme is used to support mathematicians and mathematics research in the form of research grants, conference grants, prizes, initiatives for early career researchers and the promotion of mathematics.