
A MULTIVARIATE VERSION OF THE VANDERMONDE DETERMINANT IDENTITY

ITAÏ BEN YAACOV

Abstract. We give a multivariate version of the Vandermonde determinant identity, measuring whether a family of points in projective space is in as general a position as possible.
The Vandermonde determinant identity asserts that in any commutative ring R,

(1)   $\det \begin{pmatrix} 1 & x_0 & \cdots & x_0^d \\ 1 & x_1 & \cdots & x_1^d \\ \vdots & \vdots & & \vdots \\ 1 & x_d & \cdots & x_d^d \end{pmatrix} = \prod_{i<j\le d} (x_j - x_i).$
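As an informal illustration (a minimal sketch, not taken from the text), the identity can be checked for a small degree with sympy; here for $d = 3$.

    import sympy as sp

    d = 3
    x = sp.symbols('x0:%d' % (d + 1))
    # row i of the Vandermonde matrix is (1, x_i, x_i^2, ..., x_i^d)
    V = sp.Matrix(d + 1, d + 1, lambda i, j: x[i]**j)
    rhs = sp.prod(x[j] - x[i] for i in range(d + 1) for j in range(i + 1, d + 1))
    assert sp.expand(V.det() - rhs) == 0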
We may state it equivalently by replacing each $x_i$ with a formal unknown $X_i$ and working in $\mathbb{Z}[\bar X]$. We shall say that this instance of the identity is in degree $d$, and since each row depends on a single unknown, it is in (affine) dimension one. Homogenising, we get the projective dimension one version, namely
(2)   $\det \begin{pmatrix} X_0^d & X_0^{d-1} Y_0 & \cdots & Y_0^d \\ X_1^d & X_1^{d-1} Y_1 & \cdots & Y_1^d \\ \vdots & \vdots & & \vdots \\ X_d^d & X_d^{d-1} Y_d & \cdots & Y_d^d \end{pmatrix} = \prod_{i<j\le d} (X_i Y_j - X_j Y_i) = \prod_{i<j\le d} \det \begin{pmatrix} X_i & X_j \\ Y_i & Y_j \end{pmatrix}.$
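Again as an informal sketch (assuming only a small-degree instance), the homogenised identity (2) can be checked with sympy for $d = 2$.

    import sympy as sp

    d = 2
    X = sp.symbols('X0:%d' % (d + 1))
    Y = sp.symbols('Y0:%d' % (d + 1))
    # row i is (X_i^d, X_i^{d-1} Y_i, ..., Y_i^d)
    M = sp.Matrix(d + 1, d + 1, lambda i, j: X[i]**(d - j) * Y[i]**j)
    rhs = sp.prod(X[i]*Y[j] - X[j]*Y[i]
                  for i in range(d + 1) for j in range(i + 1, d + 1))
    assert sp.expand(M.det() - rhs) == 0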
What about a version of (2) in higher projective dimension? In dimension n one may seek an identity of the
form

(3)   $\det(A) = \prod_{i_0 < \cdots < i_n} \det \begin{pmatrix} X_{i_0,0} & \cdots & X_{i_0,n} \\ \vdots & & \vdots \\ X_{i_n,0} & \cdots & X_{i_n,n} \end{pmatrix},$
where the columns of $A$ should correspond to monomials of degree $d$ in $n + 1$ variables. The right hand side does not vanish if and only if the family of rows, viewed as projective coordinates of points in $\mathbb{P}^n$, is in general position, that is, if every sub-family thereof of size $n + 1$ is in general position.
Notation 1. Let $R$ be a commutative ring, $n, m, d \in \mathbb{N}$. Order monomials of total degree $d$ in $n+1$ variables by inverting the lexicographic ordering on the sequences of exponents (so $X_0^d < X_0^{d-1} X_1 < \ldots < X_0^{d-1} X_n < X_0^{d-2} X_1^2 < \ldots$), and define $\nu^d \colon R^{n+1} \to R^{\binom{n+d}{n}}$ as the corresponding Veronese embedding, that is, $\nu_i(x)$ is the $i$th monomial applied to $x$. We extend $\nu^d$ to a map $M_{m \times (n+1)}(R) \to M_{m \times \binom{n+d}{n}}(R)$ by applying $\nu^d$ to each row. Notice that $n$ is determined by the argument, so we shall use the same notation for different values of $n$. When $d$ is clear from the context we just write $\nu$.
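For concreteness, here is a minimal sympy sketch of $\nu^d$ (the helper names exponent_vectors and nu are ad hoc, not from the text): it lists the exponent sequences in the order of Notation 1 and applies the corresponding monomials to each row.

    import sympy as sp
    from itertools import combinations_with_replacement

    def exponent_vectors(n, d):
        # exponent sequences (a_0, ..., a_n) summing to d, in the order of Notation 1
        # (descending lexicographic order, so (d, 0, ..., 0) comes first)
        vecs = []
        for c in combinations_with_replacement(range(n + 1), d):
            a = [0] * (n + 1)
            for i in c:
                a[i] += 1
            vecs.append(tuple(a))
        return sorted(vecs, reverse=True)

    def nu(M, d):
        # apply the Veronese map nu^d to each row of a sympy Matrix with n+1 columns
        n = M.cols - 1
        return sp.Matrix([[sp.prod(M[r, i]**a[i] for i in range(n + 1))
                           for a in exponent_vectors(n, d)]
                          for r in range(M.rows)])

For instance, nu(sp.Matrix([[1, 2, 3]]), 2) returns the row (1, 2, 3, 4, 6, 9), that is, the monomials $X_0^2, X_0X_1, X_0X_2, X_1^2, X_1X_2, X_2^2$ evaluated at $(1, 2, 3)$.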
Thus one may expect $A$ in (3) to be $\nu X$ where $X$ is a matrix of size $\binom{n+d}{n} \times (n + 1)$ (and the right hand side is the product of minors of $X$ of order $n + 1$). This, however, seems to be irremediably false. The obstacle is that in the dimension one case, the matrix $X$ is more or less invariant under an operation which therefore passes unnoticed, although it plays a crucial role for $n > 1$.
2010 Mathematics Subject Classification. 15A15.
Key words and phrases. Vandermonde; determinant.
Author supported by ANR projects GruPoLoCo (ANR-11-JS01-008) and ValCoMo (ANR-13-BS01-0006).
Revision 1819 of 5th May 2014.
Notation 2. Let $R$ be a commutative ring, $n, m, d \in \mathbb{N}$.
(i) We define $\mu \colon M_{m \times (n+1)}(R) \to M_{\binom{m}{n} \times (n+1)}(R)$ by sending a matrix $X$ to the matrix of minors of $X$ of order $n$. Minors are ordered by the lexicographic ordering on the sequences of rows/columns which are omitted.
(ii) We define $\mu_0 \colon M_{m \times (n+1)}(R) \to R$ by sending a matrix $X$ to the product of minors of $X$ of order $n + 1$.
Again, $n$ is determined by the arguments.
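As with $\nu^d$, the maps $\mu$ and $\mu_0$ are easy to spell out; the sympy sketch below (helper names mu and mu0 are ad hoc) indexes rows of $\mu X$ by the omitted rows and columns of $\mu X$ by the omitted column, both lexicographically, as in Notation 2.

    import sympy as sp
    from itertools import combinations

    def mu(M):
        # matrix of minors of order n, where M has m rows and n+1 columns;
        # rows/columns of the result are indexed by the omitted rows/column,
        # both in lexicographic order
        m, n = M.rows, M.cols - 1
        rows = []
        for omitted in combinations(range(m), m - n):
            kept = [i for i in range(m) if i not in omitted]
            rows.append([M[kept, [j for j in range(n + 1) if j != skip]].det()
                         for skip in range(n + 1)])
        return sp.Matrix(rows)

    def mu0(M):
        # product of all minors of order n+1
        m, n = M.rows, M.cols - 1
        return sp.prod(M[list(kept), list(range(n + 1))].det()
                       for kept in combinations(range(m), n + 1))

For a $(d+1) \times 2$ matrix these definitions reproduce the dimension one picture displayed next: $\mu$ reverses the order of the rows and swaps the two columns.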
In projective dimension one we have

$X = \begin{pmatrix} X_0 & Y_0 \\ \vdots & \vdots \\ X_d & Y_d \end{pmatrix}, \qquad \mu X = \begin{pmatrix} Y_d & X_d \\ \vdots & \vdots \\ Y_0 & X_0 \end{pmatrix},$

and (2) asserts that $\det(\nu X) = \det(\nu \mu X) = \mu_0 X$. In higher projective dimension this generalises as follows.
Theorem 3. Let $R$ be a commutative ring, $n, d \in \mathbb{N}$, and let $X \in M_{(n+d) \times (n+1)}(R)$. Then $\nu^d \mu X$ is a square matrix of order $\binom{n+d}{n}$, and the Vandermonde identity of degree $d$ and dimension $n$ holds:

(4)   $\det(\nu^d \mu X) = (\mu_0 X)^n.$
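As an informal check of Theorem 3 (a sketch, assuming the nu, mu and mu0 helpers given after Notations 1 and 2), the case $n = d = 2$ with an integer matrix is quick to verify with sympy.

    import sympy as sp

    n, d = 2, 2
    # an (n+d) x (n+1) integer matrix whose maximal minors are all nonzero
    X = sp.Matrix(n + d, n + 1, lambda i, j: sp.Integer(i + 1)**j)
    assert nu(mu(X), d).det() == mu0(X)**n   # identity (4)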
Lemma 4. Let $n$, $d$ and $X$ be as in the statement of Theorem 3. Then
(i) Adding one column of $X$, times a scalar, to another does not change either side of (4).
(ii) Multiplying a column of $X$ by a scalar $\alpha$ multiplies both sides of (4) by $\alpha^{n \binom{n+d}{n+1}}$.
Proof. For the first assertion, adding a multiple of a column in $X$ to another amounts to a similar operation (with columns reversed) on $\mu X$ and to a sequence of several such operations on $\nu \mu X$. Thus the left hand side does not change, and clearly the right hand side is not changed either. For the second assertion, again, for the right hand side this is clear. For the left hand side, multiplying a column of $X$ by $\alpha$ amounts to multiplying by $\alpha$ the $n$ other columns of $\mu X$. Multiplying all columns of a matrix $Y$ by $\alpha$ would result in multiplying all columns of $\nu Y$ by $\alpha^d$ and $\det(\nu Y)$ by $\alpha^{d \binom{n+d}{n}}$. Multiplying $n$ columns by $\alpha$ therefore leaves us with exponent $\frac{nd}{n+1} \binom{n+d}{n} = n \binom{n+d}{n+1}$.
Proof of Theorem 3. We proceed by induction on $(n, d)$. When $n = 1$, we have already seen that (4) is a special case of the usual Vandermonde identity. When $d = 1$, (4) is a consequence of the fact that for a square matrix $A$ of order $n + 1$, the determinant of the cofactor matrix of $A$ is $(\det A)^n$, and of the observation that the cofactor matrix is obtained from $\nu^1 \mu X = \mu X$ by changing the signs of the same number of rows and columns. When $n, d > 1$, it will suffice to prove (4) in the case where $R$ is an integral domain (and even more specifically, a polynomial ring over $\mathbb{Z}$).
If the first row of $X$ vanishes then so do both sides of (4) and we are done. Otherwise, by Lemma 4 (and our assumption that $R$ is an integral domain) we may assume that the first row is $(1, 0, \ldots, 0)$. Let $Y$ be $X$ without this row and let $Z$ be $X$ with both first row and column dropped. Let also $\nu' = \nu^{d-1} \colon R^{n+1} \to R^{\binom{n+d-1}{n}}$. Then by the $(n, d - 1)$ and $(n - 1, d)$ cases, respectively,

$\det(\nu' \mu Y) = (\mu_0 Y)^n, \qquad \det(\nu \mu Z) = (\mu_0 Z)^{n-1}.$
Now,

$\mu X = \begin{pmatrix} \mu Y \\ 0 \quad \mu Z \end{pmatrix}, \qquad \nu \mu X = \begin{pmatrix} \nu \mu Y \\ 0 \quad \nu \mu Z \end{pmatrix},$
where zeros appear in $\binom{n+d-1}{n-1}$ rows and, in $\nu \mu X$, in all columns corresponding to monomials in which the first unknown appears. The matrices $\nu \mu Z$ and $\nu' \mu Y$ have dimensions $\binom{n+d-1}{n-1} \times \binom{n+d-1}{n-1}$ and $\binom{n+d-1}{n} \times \binom{n+d-1}{n}$, respectively, and the left block of $\nu \mu Y$ consists of $\nu' \mu Y$ with the $i$th row multiplied by $(\mu Y)_{i,0}$. Thus
$\det(\nu \mu X) = \det(\nu \mu Z)\, \det(\nu' \mu Y) \prod_i (\mu Y)_{i,0} = (\mu_0 Y)^n (\mu_0 Z)^{n-1} (\mu_0 Z) = (\mu_0 X)^n,$

as desired.
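The factorisation used in this induction step can also be tested numerically (a sketch, reusing the nu and mu helpers from above), for $n = d = 2$ and a matrix whose first row is $(1, 0, 0)$.

    import sympy as sp

    n, d = 2, 2
    X = sp.Matrix([[1, 0, 0], [1, 2, 4], [1, 3, 9], [1, 5, 25]])
    Y = X[1:, :]                 # X without its first row
    Z = X[1:, 1:]                # X without its first row and column
    muY = mu(Y)
    lhs = nu(mu(X), d).det()
    rhs = nu(mu(Z), d).det() * nu(muY, d - 1).det() \
          * sp.prod(muY[i, 0] for i in range(muY.rows))
    assert lhs == rhs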
Let us try to give some sense to (4) in terms of exterior algebra. First, if $E$ is any $R$-module let $S^d(E)$ denote the $d$-fold symmetric tensor power of $E$. This defines a functor with $S^d(R) = R$, giving rise to the Veronese map $S^d \colon E^* \to S^d(E)^*$ (which is not a morphism unless $d = 1$).
Now let $E = R^{n+1}$, with canonical base $e = (e_0, \ldots, e_n)$, which allows us to identify $\bigwedge^n E$ with $E^*$. The matrix $X$ can then be identified with a sequence $x \in E^{n+d}$. For $s \in [n+d]^d$ (the $d$-element subsets of the index set) let $x^{\wedge s}$ denote the wedge product of $(x_i : i \notin s)$, with $i$ increasing. Then $x^{\wedge s} \in \bigwedge^n E = E^*$ is the $s$th row of $\mu X$ (up to some changes of sign) and the $s$th row of $\nu \mu X$ is $S^d(x^{\wedge s})$. Thus, under the identifications $\bigwedge^n E \cong E^*$ and $\bigwedge^{n+1} E \cong R \cong \bigwedge^N S^d(E)^*$ (where $N = \binom{n+d}{n}$), as determined by the basis $e$, (4) takes the somewhat more explicit form

(5)   $\bigwedge_{s \in [n+d]^d} S^d(x^{\wedge s}) = \pm \prod_{t \in [n+d]^{d-1}} (x^{\wedge t})^n.$

A closer inspection reveals that the identifications only depend on $e^{\wedge} \in \bigwedge^{n+1} E$, whence (essentially) Lemma 4.
Evaluating the right hand side as a scalar is fairly straightforward: formally, the expression there belongs to $S^{nN'}(\bigwedge^{n+1} E)$, where $N' = \binom{n+d}{n+1}$, and we only need to apply the functor $S^{nN'}$ to the isomorphism $\bigwedge^{n+1} E \to R$. On the left hand side we have a natural map $\iota \colon \bigwedge^n E \to \operatorname{Hom}(E, \bigwedge^{n+1} E)$, and to $\iota x^{\wedge s}$ we apply the functor $S^d$, so the entire expression belongs to $\bigwedge^N \operatorname{Hom}\bigl(S^d(E), S^d(\bigwedge^{n+1} E)\bigr)$. This is inexorably a morphism, which needs to be fed something, and the obvious object to feed it with is $\bigwedge_{s \in [n+d]^d} x_s$ where $x_s = \prod_{i \in s} x_i \in S^d(E)$. Generally, any morphism $\varphi \colon E \otimes F \to G$ gives rise to a morphism $\varphi^{\det} \colon \bigwedge^n E \otimes \bigwedge^n F \to S^n(G)$ defined by $\varphi^{\det}(x^{\wedge} \otimes y^{\wedge}) = \det\bigl(\varphi(x_i \otimes y_j)\bigr)$. Following this notation, we have
(6)   $\Bigl( \bigwedge_{s \in [n+d]^d} S^d(\iota x^{\wedge s}) \Bigr) \cdot^{\det} \Bigl( \bigwedge_{s \in [n+d]^d} x_s \Bigr) = \pm \prod_{t \in [n+d]^{d-1}} (x^{\wedge t})^{n+1}.$
Indeed, this holds in arbitrary $E$, since $S^d(\iota x^{\wedge s})\, x_s = \pm \prod_{t \in [s]^{d-1}} x^{\wedge t}$ and $S^d(\iota x^{\wedge s})\, x_{s'} = 0$ when $s \neq s'$. If we changed the sign of $x^{\wedge s}$ whenever $\#\{(i, j) : i \in s,\ j \notin s,\ i < j\}$ is odd (as we should have), the sign in (6) would be $+$. When $E$ is free of rank $n + 1$ (and $R$ is an integral domain, but then the general case follows), we can cancel out (5) from (6), and obtain the following dual result.
Corollary 5. When $E$ is free of rank $n + 1$, let $e$ be a basis for $E$ and use it to identify (up to sign) $\bigwedge^N S^d(E)$ and $\bigwedge^{n+1} E$ with $R$. Then

(7)   $\bigwedge_{s \in [n+d]^d} x_s = \pm \prod_{t \in [n+d]^{d-1}} x^{\wedge t}.$
In terms of matrices, let $X \in M_{(n+d) \times (n+1)}(R)$ and define $\eta^d X$ as follows. Identify a vector $x \in R^{n+1}$ with the linear polynomial $\sum x_i Y_i \in R[\bar Y]$. For each choice of $d$ rows, we take the product of these polynomials, and write down the coefficients of the resulting homogeneous polynomial of degree $d$. Choices of rows are ordered lexicographically in the rows taken, and monomials are ordered as earlier. Then $\eta^d X$ is a square matrix of order $\binom{n+d}{n}$ and $\det(\eta^d X) = \pm \mu_0 X$.
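A self-contained sympy sketch of this matrix form (the names eta, linear and mu0X are ad hoc) checks $\det(\eta^d X) = \pm \mu_0 X$ for $n = d = 2$.

    import sympy as sp
    from itertools import combinations

    n, d = 2, 2
    Yv = sp.symbols('Y0:%d' % (n + 1))
    X = sp.Matrix(n + d, n + 1, lambda i, j: sp.Integer(i + 1)**j)

    # linear polynomial attached to each row of X
    linear = [sum(X[r, i]*Yv[i] for i in range(n + 1)) for r in range(n + d)]
    # degree-d monomials in the order of Notation 1 (descending lex on exponents)
    exps = sorted(sp.Poly(sum(Yv)**d, *Yv).monoms(), reverse=True)
    monomials = [sp.prod(Yv[i]**e[i] for i in range(n + 1)) for e in exps]

    # eta^d X: one row per choice of d rows of X, one column per monomial
    eta = sp.Matrix([[sp.Poly(sp.expand(sp.prod(linear[i] for i in s)), *Yv).coeff_monomial(mono)
                      for mono in monomials]
                     for s in combinations(range(n + d), d)])

    mu0X = sp.prod(X[list(rows), list(range(n + 1))].det()
                   for rows in combinations(range(n + d), n + 1))
    assert eta.det() in (mu0X, -mu0X)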
If $u \in \operatorname{End}(E)$ then $(ux)_s = S^d(u)\, x_s$. Since the sign in (7) is constant, we obtain the following (which can of course also be calculated directly).

Corollary 6. Let $E$ be free of rank $m$ and $u \in \operatorname{End}(E)$. Then $\det S^d(u) = (\det u)^{\binom{m+d-1}{m}}$.
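As a last informal sketch (assuming only standard sympy), one may compute the matrix of $S^d(u)$ on the monomial basis for $m = 2$, $d = 3$ and verify the exponent $\binom{m+d-1}{m}$.

    import sympy as sp
    from itertools import combinations_with_replacement

    m, d = 2, 3
    y = sp.symbols('y0:%d' % m)
    u = sp.Matrix(m, m, lambda i, j: sp.Integer(2*i + j*j + 1))

    # monomial basis of S^d(E), with the y_i playing the role of basis vectors of E
    basis = [sp.prod(y[i] for i in c) for c in combinations_with_replacement(range(m), d)]
    # u sends y_j to sum_i u[i, j] y_i; S^d(u) acts multiplicatively on monomials
    action = {y[j]: sum(u[i, j]*y[i] for i in range(m)) for j in range(m)}
    images = [sp.expand(b.subs(action, simultaneous=True)) for b in basis]
    Sd_u = sp.Matrix(len(basis), len(basis), lambda r, c: images[c].coeff(basis[r]))

    assert Sd_u.det() == u.det()**sp.binomial(m + d - 1, m)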
Itaï Ben Yaacov, Université Claude Bernard Lyon 1, Institut Camille Jordan, CNRS UMR 5208, 43
boulevard du 11 novembre 1918, 69622 Villeurbanne Cedex, France
URL: http://math.univ-lyon1.fr/~begnac/