Relative Kruskal-Katona theorem

Academic year: 2021

Department of Mathematics

Master's Degree Programme in Mathematics

Kruskal-Katona and Macaulay theorems in a Relative setting

Master's Thesis

Candidate: Giulia Codenotti

Advisor: Prof. Dr. Raman Sanyal

Co-advisor: Prof. Roberto Dvornicich


Contents

1 Simplicial Complexes and Combinatorial Algebra
  1.1 Combinatorics of simplicial complexes
  1.2 Basic Notions of Algebra
    1.2.1 Monomial ideals
    1.2.2 Hilbert function and Hilbert series
  1.3 Stanley-Reisner ideals and rings
  1.4 The exterior algebra
    1.4.1 Basic notions in the exterior algebra
    1.4.2 Exterior algebra and simplicial complexes
    1.4.3 Groebner basis theory in the exterior algebra
    1.4.4 Generic initial ideals in the exterior algebra
    1.4.5 Stability properties of generic initial ideals

2 The Kruskal-Katona Theorem
  2.1 Kruskal-Katona from a combinatorial perspective
  2.2 Kruskal-Katona from an algebraic perspective

3 The f-vectors of relative simplicial complexes
  3.1 Relative simplicial complexes
  3.2 Construction of compressed relative simplicial complex
  3.3 Minimality among relative compressed simplicial complexes
  3.4 The construction is minimal

4 The Macaulay Theorem
  4.1 Combinatorial Macaulay theorem
  4.2 Algebraic Macaulay theorem
  4.3 Relative multicomplexes
  4.4 An extension of the Macaulay Theorem


The simplicial complex is a well-studied structure in geometry, topology and combinatorics. A geometric simplicial complex is a finite, non-empty collection of simplices in R^d which contains all faces of its simplices, and such that the intersection of any two of its simplices is a face of both. For example, a simplex together with all its faces is a simplicial complex.

An abstract simplicial complex ∆ is a finite, non-empty family of subsets of some ground set, say [n] := {1, 2, . . . , n}, with the property that for any face F ∈ ∆ and any G ⊆ F, we also have G ∈ ∆.

If we are only interested in the combinatorics of a simplicial complex, that is, in those properties which do not depend on the position of its simplices in space but only on which vertices belong to which simplex, then we can identify a geometric simplicial complex with an abstract one. This is done by recording the set of vertices corresponding to each face of the geometric complex.

An element of a simplicial complex is called a face of the complex; the dimension of a face is one less than its cardinality, and the dimension of a simplicial complex is the maximum of the dimensions of its faces.

The f-vector is an example of a combinatorial invariant of a simplicial complex. It is a vector of nonnegative integers whose i-th entry records the number of i-dimensional faces of the complex. It is natural to ask which vectors are f-vectors of some simplicial complex; the Kruskal-Katona Theorem ([8], [7]) gives a complete answer to this. Chapter 2 of this thesis will explore one proof of this theorem, which can be found in [5]; many proofs are known, and for a short proof due to Frankl see [4].

There is a related structure, the relative simplicial complex, which is a family of sets given as the set-theoretic difference of a simplicial complex and a subcomplex of it. An introduction to relative simplicial complexes can be found in [10]. Relative simplicial complexes have found applications both in geometry and in combinatorics; for example, in a recent paper of Adiprasito and Sanyal [1], properties of relative simplicial complexes play a key role in solving the upper bound problem for Minkowski sums of polytopes, which asks for an upper bound on the number of k-dimensional faces of a Minkowski sum of polytopes. As relative simplicial complexes have naturally surfaced in relation to many interesting questions, interest has grown in better understanding which properties of simplicial complexes can be "relativized".

The study of f-vectors of simplicial complexes is well established, as mentioned above. What can one say about f-vectors of relative simplicial complexes? These are defined in the expected way, as counting the faces of each dimension in the relative complex, or equivalently as the difference between the f-vectors of the simplicial complexes constituting the relative complex. Is there a characterization of which vectors are f-vectors of relative simplicial complexes? The answer depends on the precise formulation of the question, and ranges from trivial to interesting. In Chapter 3, we explore possible ways of posing the question and give a simple answer for a non-trivial formulation, which has a similar flavor to the Kruskal-Katona Theorem.

One interesting aspect of the study of (relative) simplicial complexes is the strong connection that emerges between Combinatorics and Algebra. Indeed, the Kruskal-Katona theorem can be reformulated in algebraic terms as a characterization of Hilbert functions of certain quotients of the exterior algebra. This is done by building a correspondence between simplicial complexes and ideals in the exterior algebra, similar to the construction of the Stanley-Reisner ideal in the polynomial ring, which will be explained in detail in Sections 1.3 and 1.4.2. The connection with Algebra unlocks a wealth of algebraic tools, known as Stanley-Reisner theory, which can be used to study simplicial complexes; see for example [10] and [6].

What is fascinating about the connections between Algebra and Combinatorics is how certain problems admit both algebraic and combinatorial solutions, solutions which can be very similar, almost direct translations from the language of Algebra to that of Combinatorics. A prime example of this will be the two proofs we present of the Kruskal-Katona Theorem. For other problems the situation is quite different: we will see an example in Chapter 4 where an algebraic approach gives a solution for which no combinatorial construction is yet known.

Another important theorem which we study is the Macaulay Theorem. It can be seen as an analogue of the Kruskal-Katona Theorem for multicomplexes. A multicomplex is a structure similar to the simplicial complex, where in the definition we substitute sets with multisets. The Macaulay Theorem is then a characterization of those vectors which are f-vectors of multicomplexes. There is an algebraic formulation of this theorem as well, as a characterization of Hilbert functions of finitely generated, standard graded K-algebras.

Indeed, it is interesting to note that the Macaulay Theorem was originally formulated algebraically, while the Kruskal-Katona Theorem first appeared as a combinatorial statement, so the two closely-related results might at first glance have appeared to be very different. The remarkable similarity between the two theorems is not just in their content: the two proofs that we will present of the Kruskal-Katona Theorem can be very faithfully adapted to proofs of the Macaulay Theorem, as we will show in Chapter 4.

The structure of the thesis is as follows.

Chapter 1 is an introduction to some basic notions on simplicial complexes, multicomplexes and the exterior and symmetric algebras which are needed in what follows.

Chapter 2 explores a combinatorial and an algebraic proof of the Kruskal-Katona Theorem, emphasizing the similarities and differences between the two approaches.

As mentioned above, Chapter 3 deals with characterizing f -vectors of relative simplicial complexes.

Chapter 4 presents outlines of combinatorial and algebraic proofs of the Macaulay Theorem, with the goal of illustrating the parallels to the Kruskal-Katona Theorem. It also contains an extension of the Macaulay Theorem due to Björner, Frankl and Stanley, and we discuss whether this extension can also be translated into a theorem in the relative setting.

Chapter 5 then concludes with some open questions and further directions of inquiry.


Simplicial Complexes and Combinatorial Algebra

1.1 Combinatorics of simplicial complexes

A fundamental object in geometry and combinatorics (indeed, all of mathematics!) is the simplicial complex. We will not need the geometric realization of a complex, so it will be sufficient to give the definition of an abstract simplicial complex.

Definition 1.1.1. A family ∆ of subsets of {1, 2, . . . , n} =: [n] is a simplicial complex if, for all F ∈ ∆ and all G ⊆ F, G is also in ∆. The sets in ∆ are called faces of ∆.

The dimension of a face F is dim(F) = |F| − 1, and the dimension of the simplicial complex is the greatest dimension of its faces. Vertices and facets are the minimal non-empty faces and the maximal faces, respectively.

A subcomplex Γ of a simplicial complex ∆ is a simplicial complex which is contained in ∆.

To a simplicial complex we associate a vector, called the f-vector (or face vector), which will be the main focus of the theorems of the next chapter. The f-vector of a complex records the number of faces of each dimension:

Definition 1.1.2. The f-vector of a simplicial complex ∆ of dimension d − 1 is the vector f(∆) = (f_0, f_1, . . . , f_{d−1}), where f_i is the number of faces of dimension i in ∆.

In some cases we will also need to consider f_{−1} = 1, which can be seen as counting the empty face; for example, it is needed in the definition of the h-vector:


Definition 1.1.3. The h-vector h(∆) = (h_0, h_1, . . . , h_d) of a (d − 1)-dimensional simplicial complex ∆ is defined through the formula
$$\sum_{i=0}^{d} f_{i-1}(t-1)^{d-i} = \sum_{i=0}^{d} h_i t^{d-i}.$$
Thus we can explicitly write each entry of the vector as
$$h_k = \sum_{i=0}^{k} (-1)^{k-i} \binom{d-i}{d-k} f_{i-1},$$
or equivalently we can express the f-vector in terms of the h-vector:
$$f_{k-1} = \sum_{i=0}^{k} \binom{d-i}{k-i} h_i.$$
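The coefficient formula above can be illustrated numerically. The following is a small sketch of my own (the helper name is not from the thesis), converting an f-vector into the corresponding h-vector with the convention f_{-1} = 1:

```python
from math import comb

def h_vector(f, d):
    """h-vector of a (d-1)-dimensional complex from its f-vector
    f = (f_0, ..., f_{d-1}), using the convention f_{-1} = 1."""
    ff = [1] + list(f)  # ff[i] holds f_{i-1}
    return [sum((-1) ** (k - i) * comb(d - i, d - k) * ff[i]
                for i in range(k + 1))
            for k in range(d + 1)]

# Boundary of a triangle: 3 vertices, 3 edges, d = 2.
print(h_vector([3, 3], 2))  # [1, 1, 1]
```

For the full 2-simplex, with f = (3, 3, 1) and d = 3, the same helper returns (1, 0, 0, 0), as expected for a cone.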

A structure similar to the simplicial complex that we will deal with in the following is that of a multicomplex. To define it, we first need the concept of a multiset, which is a generalization of a set.

Definition 1.1.4. A multiset S is a finite collection of elements of some ground set (say, N or [n]) which may contain repeated elements. We write a multiset as
$$\{b_1, b_2, \dots, b_k\}_{\le},$$
with elements listed in weakly increasing order.

The multiplicity of an element of the ground set in S is the number of times this element occurs in S.

The size of a multiset is the number of elements it contains, counted with multiplicities.

A submultiset of S is a multiset S′ such that the multiplicity of every element of the ground set in S′ is at most its multiplicity in S.

With these definitions at hand, which faithfully mirror the familiar ones for sets, we can state the definition of a multicomplex:

Definition 1.1.5. A collection of multisets M is a multicomplex if for any multiset S ∈ M and any submultiset S′ of S, S′ is also in M.

Quite analogously to simplicial complexes, we define the dimension of a multiset as one less than its size (with multiplicity!), and the dimension of a multicomplex as the greatest dimension of its multisets. Again we can define the f-vector:


Definition 1.1.6. The f-vector of a multicomplex M of dimension d − 1 is the vector f = (f_0, f_1, . . . , f_{d−1}), where f_i is the number of faces of dimension i in M.

An interesting connection with algebra is the following. Consider a multicomplex M on the ground set [n]. Through the bijection
$$\{b_1, b_2, \dots, b_k\}_{\le} \longleftrightarrow x_{b_1} x_{b_2} \cdots x_{b_k}$$
we can identify multisets on [n] with monomials in the polynomial ring S = K[x_1, . . . , x_n]. M then corresponds to a set of monomials closed under taking factors; if we denote by Mon(S) the set of all monomials of S, then Mon(S) \ M is a K-basis of an ideal I_M in S (there is a slight abuse of notation in identifying M with its image under the above bijection).
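The bijection between multisets and monomials can be sketched in a few lines; the function names below are my own choice for illustration, representing a monomial by its exponent data:

```python
from collections import Counter

def multiset_to_monomial(multiset):
    """Map {b_1 <= ... <= b_k} to the exponent data of x_{b_1}...x_{b_k}:
    a Counter sending each variable index to its exponent."""
    return Counter(multiset)

def monomial_to_multiset(exponents):
    """Inverse map: exponent data back to the weakly increasing tuple."""
    return tuple(sorted(exponents.elements()))

m = multiset_to_monomial((1, 1, 3))   # the multiset {1, 1, 3} <-> x_1^2 x_3
print(m[1], m[3])                     # 2 1
print(monomial_to_multiset(m))        # (1, 1, 3)
```

A multiset's size equals the degree of the corresponding monomial, and submultisets correspond exactly to monomial factors.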

In the next section, we introduce the terminology and basic definitions needed to explore this correspondence between combinatorics and algebra, which will allow us to state theorems in the language of set theory or in that of algebra, and to prove them with tools from one or the other. The theorems of the next chapter are a prime example of this correspondence.

1.2 Basic Notions of Algebra

1.2.1 Monomial ideals

We first recall briefly basic properties of monomial ideals. There is of course much to say on the subject. Here we will only mention what we strictly need for the following; for a more complete introduction see for example [6].

Definition 1.2.1. Let K be a field and S = K[x_1, . . . , x_n] be the polynomial ring in n variables over K, as mentioned above. A monomial is any product of the variables, $x_1^{a_1} \cdots x_n^{a_n}$, with the $a_i$ non-negative integers.

We denote by Mon(S) the set of all monomials in S. It is a K-basis of S: for each f ∈ S there are unique $a_u \in K$ such that
$$f = \sum_{u \in \mathrm{Mon}(S)} a_u u.$$

The support of a polynomial f is the set of those monomials which appear in this sum with nonzero coefficient; we denote it by supp(f).


Definition 1.2.2. An ideal I ⊆ S is called a monomial ideal if it is generated by monomials.

A useful characterization of monomial ideals is the following:

Proposition 1.2.3. For an ideal I ⊆ S the following are equivalent:

1. I is a monomial ideal;

2. for all f ∈ S, f ∈ I if and only if supp(f) ⊆ I.

Proof. We first prove 1. =⇒ 2. It is clear that if supp(f) ⊆ I for some polynomial f, then f ∈ I. For the converse, let f ∈ I. Since I is generated by monomials, there exist monomials $u_1, \dots, u_m \in I$ and polynomials $f_1, \dots, f_m \in S$ such that
$$f = \sum_{i=1}^{m} u_i f_i.$$
We can express each $f_i$ as a linear combination of monomials, since these form a K-basis of the polynomial ring: $f_i = \sum_{j=1}^{k} a_{ij} v_{ij}$. Thus
$$f = \sum_{i=1}^{m} \sum_{j=1}^{k} a_{ij} v_{ij} u_i,$$
so that $\mathrm{supp}(f) \subseteq \{v_{ij} u_i \mid 1 \le i \le m,\ 1 \le j \le k\}$. Then clearly supp(f) ⊆ I, since $u_i \in I$.

To show 2. =⇒ 1., consider a set of generators for I. By 2., the union of the supports of these generators is also a set of generators, and it consists of monomials.

A straightforward but extremely useful corollary, which we will use often, is

Corollary 1.2.4. Let I be a monomial ideal. Then the residue classes of the monomials not in I form a K-basis of the quotient algebra S/I.
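Corollary 1.2.4 lends itself to a brute-force check: a monomial lies in a monomial ideal exactly when some generator divides it, so the degree-i monomials divisible by no generator count $\dim_K (S/I)_i$. A sketch with names of my own choosing, representing monomials as exponent tuples:

```python
from itertools import product

def divides(u, v):
    """Monomial divisibility for exponent tuples: u | v."""
    return all(a <= b for a, b in zip(u, v))

def quotient_basis(gens, n, i):
    """Exponent tuples of the degree-i monomials in n variables lying
    outside the monomial ideal <gens>; their classes are a K-basis of (S/I)_i."""
    degree_i = (e for e in product(range(i + 1), repeat=n) if sum(e) == i)
    return [e for e in degree_i if not any(divides(g, e) for g in gens)]

# I = <x^2, xy> in K[x, y]: in degree 2 only the class of y^2 survives.
print(quotient_basis([(2, 0), (1, 1)], n=2, i=2))  # [(0, 2)]
```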

The last basic definition we will need is that of a squarefree monomial ideal:

Definition 1.2.5. A monomial is called squarefree if every variable appears in it with multiplicity at most 1. A monomial ideal is called a squarefree monomial ideal if it is generated by squarefree monomials.


1.2.2 Hilbert function and Hilbert series

Let K denote a field, and $R = \bigoplus_{i \ge 0} R_i$ a graded K-algebra. An element f ∈ R is called homogeneous if $f \in R_i$ for some i.

R is called standard graded if it is a finitely generated K-algebra and all its generators have degree 1. It is easy to see that any standard graded K-algebra with n generators is isomorphic to a quotient $K[x_1, \dots, x_n]/I$ of the polynomial ring, where I is a graded ideal, that is, an ideal generated by homogeneous polynomials.

If M is a finitely generated Z-graded R-module, all the graded components $M_i$ are finite-dimensional K-vector spaces. Elements $x \in M_i$ for some i ∈ Z are called homogeneous, and any element of the module can be written uniquely as a finite sum of homogeneous elements. We will only consider positively graded modules, that is, modules M such that $M_i = 0$ for all i < 0.

Definition 1.2.6. The Hilbert function of a finitely generated graded R-module M is the function
$$H(M, -) : \mathbb{Z} \to \mathbb{Z}, \qquad i \mapsto H(M, i) = \dim_K M_i.$$
The formal Laurent series
$$H_M(t) = \sum_{i \in \mathbb{Z}} H(M, i)\, t^i$$
is called the Hilbert series of M.

Example 1.2.7. What are the Hilbert function and series of S = K[x_1, . . . , x_n]? The monomials of degree i form a K-basis of $S_i$, thus
$$H(S, i) = \dim_K S_i = \binom{n+i-1}{n-1},$$
and
$$H_S(t) = \sum_{i \ge 0} \binom{n+i-1}{n-1} t^i = \frac{1}{(1-t)^n}.$$
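The binomial count in Example 1.2.7 can be double-checked by enumerating exponent tuples directly; a throwaway sketch (names are mine):

```python
from itertools import product
from math import comb

def count_monomials(n, i):
    """Number of degree-i monomials in n variables, by brute-force enumeration
    of exponent tuples (each exponent is at most i)."""
    return sum(1 for e in product(range(i + 1), repeat=n) if sum(e) == i)

# Compare with H(S, i) = C(n + i - 1, n - 1).
for n, i in [(2, 3), (3, 4), (4, 2)]:
    assert count_monomials(n, i) == comb(n + i - 1, n - 1)
print(count_monomials(3, 4))  # 15
```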

For arbitrary graded R-modules the situation is not much worse than the simple example we have seen:

Theorem 1.2.8 (Hilbert). Let K be a field, R a standard graded K-algebra and M a nonzero finitely generated graded R-module of dimension d. Then

(a) there exists a Laurent polynomial $Q_M(t) \in \mathbb{Z}[t, t^{-1}]$ such that $Q_M(1) > 0$ and
$$H_M(t) = \frac{Q_M(t)}{(1-t)^d};$$

(b) there exists a polynomial $P_M(x) \in \mathbb{Q}[x]$ of degree d − 1 such that for all $i > \deg Q_M - d$,
$$H(M, i) = P_M(i).$$

This polynomial is called the Hilbert polynomial of M. For a proof see for example [6].

1.3 Stanley-Reisner ideals and rings

Let ∆ be a simplicial complex on [n]. We will associate to it an ideal $I_\Delta \subseteq S = K[x_1, \dots, x_n]$, called the Stanley-Reisner ideal of ∆. For a subset F ⊆ [n] denote by $x_F$ the squarefree monomial
$$x_F = \prod_{i \in F} x_i.$$

Definition 1.3.1. The Stanley-Reisner ideal of ∆ is the monomial ideal generated by those squarefree monomials $x_F$ such that F is not a face of ∆,
$$I_\Delta = \langle x_F : F \notin \Delta \rangle.$$
The Stanley-Reisner ring of ∆ is defined to be $K[\Delta] = S/I_\Delta$.
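Among the generators $x_F$ with F ∉ ∆ it suffices to keep the minimal non-faces, since any other non-face monomial is a multiple of one of them. A small sketch of my own (helper names are illustrative, not from the thesis) that lists these minimal generators:

```python
from itertools import combinations

def minimal_nonfaces(n, facets):
    """Minimal subsets of [n] that are not faces of the complex with the
    given facets; they index the minimal generators x_F of the ideal."""
    facets = [frozenset(f) for f in facets]

    def is_face(s):
        return any(s <= f for f in facets)

    subsets = (frozenset(c) for k in range(1, n + 1)
               for c in combinations(range(1, n + 1), k))
    nonfaces = [s for s in subsets if not is_face(s)]
    minimal = [s for s in nonfaces if not any(t < s for t in nonfaces)]
    return sorted(tuple(sorted(s)) for s in minimal)

# Hollow triangle on {1, 2, 3}: the only minimal non-face is {1, 2, 3},
# so the Stanley-Reisner ideal is generated by x_1 x_2 x_3.
print(minimal_nonfaces(3, [{1, 2}, {1, 3}, {2, 3}]))  # [(1, 2, 3)]
```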

Proposition 1.3.2. The set of all monomials $m = x_1^{a_1} \cdots x_n^{a_n}$ such that supp(m) ∈ ∆ is a K-basis of $S/I_\Delta$.

Proof. By Corollary 1.2.4, it is sufficient to show that the monomials m with supp(m) ∈ ∆ are exactly the monomials of S not in $I_\Delta$. Here supp(m) denotes the set $\{i \in [n] : a_i \neq 0\}$. We have
$$m = x_1^{a_1} \cdots x_n^{a_n} \in I_\Delta \iff \exists F \subseteq [n],\ F \notin \Delta, \text{ such that } x_F \mid x_1^{a_1} \cdots x_n^{a_n}$$
$$\iff \exists F \subseteq [n],\ F \notin \Delta, \text{ such that } F \subseteq \mathrm{supp}(m) \iff \mathrm{supp}(m) \notin \Delta,$$
where the last equivalence follows from the defining property of simplicial complexes: supp(m) ∈ ∆ if and only if all its subsets are also in ∆.


Many properties of the simplicial complex ∆ are captured by its Stanley-Reisner ring. Previously we introduced the h-vector of a simplicial complex through a formula involving the f-vector, which we can rewrite as follows:
$$\sum_{i=0}^{d} f_{i-1}\, t^i (1-t)^{d-i} = \sum_{i=0}^{d} h_i t^i.$$
That definition was given without a combinatorial or geometric interpretation. The following proposition shows that the h-vector provides the link between the Hilbert function of the Stanley-Reisner ring K[∆] and the f-vector. More precisely:

Proposition 1.3.3. Let ∆ be a simplicial complex of dimension d − 1 with f-vector $(f_0, f_1, \dots, f_{d-1})$. Then
$$H_{K[\Delta]}(t) = \frac{\sum_{i=0}^{d} f_{i-1}\, t^i (1-t)^{d-i}}{(1-t)^d}.$$

Proof. The monomials not in $I_\Delta$ form a K-basis of K[∆], and the proof of Proposition 1.3.2 shows that these are exactly the monomials u such that supp(u) ∈ ∆. Let F ∈ ∆ be any fixed face. Then
$$\{u \in \mathrm{Mon}(S) : \mathrm{supp}(u) = F\} = \{x_F v : v \in \mathrm{Mon}(K[x_j : j \in F])\}.$$
Thus,
$$H(K[\Delta], i) = \Big| \bigcup_{F \in \Delta} \{u \in \mathrm{Mon}(S) : \deg(u) = i,\ \mathrm{supp}(u) = F\} \Big| = \sum_{F \in \Delta} \big|\{v \in \mathrm{Mon}(K[x_j : j \in F]) : \deg(v) = i - |F|\}\big| = \sum_{F \in \Delta} H(K[x_1, \dots, x_{|F|}], i - |F|).$$
The terms in the last sum were computed exactly in Example 1.2.7, which allows us to calculate the Hilbert series:
$$H_{K[\Delta]}(t) = \sum_{i \in \mathbb{Z}} \Big( \sum_{F \in \Delta} H(K[x_1, \dots, x_{|F|}], i - |F|) \Big) t^i = \sum_{F \in \Delta} \Big( \sum_{i \in \mathbb{Z}} H(K[x_1, \dots, x_{|F|}], i - |F|)\, t^{i-|F|} \Big) t^{|F|} = \sum_{F \in \Delta} \frac{t^{|F|}}{(1-t)^{|F|}} = \sum_{i=0}^{d} f_{i-1} \frac{t^i}{(1-t)^i},$$
as desired.
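Substituting the count from Example 1.2.7 into the last sum of the proof gives, for i ≥ 1, $H(K[\Delta], i) = \sum_{\emptyset \neq F \in \Delta} \binom{i-1}{|F|-1}$, since only nonempty faces contribute in positive degree. A sketch of my own checking this on the hollow triangle (helper name is illustrative):

```python
from math import comb

def hilbert_SR(faces, i):
    """H(K[Delta], i) for i >= 1: each nonempty face F contributes the
    number C(i - 1, |F| - 1) of degree-i monomials with support exactly F."""
    return sum(comb(i - 1, len(F) - 1) for F in faces if F)

# Hollow triangle: faces are the empty set, 3 vertices and 3 edges.
faces = [frozenset()] + [frozenset({v}) for v in (1, 2, 3)] \
        + [frozenset(e) for e in [(1, 2), (1, 3), (2, 3)]]
print([hilbert_SR(faces, i) for i in (1, 2, 3)])  # [3, 6, 9]
```

The values 3i agree with a direct count of the monomials in K[x_1, x_2, x_3] whose support omits at least one variable.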

This proposition, combined with Theorem 1.2.8, allows us to read off the dimension of the Stanley-Reisner ring from that of the simplicial complex:

Corollary 1.3.4. Let ∆ be a (d − 1)-dimensional simplicial complex. Then dim K[∆] = d.

1.4 The exterior algebra

1.4.1 Basic notions in the exterior algebra

Let K be a field and V an n-dimensional vector space over K. The exterior algebra of V is defined as the quotient of the tensor algebra $T(V) = \bigoplus_{i=0}^{\infty} V^{\otimes i}$,
$$E = \Lambda(V) = T(V)/I,$$
where the ideal I is generated by all elements of the tensor algebra of the form v ⊗ v. An element of E is an equivalence class $[v_1 \otimes v_2 \otimes \cdots \otimes v_i]$, denoted by $v_1 \wedge v_2 \wedge \cdots \wedge v_i$.

E inherits a grading from the tensor algebra. We will denote its graded components by $E_i = \bigwedge^i V$; the i-th graded component is called the i-th exterior power of V.

From the relation v ∧ v = 0 we deduce that v ∧ w = −w ∧ v:
$$0 = (v + w) \wedge (v + w) = v \wedge v + w \wedge v + v \wedge w + w \wedge w = w \wedge v + v \wedge w. \tag{1.1}$$
If $e_1, \dots, e_n$ is a K-basis of V, relation (1.1) shows that the elements of the form $e_F = e_{j_1} \wedge e_{j_2} \wedge \cdots \wedge e_{j_k}$ with $F = \{j_1 < j_2 < \cdots < j_k\} \subseteq [n]$ span the vector space $\bigwedge^k V$. To see that they are linearly independent, suppose that
$$\sum_{F \subseteq [n]} a_F e_F = 0.$$
For any F ⊆ [n], let $F^C$ denote the complement of F in [n]; then
$$0 = \Big( \sum_{F \subseteq [n]} a_F e_F \Big) \wedge e_{F^C} = \pm a_F\, e_1 \wedge e_2 \wedge \cdots \wedge e_n,$$
and since $e_1 \wedge e_2 \wedge \cdots \wedge e_n \neq 0$, it follows that $a_F = 0$.

In particular we deduce that $\dim_K E_i = \binom{n}{i}$ for i = 0, . . . , n.

Definition 1.4.1. We call the elements $e_F = e_{j_1} \wedge e_{j_2} \wedge \cdots \wedge e_{j_k}$ with $F = \{j_1 < j_2 < \cdots < j_k\} \subseteq [n]$ the monomials of E.

Remark 1.4.2. It is of some importance that we only call monomials the elements which carry a plus sign when written in canonical form: thus $e_2 \wedge e_1$ is not a monomial.

This is needed for the uniqueness of the expansion of an element as a sum of monomials: observe that there are only finitely many monomials in E, namely $2^n$, and every element f ∈ E can be written uniquely as $f = \sum_{F \subseteq [n]} a_F e_F$. We call the support of f the set $\mathrm{supp}(f) = \{e_F \mid a_F \neq 0\}$.

Since E is graded, any element f can be written uniquely as $f = \sum_{i=0}^{n} f_i$ with $f_i \in E_i$. The $f_i$ are called the homogeneous components of f.

Let $S = \{e_{i_1}, \dots, e_{i_k}\} \subseteq E_1$ be a set of degree-one monomials, and $T \subseteq E_d$ a set of degree-d monomials. We will denote by S ∧ T the set of monomials
$$S \wedge T = \{\sigma(e_{i_j} \wedge u)\, e_{i_j} \wedge u \mid 1 \le j \le k,\ u \in T\},$$
where the sign $\sigma(e_{i_j} \wedge u)$ is chosen so that $\sigma(e_{i_j} \wedge u)\, e_{i_j} \wedge u$, written in canonical form, that is, with indices in increasing order, has positive sign.
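Concretely, the sign needed to bring $e_i \wedge e_F$ into canonical form is $(-1)^s$, where s counts the elements of F smaller than i (each costs one transposition by relation (1.1)). A small sketch of my own:

```python
def wedge_with_sign(i, F):
    """Wedge e_i into the canonical monomial e_F (F a strictly increasing
    tuple). Returns (sign, sorted index tuple), or None if the product is 0."""
    if i in F:
        return None  # e_i ^ e_i = 0
    swaps = sum(1 for j in F if j < i)  # transpositions to move e_i into place
    return (-1) ** swaps, tuple(sorted(F + (i,)))

print(wedge_with_sign(2, (1, 3)))  # (-1, (1, 2, 3)): e_2 ^ e_1 ^ e_3 = -e_{123}
print(wedge_with_sign(1, (2, 3)))  # (1, (1, 2, 3))
```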

Definition 1.4.3. A graded ideal J ⊆ E in the exterior algebra is called a monomial ideal if J is generated by monomials.

Just as in the polynomial ring, a graded ideal is monomial if and only if for every element f ∈ J, all monomials in the support of f also belong to J.


1.4.2 Exterior algebra and simplicial complexes

In this section we proceed just as in the case of the polynomial ring and define the exterior ideal $J_\Delta \subseteq E$ of a simplicial complex ∆ by
$$J_\Delta = \langle e_F : F \notin \Delta \rangle.$$

Then we can define the exterior face ring of a simplicial complex, in analogy to the Stanley-Reisner ring:

Definition 1.4.4. The exterior face ring of a simplicial complex ∆ is the K-algebra

K{∆} := E/J∆.

An important property of the exterior face ring is that it captures the f-vector of ∆:

Proposition 1.4.5. Let ∆ be any simplicial complex, and denote by $(f_0, f_1, \dots, f_{d-1})$ its f-vector. Then
$$\dim_K K\{\Delta\}_i = f_{i-1}.$$

Proof. $J_\Delta$ is spanned by the monomials $\{e_F : F \notin \Delta\}$, which are linearly independent. Hence the residue classes of the remaining monomials, that is, of the monomials $e_F$ with F ∈ ∆, form a K-basis of K{∆}. Then $K\{\Delta\}_i$ is spanned by the monomials $e_F$ with |F| = i and F ∈ ∆, which correspond to the (i − 1)-dimensional faces of ∆.

Corollary 1.4.6. The Hilbert series of the exterior face ring K{∆} of a simplicial complex ∆ is
$$H_{K\{\Delta\}}(t) = \sum_{i=0}^{n} f_{i-1} t^i.$$
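By Proposition 1.4.5, the coefficients of this series are obtained simply by counting faces by cardinality, which recovers the f-vector. A throwaway sketch of my own (names are illustrative):

```python
from collections import Counter

def f_vector_from_faces(faces):
    """f_{i-1} = number of faces of cardinality i = dim_K K{Delta}_i."""
    sizes = Counter(len(F) for F in faces)
    d = max(sizes)
    return [sizes.get(i, 0) for i in range(1, d + 1)]

# Path on 3 vertices with edges {1,2} and {2,3}: f = (f_0, f_1) = (3, 2).
faces = [frozenset()] + [frozenset({v}) for v in (1, 2, 3)] \
        + [frozenset(e) for e in [(1, 2), (2, 3)]]
print(f_vector_from_faces(faces))  # [3, 2]
```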

1.4.3 Groebner basis theory in the exterior algebra

Definition 1.4.7. A monomial order on the exterior algebra E is a total order < on the set of monomials Mon(E) with the following properties:

(i) 1 < u for all u ∈ Mon(E) with u ≠ 1;

(ii) if u, v ∈ Mon(E) and u < v, then u ∧ w < v ∧ w for all w ∈ Mon(E) such that u ∧ w ≠ 0 and v ∧ w ≠ 0.


Examples of monomial orders which will be used in the following are the lexicographic and the reverse lexicographic order.

The lexicographic order on E induced by $e_1 > e_2 > \cdots > e_n$ is denoted by $<_{lex}$ and is defined by: $e_F >_{lex} e_G$ if either |F| > |G|, or |F| = |G| and the smallest element in which F and G differ belongs to F.

The reverse lexicographic order on E induced by $e_1 > e_2 > \cdots > e_n$, denoted by $<_{rev}$, is defined by: $e_F >_{rev} e_G$ if either |F| > |G|, or |F| = |G| and the largest element in which F and G differ belongs to G.
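Both orders, read with $e_1 > \cdots > e_n$ and degree compared first, can be sketched as comparators; the function names below are my own. In degree 2 on [4] the two orders first disagree at the third monomial:

```python
import functools
from itertools import combinations

def lex_greater(F, G):
    """e_F >_lex e_G: larger degree wins; for equal degree, the smallest
    element where F and G differ must lie in F."""
    if len(F) != len(G):
        return len(F) > len(G)
    return min(set(F) ^ set(G)) in F

def revlex_greater(F, G):
    """e_F >_rev e_G: larger degree wins; for equal degree, the largest
    element where F and G differ must lie in G."""
    if len(F) != len(G):
        return len(F) > len(G)
    return max(set(F) ^ set(G)) in G

def descending(greater, n=4, k=2):
    """All degree-k monomials e_F on [n], listed from largest to smallest."""
    key = functools.cmp_to_key(lambda a, b: -1 if greater(set(a), set(b)) else 1)
    return sorted(combinations(range(1, n + 1), k), key=key)

print(descending(lex_greater))     # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
print(descending(revlex_greater))  # [(1, 2), (1, 3), (2, 3), (1, 4), (2, 4), (3, 4)]
```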

We will only consider monomial orders induced by $e_1 > e_2 > \cdots > e_n$, even when this is not explicitly stated, and we fix one such monomial order for the following definitions.

Definition 1.4.8. Let f ∈ E be a non-zero element of the exterior algebra, $f = \sum_{F \subseteq [n]} a_F e_F$. The initial monomial of f with respect to < is the largest monomial, in the order <, among the monomials of the support of f; it is denoted by $\mathrm{in}_<(f)$. The leading coefficient of f is the coefficient of $\mathrm{in}_<(f)$ in f.

Some useful properties of initial monomials:

Proposition 1.4.9. Let g, f ∈ E be non-zero, and u ∈ Mon(E). Then

(i) $\mathrm{in}_<(u \wedge f) = u \wedge \mathrm{in}_<(f)$ if $u \wedge \mathrm{in}_<(f) \neq 0$;

(ii) $\mathrm{in}_<(g \wedge f) = \mathrm{in}_<(g) \wedge \mathrm{in}_<(f)$ if $\mathrm{in}_<(g) \wedge \mathrm{in}_<(f) \neq 0$.

Proof. (i) Let v ∈ supp(f). Then $v \le \mathrm{in}_<(f)$, and thus by the second property of monomial orders, if $u \wedge v \neq 0 \neq u \wedge \mathrm{in}_<(f)$, then $u \wedge v \le u \wedge \mathrm{in}_<(f)$.

(ii) If v ∈ supp(f) and w ∈ supp(g), then $v \le \mathrm{in}_<(f)$ and $w \le \mathrm{in}_<(g)$. Thus $v \wedge w \le \mathrm{in}_<(f) \wedge \mathrm{in}_<(g)$ whenever these are non-zero.

Definition 1.4.10. Let J ⊆ E be a nonzero ideal. The initial ideal $\mathrm{in}_<(J)$ of J is the ideal generated by the initial monomials of the elements of J:
$$\mathrm{in}_<(J) = \langle \mathrm{in}_<(f) : f \in J,\ f \neq 0 \rangle.$$

With this definition we can develop the theory of Groebner bases for the exterior algebra, which in many aspects is similar to the polynomial ring case.

Definition 1.4.11. Let J be a nonzero ideal of the exterior algebra. A finite set $\{g_1, \dots, g_s\}$ of elements of J is called a Groebner basis of J (with respect to <) if the initial ideal $\mathrm{in}_<(J)$ is generated by $\mathrm{in}_<(g_1), \dots, \mathrm{in}_<(g_s)$.


Just as in the polynomial ring, one can prove that a Groebner basis forms a system of generators of J:

Theorem 1.4.12. Let J ⊆ E be a nonzero ideal and $G = \{g_1, \dots, g_s\}$ a Groebner basis of J. Then G is a set of generators of J.

Proof. Let f ∈ J, f ≠ 0. Then $\mathrm{in}_<(f) \in \mathrm{in}_<(J) = \langle \mathrm{in}_<(g_1), \dots, \mathrm{in}_<(g_s) \rangle$, so $\mathrm{in}_<(f) = w_0 \wedge \mathrm{in}_<(g_{i_0})$ for some $w_0 \in \mathrm{Mon}(E)$. If $a_0$ is the leading coefficient of f and $b_0$ the leading coefficient of $g_{i_0}$, let
$$f_1 = f - b_0^{-1} a_0\, w_0 \wedge g_{i_0} \in J.$$
Observe that the initial monomials of f and of the subtracted summand, with their respective coefficients, cancel out, since $\mathrm{in}_<(w_0 \wedge g_{i_0}) = w_0 \wedge \mathrm{in}_<(g_{i_0})$.

Thus either $f_1 = 0$ and we are done, since f is then written as a combination of the elements of G, or, if $f_1 \neq 0$, its initial monomial satisfies $\mathrm{in}_<(f_1) < \mathrm{in}_<(f)$.

In the second case, we repeat the procedure above for $f_1$. We obtain $f_2 = f_1 - b_1^{-1} a_1\, w_1 \wedge g_{i_1} \in J$ for some coefficients $a_1, b_1$ and some monomial $w_1$; if $f_2 = 0$ we are done, by the same reasoning as before. If not, its initial monomial is strictly smaller than that of $f_1$, and we repeat again.

This process terminates, since there are finitely many monomials in E and thus finitely many monomials smaller than $\mathrm{in}_<(f)$. When it terminates, say at step k, then
$$f = b_0^{-1} a_0\, w_0 \wedge g_{i_0} + b_1^{-1} a_1\, w_1 \wedge g_{i_1} + \cdots + b_k^{-1} a_k\, w_k \wedge g_{i_k}$$
for some coefficients $a_i, b_i$ and monomials $w_i$, as desired.

Let $f_1, \dots, f_r \in J$ be homogeneous elements such that $\mathrm{in}_<(f_1), \dots, \mathrm{in}_<(f_r)$ form a K-basis of $\mathrm{in}_<(J)$. Then $f_1, \dots, f_r$ is a K-basis of J, and thus
$$\dim_K J_i = \dim_K \mathrm{in}_<(J)_i.$$
This yields the following:

Corollary 1.4.13. Any ideal J ⊆ E and its initial ideal $\mathrm{in}_<(J)$ have the same Hilbert function.


1.4.4 Generic initial ideals in the exterior algebra

Let K be a field. The Zariski topology on $K^m$ is defined by declaring a set to be Zariski closed if it is the set of common zeros of a collection of polynomials in m variables. If K is a finite field, any subset of $K^m$ is both open and closed in the Zariski topology, and the topology is thus not very interesting. We will therefore always assume K to be infinite.

In the following, we will often use this property of the Zariski topology:

Proposition 1.4.14. If $U_1, \dots, U_k \subseteq K^m$ are nonempty Zariski open sets, then $U_1 \cap \cdots \cap U_k \neq \emptyset$.

Remark 1.4.15. An equivalent formulation is that any non-empty Zariski open set is dense.

Proof. It is enough to show the statement for two nonempty Zariski open sets U, U′. Let $A = K^m \setminus U$ and $A' = K^m \setminus U'$; these are Zariski closed sets, so there are polynomials $f_1, \dots, f_k$ and $f'_1, \dots, f'_l$ such that A and A′ are, respectively, the sets of common zeros of these polynomials. Then
$$A \cup A' = \{x \in K^m \mid (f_i f'_j)(x) = 0 \text{ for all } 1 \le i \le k,\ 1 \le j \le l\}.$$
Let x ∈ U and x′ ∈ U′; there are i and j such that $f_i(x) \neq 0$ and $f'_j(x') \neq 0$. Then $f_i f'_j$ is not the zero polynomial, and since we are working over an infinite field K there must be some $x'' \in K^m$ such that $(f_i f'_j)(x'') \neq 0$. Then $x'' \notin A \cup A'$, which means that $x'' \in U \cap U'$, which is therefore not empty.

As before, let V be an n-dimensional vector space over K with basis $e_1, \dots, e_n$, and E the exterior algebra over V. An element $\alpha = (a_{ij})_{ij} \in GL(n; K)$ of the general linear group represents an automorphism of V:
$$\alpha\Big( \sum_{i=1}^{n} b_i e_i \Big) = \sum_{j=1}^{n} \Big( \sum_{i=1}^{n} a_{ij} b_i \Big) e_j.$$
Denote $f_i := \alpha(e_i) = \sum_{j=1}^{n} a_{ij} e_j$; then α induces a K-algebra automorphism of E defined on monomials by
$$e_{i_1} \wedge \cdots \wedge e_{i_k} \mapsto f_{i_1} \wedge \cdots \wedge f_{i_k},$$
which we will again call α.

GL(n; K) inherits the Zariski topology: it is a subset of $M_n(K) \cong K^{n \times n}$, the set of n × n matrices over K; it consists of all those matrices $\alpha = (a_{ij})$ with $\det \alpha \neq 0$. Since the determinant is a polynomial in the $n^2$ variables corresponding to the entries of the matrix, its zero set is Zariski closed, and its complement, GL(n; K), is open. In particular, a subset of GL(n; K) is open if and only if it is a Zariski open subset of $K^{n \times n}$.

Theorem 1.4.16. Let J ⊆ E be a graded ideal, and < a monomial order on E (with our usual assumption $e_1 > e_2 > \cdots > e_n$). Then there exists a nonempty open subset U ⊆ GL(n; K) such that $\mathrm{in}_<(\alpha(J)) = \mathrm{in}_<(\alpha'(J))$ for all α, α′ ∈ U.

This theorem motivates the definition of the generic initial ideal:

Definition 1.4.17. The ideal $\mathrm{in}_<(\alpha(J))$, for α ∈ U as in the theorem, is called the generic initial ideal of J with respect to <, and it is denoted by $\mathrm{gin}_<(J)$.

Recall that since U is open, it is dense in GL(n; K). The name "generic" is therefore justified by the informal remark that under "most" changes of coordinates the initial ideal is the same.

The proof of Theorem 1.4.16 is a bit technical and requires some preparation. Let d, t ∈ N with $t \le \dim_K E_d$, where $E_d$ is the d-th graded component of the exterior algebra E. We will work with the t-th exterior power $\bigwedge^t E_d$ of the K-vector space $E_d$.

A K-basis of $E_d$ is given by all monomials of degree d; once we fix a monomial order on E, a monomial of $\bigwedge^t E_d$ is an element of the form $u_1 \wedge u_2 \wedge \cdots \wedge u_t$, where each $u_i$ is a monomial of $E_d$ and $u_1 > u_2 > \cdots > u_t$.

On $\bigwedge^t E_d$ we consider the lexicographic monomial order:
$$u_1 \wedge u_2 \wedge \cdots \wedge u_t > v_1 \wedge v_2 \wedge \cdots \wedge v_t$$
if, for some i, $u_i > v_i$ and $u_j = v_j$ for all j < i. In this way the notion of initial monomial $\mathrm{in}_<(f)$ is defined for elements $f \in \bigwedge^t E_d$: it is the largest monomial in the support of f.

Let $V \subseteq E_d$ be a t-dimensional K-subspace and $f_1, \dots, f_t$ a K-basis of V. Then $\bigwedge^t V$ is a 1-dimensional K-vector space generated by $f_1 \wedge \cdots \wedge f_t$. We denote by $\mathrm{in}_<(V)$ the K-vector space generated by all the monomials $\mathrm{in}_<(f)$ with f ∈ V, f ≠ 0.

Lemma 1.4.18. Let V be as above and let $w_1 > \cdots > w_t$ be monomials in $E_d$. Then the following are equivalent:

(a) $\{w_1, \dots, w_t\}$ is a K-basis of $\mathrm{in}_<(V)$;

(b) if $g_1, \dots, g_t \in V$ are such that $\mathrm{in}_<(g_i) = w_i$ for each i, then $\{g_1, \dots, g_t\}$ is a K-basis of V and $\mathrm{in}_<(g_1 \wedge \cdots \wedge g_t) = \mathrm{in}_<(g_1) \wedge \cdots \wedge \mathrm{in}_<(g_t)$ in $\bigwedge^t E_d$;

(c) if $\{f_1, \dots, f_t\}$ is a K-basis of V, then $\mathrm{in}_<(f_1 \wedge \cdots \wedge f_t) = w_1 \wedge \cdots \wedge w_t$.

Proof. (a) =⇒ (b): Let f ∈ V be a nonzero element. Then by (a), $\mathrm{in}_<(f) = w_i$ for some i. Thus there is some a ∈ K such that $f - a g_i$ either equals 0 or has initial monomial strictly smaller than $w_i$. Repeating this procedure for $f' = f - a g_i$ yields (after a finite number of steps, since there are finitely many $w_i$) that f is a K-linear combination of $g_1, \dots, g_t$. Thus these elements span the K-vector space V.

To show that they are linearly independent, suppose that $\sum_{i=1}^{t} a_i g_i = 0$ for some $a_i \in K$, not all 0. If j is the smallest index such that $a_j \neq 0$, then
$$\mathrm{in}_<\Big( \sum_{i=1}^{t} a_i g_i \Big) = w_j \neq 0,$$
which is a contradiction.

Moreover, $\mathrm{in}_<(g_1) \wedge \cdots \wedge \mathrm{in}_<(g_t) = w_1 \wedge \cdots \wedge w_t \neq 0$ since $w_1 > \cdots > w_t$, so it is a monomial of the support of $g = g_1 \wedge \cdots \wedge g_t$. Now $\mathrm{in}_<(g)$ is a monomial of the form $u_1 \wedge \cdots \wedge u_t$ with each $u_i$ in the support of $g_i$. Since $w_i \ge u_i$ for each i,
$$\mathrm{in}_<(g) \le w_1 \wedge \cdots \wedge w_t.$$
But since $w_1 \wedge \cdots \wedge w_t$ is an element of the support of g, equality must hold.

(b) =⇒ (c): Since f1∧ · · · ∧ ft and g1∧ · · · ∧ gt differ only by a nonzero constant,

in<(f1∧ · · · ∧ ft) = in<(g1∧ · · · ∧ gt) = w1∧ · · · ∧ wt.

(c) =⇒ (a): Since for any other K basis h1, . . . , ht of V we have

in<(f1∧ · · · ∧ ft) = in<(h1∧ · · · ∧ ht),

we may assume that in<(f1) > in<(f2) > · · · > in<(ft).Then wi = in<(fi), and

therefore wi ∈ in<(V ). Since w1, . . . , wt are linearly independent and t = dimKV =

(24)

Lemma 1.4.19. Let V be a t-dimensional subspace of Ed and f1, ..., ft a K-basis of V. Denote by w1 ∧ ··· ∧ wt the largest monomial in ∧^t Ed such that there exists an automorphism α ∈ GL(n; K) with

in<(α(f1) ∧ ··· ∧ α(ft)) = w1 ∧ ··· ∧ wt.

Then the set U = {α ∈ GL(n; K) | in<(α(f1) ∧ ··· ∧ α(ft)) = w1 ∧ ··· ∧ wt} is a nonempty Zariski open subset of GL(n; K).

Proof. U ≠ ∅ by definition. Observe that, since no monomial larger than w1 ∧ ··· ∧ wt in ∧^t Ed can be the initial monomial of α(f1) ∧ ··· ∧ α(ft) for any α, we have α ∈ U if and only if the coefficient p(α) of w1 ∧ ··· ∧ wt in the expansion of α(f1) ∧ ··· ∧ α(ft) into monomials is nonzero. The conclusion follows by observing that p(α) is a polynomial in the entries of α with coefficients determined by f1, ..., ft.

Observation 1.4.20. Consider a linear automorphism α ∈ GL(n; K). If f1, ..., ft is a K-basis of V, then α(f1), ..., α(ft) is a K-basis of α(V). By lemma 1.4.18, if w1, ..., wt are monomials in Ed such that in<(α(f1) ∧ ··· ∧ α(ft)) = w1 ∧ ··· ∧ wt, then {w1, ..., wt} is a K-basis of in<(α(V)). So for any α, α′ ∈ U, where U is the Zariski open set defined in lemma 1.4.19, both in<(α(V)) and in<(α′(V)) have the same K-basis {w1, ..., wt}. Therefore, for any α, α′ ∈ U,

in<(α(V)) = in<(α′(V)).

With the tools we have developed, we can give a proof of theorem 1.4.16.

Proof. Let J ⊆ E be a graded ideal, J = ⊕_{d=0}^{n} Jd. For each d with Jd ≠ 0, apply lemma 1.4.19 to V = Jd to obtain a nonempty Zariski open set Ud ⊆ GL(n; K); if Jd = 0, set Ud = GL(n; K). For each d choose αd ∈ Ud and define

J̃d = in<(αd(Jd)).

By observation 1.4.20, this does not depend on the choice of αd ∈ Ud. Now let

J̃ = ⊕_{d=1}^{n} J̃d.

This is an ideal of E: recall that Ud ∩ Ud+1 ≠ ∅; then for any d and α ∈ Ud ∩ Ud+1,

E1 ∧ J̃d = E1 ∧ in<(α(Jd)) ⊆ in<(α(Jd+1)) = J̃d+1,

which shows that J̃ is an ideal.

Now we would like to show that there is a Zariski open set U ⊆ GL(n; K) such that J̃ = in<(α(J)) for all α ∈ U, or equivalently, J̃d = in<(α(Jd)) for all d and all α ∈ U.

Consider a set of generators for J̃ and let d̄ be the maximum degree of a generator. Define U = U1 ∩ ··· ∩ Ud̄. To show that this U has the desired property we use induction on d. The base case of our induction is d ≤ d̄: by definition of U we have J̃d = in<(α(Jd)) for all α ∈ U. Now let d > d̄; we have, for all α ∈ U,

J̃d = E1 ∧ J̃d−1 = E1 ∧ in<(α(Jd−1)) ⊆ in<(α(Jd)),   (1.2)

where the first equality holds because all generators of J̃ are of degree < d, and the second holds by the induction hypothesis.

Now observe that

dimK(J̃d) = dimK(αd(Jd)) = dimK(α(Jd)) = dimK(in<(α(Jd))),

because α, αd ∈ GL(n; K) are automorphisms and passing to the initial space preserves dimension. Thus equality must hold in (1.2), and the theorem follows.

The proof of theorem 1.4.16 actually yields a bit more information about the generic initial ideal, summarized in the following:

Corollary 1.4.21. Let, as before, J ⊆ E be a graded ideal. If t = dimK Jd and w1 ∧ ··· ∧ wt is the monomial generating ∧^t gin<(J)d, then

w1 ∧ ··· ∧ wt = max{ in<(f) | 0 ≠ f ∈ ∧^t α(J)d, α ∈ GL(n; K) }.

We have therefore proven the existence of generic initial ideals. They will be of key importance in the following chapter, when we discuss the Kruskal-Katona theorem, because of the stability properties that we will show in the next section, and because of the following simple observation:

Proposition 1.4.22. The generic initial ideal gin<(J) of a graded ideal J has the same Hilbert function as J.

Proof. We have seen that dimK Ji = dimK in<(J)i, which implies that the initial ideal has the same Hilbert function as J. Moreover, for any α ∈ GL(n; K), the K-vector spaces Ji and α(Ji) have the same dimension, because α maps a K-basis to a K-basis. Thus

dimK gin<(J)i = dimK in<(α(Ji)) = dimK α(Ji) = dimK Ji,

as desired.

Of course this is equivalent to saying that the Hilbert functions of the quotients are equal:

HE/J(t) = HE/gin<(J)(t).

1.4.5 Stability properties of generic initial ideals

The upper elementary matrices in GL(n; K) are the matrices α = (aij) such that aii = 1 for all i and there is exactly one nonzero off-diagonal entry akl, with k < l. The subgroup B ⊆ GL(n; K) of all nonsingular upper triangular matrices is called the Borel subgroup of GL(n; K); it is generated by the nonsingular diagonal matrices together with the upper elementary matrices.

We will show that generic initial ideals are stable under the action of the Borel subgroup. Precisely:

Definition 1.4.23. A graded monomial ideal J ⊆ E is Borel-fixed if it is fixed under the action of the Borel subgroup, that is,

α(J) = J for all α ∈ B.

Theorem 1.4.24. Let J ⊆ E be a graded ideal and < a monomial order. Then gin<(J) is a Borel-fixed ideal.

Proof. Observe first that invertible diagonal matrices fix every monomial ideal: if d1, ..., dn are the diagonal entries of a diagonal matrix δ, then

δ(ei1 ∧ ei2 ∧ ··· ∧ eik) = (di1 di2 ··· dik) · ei1 ∧ ei2 ∧ ··· ∧ eik,

and since δ is invertible the product di1 ··· dik is nonzero.

Suppose now for contradiction that there is an element α ∈ B such that α(gin<(J)) ≠ gin<(J). Since B is generated by nonsingular diagonal and upper elementary matrices, and diagonal matrices fix monomial ideals, we may assume that α is an upper elementary matrix.


There is thus some d ∈ Z+ such that α(gin<(J)d) ≠ gin<(J)d. Let t = dimK Jd = dimK gin<(J)d. There is a natural action of α on ∧^t Ed as a K-linear map, given by

α(w1 ∧ ··· ∧ wt) = α(w1) ∧ ··· ∧ α(wt)

for all monomials w1 ∧ ··· ∧ wt of ∧^t Ed.

Let w1, ..., wt be a K-basis of gin<(J)d. Then w = w1 ∧ ··· ∧ wt is a monomial generating ∧^t gin<(J)d, and since α(gin<(J)d) ≠ gin<(J)d and α is an upper triangular matrix, it follows that there is some monomial u ∈ ∧^t Ed such that u ∈ supp(α(w)) and u > w.

Let β ∈ GL(n; K) be such that gin<(J) = in<(β(J)), and let f1, ..., ft be a K-basis of β(J)d with in<(fi) = wi. Define

f = f1 ∧ ··· ∧ ft.

We will reach a contradiction by showing that u ∈ supp(γ(f)) for a suitable γ ∈ B, which is in contrast with corollary 1.4.21.

We will construct the element γ ∈ B in the form γ = αδ, where δ is a suitable diagonal matrix. Applying a diagonal matrix to f maps each monomial v = v1 ∧ ··· ∧ vt ∈ supp(f) to a scalar multiple of itself. We group the monomials of the support of f according to the scalar m = m(d1, ..., dn) by which they are multiplied under the action of δ. We can thus write

f = f_{m0} + Σ_{m≠m0} f_m,

where f_{m0} = a · w1 ∧ ··· ∧ wt = a · in<(f), a ∈ K \ {0}. Its image under δ is then

δ(f) = a · m0 · in<(f) + Σ_{m≠m0} m · f_m.

Now we apply α and obtain

(αδ)(f) = a · m0 · α(in<(f)) + Σ_{m≠m0} m · α(f_m).

Since u appears in α(in<(f)) with some coefficient c ∈ K, c ≠ 0, the coefficient of u in (αδ)(f) is

a·c·m0 + Σ_{m∈W} c_m · m,

where c_m ∈ K and W is the set of all m for which u ∈ supp(α(f_m)). Now let p = a·c·m0 + Σ_{m∈W} c_m · m; this is a nonzero polynomial in the variables d1, ..., dn (recall that the scalars m, m0 are functions of d1, ..., dn), and since K is infinite, there exist d1, ..., dn ∈ K such that p(d1, ..., dn) ≠ 0. Thus if δ is the diagonal matrix with diagonal (d1, ..., dn) and γ = αδ, we have u ∈ supp(γ(f)), which is our contradiction.

The stability property in which we will truly be interested later is the following:

Definition 1.4.25. A monomial ideal J ⊆ E is called strongly stable if for each monomial eF ∈ J, each j ∈ F and each i < j,

eF\{j}∪{i} ∈ J.

In other words, in a strongly stable ideal one can substitute any variable of a monomial with a larger one (that is, one with smaller index) and the resulting monomial will still belong to the ideal. We will use the fact that generic initial ideals are Borel-fixed to show that they are also strongly stable:

Theorem 1.4.26. The generic initial ideal of a graded ideal J ⊆ E is strongly stable.

Proof. Suppose for contradiction that gin<(J) is not strongly stable. Then there is a monomial eF ∈ gin<(J) and indices j ∈ F, i < j, such that ei ∧ eF\{j} ∉ gin<(J). Consider the element of the Borel subgroup α ∈ B defined by α(ej) = ei + ej and α(ek) = ek for all k ≠ j. We have

α(eF) = (ei + ej) ∧ eF\{j} = eF + ei ∧ eF\{j},

hence α(eF) does not belong to gin<(J). This is a contradiction, because by theorem 1.4.24, gin<(J) is a Borel-fixed ideal.
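Because the strongly stable condition is purely combinatorial, it is easy to test mechanically. The following Python sketch (an illustration, not part of the thesis) represents a squarefree monomial eF by its support set F and checks Definition 1.4.25 for a finite set of monomials:

```python
def is_strongly_stable(monomials):
    """Check the strongly stable condition: for every e_F in the set,
    every j in F and every i < j with i not in F, the monomial
    e_{(F \\ {j}) ∪ {i}} must also lie in the set.
    (If i is already in F, the exterior product vanishes, so that case is skipped.)"""
    mons = {frozenset(F) for F in monomials}
    return all(
        frozenset(F - {j} | {i}) in mons
        for F in mons
        for j in F
        for i in range(1, j)
        if i not in F
    )

print(is_strongly_stable([{1, 2}, {1, 3}, {2, 3}]))  # → True
print(is_strongly_stable([{2, 3}]))                  # {1,2} and {1,3} missing → False
```

For instance, the degree-2 part {e1∧e2, e1∧e3, e2∧e3} is strongly stable, while {e2∧e3} alone is not, since e1∧e3 is missing.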

The Kruskal-Katona Theorem

Which vectors in Z^d_{≥0} are f-vectors of some simplicial complex?

This is a very natural question to ask on encountering simplicial complexes. In this chapter we will study the Kruskal-Katona theorem, which gives a complete characterization of the f-vectors of simplicial complexes.

It is a fundamental theorem of extremal set theory, and we will first give a combinatorial proof of it, following the proof idea presented in [5], and then an algebraic proof, which is outlined in [6].

The main idea behind these proofs is to build a special class of simplicial complexes (called compressed simplicial complexes), and in the algebraic setting a special class of ideals, which are easier to work with but which realize all possible f-vectors.

2.1 Kruskal-Katona from a combinatorial perspective

In the following we denote by (N choose k) the family of all subsets of N of size k. Note that we will always denote by N the set of strictly positive integers, while we will use Z≥0 for the nonnegative integers.

For a simplicial complex ∆, we write ∆k for the set of its faces of a fixed dimension k. Thus the condition that ∆ is a simplicial complex can be rewritten as ∂∆k ⊆ ∆k−1 for all k ≥ 1. Recall that the boundary ∂F of a family F of k-sets is the family of all the (k−1)-sets that are subsets of some set in F.

Before we can talk about the Kruskal-Katona theorem, we need to define a particular simplicial complex, the compressed simplicial complex. To do so, we introduce the reverse lexicographic order (r-lex order) on k-subsets of N: for any A, B ⊆ N of size k,

A ≺ B

if and only if A ≠ B and the largest element in which A and B differ is in B. For example, if k = 3 the first elements of (N choose k) are

123 ≺ 124 ≺ 134 ≺ 234 ≺ 125 ≺ 135 ≺ 235 ≺ 145 ≺ 245 ≺ 345 ≺ 126 ≺ ...

Now a compressed simplicial complex is a simplicial complex where, for each dimension k, ∆k is an initial segment of (N choose k+1) in the r-lex order, that is, the k-dimensional faces of ∆ are the first fk(∆) elements of (N choose k+1) in the r-lex order.
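Computationally, the r-lex order is conveniently realized by comparing the elements of each set in decreasing order; the following Python sketch (an illustration, not part of the thesis) reproduces the list above:

```python
from itertools import combinations

def rlex_key(A):
    # A ≺ B iff the largest element where A and B differ lies in B; comparing
    # the decreasing sequences of elements lexicographically realizes exactly this.
    return tuple(sorted(A, reverse=True))

triples = sorted(combinations(range(1, 7), 3), key=rlex_key)
print([''.join(map(str, t)) for t in triples[:11]])
# → ['123', '124', '134', '234', '125', '135', '235', '145', '245', '345', '126']
```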

Lemma 2.1.1 (k-binomial representation). Given positive integers m and k, there exists a unique way of writing m in the form

m = C(ak, k) + C(ak−1, k−1) + ··· + C(aj, j)

for some integers ak > ak−1 > ··· > aj ≥ j ≥ 1, where C(a, i) denotes the binomial coefficient "a choose i". This is called the k-binomial representation of m.

Proof. We first prove the existence of such a representation, using the convention that C(a, i) = 0 if a < i. Define, for each i ≤ k,

ai = max{ a ∈ N | m ≥ C(ak, k) + C(ak−1, k−1) + ··· + C(ai+1, i+1) + C(a, i) }.

Then ai > ai−1. If not, then by the definition of ai−1,

m ≥ C(ak, k) + ··· + C(ai, i) + C(ai−1, i−1) ≥ C(ak, k) + ··· + C(ai, i) + C(ai, i−1) = C(ak, k) + ··· + C(ai + 1, i),

which contradicts the maximality of ai.

As for uniqueness, suppose there are two representations

m = C(ak, k) + C(ak−1, k−1) + ··· + C(aj, j) and m = C(bk, k) + C(bk−1, k−1) + ··· + C(br, r),

and suppose that ak > bk. Then

m = C(bk, k) + C(bk−1, k−1) + ··· + C(br, r) ≤ C(bk, k) + C(bk−1, k−1) + ··· + C(bk−k+1, 1) = C(bk+1, k) − 1.

This yields m < C(bk+1, k) ≤ C(ak, k) ≤ m, which is a contradiction.

Another way to prove the previous lemma, which is perhaps more insightful, is through the following observation:

Lemma 2.1.2. Let Am+1 be the (m+1)-th element of (N choose k) in reverse lexicographic order, and suppose Am+1 = {a1+1, a2+1, ..., ak+1}, with a1 < a2 < ··· < ak. Then, using the convention that C(a, i) = 0 if a < i,

m = C(ak, k) + C(ak−1, k−1) + ··· + C(a1, 1).

Proof. The family of sets F = {B ∈ (N choose k) | B ≺ Am+1} can be partitioned as

F = Fk ⊔ Fk−1 ⊔ ··· ⊔ F1,

where Fk is the family containing all k-sets B ⊆ [ak], so those k-sets that only contain elements ≤ ak, and, for a general i, Fi contains all k-sets B of the form B = {ak+1, ..., ai+1+1} ∪ B′, where B′ ⊆ [ai] has size i.

Observe that |Fi| = C(ai, i). Thus there are exactly C(ak, k) + C(ak−1, k−1) + ··· + C(a1, 1) k-sets smaller than Am+1 in the r-lex order, and since Am+1 is preceded by exactly m sets, the thesis follows.
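This index formula is easy to confirm by brute force: enumerate, say, the 3-subsets of [8] in r-lex order and compare each position with the binomial sum. A Python check (illustration only):

```python
from itertools import combinations
from math import comb

def rlex_key(A):
    return tuple(sorted(A, reverse=True))

k, n = 3, 8
all_sets = sorted(combinations(range(1, n + 1), k), key=rlex_key)
for m, A in enumerate(all_sets):       # A is the (m+1)-th set in r-lex order
    a = [x - 1 for x in A]             # A = {a_1+1, ..., a_k+1}, a_1 < ... < a_k
    assert m == sum(comb(a[i], i + 1) for i in range(k))
print("Lemma 2.1.2 verified for all 3-subsets of [8]")
```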

A simple consequence of this lemma is a characterization of the boundary of an initial segment (in the r-lex order):

Lemma 2.1.3. Let F be an initial segment in (N choose k), and let ∂F denote the boundary of F. If |F| = m = C(ak, k) + C(ak−1, k−1) + ··· + C(aj, j) is the k-binomial representation of m, with ak > ··· > aj ≥ j, then

|∂F| = C(ak, k−1) + C(ak−1, k−2) + ··· + C(aj, j−1).

Proof. Actually, we will show more, namely exactly which sets belong to the boundary. Observe that, given m as above, the (m+1)-th set in the reverse lexicographic order is

Am+1 = {a1+1, ..., ak+1} = {1, 2, ..., j−1, aj+1, ..., ak+1}.

In lemma 2.1.2 we explicitly said which sets are smaller than Am+1. Analogously, we can partition ∂F into: the (k−1)-sets which contain no element greater than or equal to ak+1, of which there are C(ak, k−1); the (k−1)-sets which contain ak+1 but have all other elements smaller than ak−1+1, of which there are C(ak−1, k−2); and so on, with the last sets to be added being those (k−1)-sets which contain ak+1, ..., aj+1+1 and otherwise only elements ≤ aj, of which there are C(aj, j−1).

Observe that these sets are precisely the sets smaller than B = {1, 2, ..., j−2, aj+1, ..., ak+1} in the reverse lexicographic order, and thus ∂F is also an initial segment.

In the proof of the lemma we saw that the boundary of an initial segment is also an initial segment. Therefore

Observation 2.1.4. If F ⊆ 2[n] is a compressed family of sets, then its boundary ∂F is also compressed.

For convenience we introduce the following notation:

Definition 2.1.5. If m is an integer and m = C(ak, k) + C(ak−1, k−1) + ··· + C(aj, j) is its k-binomial representation, then we define the operator ∂k evaluated at m by

∂k(m) = C(ak, k−1) + C(ak−1, k−2) + ··· + C(aj, j−1).
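Combined with the greedy construction from the existence proof of Lemma 2.1.1, the operator ∂k is one line of code; the sketch below (Python, illustration only) reproduces the boundary count of Lemma 2.1.3:

```python
from math import comb

def k_binomial_rep(m, k):
    # greedy k-binomial representation, as in the existence proof of Lemma 2.1.1
    rep, rest, i = [], m, k
    while rest > 0:
        a = i
        while comb(a + 1, i) <= rest:
            a += 1
        rep.append((a, i))
        rest -= comb(a, i)
        i -= 1
    return rep

def boundary_op(m, k):
    """∂k(m): the size of the boundary of an r-lex initial segment of m k-sets."""
    return sum(comb(a, i - 1) for a, i in k_binomial_rep(m, k))

# |F| = 9 three-sets: 9 = C(4,3) + C(3,2) + C(2,1), hence
# |∂F| = C(4,2) + C(3,1) + C(2,0) = 6 + 3 + 1 = 10
print(boundary_op(9, 3))  # → 10
```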

The strategy for finding a characterization of f-vectors will be to show that, given any simplicial complex ∆ with f-vector f = (f0, ..., fd), there exists a (unique) compressed simplicial complex with the same f-vector f. Dealing with compressed complexes will then be easy. To this end, we define the compression operator:

Definition 2.1.6. Given a family of k-sets F, its compression CF is the family of k-sets given by the first |F| k-sets in reverse lexicographic order.

If ∆ is a simplicial complex, then its compression C∆ is the family of sets obtained by applying the operator C defined above separately to each ∆i, the family of faces of dimension i.

Observe that we do not know, at this stage, whether the compression of a simplicial complex is itself a simplicial complex: this will actually be the crucial step of the characterization, and we will prove in Theorem 2.1.8 that the compression is indeed a simplicial complex.
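The compression operator is immediate to implement for families on a bounded ground set; a Python sketch (illustration only):

```python
from itertools import combinations

def rlex_key(A):
    return tuple(sorted(A, reverse=True))

def compress(family, k, n):
    """CF: the first |F| k-subsets of [n] in reverse lexicographic order."""
    universe = sorted(combinations(range(1, n + 1), k), key=rlex_key)
    return set(universe[:len(family)])

F = {(1, 2, 6), (2, 4, 5), (1, 3, 4)}
print(sorted(compress(F, 3, 6)))  # → [(1, 2, 3), (1, 2, 4), (1, 3, 4)]
```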

We can finally state the Kruskal-Katona theorem:

Theorem 2.1.7 (Kruskal-Katona). Let f = (f0, ..., fd) be a vector of positive integers. Then the following are equivalent:

1. There exists a simplicial complex ∆ with f(∆) = f;

2. For each 0 ≤ k ≤ d, ∂k(fk) ≤ fk−1;

3. If ∆̄ is the set family obtained by choosing, for each 0 ≤ k ≤ d, ∆̄k to be the first fk sets of (N choose k+1) in r-lex order, then ∆̄ is a simplicial complex.

Proof. Based on our previous discussion of initial segments, it is easy to show that 2 and 3 are equivalent: ∆̄ is a simplicial complex if and only if ∂∆̄k ⊆ ∆̄k−1 for all k, but (since we are dealing with initial segments) this is equivalent to saying that the cardinality of the first family must not be greater than that of the second family, that is, ∂k(fk) ≤ fk−1.

It is obvious that 3 implies 1, since ∆̄ has f-vector equal to f.

It remains to be shown that 1 implies 3, which is the interesting direction of the proof. This will follow from the observation that ∂CF ⊆ C∂F whenever F is a family of k-sets. This is Theorem 2.1.8, which we will prove below.

Indeed, if ∆ is a simplicial complex as in 1, observe that its compression C∆ is, by definition, exactly the set family ∆̄ defined in 3. This is indeed a simplicial complex, since for every 0 ≤ k ≤ d we have

∂C∆k ⊆ C∂∆k ⊆ C∆k−1,

where the second containment follows from the fact that ∆ is a simplicial complex, while the first is a direct application of Theorem 2.1.8.


The main idea behind the Kruskal-Katona theorem is to show that, associated to each simplicial complex ∆, there is a canonical simplicial complex which depends only on the f-vector. We will see that the algebraic proof follows the same core idea. In the proof above we saw that this key idea followed from:

Theorem 2.1.8. Let F be a family of k-sets. Then ∂CF ⊆ C∂F.

Actually, Theorem 2.1.8 is equivalent to the statement that, for any simplicial complex ∆, its compression C∆ is also a simplicial complex. To see that this statement implies Theorem 2.1.8, suppose by contradiction there is a family F of k-sets for which ∂CF ⊈ C∂F. Then let ∆ be the simplicial complex with set of facets F, that is, ∆ is (k−1)-dimensional and ∆k−1 = F, ∆k−2 = ∂F, and so on. This is a simplicial complex, but its compression is not:

∂C∆k−1 = ∂CF ⊈ C∂F = C∆k−2,

which then contradicts our original statement. We will now prove the key theorem 2.1.8:

Proof. It will be easier to prove the following statement, which is equivalent to the above formulation of the theorem:

If F ⊆ (N choose k), G ⊆ (N choose k−1), and ∂F ⊆ G, then ∂CF ⊆ CG.

Let n ∈ N be the largest integer which belongs to some set in F or G. For each i, 1 ≤ i ≤ n, we define an operator Ci, called the i-compression: let ([n] choose k)_i be the family of all k-sets on ground set [n] which contain i, and ([n] choose k)_ī its complement, that is, the family of all k-sets which do not contain i. The r-lex order induces by restriction an order on each of these families. We define the i-compression CiF of a family of k-sets F to be the set family containing the first |F ∩ ([n] choose k)_i| sets of ([n] choose k)_i and the first |F ∩ ([n] choose k)_ī| sets of ([n] choose k)_ī in these orders. We say that a family F is i-compressed if CiF = F.

The proof will proceed by induction on n. Note that, by definition, |CiF| = |F|.

The first step is to prove that if F ⊆ ([n] choose k), G ⊆ ([n] choose k−1), and ∂F ⊆ G, then ∂CiF ⊆ CiG; this follows from the inductive hypothesis, as we will now see.

Observe that ([n] choose k)_i with the restricted r-lex order is isomorphic (as an ordered set) to ([n−1] choose k−1) with the r-lex order, since i is a fixed element present in all the sets; similarly ([n] choose k)_ī is isomorphic to ([n−1] choose k). Let

F = F0 ⊔ F1′,

where F0 consists of the sets of F not containing i and F1′ of the sets of F containing i. Let also F1 = {F \ {i} | F ∈ F1′}, which is isomorphic to F1′ in the sense mentioned above. Then

∂F = (∂F0 ∪ F1) ⊔ (∂F1 × {i}),

where ∂F1 × {i} denotes {B ∪ {i} | B ∈ ∂F1}; thus ∂F0 ∪ F1 contains those sets in the boundary of F not containing i, and ∂F1 × {i} consists of the sets containing i. Hence Ci∂F contains the first |∂F0 ∪ F1| sets not containing i in r-lex order and the first |∂F1 × {i}| sets containing i in r-lex order.

Now observe that, since CiF = CiF0 ⊔ CiF1′,

∂CiF = (∂CiF0 ∪ CiF1) ⊔ (∂CiF1 × {i}),

where, just as before, the first family consists of those boundary sets not containing i and the second of those containing i. These two families are obviously disjoint.

We now apply the induction hypothesis to obtain

∂CiF0 ⊆ Ci∂F0 and ∂CiF1 ⊆ Ci∂F1.

We can apply the inductive hypothesis because we have seen that F0 and F1 are isomorphic to families contained respectively in ([n−1] choose k) and ([n−1] choose k−1), and because under these isomorphisms the i-compression translates to the standard compression.

Then

|∂CiF1 × {i}| = |∂CiF1| ≤ |Ci∂F1| = |∂F1|,

where the latter number is the number of sets containing i in Ci∂F; and since ∂CiF0 and CiF1 are both initial segments,

|∂CiF0 ∪ CiF1| = max{|∂CiF0|, |CiF1|} ≤ |∂F0 ∪ F1|,

where the last number is the number of sets in Ci∂F not containing i. Together these two inequalities yield

∂CiF ⊆ Ci∂F ⊆ CiG,

as desired.

The second step of the proof is to show that by successively applying i-compressions to F and G for various i, we will obtain families F′ and G′ which are i-compressed for all i. To show this, we follow the very simple algorithm of, at every step, i-compressing with i chosen to be the minimum i for which the family is not already i-compressed. Only a finite number of steps is needed: if a family of k-sets is not i-compressed, applying the i-compression performs some exchange, substituting a smaller (in r-lex order) set previously not in the family for a larger set which was in it. This can only happen a finite number of times, since there are only finitely many sets smaller than those in our original family, and thus after a finite number of steps we have a family which is i-compressed for all i. By the first step, we then have ∂F′ ⊆ G′.

Except for one case, namely 2k = n + 1, we are now able to conclude: if n + 1 ≠ 2k, a family H′ which is i-compressed for all 1 ≤ i ≤ n is also compressed. Indeed, suppose that H′ is a family which is i-compressed for all i but not compressed. Let X ∈ H′ denote the largest set of H′ in the r-lex order, and let Y ∈ ([n] choose k) be any set which is smaller than X and is not in H′ (such a set must exist if H′ is not compressed). Then Y and X cannot have any element in common: if j were a common element, H′ would not be j-compressed, because the j-compression would substitute Y for X. Neither can they both omit some element l, else by the same token H′ would not be l-compressed. Thus X and Y must be complementary sets, and since both are k-sets contained in [n], 2k = n + 1 must hold. So if we are in the case n + 1 ≠ 2k, F′ and G′ are compressed and we are done.

Finally, if 2k = n + 1, we can modify the family F′ (and analogously G′), i-compressed for all i, into a family F″ which is compressed: as we observed above, the only set smaller than X that can fail to be in F′ is its complement Y = [2k] \ X. In this case, it is thus clear that F″ = (F′ \ {X}) ∪ {Y} is compressed, and the relation ∂F″ ⊆ G″ still holds, which concludes the proof.
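Although the proof above is entirely combinatorial, the key containment ∂CF ⊆ C∂F is easy to stress-test numerically; the following Python sketch (illustration only) checks it on random families of 3-sets in [6]:

```python
import random
from itertools import combinations

def rlex_key(A):
    return tuple(sorted(A, reverse=True))

def compress(family, k, n):
    universe = sorted(combinations(range(1, n + 1), k), key=rlex_key)
    return set(universe[:len(family)])

def boundary(family):
    # all (k-1)-subsets of some member (tuples are kept sorted)
    return {s[:i] + s[i + 1:] for s in family for i in range(len(s))}

random.seed(0)
k, n = 3, 6
all_k = list(combinations(range(1, n + 1), k))
for _ in range(500):
    F = set(random.sample(all_k, random.randint(1, len(all_k))))
    assert boundary(compress(F, k, n)) <= compress(boundary(F), k - 1, n)
print("∂CF ⊆ C∂F holds on 500 random families")
```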

2.2 Kruskal-Katona from an algebraic perspective

Recall that, for a simplicial complex ∆, we defined the exterior face ring of ∆ as K{∆} = E/J∆, where E is the exterior algebra of the K-vector space V = ⊕_{i=1}^{n} K ei and J∆ ⊆ E is the graded ideal generated by all exterior monomials eF = ei1 ∧ ei2 ∧ ··· ∧ eik such that F = {i1 < i2 < ··· < ik} ∉ ∆.

We have observed that K{∆} is a graded K-algebra and that, if f = (f0, f1, ..., fd) is the f-vector of ∆ and f−1 = 1, then

dimK K{∆}i = fi−1.

In particular, the Hilbert series of K{∆} is

HK{∆}(t) = Σ_{i=0}^{n} fi−1 t^i.

Finding a characterization of f-vectors of simplicial complexes is therefore equivalent to determining the possible Hilbert functions of graded algebras of the form E/J, where J is any graded ideal. To see this, the following two observations are crucial. First, note that there is a bijection between monomial ideals of the exterior algebra and simplicial complexes, given by

∆ ←→ J∆,

whereas in the case of the symmetric algebra the one-to-one correspondence is between squarefree monomial ideals and simplicial complexes. Of course, this is because the exterior algebra is itself squarefree.

Moreover, recall that given any graded ideal in the exterior algebra, its initial ideal (with respect to any monomial order) has the same Hilbert function.

We will always assume that the base field is infinite; otherwise we may pass to an infinite extension of the field. Let J ⊆ E be a graded ideal. We have seen that not only the initial ideal but also the generic initial ideal has the same Hilbert function as J, and that the generic initial ideal is strongly stable. Thus HE/J = HE/gin(J), and we may assume that J itself is strongly stable. Whenever we use a monomial order on the monomials of the exterior algebra, we will take the lexicographic order induced by e1 > e2 > ··· > en, denoted ≤lex.

Denote by Monj(E) the set of all monomials of E of degree j. Then:

Definition 2.2.1. A subset L ⊆ Monj(E) is a lexsegment if there is an element u ∈ L such that L = {v ∈ Monj(E) | v ≥lex u}.

L ⊆ Monj(E) is called strongly stable if for all u ∈ L, all k such that ek | u and all i < k, one has ei ∧ (u/ek) ∈ L.

For any monomial u ∈ Monj(E), let m(u) be the maximum index i such that ei | u. Then a subset L ⊆ Monj(E) is stable if ei ∧ (u/em(u)) ∈ L for all u ∈ L and all i < m(u).

A monomial ideal J is called a lexsegment ideal or a (strongly) stable monomial ideal if, for each j, the set Lj = Jj ∩ Monj(E) of monomials of J of degree j is a lexsegment or a (strongly) stable set, respectively.

Clearly lexsegment implies strongly stable, which in turn implies stable; all these implications are strict.


Observe that ∆ is a compressed simplicial complex if and only if J∆ is a lexsegment ideal.

Indeed, the algebraic proof of the Kruskal-Katona theorem will have the same structure as the combinatorial one: for any graded monomial ideal J, we will try to build a lexsegment ideal with the same Hilbert function. There is only one possible candidate, namely

J^lex = ⊕_{j=0}^{d} J^lex_j,

where J^lex_j is the K-vector space spanned by the lexsegment Lj of length |Lj| = dimK Jj. The key to the Kruskal-Katona theorem will be to show that this is indeed an ideal, just as in the combinatorial setting it was to show that the compression of a simplicial complex is a simplicial complex.

Definition 2.2.2. For any N ⊆ Mon(E), we call the set

Shad(N) = {e1, e2, ..., en} ∧ N = {ei ∧ u ≠ 0 | u ∈ N, i = 1, ..., n}

the shadow of N.

Observe that this is, in an informal sense, the opposite of the boundary operator for simplicial complexes.

Since J^lex_j is spanned by the monomials Lj, it is clear that J^lex is an ideal if and only if

Shad(Lj) ⊆ Lj+1 for all j.

As we said, this is the crucial step. To prove it, we will need two lemmas.

Definition 2.2.3. For N ⊆ Monj(E) and an integer i ∈ [n] we let

mi(N) = |{u ∈ N : m(u) = i}| and m≤i(N) = m1(N) + m2(N) + ··· + mi(N),

where the number m(u), given in the previous definition, is the largest index of a variable which divides u.

These numbers allow us to calculate the cardinality of the shadow of a stable set, as shown in the following:


Lemma 2.2.4. Let N ⊆ Monj(E) be a stable set of monomials. Then its shadow is also a stable set and the following hold:

• mi(Shad(N)) = m≤i−1(N);

• |Shad(N)| = Σ_{i=1}^{n−1} m≤i(N).

Proof. The second statement is a consequence of the first. Let us then prove the first statement. We show that there is a bijection

{u ∈ N : m(u) ≤ i−1} −→ {v ∈ Shad(N) : m(v) = i}

given by u ↦ u ∧ ei.

The map is injective by definition. To prove surjectivity, let v ∈ Shad(N) be such that m(v) = i. There must exist w ∈ N such that v = ek ∧ w for some k, and k ≤ i since the maximum index of a variable appearing in v is i. If k = i, then m(w) ≤ i−1 and we are done. If k < i, then m(w) = i and we can apply the stability property of N: u = ek ∧ (w/em(w)) ∈ N, with m(u) ≤ i−1 and v = ±ei ∧ u.
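The bijection in this proof is concrete enough to check by machine. The Python sketch below (illustration only) identifies a squarefree monomial of degree j with its sorted tuple of variable indices, computes the shadow, and verifies both statements of Lemma 2.2.4 for a small stable set:

```python
def shadow(N, n):
    """Shad(N): all nonzero products e_i ∧ u, i.e. all (j+1)-sets
    obtained by adding one new variable to a monomial of N."""
    return {tuple(sorted(u + (i,))) for u in N for i in range(1, n + 1) if i not in u}

n = 5
N = {(1, 2), (1, 3), (2, 3), (1, 4)}       # a stable set of degree-2 monomials
S = shadow(N, n)

for i in range(1, n + 1):                  # m_i(Shad(N)) = m_{<= i-1}(N)
    assert sum(1 for v in S if max(v) == i) == sum(1 for u in N if max(u) <= i - 1)

# |Shad(N)| = sum of m_{<= i}(N) for i = 1, ..., n-1
assert len(S) == sum(sum(1 for u in N if max(u) <= i) for i in range(1, n))
print(sorted(S))
```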

The following lemma tells us that the shadow of a lexsegment is minimal among the shadows of strongly stable sets of a fixed cardinality.

Lemma 2.2.5. Let N ⊆ Monj(E) be a strongly stable set and L ⊆ Monj(E) a lexsegment with |L| = |N|. Then, for all i = 1, 2, ..., n,

m≤i(L) ≤ m≤i(N).

Proof. The proof will be by induction on n, the number of variables. Observe that we can write (uniquely)

N = N′ ∪ N″ ∧ en,

where N′ and N″ are strongly stable sets of monomials in the variables e1, ..., en−1; there is an analogous decomposition for L,

L = L′ ∪ L″ ∧ en,

where L′ and L″ are lexsegments in the variables e1, ..., en−1.

The statement m≤i(L) ≤ m≤i(N) is trivial for i = n, since m≤n(N) = |N| = |L| = m≤n(L).

If we set i = n−1, we have m≤n−1(N) = |N′| and m≤n−1(L) = |L′|, since N′ and L′ contain exactly those monomials which do not have en as a factor. Thus the inequality we want to prove becomes

|L′| ≤ |N′|.   (2.1)

Once we have proven inequality (2.1), we are done: we can apply the induction hypothesis to L′ and N′, and thus for all i ≤ n−1,

m≤i(L) = m≤i(L′) ≤ m≤i(N′) = m≤i(N),

since m≤i(N) = m≤i(N′) and m≤i(L) = m≤i(L′) (again because for i ≤ n−1 these numbers only count monomials which do not have en as a factor).

Thus it remains to prove (2.1). Observe that we may assume that N′ and N″ are lexsegments. If they are not, we replace them respectively by N*, N**, where these are lexsegments in the variables e1, ..., en−1 of lengths |N*| = |N′| and |N**| = |N″|. We claim that Ñ = N* ∪ N** ∧ en is again strongly stable. To show this, it is enough to show that {e1, ..., en−1} ∧ N** ⊆ N*; and since these are lexsegments, it is enough to show that the cardinality of the left-hand side is at most that of the right-hand side. We apply our induction hypothesis and the previous lemma:

|{e1, ..., en−1} ∧ N**| = Σ_{i=1}^{n−2} m≤i(N**) ≤ Σ_{i=1}^{n−2} m≤i(N″) = |{e1, ..., en−1} ∧ N″| ≤ |N′| = |N*|.

Given any monomial u, let ū = u if u does not contain the variable en, while if u = u′ ∧ en, let ū = u′ ∧ ek (adjusting for signs so that the coefficient of ū is positive), where k is the largest integer strictly less than n such that k ∉ supp(u′). This assignment is order-preserving, that is, if u ≤ v then ū ≤ v̄.

Now let u = min N, v = min L. Since N is strongly stable, ū ∈ N′. Thus min(N′) ≤ ū. On the other hand, min(N′) ∈ N, and so min(N′) ≥ u; again by order preservation, and since min(N′) is fixed by the bar operation, min(N′) ≥ ū. Thus min(N′) = ū. Similarly min(L′) = v̄.

We conclude by observing that L is the lexsegment of length |L| = |N|, so that v = min L ≥ min N = u. By order preservation v̄ ≥ ū, hence

min(L′) = v̄ ≥ ū = min(N′).

We are now done, since both L′ and N′ are lexsegments: the one with the larger minimum is contained in the other, so |L′| ≤ |N′|, which proves (2.1).

With these two lemmas at hand, we are able to prove the algebraic analogue of the key idea of the combinatorial proof of Kruskal-Katona, namely that the compression of a simplicial complex is again a simplicial complex:

Theorem 2.2.6. For any graded ideal J ⊆ E, there is a unique lexsegment ideal J^lex such that

HE/J(t) = HE/J^lex(t).

Proof. As previously said, it is enough to show that

Shad(Lj) ⊆ Lj+1,

where for each i, Li is the lexsegment of monomials of degree i of length |Li| = dimK Ji. We have observed previously that we may assume J to be strongly stable by passing to the generic initial ideal. Let Ni be the strongly stable set of monomials spanning Ji for each i. Then |Li| = |Ni|, and we can thus apply the previous lemmas:

|Shad(Lj)| = Σ_{i=1}^{n−1} m≤i(Lj) ≤ Σ_{i=1}^{n−1} m≤i(Nj) = |Shad(Nj)|.

Now since J is an ideal, Shad(Nj) ⊆ Nj+1, and thus

|Shad(Lj)| ≤ |Shad(Nj)| ≤ |Nj+1| = |Lj+1|.

Both Shad(Lj) and Lj+1 are lexsegments, and the theorem follows.

In order to characterize Hilbert functions of quotients of the exterior algebra by graded ideals, it is thus sufficient to characterize quotients E/J where J is a lexicographic ideal. Recall the k-binomial expansion of an integer which we proved in the previous section:

Lemma 2.2.7 ($j$-binomial representation). For positive integers $a$ and $j$, there exists a unique way of writing $a$ in the form
\[ a = \binom{a_j}{j} + \binom{a_{j-1}}{j-1} + \cdots + \binom{a_k}{k} \]
for some integers $a_j > a_{j-1} > \cdots > a_k \ge k \ge 1$.
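The representation is produced by a greedy procedure: pick $a_j$ maximal with $\binom{a_j}{j} \le a$, subtract, and repeat in one lower degree. A minimal sketch (illustrative; the function name is mine, and Python's `math.comb` is assumed):

```python
from math import comb

def binomial_rep(a, j):
    """Greedy j-binomial representation of a: a list of pairs (a_i, i) with
    a = sum of C(a_i, i), a_j > a_{j-1} > ... and a_i >= i."""
    rep = []
    while a > 0 and j >= 1:
        m = j
        while comb(m + 1, j) <= a:   # a_i maximal with C(a_i, i) <= remainder
            m += 1
        rep.append((m, j))
        a -= comb(m, j)
        j -= 1
    return rep

print(binomial_rep(11, 3))  # [(5, 3), (2, 2)], i.e. 11 = C(5,3) + C(2,2)
```

The uniqueness asserted by the lemma means this greedy choice is the only valid one.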


As earlier for compressed complexes, we observe that lexicographic ideals are easy to deal with. For any positive integers $i \le n$ and $j$, we denote by $\{e_i, \dots, e_n\}^j$ the set of all monomials of degree $j$ in the variables $e_i, \dots, e_n$. Recall that $L_u$ denotes the lexsegment of all monomials of a fixed degree greater than or equal to $u$.

Lemma 2.2.8. For $u \in \operatorname{Mon}_j(E)$, $u = e_{k_1} \wedge e_{k_2} \wedge \cdots \wedge e_{k_j}$ with $k_1 < k_2 < \cdots < k_j$,
\[ \operatorname{Mon}_j(E) \setminus L_u = \bigcup_{i=1}^{j} \{e_{k_i+1}, \dots, e_n\}^{j-i+1} \wedge \bigwedge_{r=1}^{i-1} e_{k_r}. \]
This is a disjoint union, and therefore we can calculate the cardinality of the set:
\[ |\operatorname{Mon}_j(E) \setminus L_u| = \sum_{i=1}^{j} \binom{a_i}{i}, \tag{2.3} \]
where $a_i = n - k_{j-i+1}$.
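Before the proof, formula (2.3) can be checked by brute force on small cases; the sketch below (illustrative, with names of my choosing) compares it against a direct count of the monomials below $u$.

```python
from itertools import combinations
from math import comb

def card_below(u, n):
    """Right-hand side of (2.3): sum of C(a_i, i) with a_i = n - k_{j-i+1},
    where u = (k_1 < ... < k_j) as a 0-indexed tuple."""
    j = len(u)
    return sum(comb(n - u[j - i], i) for i in range(1, j + 1))

n = 6
for j in range(1, 4):
    monos = list(combinations(range(1, n + 1), j))  # largest monomial first
    for pos, u in enumerate(monos):
        # monomials strictly below u in the order are exactly those after u
        assert card_below(u, n) == len(monos) - pos - 1
```

For instance, with $n = 6$ and $u = e_2 \wedge e_3 \wedge e_5$, the formula gives $\binom{1}{1} + \binom{3}{2} + \binom{4}{3} = 8$ monomials below $u$.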

Proof. The proof is by induction on $j$, the degree of the monomial $u$. Observe that
\[ \operatorname{Mon}_j(E) \setminus L_u = \{e_{k_1+1}, \dots, e_n\}^{j} \cup \big( \operatorname{Mon}_{j-1}(E) \setminus L_{e_{k_2} \wedge \cdots \wedge e_{k_j}} \big) \wedge e_{k_1}, \]
since a monomial $v$ smaller than $u$ either contains no variable greater than or equal to $e_{k_1}$, or it contains the variable $e_{k_1}$ and $v / e_{k_1}$ is smaller than $u / e_{k_1}$ in the order of the monomials of degree $j - 1$. The assertion then follows by applying the induction hypothesis.

Similarly to what we did in Definition 2.1.5, we define a new operator.

Definition 2.2.9. If $a$ is an integer and
\[ a = \binom{a_j}{j} + \binom{a_{j-1}}{j-1} + \cdots + \binom{a_k}{k} \]
is its $j$-binomial expansion, then we define the operator $\partial_j$ evaluated at $a$ by
\[ \partial_j(a) = \binom{a_j}{j+1} + \binom{a_{j-1}}{j} + \cdots + \binom{a_k}{k+1}. \]

An important property of the operator is its monotonicity:

Lemma 2.2.10. Let $a \ge b$ be positive integers. Then $\partial_j(a) \ge \partial_j(b)$.

Proof. Assume $a > b$, and let
\[ a = \binom{a_j}{j} + \binom{a_{j-1}}{j-1} + \cdots + \binom{a_t}{t}, \qquad b = \binom{b_j}{j} + \binom{b_{j-1}}{j-1} + \cdots + \binom{b_k}{k} \]
be their $j$-binomial expansions.

Recall the construction of the $a_i$: we take $a_j$ to be the maximal integer such that $a \ge \binom{a_j}{j}$, then $a_{j-1}$ to be the maximal integer such that $a - \binom{a_j}{j} \ge \binom{a_{j-1}}{j-1}$
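The greedy construction just described and the operator $\partial_j$ are easy to experiment with. The following sketch (illustrative, not from the thesis) computes $\partial_j$ via the $j$-binomial expansion and checks the monotonicity asserted by Lemma 2.2.10 on an initial range.

```python
from math import comb

def binomial_rep(a, j):
    """Greedy j-binomial representation of a: list of pairs (a_i, i)."""
    rep = []
    while a > 0 and j >= 1:
        m = j
        while comb(m + 1, j) <= a:   # a_i maximal with C(a_i, i) <= remainder
            m += 1
        rep.append((m, j))
        a -= comb(m, j)
        j -= 1
    return rep

def partial(a, j):
    """The operator of Definition 2.2.9: replace each C(a_i, i) in the
    j-binomial expansion of a by C(a_i, i + 1)."""
    return sum(comb(m, i + 1) for m, i in binomial_rep(a, j))

# monotonicity (Lemma 2.2.10): a >= b implies partial(a, j) >= partial(b, j)
vals = [partial(a, 3) for a in range(1, 300)]
assert vals == sorted(vals)
```

For example, $10 = \binom{5}{3}$ gives $\partial_3(10) = \binom{5}{4} = 5$, while $9 = \binom{4}{3} + \binom{3}{2} + \binom{2}{1}$ gives $\partial_3(9) = 1 + 1 + 1 = 3$.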
