Advisor: Meera Sitharam*
CISE department, University of Florida
Gainesville, FL 32611-6120, USA.
We define natural geometric structures called equiseparations that capture the essence of key questions
in combinatorial geometry, complexity theory, and approximation from nonlinear spaces of functions. As a
direct consequence of their structure, an unusually diverse mix of techniques becomes natural in the study of
existence questions for equiseparations. Our results and proofs take the reader on a journey through oriented
matroids and Grassmannian spaces; finite Abelian groups and their representations; Hadamard matrices and
Walsh functions; and weight distributions of binary codes.
I The Problem
First we must define what an equiseparation is.
Definition: Let E = (P, H), where P is a set of points and H is a set of hyperplanes in the vector space
R^d, with |P| = n > 0 and |H| = m > 0. Then E is an (n, m)-equiseparation if, for every pair of points x, y ∈ P,
exactly half of the hyperplanes in H separate x from y.
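The defining condition is easy to check numerically. The sketch below is our own minimal illustration (the function name and the concrete instance are ours, not from the paper): it counts, for each pair of points, how many hyperplanes separate them, and tests a (4, 4)-equiseparation given by the rows of the 4 × 4 Sylvester Hadamard matrix separated by the 4 coordinate hyperplanes.

```python
import itertools
import numpy as np

def is_equiseparation(points, normals, offsets):
    """Return True if every pair of points is separated by exactly
    half of the hyperplanes {x : <normals[j], x> = offsets[j]}."""
    # n x m matrix of signs: which side of hyperplane j point i lies on
    signs = np.sign(points @ normals.T - offsets)
    m = len(normals)
    return all(np.sum(signs[i] != signs[j]) == m // 2
               for i, j in itertools.combinations(range(len(points)), 2))

# Rows of H_4 as points, separated by the 4 coordinate hyperplanes:
# distinct rows differ in exactly 2 of 4 positions.
H4 = np.array([[1, 1, 1, 1],
               [1, -1, 1, -1],
               [1, 1, -1, -1],
               [1, -1, -1, 1]], dtype=float)
print(is_equiseparation(H4, np.eye(4), np.zeros(4)))  # True
```

Any pair of distinct rows of a Hadamard matrix disagrees in exactly half the coordinates, which is why this toy instance works.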
The structure of equiseparations makes certain questions about their existence difficult. The main question we have been considering is:
Given a number of points n, what is the least dimension d such that an equiseparation of n points exists in R^d?
1.1 The (asymptotic) Conjecture
There is a constant 0 < c < 1 and an infinite sequence of n's such that no equiseparation of n points exists in fewer than cn dimensions.
1.2 Results in this paper and Organization
In this paper, we survey the main results for the equiseparation problem. First we devote some time to analyzing different ways of approaching the problem. This involves going through various representations of the problem that give useful structure for developing techniques to solve it.
The results we present relate to some low-dimensional cases and to some reformulations for arbitrary dimensions. We also show some partial results from a Matlab program that may help with future development of proof techniques that extend to arbitrary dimensions.
1.3 Preliminaries and Reformulations
Since the actual coordinates of the points and hyperplanes aren't important for the equiseparation problem,
we may represent an equiseparation in a form that only contains the relationship between them. This leads to
the concept of an equiseparation matrix which is defined as follows:
*Email: sitharam@cise.ufl.edu; Supported in part by NSF Grant CCR 94-09809.
Definition: The equiseparation matrix X has entries X_ij = 1 if the ith point is on the positive side of
the jth hyperplane, and X_ij = -1 otherwise. If the points and hyperplanes represented by X all lie in d dimensions,
then we say the equiseparation matrix X is realized in d dimensions.
This formulation, however, gives no immediate information about how the equiseparation is realized.
This leads to the question of whether there is a compact way to describe all "distinct" equiseparations.
Note that for any (n, m)-equiseparation in a d-dimensional space A, there must be an (n, m)-equiseparation in a (d + 1)-dimensional space B with all the hyperplanes passing through the origin. Thus we can consider A as the affine patch of B corresponding to the plane z = 1 in B. Performing an invertible linear transformation on the hyperplane arrangement in B gives another realization of the same equiseparation matrix; projectively, it is the same arrangement.
Therefore, for a given equiseparation matrix E, this gives an equivalence class of realizations, which we call R(B, E). Note that under this notion of equivalence, R(B, E) = R(T(B), E) for all invertible linear transformations T.
The next logical step is to determine a method of finding a well-defined and unique representative of the class R(B, E). If we make the hyperplane normals the columns of a k × m matrix B, then applying all invertible linear transformations to the columns leaves the row span of B unchanged, and this row span is a k-dimensional subspace of R^m. Conversely, for a k-dimensional subspace of R^m to represent a realization space R(B, E) for some E and B, there must be at least one vector in the row span with the same sign pattern as each row of E. Therefore affine hyperplane arrangements in a (k - 1)-dimensional space are in one-to-one correspondence with arrangements in k dimensions with all hyperplanes through the origin, and these in turn correspond to k-dimensional subspaces of R^m.
Now we want to use this subspace to read off a well-defined and unique representative of R(B, E). This can be done by considering the intersection of the subspace with the coordinate hyperplanes in R^m. This intersection gives a unique k-dimensional arrangement of hyperplanes through the origin that cuts through the orthants whose defining sign vectors are the rows of E.
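Whether a given row span contains a vector with a prescribed all-nonzero sign pattern is a linear feasibility question, so it can be tested with a small LP. The helper below is our own sketch (the name and the scipy-based formulation are ours, not the paper's):

```python
import numpy as np
from scipy.optimize import linprog

def rowspan_hits_orthant(B, sign_vec):
    """True if the row span of the k x m matrix B contains a vector v
    with sign(v) == sign_vec (all entries of sign_vec are +/-1).

    We look for c in R^k with sign_vec[j] * (c @ B)[j] >= 1 for all j;
    rescaling lets us replace the strict inequalities > 0 by >= 1."""
    k, m = B.shape
    s = np.asarray(sign_vec, dtype=float)
    A_ub = -(s[None, :] * B).T          # row j is -s_j * B[:, j]
    res = linprog(c=np.zeros(k), A_ub=A_ub, b_ub=-np.ones(m),
                  bounds=[(None, None)] * k, method="highs")
    return res.status == 0              # 0 means a feasible c was found
```

For example, the row span of [[1, 1], [1, -1]] is all of R^2 and hits every orthant, while the row span of the single row [1, 1] misses the orthant with sign vector (+, -).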
This also tells us that if the subspace intersects a certain cell, then it also intersects the opposite cell. Therefore multiplication of all columns by the same ±1 vector gives equivalence classes of equiseparation matrices.
Now we consider a couple of reformulations of the infinite-sequence conjecture above. We believe that if we only consider numbers of rows that are powers of 2, then the stronger claim that the infinite sequence is a subsequence of the powers of 2 is also true. We also believe that the weaker conjecture restricting the number of hyperplanes to be n suffices to prove this stronger conjecture. This would call into question the importance of freedom in the number of hyperplanes in the problem.
We can consider only equiseparation matrices that are Hadamard matrices of Sylvester type. This refor-
mulation turns out to be necessary and sufficient for the applications we use.
II The Journey: Results and Techniques
II.1 Results for general n
Now we would like to summarize the results that have been achieved for arbitrary dimensions. First we will
consider the strongest lower and upper bounds we have for arbitrary matrices.
Both of these bounds for the equiseparation problem are based on a formulation of the problem in terms of Hadamard matrices. This formulation rests on the assumption that if an equiseparation of 2^p points with hyperplanes passing through the origin exists, then there is one whose equiseparation matrix is the Hadamard matrix H_{2^p}. This holds when the number of points is a power of 2, and that is sufficient for our purposes.
Our use of Hadamard matrices will often involve considering them in Sylvester form. All such Hadamard matrices can be defined recursively as

    H_1 = [1],    H_{2^{p+1}} = [ H_{2^p}    H_{2^p} ]
                                [ H_{2^p}   -H_{2^p} ]
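The recursion is straightforward to implement; a quick numpy sketch (the helper name is ours):

```python
import numpy as np

def sylvester(p):
    """Return the 2^p x 2^p Sylvester-type Hadamard matrix H_{2^p}."""
    H = np.array([[1]])
    for _ in range(p):
        H = np.block([[H, H], [H, -H]])
    return H

H8 = sylvester(3)
# Rows are orthogonal: H8 @ H8.T == 8 * I, so any two distinct rows
# agree in exactly half (4) of their 8 positions.
```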
Jürgen Forster provides the strongest lower bound in [?]. He proves that 2^{p/2} dimensions are necessary for the Hadamard matrix H_{2^p} to be realized by an arrangement of homogeneous half-spaces. For our equiseparation problem, this implies that at least 2^{p/2} dimensions are necessary for an equiseparation of 2^p points.
The proof of the current upper bound for the equiseparation problem is in an earlier paper [?]: H_{2^p} can be realized in (3/4) · 2^p dimensions.
We also have a stronger lower bound for a more restricted set of equiseparations. Again we consider equiseparations of 2^p points, for some p. If each row of the equiseparation matrix is a linear combination of two arbitrary rows of H_{2^p}, this gives a stronger lower bound of 2^{p-1} dimensions.
The results in [?] rely on the duality theorem presented there. This theorem states that for a subspace S and an orthant x, there exists a vector v ∈ S whose signs exactly match the sign vector of x if and only if there does not exist a nonzero vector v' ∈ S⊥ whose signs match the sign vector of x wherever v' is nonzero. Therefore we can show that an arrangement is not realizable by exhibiting such a v'.
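Both sides of this duality are linear feasibility problems, so the dichotomy can be checked numerically on small examples. The following is our own illustration (the function names and the toy subspace are ours); it uses scipy's null_space to obtain a basis of the orthogonal complement:

```python
import numpy as np
from scipy.optimize import linprog
from scipy.linalg import null_space

def exact_sign_in_span(S_basis, s):
    """Is there v in span(rows of S_basis) with sign(v) == s everywhere?"""
    k, m = S_basis.shape
    s = np.asarray(s, dtype=float)
    A_ub = -(s[None, :] * S_basis).T       # s_j * v_j >= 1 after rescaling
    res = linprog(np.zeros(k), A_ub=A_ub, b_ub=-np.ones(m),
                  bounds=[(None, None)] * k, method="highs")
    return res.status == 0

def weak_sign_in_complement(S_basis, s):
    """Is there a nonzero v' orthogonal to the rows of S_basis with
    sign(v'_j) == s_j wherever v'_j != 0?"""
    N = null_space(S_basis).T              # rows span the complement
    k, m = N.shape
    s = np.asarray(s, dtype=float)
    A_ub = np.vstack([-(s[None, :] * N).T,  # s_j * v'_j >= 0
                      [-(N @ s)]])          # sum_j s_j * v'_j >= 1 (nonzero)
    b_ub = np.concatenate([np.zeros(m), [-1.0]])
    res = linprog(np.zeros(k), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * k, method="highs")
    return res.status == 0

S = np.array([[1.0, 1.0]])   # S = span{(1, 1)} in R^2
for s in ([1, 1], [1, -1], [-1, 1], [-1, -1]):
    # exactly one of the two conditions holds, as the theorem predicts
    assert exact_sign_in_span(S, s) != weak_sign_in_complement(S, s)
```

For S = span{(1, 1)}, the orthant (+, -) is certified unreachable by v' = (1, -1) ∈ S⊥, matching the theorem.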
In the interest of strengthening Forster's lower bound, we reformulate it in geometric terms. The first step is the construction of a bijection between aspects of his work and ours. This is captured in the theorem below, whose proof is in the appendix.
Theorem: For indexing sets X and Y, there exist vectors u_x, v_y ∈ R^k with x ∈ X and y ∈ Y such that M(x, y) = sign⟨u_x, v_y⟩ if and only if there exists B ∈ R^{k×|Y|} such that for every x ∈ X there is a u'_x ∈ R^{|Y|} with u'_x in the row span of B and sign(u'_x) = M_x, where M_x is the xth row of M.
This theorem provides a geometric interpretation of Forster's result in the context of our equiseparation matrices. It may help in extending his bound to take advantage of the structure of M and B in a way that leads to a stronger lower bound.
II.2 Results for specific n
Now we would like to look at results for specific small dimensions. These proofs have aspects that do not easily extend to arbitrary dimensions. Here are a couple of techniques that work well in lower dimensions.
First, to use these techniques, we need to make a few observations. Suppose we are trying to find an equiseparation of d points in an N-dimensional space A that is divided into orthants by the coordinate hyperplanes. Let the sign vector of an orthant be the vector giving the signs of the entries of any vector in that orthant. If we find a d-dimensional subspace B such that B intersects a set of orthants of A whose sign vectors are pairwise orthogonal, then such an equiseparation exists: two sign vectors in {±1}^N are orthogonal exactly when they differ in N/2 positions, so the equiseparation is found by placing points in the orthants that B intersects.
We break the specific cases up into Boolean and real matrices. When we restrict our consideration of 4 dimensions to the Boolean case, we have a stronger statement than in the real case: in [?], it is shown that the 4 × 8 Boolean case is not realizable.
II.2.1 The Boolean Case
One technique for dealing with the equiseparation problem involves restricting ourselves to matrices whose entries are ±1. This restriction of B is justified because the equiseparation problem arises from questions about threshold circuits.
So let the matrix B have entries from ±1. In this technique, we search for a nonzero vector x such that Bx = 0 and x touches an orthant of H_{2^p}, meaning the nonzero entries of x agree in sign with some row of H_{2^p}. Finding such a vector exhibits a nonzero vector orthogonal to the rows of B that touches an orthant of H_{2^p}; by the duality theorem, this shows that the row span of B does not intersect every orthant of H_{2^p}. Therefore an equiseparation does not exist.
So now we must consider matrices of specific sizes. Ultimately, we would like to prove a 2^{p-1} lower bound for equiseparations. We have a proof that such a vector exists when B is a 4 × 8 matrix. However, this proof method does not extend very well.
One possible extension we considered was embedding smaller matrices B_k of size 2^{k-1} × 2^k in larger matrices B_{p,k} of size 2^{p-1} × 2^p, in the same recursive fashion as the Sylvester form of the Hadamard matrix:

    B_{k+1,k} = [ B_k    B_k ]
                [ B_k   -B_k ]

and for p > k + 1,

    B_{p,k} = [ B_{p-1,k}    B_{p-1,k} ]
              [ B_{p-1,k}   -B_{p-1,k} ]
The initial hope was that there could be dual/orthogonal realizations of Bp,k that do not directly lead to
dual/orthogonal realizations of Bk. Then by taking p large enough, it might be easier to find a realization.
This however does not work since there is a direct bijection between these realizations which is shown by the
following theorem whose proof is in the appendix.
Theorem: There exists a nonzero vector x ∈ R^{2^k} such that B_k x = 0 and x touches H_{2^k} iff for all p > k there exists a nonzero vector y ∈ R^{2^p} such that B_{p,k} y = 0 and y touches H_{2^p}.
This then prevents us from using the 4 × 8 Boolean result in this way to gather information about the general 2^{p-1} × 2^p case.
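The forward construction in the theorem above, taking y to be repeated copies of x, is easy to verify on a toy instance. A minimal sketch under our own naming, with a trivial 1 × 2 stand-in for B_k:

```python
import numpy as np

def embed(B):
    """One step of the Sylvester-style embedding: [[B, B], [B, -B]]."""
    return np.block([[B, B], [B, -B]])

Bk = np.array([[1, 1]])        # a tiny stand-in for B_k (1 x 2)
x = np.array([1, -1])          # B_k x = 0
B21 = embed(Bk)                # the embedded 2 x 4 matrix
y = np.concatenate([x, x])     # repeated copies of x
print(B21 @ y)                 # [0 0]
```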
II.2.2 The Real Case
One approach we took to determine whether the 4 × 8 real case is realizable was a Matlab program. The program attempted to find contradictions based on inequalities of determinants formed by the Grassmann-Plücker relations from [?] and on relations based on the Hadamard matrix. The time this took, however, made the program impractical. It did, however, lead to some observations about the nature of any contradictions that could arise from these inequalities.
The inequalities from the Hadamard matrices are based on a property of the row labels: for row labels a, b, and c, the row labeled a + b + c must have, in each position, the same sign as the product of the signs of the three original rows; similarly for the column labels. Now if we choose 5 row labels from the 8 possible in H_8, we are guaranteed to get exactly 1 such set of 4 row labels among the 5 chosen. This then leads to an inequality involving 4 determinants, illustrated below.
Let B be an arbitrary 4 × 8 real matrix, and let its column labels be elements of Z_2^3. Let the 5 labels chosen be a, b, c, a + b + c, and some other label d. Also let ⟨wxyz⟩ denote the determinant of the matrix with columns w, x, y, and z. Then the inequality comes from considering the matrix [a b c (a + b + c)] and alternately replacing each column by the remaining column d to get 4 matrices. This uses

    ⟨dbc(a+b+c)⟩ · ⟨adc(a+b+c)⟩ · ⟨abd(a+b+c)⟩ · ⟨abcd⟩ < 0.
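The row-label property behind these inequalities is concrete in the Sylvester indexing, where H_8 has entries H[i, j] = (-1)^{popcount(i & j)} for labels i, j ∈ {0, ..., 7} read as bit vectors in Z_2^3: the entrywise product of rows a, b, and c is the row labeled a ⊕ b ⊕ c. A quick check (our own sketch):

```python
import numpy as np

# H_8 with rows/columns labeled by Z_2^3: H[i, j] = (-1)^popcount(i & j)
H8 = np.array([[(-1) ** bin(i & j).count("1") for j in range(8)]
               for i in range(8)])

# Entrywise product of rows a, b, c equals the row labeled a XOR b XOR c.
for a in range(8):
    for b in range(8):
        for c in range(8):
            assert (H8[a] * H8[b] * H8[c] == H8[a ^ b ^ c]).all()
```

This holds because popcount(x & j) mod 2 is the F_2 inner product ⟨x, j⟩, which is additive in x.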
These then give C(8, 5) = 56 inequalities for a 4 × 8 matrix B from the structure of the Hadamard matrix. We can also get inequalities based on Grassmannians; the idea is that these inequalities would form contradictions with the Hadamard inequalities. The Grassmann-Plücker relations state that for column vectors a, b, w, x, y, and z,

    ⟨abwx⟩⟨abyz⟩ − ⟨abwy⟩⟨abxz⟩ + ⟨abwz⟩⟨abxy⟩ = 0.
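The three-term Grassmann-Plücker relation with a common column pair a, b, namely ⟨abwx⟩⟨abyz⟩ − ⟨abwy⟩⟨abxz⟩ + ⟨abwz⟩⟨abxy⟩ = 0, can be verified numerically on a random real 4 × 6 matrix (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 6))
a, b, w, x, y, z = (M[:, i] for i in range(6))

def det4(*cols):
    """Determinant of the 4 x 4 matrix with the given columns."""
    return np.linalg.det(np.column_stack(cols))

gp = (det4(a, b, w, x) * det4(a, b, y, z)
      - det4(a, b, w, y) * det4(a, b, x, z)
      + det4(a, b, w, z) * det4(a, b, x, y))
print(abs(gp) < 1e-9)  # True: the three-term relation vanishes identically
```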
We may then organize these determinants by their column labels, noting that swapping adjacent columns negates the determinant. Therefore we can take a set of determinants and assign signs to some of them; this dictates the signs of other determinants. We could then determine whether any assignment of signs avoids causing a contradiction. However, the number of determinants proved too large for this to be practical.
One thing we investigated to speed up the program was the possible forms of inequality that could give a contradiction; then we need only check whether these inequalities are implied by a particular assignment. For example, with only three inequalities, all inequalities that give contradictions will be of the form

    A × D > B × E,
    E × C > A × F, and
    B × F > C × D,

since multiplying all three together gives the same product on both sides, a contradiction.
However, we found that three inequalities were not sufficient to give contradictions in all cases. The program then continued searching for contradictions using a greater number of inequalities, and this made it too slow for practical purposes.
II.3 Symmetry and its implications
This leads to a consideration of the symmetry of the polytopes involved, that is, of which transformations a particular polytope is invariant under. In terms of the equiseparation problem, finding the symmetries of a polytope corresponds to finding which transformations preserve its cutting through the same orthants of H_{2^p}. This information would help by providing a symmetric structure that could be taken advantage of in arbitrary dimensions; there are more tools for dealing with symmetric polytopes than for arbitrary polytopes. Ideally, we would like a polytope that is symmetric with respect to all such transformations. However, Peter Shor showed that this is not possible.
The consideration of these symmetries restricts the polytopes we wish to consider to a much smaller set. This relates to our belief that a solution to the equiseparation problem would not naturalize: the symmetries of these polytopes take advantage of a very specialized property of the equiseparation problem. This is one of the avenues suggested by Razborov and Rudich [?] that may allow a proof that does not naturalize and therefore may be useful in strong proofs of lower bounds.
III Appendix: Proofs
Theorem: For indexing sets X and Y, there exist vectors u_x, v_y ∈ R^k with x ∈ X and y ∈ Y such that M(x, y) = sign⟨u_x, v_y⟩ if and only if there exists B ∈ R^{k×|Y|} such that for every x ∈ X there is a u'_x ∈ R^{|Y|} with u'_x in the row span of B and sign(u'_x) = M_x, where M_x is the xth row of M.

Proof: (⇒) In this case we are given X, Y and the u_x's and v_y's, and must construct an appropriate B with satisfying u'_x's. This is done by taking the columns of B to be the given v_y's:

    B = [ v_{y_1} v_{y_2} ... v_{y_|Y|} ].

B is then a k × |Y| matrix; let r_1, r_2, ..., r_k denote the k rows of B. Also let u_x^i denote the ith entry of the vector u_x, and (r_i)_j the jth entry of r_i.

Now, for each x, let

    u'_x = u_x^1 r_1 + u_x^2 r_2 + ... + u_x^k r_k.

Then it follows that

    u'_x = (⟨u_x, v_{y_1}⟩, ⟨u_x, v_{y_2}⟩, ..., ⟨u_x, v_{y_|Y|}⟩).

This follows since the jth entry of u'_x equals u_x^1 (r_1)_j + u_x^2 (r_2)_j + ... + u_x^k (r_k)_j, which equals ⟨u_x, v_{y_j}⟩ by the definition of the inner product. Since M(x, y) = sign⟨u_x, v_y⟩, this implies that sign(u'_x) = M_x. Also, since each u'_x is in the row span of B by construction, this completes this direction of the proof.

(⇐) In this case we are given B and the vectors u'_x; we then go backwards from the previous direction. Let the v_y's be the columns of the matrix B. Now we must construct the u_x's from the u'_x's. Since each u'_x is in the row span of B, it can be represented as

    u'_x = a_1 r_1 + a_2 r_2 + ... + a_k r_k

for some constants a_1, ..., a_k, where the a_i's are different for each u'_x. Now let u_x = (a_1, a_2, ..., a_k). Then

    u'_x = (⟨u_x, v_{y_1}⟩, ⟨u_x, v_{y_2}⟩, ..., ⟨u_x, v_{y_|Y|}⟩),

so that M(x, y) = sign⟨u_x, v_y⟩. □
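The forward construction in this proof can be sanity-checked numerically; the sketch below (the dimensions and names are ours) builds B from random v_y's and confirms sign(u'_x) = M_x:

```python
import numpy as np

rng = np.random.default_rng(1)
k, nx, ny = 3, 5, 4
U = rng.standard_normal((nx, k))     # the u_x's as rows
V = rng.standard_normal((ny, k))     # the v_y's as rows
M = np.sign(U @ V.T)                 # M(x, y) = sign<u_x, v_y>

B = V.T                              # columns of B are the v_y's (k x |Y|)
Uprime = U @ B                       # rows are u'_x = sum_i u_x^i r_i
assert (np.sign(Uprime) == M).all()  # sign(u'_x) = M_x for every x
```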
Theorem: There exists a nonzero vector x ∈ R^{2^k} such that B_k x = 0 and x touches H_{2^k} iff for all p > k there exists a nonzero vector y ∈ R^{2^p} such that B_{p,k} y = 0 and y touches H_{2^p}.

Proof: (⇒) Such a y is easily constructed by simply letting y = [x x ... x], the concatenation of 2^{p-k} copies of x. Notice that B_{p,k} can be viewed as a block matrix of 2^{k-1} × 2^k blocks A_{(i,j)},

    B_{p,k} = [ A_{(0,0)}   A_{(0,1)}   ...  ]
              [ A_{(1,0)}   A_{(1,1)}   ...  ]
              [    ...                       ]

where each A_{(i,j)} = ±B_k. Since B_k x = 0, this obviously implies -B_k x = 0, and it is therefore easy to see that B_{p,k} y = 0.

Also notice that H_{2^p} can be viewed in the same block form with each block equal to ±H_{2^k}, and with every block in the first block row equal to H_{2^k}. Therefore, since x touches some row of H_{2^k}, y touches some row among the first 2^k rows of H_{2^p}. Therefore y satisfies the conclusion.

(⇐) Now assume there exists y such that B_{p,k} y = 0 and y touches H_{2^p}. Break y into two halves, y_1 and y_2, where y_1 is the first 2^{p-1} entries and y_2 is the last 2^{p-1} entries. The construction of B_{p,k} implies B_{p-1,k} y_1 + B_{p-1,k} y_2 = 0 and B_{p-1,k} y_1 - B_{p-1,k} y_2 = 0. Therefore B_{p-1,k} y_1 = 0, and this in turn implies B_{p-1,k} y_2 = 0. Since y is nonzero, at least one of y_1 and y_2 is nonzero. Let z denote y_1 if y_1 is nonzero, and y_2 if y_1 equals the zero vector.

We now show that z or -z touches H_{2^{p-1}}. If z = y_1, then z must touch H_{2^{p-1}}, since y touches H_{2^p} and, by the recursive structure of H_{2^p}, the left half of every row of H_{2^p} is a row of H_{2^{p-1}}. If z = y_2, then z touches H_{2^{p-1}} or -H_{2^{p-1}} for the same reason. So z or -z touches H_{2^{p-1}}; let z̄ denote the one that does.

Since both B_{p-1,k} z = 0 and B_{p-1,k}(-z) = 0, this shows that z̄ is a nonzero vector such that B_{p-1,k} z̄ = 0 and z̄ touches H_{2^{p-1}}. Therefore, by simple induction, there exists a nonzero vector x ∈ R^{2^k} such that B_k x = 0 and x touches H_{2^k}. □
References

A. Björner, M. Las Vergnas, B. Sturmfels, N. White, G. Ziegler. "Oriented Matroids". Encyclopedia of Mathematics and its Applications 46, Cambridge University Press (1993).

J. Forster. "A Linear Lower Bound on the Unbounded Error Probabilistic Communication Complexity". IEEE Conference on Computational Complexity, IEEE Computer Society Press (2001).

Michael I. Jordan. "Vapnik-Chervonenkis Dimension". Lecture notes, University of California at Berkeley.

A. Razborov and S. Rudich. "Natural Proofs". ACM Symposium on Theory of Computing (1994).

John H. Reif and Stephen R. Tate. "On Threshold Circuits and Polynomial Computation". SIAM Journal on Computing (1992).

M. Sitharam, Matthew Belcher, and Steven Hicks. "Equiseparations". ACM SE Conference (2001).