# Ham-Sandwich Cuts

*Aurélien Ooms*

# Introduction

## What is this web page for?

The goal of this web page is to introduce you to computational geometry, using the ham sandwich theorem as a guide, and in particular the application of this theorem to the two-dimensional plane.

## Ham sandwich theorem

In measure theory, a branch of mathematics, the ham sandwich theorem, also called the Stone–Tukey theorem after Arthur H. Stone and John Tukey, states that given `d` measurable "objects" in `d`-dimensional space, it is possible to `divide all of them in half` (with respect to their measure, i.e. volume) with a single `(d-1)`-dimensional hyperplane. Here the "objects" should be sets of finite measure (or, in fact, just of finite outer measure) for the notion of `"dividing the volume in half"` to make sense.

## Explanation

What this theorem implies for the two dimensional plane is that for any two sets of points, we can find a cut (i.e. a line) that bisects (i.e. divides in two) each of the two sets.

## Why "ham sandwich theorem"?

The ham sandwich theorem takes its name from the case when

`d = 3`

and the three objects of any shape are a chunk of ham and two chunks of bread — notionally, a sandwich — which can then all be simultaneously bisected with a single cut (i.e., a plane). In two dimensions, the theorem is known as the pancake theorem, from having to cut two infinitesimally thin pancakes on a plate each in half with a single cut (i.e., a straight line).

# Vocabulary List

## Hyperplane

In geometry, as a plane has one less dimension than space, a hyperplane is a subspace of one dimension less than its ambient space. A hyperplane of a d-dimensional space is a flat subset with dimension `d-1`. By its nature, it separates the space into two half spaces.

## Point Set Bisection

A hyperplane \(h\) is said to bisect a set \(P\) of \(n\) points in \(R^d\) if no more than \(\frac{n}{2}\) points of \(P\) lie in either of the open half-spaces defined by \(h\).
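This definition translates directly into code. Below is a minimal standalone sketch (independent of the demo code on this page; `side`, `bisects` and the `{x, y}` point format are chosen for illustration) that tests whether the line through two points bisects a point set:

```javascript
// SIGN OF THE CROSS PRODUCT (q - p) x (s - p):
// > 0 IF s LIES ABOVE THE DIRECTED LINE p -> q, < 0 IF BELOW, 0 IF ON IT
var side = function(p, q, s){
  return (q.x - p.x) * (s.y - p.y) - (q.y - p.y) * (s.x - p.x);
};
// THE LINE THROUGH p AND q BISECTS P IF NO MORE THAN n/2 POINTS
// OF P LIE STRICTLY ON EITHER SIDE
var bisects = function(p, q, P){
  var above = 0, below = 0;
  for(var i = 0; i < P.length; ++i){
    var x = side(p, q, P[i]);
    if(x > 0) ++above;
    else if(x < 0) ++below;
  }
  return above <= P.length / 2 && below <= P.length / 2;
};
```

For instance, the horizontal line through \((0,0)\) and \((1,0)\) bisects \(\{(0,1), (0,-1), (2,0)\}\): one point lies above it, one below, and one on it.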

## Duality (projective geometry)

A striking feature of projective planes is the "symmetry" of the roles played by points and lines in the definitions and theorems, and (plane) duality is the formalization of this metamathematical concept.

## Simulation of Simplicity

Simulation of Simplicity can be used to cope with degenerate input data for geometric algorithms. It relieves the programmer of the task of providing a consistent treatment for every single special case that can occur. Programs that use the technique tend to be considerably smaller and more robust than those that do not.

## Monte Carlo algorithm

In computing, a Monte Carlo algorithm is a randomized algorithm whose running time is deterministic, but whose output may be incorrect with a certain (typically small) probability.

The related class of Las Vegas algorithms is also randomized, but in a different way: they take an amount of time that varies randomly, but always produce the correct answer. A Monte Carlo algorithm can be converted into a Las Vegas algorithm whenever there exists a procedure to verify that the output produced by the algorithm is indeed correct. If so, then the resulting Las Vegas algorithm is merely to repeatedly run the Monte Carlo algorithm until one of the runs produces an output that can be verified to be correct.

The name refers to the grand casino in the Principality of Monaco at Monte Carlo, which is well-known around the world as an icon of gambling.
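The Monte Carlo to Las Vegas conversion described above can be sketched generically. This toy snippet (all names are illustrative, not from any library) reruns a Monte Carlo procedure until a verifier accepts its output:

```javascript
// RERUN A MONTE CARLO ALGORITHM UNTIL THE VERIFIER ACCEPTS ITS OUTPUT,
// TURNING IT INTO A LAS VEGAS ALGORITHM
var lasVegas = function(monteCarlo, verify){
  var out;
  do { out = monteCarlo(); } while(!verify(out));
  return out;
};
// TOY MONTE CARLO: GUESS AN EVEN ELEMENT AT RANDOM (POSSIBLY WRONG);
// WITH THE VERIFIER, THE LOOP TERMINATES WITH PROBABILITY 1
var xs = [3, 7, 8, 5];
var guessEven = function(){ return xs[Math.floor(Math.random() * xs.length)]; };
var isEven = function(v){ return v % 2 === 0; };
var result = lasVegas(guessEven, isEven);
```

The running time of `lasVegas` is random (a geometric number of trials), but the returned value is always verified correct.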

# Algorithms for `d=2`

## A simple approach

Since the theorem claims that there is always a solution, we can use a simple trick: there will always exist a HS cut that goes through at least one blue and at least one red point. Why? Because if a partition exists, either the line defining it already goes through such points, or we can slightly rotate and/or translate the line until it goes through at least one blue and one red point.

Another way of proving this is to notice that a ham-sandwich cut must go through at least one point of each set of odd cardinality, and then notice that, for each of these sets, a ham-sandwich cut for \(n\) odd is also a ham-sandwich cut for \(n+1\).

Let \(B\) be the set of blue points and \(R\) be the set of red points. The algorithm is the following: explore each pair \((b, r)\) where \(b \in B\) and \(r \in R\) and return \((b, r)\) if it defines a HS cut.

Note that this approach can be used for any \(d \in N\). The algorithm runs in $$ O( (\prod_{i=1}^{d} |P_i|) (\sum_{i=1}^{d} |P_i|) ) \in O((\sum_{i=1}^{d} |P_i|)^{d+1}) $$ hence for `d=2` the complexity is \(O(n^3)\); this is what [3] calls a `brute-force approach`.

```
var hs_n3 = function(a, b){
  // MAKE SURE THE FIRST SET IS THE NON-EMPTY ONE
  if(a.length === 0 && b.length !== 0) return hs_n3(b, a);
  var r = Math.floor(a.length / 2);
  var s = Math.floor(b.length / 2);
  for(var i = 0; i < a.length; ++i){
    // IF b IS EMPTY, USE A VIRTUAL POINT RIGHT ABOVE a[i]
    var tmp = b.length === 0 ? [{x : a[i].x, y : a[i].y + 1}] : b;
    for(var j = 0; j < tmp.length; ++j){
      var hi = [0, 0], lo = [0, 0];
      // COUNT POINTS OF a STRICTLY ABOVE/BELOW THE LINE (a[i], tmp[j])
      for(var k = 0; k < i; ++k){
        var x = sin_sign(a[i], tmp[j], a[k]);
        if(x > 0) ++hi[0];
        else if(x < 0) ++lo[0];
      }
      for(var k = i + 1; k < a.length; ++k){
        var x = sin_sign(a[i], tmp[j], a[k]);
        if(x > 0) ++hi[0];
        else if(x < 0) ++lo[0];
      }
      // COUNT POINTS OF b STRICTLY ABOVE/BELOW THE SAME LINE
      for(var k = 0; k < j; ++k){
        var x = sin_sign(a[i], tmp[j], tmp[k]);
        if(x > 0) ++hi[1];
        else if(x < 0) ++lo[1];
      }
      for(var k = j + 1; k < tmp.length; ++k){
        var x = sin_sign(a[i], tmp[j], tmp[k]);
        if(x > 0) ++hi[1];
        else if(x < 0) ++lo[1];
      }
      // THE LINE IS A CUT IF IT BISECTS BOTH SETS
      if(hi[0] <= r && lo[0] <= r && hi[1] <= s && lo[1] <= s) return [a[i], tmp[j]];
    }
  }
  return null;
};
geo.hs_n3 = hs_n3;
```

## Examples

## Using duality

First let's introduce some notions about point-line duality.

## The point "point of view"

- `one and only one line goes through two points`
- `the point is above the line`

## The line "point of view"

- `two lines intersect in one and only one point`
- `the line is above the point`
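These correspondences can be made concrete with one common duality map (an assumed sign convention for illustration; the demo code on this page may use a different one): the point \((x, y)\) maps to the line \(Y = xX - y\), and the line \(Y = mX + c\) maps to the point \((m, -c)\). Above/below relations are then preserved:

```javascript
// DUALITY MAP: POINT (x, y) <-> LINE Y = x*X - y,
// LINE Y = m*X + c <-> POINT (m, -c)
var pointToLine = function(p){ return {m : p.x, c : -p.y}; };
var lineToPoint = function(l){ return {x : l.m, y : -l.c}; };
// SIGNED VERTICAL DISTANCE FROM A POINT TO A LINE (> 0 IFF ABOVE)
var aboveBy = function(p, l){ return p.y - (l.m * p.x + l.c); };
```

With this convention, a point lies above a line exactly when, in the dual, the line's image (a point) lies above the point's image (a line), and by the same vertical amount.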

## How could we make use of such a tool?

The original problem consists in finding a line that bisects the set of blue points and the set of red points; in other words, find a line which has half of the blue points and half of the red points above it and the rest of the points below it. This can be translated to the dual problem simply by inverting the roles of points and lines. Wonderful!

## After the translation

We're now trying to find a point which has half of the blue lines and half of the red lines above it and the rest of the lines below it. What could be the algorithm for that?

## An algorithm for the dual

In the dual plane, regarding the set of blue lines, we could follow the path of its median level, that is, the path that bisects the blue set. This can be done by enumerating and sorting all the intersections between blue lines (\(O(n^2 \log n)\)) and taking the blue line with the median slope as the beginning of the path. We can do the same for the set of red lines.

If we now look at the intersections of the paths we just built, we can conclude that they are points that bisect the two sets; hence their corresponding lines in the primal are HS cuts.

```
+function(geo){
  // PRIMAL COMPARATOR
  var pcomp = function(x, y){ return x.x < y.x || (x.x === y.x && x.y <= y.y); };
  // DUAL COMPARATOR
  var dcomp = function(a, b){ return a.a < b.a || (a.a === b.a && a.b >= b.b); };
  // EVENT COMPARATOR
  var ecomp = function(e, d){ return pcomp(e[0], d[0]); };
  var hs_dual = function(dual, path, I){
    // INTERSECTIONS EVENT LIST
    // e[0] -> first set intersection events
    // e[1] -> second set intersection events
    //
    // An event is a triple t where :
    // t[0] is the event intersection point
    // t[1] and t[2] are the lines whose intersection is t[0]
    var e = [];
    // THE path VARIABLE WILL BE USED TO REPRESENT
    // EACH MEDIAN PATH BY A LIST OF INTERSECTION POINTS
    // (EACH PATH CAN BE DRAWN BY LINKING THE FIRST INTERSECTION TO THE SECOND,
    // THE SECOND TO THE THIRD, ETC.)
    // WHILE THE lines VARIABLE IS USED TO REPRESENT
    // EACH MEDIAN PATH BY A LIST OF LINES
    // (EACH LINE OF EACH SET IS ON THE MEDIAN PATH OF THEIR SET
    // UNTIL THEY CROSS THE NEXT LINE IN THE LIST)
    var lines = [];
    // FOR EACH SET
    for(var d = 0; d < dual.length; ++d){
      path.push([]);
      // INTERSECTIONS ARE STORED IN I
      I.push([]);
      e.push([]);
      lines.push([]);
      var l = dual[d].length;
      if(l > 0){
        // COMPUTE ALL INTERSECTIONS AND FILL EVENT LIST
        for(var i = 0; i < l; ++i){
          I[d][i] = [];
          for(var j = 0; j < i; ++j){
            I[d][i][j] = intersect(dual[d][i], dual[d][j]);
            if (I[d][i][j].x === Infinity) I[d][i][j] = undefined;
            else if(I[d][i][j].x === -Infinity) I[d][i][j] = undefined;
            else e[d].push([I[d][i][j], dual[d][i], dual[d][j]]);
          }
        }
        // SORT THE EVENTS
        shuffle(e[d], 0, e[d].length);
        quicksort(e[d], 0, e[d].length, ecomp);
        // SELECT MEDIAN SLOPE (MEDIAN LEVEL AT -Infinity)
        var m = Math.floor(l/2);
        shuffle(dual[d], 0, dual[d].length);
        quickselect(m, dual[d], 0, dual[d].length, dcomp);
        // BEGIN OF PATH
        var bisect = dual[d][m];
        path[d].push(intersect(lwall(), bisect));
        lines[d].push(bisect);
        for(var i = 0, len = e[d].length; i < len; ++i){
          for(var k = 1; k < 3; ++k){
            // IF ONE OF THE LINES OF THE EVENT IS ON THE MEDIAN LEVEL
            // -> SWAP THEM (i.e. the other line becomes the median level)
            if(e[d][i][k] === bisect){
              bisect = e[d][i][3-k];
              path[d].push(e[d][i][0]);
              lines[d].push(bisect);
              break;
            }
          }
        }
        // END OF PATH
        path[d].push(intersect(bisect, rwall()));
      }
    }
    var len = [path[0].length - 1, path[1].length - 1], x;
    // HANDLE EMPTY CASES
    if(len[0] === -1){
      if(len[1] > -1)
        // FIRST SET EMPTY, ANY POINT ON THE SECOND PATH IS A BISECTION
        return path[1][0];
      else
        // TWO SETS ARE EMPTY
        return null;
    }
    else if(len[1] === -1)
      // SECOND SET EMPTY, ANY POINT ON THE FIRST PATH IS A BISECTION
      return path[0][0];
    // HANDLE INFINITES CASES
    // PATH BEGINS OR ENDS PARALLEL
    // (INTERSECT OF PARALLEL LINES IS -/+ Infinity)
    if(lines[0][0].a === lines[1][0].a){
      return intersect(lines[0][0], lines[1][0]);
    }
    if(lines[0][len[0]-1].a === lines[1][len[1]-1].a){
      return intersect(lines[0][len[0]-1], lines[1][len[1]-1]);
    }
    // ELSE
    for(var i = 0; i < len[0]; ++i){
      for(var j = 0; j < len[1]; ++j){
        if(intersect_ss(path[0][i], path[0][i+1], path[1][j], path[1][j+1])){
          return intersect(lines[0][i], lines[1][j]);
        }
      }
    }
    return null;
  };
  geo.hs_dual = hs_dual;
}(geo)
```

## An efficient approach

In [3], Lo et al. propose an optimal algorithm for `d=2` running in \(O(n)\). The idea is to search, in the dual plane, for an interval where the cut is easy to find/compute. The search proceeds in phases whose complexity is bounded by \(a^i \cdot O(n)\): at each phase we discard at least a fraction \((1-a)\) of the remaining blue or red lines, ensuring a total complexity of \(O(n)\). Let's first introduce some notions.

## The odd intersection property

An interval \(T = [l, r]\) on the \(x\)-axis is said to have the odd intersection property if the median levels of \(B\) and \(R\) cross an odd number of times inside it, i.e. if the median levels have different \(y\) orderings at \(l\) and at \(r\). In particular, for \(T = [-\infty, +\infty]\) (i.e. the whole \(x\)-axis) this property always holds (assuming \(B\) and \(R\) have odd cardinality). A way to summarize this is the following formula $$ (\lambda_1(l) - \lambda_2(l))(\lambda_1(r) - \lambda_2(r)) < 0 $$ where \(\lambda_i(x)\) is the \(y\) coordinate of the median level of the \(i^{th}\) set at \(x\).
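The formula above is a one-liner given the two median-level functions. In this sketch, `lam1` and `lam2` are toy stand-ins for \(\lambda_1\) and \(\lambda_2\) (they cross once, at \(x = 1/2\)):

```javascript
// TRUE IFF THE MEDIAN LEVELS CROSS AN ODD NUMBER OF TIMES ON [l, r],
// i.e. THEY HAVE DIFFERENT y ORDERINGS AT l AND AT r
var hasOddIntersection = function(l, r, lam1, lam2){
  return (lam1(l) - lam2(l)) * (lam1(r) - lam2(r)) < 0;
};
// TOY MEDIAN LEVELS, CROSSING AT x = 1/2
var lam1 = function(x){ return x; };
var lam2 = function(x){ return 1 - x; };
```

The interval \([0, 1]\) then has the property, while \([2, 3]\) does not.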

The interval starting at the first \(B\) intersection and ending at the second \(B\) intersection has the odd intersection property.

## Trapezoid filtering

Filtering lines out of a trapezoid simply means discarding the lines that don't meet the trapezoid.

The way we build the trapezoid must guarantee that at least a constant fraction of the blue lines will be discarded, in order to keep the \(O(n)\) complexity bound.

$$ \sum_{i=0}^{\infty} a^i = (1 - a)^{-1}, \forall~0 < a < 1 $$

The blue line on the right will be discarded.
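As a quick numerical check of the geometric-series bound above (a toy snippet, not part of the algorithm): with \(a = \frac{1}{2}\), the total work over all phases is at most twice the work of the first phase.

```javascript
// PARTIAL SUMS OF sum_{i >= 0} a^i CONVERGE TO 1 / (1 - a);
// WITH a = 1/2 THE LIMIT IS 2
var a = 0.5, sum = 0, term = 1;
for(var i = 0; i < 60; ++i){ sum += term; term *= a; }
```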

## The procedure

The input is an interval \(T = [l, r]\) on the \(x\)-axis, sets \(G_1\) and \(G_2\) of blue and red lines (\(|G_1| \geq |G_2|\)), and integers \(p_1\) and \(p_2\) that indicate the \(p\)-levels in \(G_1\) and \(G_2\) corresponding to the median levels of the original sets.

Steps are the following:

- Divide the interval \(T\) into a constant number of subintervals \(T_1, \dots, T_C\) (\(C \geq \frac{1}{\alpha}\)) such that no \(V(T_i)\) contains more than a prescribed (constant) fraction of the vertices of the arrangement of \(G_1\) [\(O(m_1)\)].
- Find one subinterval \(T_i\) with the odd intersection property [\(O(m_1 + m_2)\)].
- Construct a trapezoid \(\tau \in V(T_i)\) such that [\(O(m_1)\)]:
  - \(\lambda_1 \cap V(T_i) \subset \tau\)
  - at most half of the lines of \(G_1\) intersect \(\tau\)
- Discard all the lines of \(G_1\) which do not intersect \(\tau\) (at least \(m_1/2 > (m_1 + m_2)/4\) lines), and update \(p_1\) accordingly (\(p_1' \gets p_1 - b\), where \(b\) denotes the number of discarded lines of \(G_1\) lying completely below \(\tau\)). Then \(T_i\) becomes the new \(T\), and we are ready for the next phase of the algorithm (i.e. a recursive call, swapping \(G_1'\) and \(G_2'\) if \(|G_1'| < |G_2'|\)) [\(O(m_1 + m_2)\)].

## Explanation

Step 1 can be achieved in \(O(m_1)\) by applying a theorem on approximate selection of the \(k^{th}\) leftmost intersection found in [4]. Since the number of \(T_i\) subintervals is constant, step 2 can be achieved in \(O(m_1 + m_2)\) by using quickselect for \(p^{th}\) slope selection on both sets. For steps 3 and 4, we need to construct a trapezoid \(\tau\) such that at least a constant fraction (here \(\frac{1}{2}\)) of the lines of \(G_1\) is discarded. We choose \(\alpha\) and \(\epsilon\) with \(f(\alpha) \leq \epsilon\), and we prove by construction that if \(f(\alpha) = \sqrt{\frac{\alpha}{2}}\) the conditions in step 3 hold.

The top corners of \(\tau\) are the intercepts \(D^+_l, D^+_r\) of the vertical lines at \(x = l\) and \(x = r\) with the \((p_1 + \epsilon m_1)\)-level. The bottom corners of \(\tau\) are the intercepts \(D^-_l, D^-_r\) of the same vertical lines with the \((p_1 - \epsilon m_1)\)-level.

Consider the top of \(\tau\), the segment \(\sigma = D^+_l D^+_r\). The lines of \(G_1\) that meet \(\sigma\) are partitioned into two classes, \(\mathcal{S}\), the lines with slope smaller than that of \(\sigma\), and \(\mathcal{L}\), those with larger slope.

Traversing \(\sigma\) from left to right, we keep count of the number of \(G_1\) lines below it. At the start, there are \(p_1 + \epsilon m_1\) lines below. When we meet a line in \(\mathcal{S}\), the count increases by one, and when we meet an \(\mathcal{L}\)-line, it decreases by one. At the end there are again \(p_1 + \epsilon m_1\) lines below. Hence \(|\mathcal{S}| = |\mathcal{L}|\).

Each \(\mathcal{S}\)-line intersects each \(\mathcal{L}\)-line within the vertical strip \(V(T_i)\). Since this strip contains at most $$ \alpha {m_1 \choose 2} < \frac{\alpha m_1^2}{2} $$ intersections, by the construction in step 1, we have $$ |\mathcal{S}|^2 = |\mathcal{S}| |\mathcal{L}| < \frac{\alpha m_1^2}{2} $$ so \(|\mathcal{S}| = |\mathcal{L}| < \sqrt{\alpha/2}\, m_1\). Since \(\sigma\) is \(\epsilon m_1\) lines above the \(p_1\)-level at both endpoints of the interval \(T_i\), the \(p_1\)-level remains below \(\sigma\) as long as $$ \sqrt{\frac{\alpha}{2}} \leq \epsilon $$ The same argument shows that the \(p_1\)-level never dips below the bottom of \(\tau\).

There are \(2 \epsilon m_1\) lines that intersect each vertical side of \(\tau\), and fewer than \(2 \sqrt{\alpha/2}\, m_1\) that intersect each of the top and bottom sides. This sums to \(4 \epsilon m_1 + 4 \sqrt{\alpha/2}\, m_1\), which is at most \(8 \epsilon m_1\) intersections. Since each line going through \(\tau\) intersects exactly 2 sides of \(\tau\), we have at most \(4 \epsilon m_1\) lines going through \(\tau\). If we want to keep only half of the \(G_1\) lines at each step, we choose \(\epsilon = \frac{1}{8}\) and \(\alpha = \frac{1}{32}\).
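The arithmetic behind this choice of constants can be checked directly (a standalone snippet): with \(\epsilon = \frac{1}{8}\) and \(\alpha = \frac{1}{32}\), \(\sqrt{\alpha/2}\) equals \(\epsilon\) exactly, and at most a fraction \(4\epsilon = \frac{1}{2}\) of the \(G_1\) lines survive a phase.

```javascript
// SANITY CHECK OF THE CONSTANTS eps = 1/8 AND alpha = 1/32:
// f(alpha) = sqrt(alpha / 2) = sqrt(1/64) = 1/8 = eps,
// AND AT MOST 4 * eps = 1/2 OF THE G1 LINES GO THROUGH tau
var eps = 1/8, alpha = 1/32;
var f = Math.sqrt(alpha / 2);
var survivingFraction = 4 * eps;
```

(Both values are powers of two, so the equalities hold exactly in floating point.)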

## Practical Monte Carlo Algorithm

Below is an implementation of a Monte Carlo algorithm based on the optimal algorithm. This algorithm yields a Ham-Sandwich cut with high probability. The main difference with the optimal algorithm from [3] is that rather than constructing subintervals that each contain no more than \(\alpha {m_1 \choose 2}\) intersections, we randomize the construction by taking a sample of size \(k\) from the set \(G_1\) and computing a constant number (\({k \choose 2}\)) of intersections on this sample. The sample will be close to uniform on the density of the \(G_1\) intersections, and thus close to the condition in step 1. You can find more details in [5].

```
+function(geo){
  // x VALUES COMP OP [O(1)]
  var xcomp = function(a, b){
    return a < b;
  };
  // DUAL LINE SLOPE COMPARISON OP [O(1)]
  var dcomp = function(a, b){
    return a.a < b.a || (a.a === b.a && a.b >= b.b);
  };
  // DUAL LINE LEVEL COMPARISON OP FOR SOME X [O(1)]
  var lcomp = function(x){
    if(x === -Infinity) return function(a,b){ return !dcomp(a,b); };
    if(x === Infinity) return dcomp;
    return function(s, l){
      var y1 = fx(s, x);
      var y2 = fx(l, x);
      return y1 < y2 || (y1 === y2 && dcomp(s, l));
    };
  };
  // QUICKSELECT p^{th} SLOPE AT X [O(n)]
  var pselect = function(p, G, x){
    shuffle(G, 0, G.length);
    quickselect(p, G, 0, G.length, lcomp(x));
    return G[p];
  };
  // BUILD TRAPEZOID τ [O(n)]
  // t[0] <- bottom left
  // t[1] <- bottom right
  // t[2] <- top left
  // t[3] <- top right
  var tbuild = function(G1, T, e, p1){
    T = T.slice();
    if(T[0] === -Infinity) T[0] = lwall().x;
    if(T[1] === Infinity) T[1] = rwall().x;
    var t = [], m;
    m = Math.max(0, Math.floor(p1 - e * G1.length));
    t[0] = px(pselect(m, G1, T[0]), T[0]);
    t[1] = px(pselect(m, G1, T[1]), T[1]);
    m = Math.min(G1.length - 1, Math.ceil(p1 + e * G1.length));
    t[2] = px(pselect(m, G1, T[1]), T[1]);
    t[3] = px(pselect(m, G1, T[0]), T[0]);
    return t;
  };
  // FILTER LINES OUT OF TRAPEZOID τ [O(n)]
  // @return the number of lines below the trapezoid
  var tfilter = function(G1, t){
    var S = G1.splice(0), s = 0;
    for(var i = 0; i < S.length; ++i){
      var a = fx(S[i], t[0].x) - t[0].y;
      var b = fx(S[i], t[1].x) - t[1].y;
      var c = fx(S[i], t[2].x) - t[2].y;
      var d = fx(S[i], t[3].x) - t[3].y;
      if(a < 0 && b < 0 || c > 0 && d > 0){
        s += a < 0;
      }
      else{
        G1.push(S[i]);
      }
    }
    return s;
  };
  // SAMPLE K LINES [O(k)]
  var ksample = function(k, G1){
    sample(k, G1, 0, G1.length);
  };
  // COMPUTE X INTERVALS IN U FORMED BY INTERSECTIONS OF A SAMPLE OF SIZE k [O(k²)]
  // *T[0] and T[len-1] are guaranteed to be the min and max*
  var kinterval = function(k, G1, U){
    var T = [U[0]];
    for(var i = 0; i < k - 1; ++i){
      for(var j = i + 1; j < k; ++j){
        var c = intersectx(G1[i], G1[j]);
        if(c !== null && c > U[0] && c < U[1]){
          T.push(c);
        }
      }
    }
    T.push(U[1]);
    return T;
  };
  // FIND AN INTERVAL FOR WHICH THE ODD INTERSECTION PROP HOLDS [O(k²)]
  // *randomized version*
  var find_odd_interval_rand = function(T, G1, G2, p1, p2, callback){
    var x = [0, undefined, T.length - 1], r, p, q;
    shuffle(T, x[0] + 1, x[2]);
    while(x[0] + 1 !== x[2]){
      x[1] = partition(T, x[0] + 1, x[0] + 2, x[2] - 1, xcomp);
      r = randint(0, 2);
      p = T[x[r]];
      q = T[x[r + 1]];
      var a = pselect(p1, G1, p);
      var b = pselect(p1, G1, q);
      var c = pselect(p2, G2, p);
      var d = pselect(p2, G2, q);
      callback[0]([p, q], a, b, c, d);
      if(is_odd([p, q], a, b, c, d)){
        x[2 - 2*r] = x[1];
        callback[1](true);
      }
      else{
        x[2*r] = x[1];
        callback[1](false);
      }
    }
    return [T[x[0]], T[x[2]]];
  };
  // FIND AN INTERVAL FOR WHICH THE ODD INTERSECTION PROP HOLDS [O(k²)]
  var find_odd_interval = function(T, G1, G2, p1, p2, callback){
    var x = [0, undefined, T.length - 1], l, i, r;
    shuffle(T, x[0] + 1, x[2]);
    while(x[0] + 1 !== x[2]){
      x[1] = partition(T, x[0] + 1, x[0] + 2, x[2] - 1, xcomp);
      l = T[x[0]];
      i = T[x[1]];
      r = T[x[2]];
      var a = pselect(p1, G1, l);
      var b = pselect(p1, G1, i);
      var d = pselect(p2, G2, l);
      var e = pselect(p2, G2, i);
      callback[0]([l, i], a, b, d, e);
      if(is_odd([l, i], a, b, d, e)){
        x[2] = x[1];
        callback[1](true);
      }
      else{
        callback[1](false);
        var c = pselect(p1, G1, r);
        var f = pselect(p2, G2, r);
        callback[0]([i, r], b, c, e, f);
        if(is_odd([i, r], b, c, e, f)){
          x[0] = x[1];
          callback[1](true);
        }
        else{
          callback[1](false);
          return null;
        }
      }
    }
    return [T[x[0]], T[x[2]]];
  };
  // DETERMINE IF INTERVAL T HAS THE ODD PROPERTY [O(1)]
  // generalization of (fx(a, T[0]) - fx(b, T[0])) * (fx(a, T[1]) - fx(b, T[1])) < 0;
  var is_odd = function(T, a1, a2, b1, b2){
    var c = [lcomp(T[0]), lcomp(T[1])];
    return T[0] === -Infinity && a1.a === b1.a ||
           T[1] === Infinity && a2.a === b2.a ||
           c[0](a1, b1) + c[1](a2, b2) === 1;
  };
  // MAIN ALGORITHM
  var hs_n_t = function(find_odd_interval){
    return function(dual, callback){
      var e = 1/7;
      var C = 3.146316381855616;
      var _k = Math.ceil(C * Math.pow(e, -2) * Math.log(Math.pow(e, -1)));
      var k = [_k, _k];
      var T = [-Infinity, Infinity];
      var G = [dual[0].slice(), dual[1].slice()];
      var i = 0, done = [false, false];
      var p = [Math.floor(G[0].length / 2), Math.floor(G[1].length / 2)];
      callback['init']();
      do{
        while(!done[i] && G[i].length >= G[1-i].length && G[1-i].length > 0){
          callback['step'](i, G);
          // SAMPLE REMAINING G1 LINES
          if(k[i] < G[i].length) ksample(k[i], G[i]);
          else{
            // KINTERVALS ARE NOW INTERSECTION FREE
            k[i] = G[i].length;
            done[i] = true;
          }
          callback['ksample'](G[i], k[i]);
          // COMPUTE G1 INTERSECTIONS IN INTERVAL T
          var u = kinterval(k[i], G[i], T);
          callback['kinterval'](u);
          if (u.length === 2)
            // NO MORE G1 INTERSECT IN INTERVAL
            // MEANS MEDIAN LEVEL of G1 IS THE SAME LINE
            // ON THE WHOLE INTERVAL T
            break;
          // BINARY SEARCH ODD INTERVAL
          T = find_odd_interval(u, G[i], G[1-i], p[i], p[1-i], callback['find_odd_interval']);
          if(T === null)
            // BAD PREVIOUS epsilon-APPROXIMATION
            return null;
          // BUILD TRAPEZOID
          var t = tbuild(G[i], T, e, p[i]);
          callback['tbuild'](t);
          // FILTER G1 LINES OUT OF TRAPEZOID
          p[i] -= tfilter(G[i], t);
          callback['tfilter'](G[i], p[i]);
        }
        if(G[0].length === 0 || G[1].length === 0) return null;
        // SWAP
        i = 1 - i;
      } while(!done[i]);
      // THE MEDIAN LEVEL OF G1 AND G2 ON INTERVAL T IS COMPOSED OF A SINGLE LINE SEGMENT
      if(T[0] === -Infinity && T[1] === Infinity) T[0] = 0;
      var x = (T[0] + T[1]) / 2;
      var P1 = pselect(p[0], G[0], x);
      var P2 = pselect(p[1], G[1], x);
      return intersect(P1, P2);
    };
  };
  var hs_n = hs_n_t(find_odd_interval);
  var hs_n_rand = hs_n_t(find_odd_interval_rand);
  geo.hs_n_t = hs_n_t;
  geo.hs_n = hs_n;
  geo.hs_n_rand = hs_n_rand;
}(geo)
```

## Examples

# Variants of Ham Sandwich Cuts

## Equitable

`n-cutting`

(\(\dots\) to be continued)

# References

1. Sergei Bespamyatnikh, David G. Kirkpatrick, Jack Snoeyink: Generalizing Ham Sandwich Cuts to Equitable Subdivisions. Discrete & Computational Geometry 24(4):605–622.
2. Atsushi Kaneko, Mikio Kano: Generalized Balanced Partitions of Two Sets of Points in the Plane. JCDCG 2000:176–186.
3. Chi-Yuan Lo, Jiří Matoušek, William L. Steiger: Algorithms for Ham-Sandwich Cuts. Discrete & Computational Geometry 11:433–452.
4. Jiří Matoušek: Construction of epsilon-Nets. Discrete & Computational Geometry 5:427–448.
5. Larry Shafer, William L. Steiger: Randomizing Optimal Geometric Algorithms. CCCG 1993:133–138.
6. Herbert Edelsbrunner, Ernst Peter Mücke: Simulation of Simplicity: A Technique to Cope with Degenerate Cases in Geometric Algorithms. ACM Transactions on Graphics 9(1):66–104.