Show that there is an absolute constant $c < 1$ with the following property: whenever $\mathcal P$ is a polygon with area $1$ in the plane, one can translate it by a distance of $\frac{1}{100}$ in some direction to obtain a polygon $\mathcal Q$, for which the intersection of the interiors of $\mathcal P$ and $\mathcal Q$ has total area at most $c$. Linus Hamilton
Problem
Source: USA TSTST 2018 Problem 9
Tags: combinatorics, geometry, geometric transformation, probability, topology
26.06.2018 19:53
I think the problem should be true for any Jordan curve whose interior has area $1$, but I'm not sure. We denote area by $\vert \cdot \vert$. Let $f(\vec v)$ be the area of the intersection of $\mathcal P$ with $\mathcal P$ shifted by $\vec v$. We wish to find $c$ so that there always exists $\vec v$ with $\Vert \vec v\Vert = \frac 1{100}$ and $f(\vec v)\leq c$.

Claim: $f(\vec u) + f(\vec v) - 1 \leq f(\vec u + \vec v)$.

Proof: Consider $\mathcal P'$, which is $\mathcal P$ shifted by $\vec u$, and $\mathcal P''$, which is $\mathcal P$ shifted by $\vec u + \vec v$. Then $f(\vec u) = |\mathcal P \cap \mathcal P'|$, $f(\vec v) = |\mathcal P' \cap \mathcal P''|$, and $f(\vec u + \vec v) = |\mathcal P \cap \mathcal P''|$, so the claim is equivalent to \[ |\mathcal P \setminus \mathcal P'| + |\mathcal P' \setminus \mathcal P''| + |\mathcal P\cap \mathcal P''| \geq 1, \]which is clear from $|\mathcal P| = 1$ and the fact that the union of these three sets contains $\mathcal P$: a point of $\mathcal P$ outside $\mathcal P'$ lies in the first set, a point of $\mathcal P \cap \mathcal P'$ outside $\mathcal P''$ lies in the second, and a point of $\mathcal P \cap \mathcal P''$ lies in the third.

Corollary: $f(\vec u_1) + \cdots + f(\vec u_n) \leq (n-1) + f(\vec u_1 + \cdots + \vec u_n)$.

Proof: Let $g(\vec u) = 1-f(\vec u)$; then the first claim is just $g(\vec u + \vec v) \leq g(\vec u) + g(\vec v)$, and the corollary is $g(\vec u_1+\cdots + \vec u_n) \leq g(\vec u_1)+\cdots+g(\vec u_n)$, which clearly follows from the first claim by induction.

Case 1: There is a unit disc $\mathcal D$ for which $\vert \mathcal P \cap \mathcal D\vert > \frac 12$. In this case, consider $\mathcal D_1, \mathcal D_2$, which are $\mathcal D$ translated by $2$ up and down, respectively. Note that $\mathcal D_1, \mathcal D_2, \mathcal D$ are disjoint, and since the part of $\mathcal P$ outside $\mathcal D$ has area less than $\frac 12$, there exists $i$ for which $\vert \mathcal P \cap \mathcal D_i\vert < \frac 14$. Suppose $\mathcal D_i$ is $\mathcal D$ translated by $\vec v$. Then $\mathcal P$ translated by $\vec v$ intersects $\mathcal D_i$ with area $> \frac 12$ (it contains the translate of $\mathcal P \cap \mathcal D$), while $\mathcal P$ intersects $\mathcal D_i$ with area $<\frac 14$, so at least $\frac 14$ of $\mathcal P$ translated by $\vec v$ lies outside $\mathcal P$, and $f(\vec v) < \frac 34$. Write $\vec v = 200\vec u$, so that $\Vert\vec u\Vert = \frac1{100}$; then $200f(\vec u) \leq 199+f(\vec v) < 199 + \frac 34$, so $f(\vec u) < \frac{799}{800}$.

Case 2: For all unit discs $\mathcal D$, $\vert \mathcal P \cap \mathcal D\vert \leq \frac 12$. Let $\mathfrak D$ denote the unit disc centered at the origin, and let $\chi(\vec p)$ be zero if $\vec p$ is outside $\mathcal P$, and one otherwise. Then this case says that for every $\vec p$, \[ \iint_{\mathfrak D} \chi(\vec p + \langle x, y\rangle) \, dA \leq \frac 12. \]Integrating over all $\vec p$ in $\mathcal P$, we have \[ \iint_{\mathcal P}\iint_{\mathfrak D} \chi(\vec p + \langle x, y\rangle) \, dA\, dA' \leq \iint_{\mathcal P} \frac 12 \, dA' = \frac 12. \]Swapping the order of integration, \[ \frac 12 \geq \iint_{\mathfrak D} \iint_{\mathcal P} \chi(\vec p+\langle x,y\rangle)\,dA'\,dA = \iint_{\mathfrak D} f(\langle x,y\rangle)\,dA. \]So there exists $\vec w = \langle x,y\rangle \in \mathfrak D$ such that $f(\vec w) \leq \frac 1{2\pi}$, or else the integral over $\mathfrak D$ (which has area $\pi$) would exceed $\frac 12$. On the other hand, since $\Vert \vec w\Vert \leq 1$, there exist $\vec u_1, \ldots, \vec u_{100}$, each of length $\frac 1{100}$, with sum $\vec w$. Then \[ f(\vec u_1) + \cdots + f(\vec u_{100}) \leq 99+f(\vec w) \leq 99 + \frac 1{2\pi}, \]so one of the $f(\vec u_i)$ is at most $\frac{99+\frac{1}{2\pi}}{100}$.

Thus, $c = \max\left(\frac{799}{800}, \frac{99+\frac{1}{2\pi}}{100}\right)$ works. Remark: I think the first case is actually not necessary, and the trivial bound that $\mathcal P$ intersects every unit disc in area at most $1$ is already sufficient.
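For a quick numerical sanity check (not part of the argument; this is my own throwaway sketch, assuming the shapely library is available and using an L-shaped test polygon of my own choosing), one can verify the subadditivity claim on a concrete non-convex polygon and evaluate the resulting constant:

```python
# Sketch: check f(u) + f(v) - 1 <= f(u + v) on one concrete area-1 polygon,
# and evaluate c = max(799/800, (99 + 1/(2*pi))/100) from the solution above.
from math import pi
from shapely.geometry import Polygon
from shapely.affinity import translate

s = 3 ** -0.5  # scale an L-shape of area 3 down to area 1
P = Polygon([(0, 0), (2 * s, 0), (2 * s, s), (s, s), (s, 2 * s), (0, 2 * s)])

def f(v):
    """Area of the intersection of P with P shifted by v."""
    return P.intersection(translate(P, xoff=v[0], yoff=v[1])).area

u, v = (0.30, 0.10), (-0.05, 0.25)
w = (u[0] + v[0], u[1] + v[1])
assert abs(P.area - 1) < 1e-9
assert f(u) + f(v) - 1 <= f(w) + 1e-9   # the subadditivity claim

c = max(799 / 800, (99 + 1 / (2 * pi)) / 100)
print(f(u), f(v), f(w), c)              # c = 0.99875 < 1
```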
26.06.2018 20:03
The following solution is due to Brian Lawrence. We will prove the result in the generality of any measurable set $\mathcal{P}$ (rather than a polygon). For a vector $v$ in the plane, write $\mathcal{P} + v$ for the translate of $\mathcal{P}$ by $v$. Suppose $\mathcal{P}$ has area $1$, and $\varepsilon > 0$ is a constant such that for any translate $\mathcal{Q} = \mathcal{P} + v$, where $v$ has length exactly $\frac{1}{100}$, the intersection of $\mathcal{P}$ and $\mathcal{Q}$ has area at least $1 - \varepsilon$. The problem asks us to prove a lower bound on $\varepsilon$.

Lemma: Fix a sequence of $n$ vectors $v_1$, $v_2$, \dots, $v_n$, each of length $\frac{1}{100}$. A grasshopper starts at a random point $x$ of $\mathcal{P}$, and makes $n$ jumps to $x + v_1 + \dots + v_n$. Then it remains in $\mathcal{P}$ with probability at least $1 - n\varepsilon$. Proof. In order for the grasshopper to leave $\mathcal{P}$ at step $i$, the grasshopper's position before step $i$ must be inside the difference set $\mathcal{P} \backslash (\mathcal{P} - v_i)$. Since this difference set has area at most $\varepsilon$, the probability the grasshopper leaves $\mathcal{P}$ at step $i$ is at most $\varepsilon$. Summing over the $n$ steps, the probability that the grasshopper ever manages to leave $\mathcal{P}$ is at most $n \varepsilon$. $\blacksquare$

Corollary: Fix a vector $w$ of length at most $8$. A grasshopper starts at a random point $x$ of $\mathcal{P}$, and jumps to $x + w$. Then it remains in $\mathcal{P}$ with probability at least $1 - 800 \varepsilon$. Proof. Apply the previous lemma with $800$ jumps. Any vector $w$ of length at most $8$ can be written as $w = v_1 + v_2 + \cdots + v_{800}$, where each $v_i$ has length exactly $\frac{1}{100}$. $\blacksquare$

Now consider the process where we select a random starting point $x \in \mathcal P$ for our grasshopper, and a random vector $w$ of length at most $8$ (sampled uniformly from the closed disk of radius $8$). Let $q$ denote the probability of staying inside $\mathcal P$; we will bound $q$ from above and below. On the one hand, suppose we pick $w$ first. By the previous corollary, $q \ge 1 - 800\varepsilon$ (irrespective of the chosen $w$). On the other hand, suppose we pick $x$ first. Then the possible landing points $x + w$ are uniformly distributed over the closed disk of radius $8$ centered at $x$, which has area $64\pi$. The probability of landing in $\mathcal P$ is certainly at most $\frac{[\mathcal P]}{64\pi}$. Consequently, we deduce \[ 1 - 800\varepsilon \le q \le \frac{[\mathcal P]}{64\pi} \implies \varepsilon \ge \frac{1 - \frac{[\mathcal P]}{64\pi}}{800} > 0.001 \]as desired.

Remark: The choice of $800$ jumps is only for concreteness; any constant $n$ for which $\pi(n/100)^2 > 1$ works. I think $n = 98$ gives the best bound following this approach.
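Small numerical footnote (my own, not part of the solution): running the same argument with $n$ jumps of length $\frac1{100}$ and a disk of radius $\frac n{100}$ gives $\varepsilon \ge \frac{1 - \frac{10^4}{\pi n^2}}{n}$, and a quick brute-force search confirms the remark that $n = 98$ maximizes this bound.

```python
from math import pi

def eps_lower_bound(n):
    """Bound from n jumps of length 1/100: 1 - n*eps <= q <= 1 / (pi * (n/100)^2)."""
    disk_area = pi * (n / 100) ** 2
    return (1 - 1 / disk_area) / n if disk_area > 1 else float("-inf")

best_n = max(range(1, 2001), key=eps_lower_bound)
print(best_n, eps_lower_bound(best_n))  # 98, about 0.0068
print(eps_lower_bound(800))             # about 0.00124, the bound used above
```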
26.06.2018 20:14
Here's a conceptually simple solution: We use the probabilistic method. Let $f(\theta)$ be the area of $\mathcal P \cap \mathcal Q$ for the translation by $\frac1{100}$ in direction $\theta$. If we can show that $\mathbb E \, f(\theta) < c$, then there exists some $\theta_0$ such that $f(\theta_0) < c$.

Let $A_0$ be a uniformly random point in $\mathcal P$, and let $A_1$ be a random point at distance $1/100$ from $A_0$, in a uniformly random direction. It's easy to show that $$\mathbb E\, f(\theta) = \mathbb P(A_1 \in \mathcal P).$$Assume for the sake of contradiction that we cannot bound $\mathbb P(A_1 \in \mathcal P)$ by some constant $c < 1$; in other words, we can make this probability arbitrarily close to $1$. Now let $A_2$ be a random point at distance $1/100$ from $A_1$. Since the distribution of $A_1$ has probability density at most $1$ at every point (it is the uniform distribution on the area-$1$ region $\mathcal P$ averaged over a circle of radius $1/100$), we can also make $\mathbb P(A_2 \in \mathcal P)$ approach $1$. Continue this process, defining points $A_3, A_4, \dots, A_{100}$ analogously, each an independent random point at distance $1/100$ from the previous one.

Next, fix $A_0$. The probability density of $A_{100}$ is positive at all points of the disc of radius $1$ centered at $A_0$, so for any fixed value of $A_0$, the probability that $A_{100}$ lies in $\mathcal P$ is bounded above by some constant $c < 1$ (independent of $\mathcal P$ and $A_0$). Taking the expectation of this over all values of $A_0$, we see that $\mathbb P(A_{100} \in \mathcal P) \le c$, which is a contradiction. $\Box$
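To see the random walk in action on a concrete extreme example (a rough Monte Carlo illustration of mine, not a proof): for a $100\times\frac1{100}$ rectangle of area $1$, even a single random step of length $\frac1{100}$ escapes with probability roughly $\frac2\pi$, and after $100$ independent steps the endpoint is spread over a vertical range of order $0.1$, so it escapes with probability close to $1$.

```python
import math, random

W, H = 100.0, 0.01  # thin rectangle [0, W] x [0, H] of area 1
def inside(x, y):
    return 0.0 <= x <= W and 0.0 <= y <= H

def escape_probability(steps, trials=20000):
    """Fraction of walks (independent steps of length 1/100 in uniform directions)
    whose endpoint lies outside the rectangle."""
    escaped = 0
    for _ in range(trials):
        x, y = random.uniform(0, W), random.uniform(0, H)
        for _ in range(steps):
            t = random.uniform(0, 2 * math.pi)
            x += 0.01 * math.cos(t)
            y += 0.01 * math.sin(t)
        escaped += not inside(x, y)
    return escaped / trials

print(escape_probability(1))    # roughly 0.64 (about 2/pi) for this extreme shape
print(escape_probability(100))  # close to 1
```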
03.07.2018 23:02
@v_Enhance: Nice solution. I know that the most important thing in a competition is to find a solution. There is no time to think about some clarifications. You can view the remarks below just as some shared thoughts, not in any way as a criticism. v_Enhance wrote: Lemma: Fix a sequence of $n$ vectors $v_1$, $v_2$, \dots, $v_n$, each of length $\frac{1}{100}$. A grasshopper starts at a random point $x$ of $\mathcal{P}$, and makes $n$ jumps to $x + v_1 + \dots + v_n$. Then it remains in $\mathcal{P}$ with probability at least $1 - n\varepsilon$. Proof. In order for the grasshopper to leave $\mathcal{P}$ at step $i$, the grasshopper's position before step $i$ must be inside the difference set $\mathcal{P} \backslash (\mathcal{P} - v_i)$. Since this difference set has area at most $\varepsilon$, the probability the grasshopper leaves $\mathcal{P}$ at step $i$ is at most $\varepsilon$. Summing over the $n$ steps, the probability that the grasshopper ever manages to leave $\mathcal{P}$ is at most $n \varepsilon$. $\blacksquare$ I risk being too pedantic, since the claim is obvious. But that probabilistic approach in fact uses a specific property of the translation, without mentioning it. Consider a transformation $T$ (in our case it's a translation). Suppose that only a small area $S\subset P$ leaves $P$ under that transformation, with $|S|/|P|=\varepsilon$. It indeed means that the probability a point $x\in P$ leaves $P$ under $T$ is $\varepsilon$. But it doesn't mean that the corresponding probability that this happens at step $i$ is also $\varepsilon$. Indeed, suppose there is a big area $S'\subset P$ such that $S=T(S')$. Then $T(T(S'))$ is outside $P$, so the probability of leaving at the second step may be as close to $1$ as we want (provided $T$ is not as well behaved as a translation). I mean that the whole argument strongly depends on the transformation $T$ (at least, it should preserve area), and a probabilistic argument should use that fact, though implicitly. What happens in fact is that we first prove the obvious claim purely geometrically in our minds, and after that it is dressed up in probabilistic language; the latter is redundant.

Further, we can avoid the probabilistic language altogether. Let $D$ be the unit disk. We want to show there is a translation $v\in D$ such that $|(P+v)\cap P|\leq 1-\varepsilon$, for some absolute constant $\varepsilon$. Suppose on the contrary it's not true. Consider the set $P\times D := \{(x,v)\mid x\in P, v\in D\}$. Let $P'\subset P\times D$ be defined by: $(x,v)\in P'$ iff $x+v\in P$. In fact the dimension of $P'$ is $4$, but that doesn't matter. Now, for any fixed $x_0\in P$, the "vertical" slice $P'_{x_0}:=\{(x_0, v)\mid (x_0,v)\in P'\}$ satisfies $m(P'_{x_0})\leq 1$, where $m$ denotes measure (in this case, area). It means: $$m(P')\leq \sup_{x_0\in P}m(P'_{x_0})\cdot m(P)\leq 1.$$ On the other hand, consider the "horizontal" slice $P'_{v_0}:= \{(x,v_0)\mid (x,v_0)\in P'\}$. By assumption we have $m(P'_{v_0})\geq 1-\varepsilon$. It means: $$m(P')\geq \inf_{v_0\in D}m(P'_{v_0})\cdot m(D)\geq (1-\varepsilon)\pi.$$ Combining the above inequalities yields $$(1-\varepsilon)\pi\leq 1.$$ But this fails for small $\varepsilon>0$, a contradiction. It is enough to take any $\varepsilon<1-\frac{1}{\pi}$.
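The inequality $m(P')\le 1$ says exactly that the average overlap $\frac1{\pi}\int_{D}|(P+v)\cap P|\,dv$ is at most $\frac1\pi$, which is easy to check numerically on a concrete polygon. A throwaway Monte Carlo sketch (mine; it assumes the shapely library and uses an area-$1$ L-shaped polygon as the test case):

```python
import random
from math import pi
from shapely.geometry import Polygon
from shapely.affinity import translate

s = 3 ** -0.5  # scale an L-shape of area 3 down to area 1
P = Polygon([(0, 0), (2 * s, 0), (2 * s, s), (s, s), (s, 2 * s), (0, 2 * s)])

def overlap(vx, vy):
    """Area of P intersected with P translated by (vx, vy)."""
    return P.intersection(translate(P, xoff=vx, yoff=vy)).area

total, samples = 0.0, 2000
for _ in range(samples):
    while True:  # rejection-sample a uniform point of the unit disk
        vx, vy = random.uniform(-1, 1), random.uniform(-1, 1)
        if vx * vx + vy * vy <= 1:
            break
    total += overlap(vx, vy)

print(total / samples, 1 / pi)  # the empirical average stays below 1/pi ~ 0.318
```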
03.07.2018 23:39
dgrozev wrote: @v_Enhance: Nice solution. I know that the most important thing in a competition is to find a solution. There is no time to think about some clarifications. You can view the remarks below just as some shared thoughts, not in any way as a criticism. As the organizer of this exam rather than a contestant by now, I am very much happy to hear comments like this one, and am sure the students are happy to see them as well. In particular, I think being pedantic on a problem like this is especially helpful, as it was a land-mine. At heart it really is a probability/analysis problem, which is an uncomfortable topic for most high-school students, and so any clarifications of unmentioned hypotheses can be instructive even if they are "obvious". +1.
30.03.2019 02:26
Is there something wrong with this? (Didn't use EV.) Suppose there exists a polygon $P$ such that no matter in which direction we move it (by $1/100$), the intersection has area at least $1-s$. Choose some direction $x$ to move $P$, and let $Q_i$ be the polygon obtained by translating $P$ in direction $x$ by $i/100$. Each time we translate, the area lost is at most $s$: we lose $Q_i \setminus (P\cup Q_1\cup Q_2\cup \dots\cup Q_{i-1})$, which is contained in $Q_i\setminus Q_{i-1}$, which has area at most $s$. Thus, after translating fewer than $1/s$ times, $Q_i$ will still have some intersection with $P$. Set $s=1/10^6$, and we get that if we translate P by $10^8$ in any direction, it will still share some area with the original P which is nonsensical.
30.03.2019 12:00
pandadude wrote: ... and we get that if we translate P by $10^8$ in any direction, it will still share some area with the original P which is nonsensical. Why not? Take a very thin annulus $A$ of radius $10^8$ whose area is $1$. Translating $A$ by $10^8$ in any direction, it will still share some area with $A$. Ok, $A$ is not a polygon, but it takes only a cosmetic change to make it one: cut a small piece off it, seal the two ends, and approximate the arcs very closely by a broken line.
31.03.2019 02:46
Oh right! thank you!
01.09.2020 10:44
The first step is to show that one can replace "distance $\tfrac 1{100}$" by "distance at most $\tfrac{1}{50}$". We denote the area of a polygon by $|\cdot|$. For each vector $\vec v$, define $$f(\vec v) = 1-|\mathcal{P}\cap (\mathcal{P} + \vec v)|.$$

Claim: $f(\vec u + \vec v) \leq f(\vec u) + f(\vec v)$ for all vectors $\vec u$, $\vec v$. Proof. Direct manipulation: \begin{align*} f(\vec u + \vec v) &= |\mathcal P \setminus (\mathcal P + (\vec u + \vec v))| \\ &= |\mathcal P \setminus (\mathcal P + \vec u)| + |(\mathcal P \cap (\mathcal P + \vec u)) \setminus (\mathcal P + (\vec u + \vec v))| \\ &\leq |\mathcal P \setminus (\mathcal P + \vec u)| + | (\mathcal P + \vec u) \setminus (\mathcal P + (\vec u + \vec v))| \\ &= f(\vec u) + f(\vec v), \end{align*}which gives the claim. $\blacksquare$

Thus, if $f(\vec u) > \varepsilon$ for some $\varepsilon >0$ and some vector $\vec u$ with magnitude at most $\tfrac{1}{50}$, we can find vectors $\vec{v_1}$ and $\vec{v_2}$, both of magnitude $\tfrac 1{100}$, with $\vec u = \vec v_1+\vec v_2$. From the claim, $f(\vec{v_1})+f(\vec{v_2}) > \varepsilon$, so one of $f(\vec{v_1})$ and $f(\vec{v_2})$ is greater than $\tfrac{\varepsilon}{2}$. This gives the desired reduction.

Now, we begin our main step. The main idea is to prove the following simplified version.

Lemma: Let $S$ be a finite subset of $\mathbb Z^+$, and $n\in\mathbb{Z}^+$. Then $$\frac{1}{n}\sum_{a=1}^n |S\setminus (S+a)| \geq \frac{\min \{|S|, n\}}{2},$$that is, if we select the distance $a$ of translation randomly, the expected number of elements it subtracts from $S$ is at least $\tfrac 12 \min\{|S|,n\}$.

Proof. Note that $n$ times the LHS is the number of pairs $(i,j)$ for which $0<j-i\leq n$, $i\in S$, but $j\notin S$ (call such pairs good). Let $k=\min\{|S|,n\}$ and let $a_1 < a_2 <\hdots <a_k$ be the $k$ greatest elements of $S$. Observe that $(a_1,u)$ is a good pair for at least $n-(k-1)$ choices of $u$, that $(a_2,u)$ is a good pair for at least $n-(k-2)$ choices of $u$, \dots, and that $(a_k,u)$ is a good pair for at least $n$ choices of $u$. Hence, there are at least $$nk - \frac{k(k-1)}{2} = \frac{k(2n-k+1)}{2} \geq \frac {nk}2$$good pairs as claimed. $\blacksquare$

As suggested by the lemma, we approximate the polygon by a very large grid. Grid the plane so that each $1\times 1$ square is divided into a $50N\times 50N$ grid of cells, where $N$ is very large; then the number of cells lying inside $\mathcal P$ is $2500N^2+o(N^2)$, and each cell has area $\tfrac{1}{2500N^2}$. Call a row or a column weird if it contains at most $N$ such cells and normal otherwise. We split into three cases.

If weird rows contain, in total, at least $49N^2$ cells, then by the lemma above (and linearity of expectation), we can select $k\in\{1,2,\hdots,N\}$ so that translating the polygon rightward by $\tfrac k{50N}$ (a distance at most $\tfrac1{50}$) subtracts at least $\tfrac{49N^2}{2}$ cells, i.e. an area of at least $\tfrac{49}{5000}$, completing the problem if $\varepsilon < \tfrac{49}{5000}$.

If there are at least $49N$ normal rows, then by the lemma above and linearity of expectation, we can again select $k\in\{1,2,\hdots,N\}$ and translate by $\tfrac k{50N}$ so that at least $49N\cdot\tfrac{N}{2}$ cells are subtracted, again an area of at least $\tfrac{49}{5000}$, completing the problem if $\varepsilon < \tfrac{49}{5000}$.

Assume that neither of the above is the case. By symmetry (translating upward instead), we may also assume that the weird columns contain at most $49N^2$ cells in total, and that there are at most $49N$ normal columns.

Observe that the number of cells in normal columns is at most \begin{align*} 49N\cdot 49N + (\#\text{cells in weird rows}) \le 2401N^2+49N^2 = 2450N^2, \end{align*}since a normal column meets each of the at most $49N$ normal rows in at most one cell. This is a contradiction: the number of cells in weird columns is at most $49N^2$, so the total number of cells would be at most $2499N^2$, whereas it is $2500N^2+o(N^2)$.

Thus some translation by a vector of magnitude at most $\tfrac1{50}$ subtracts area at least $\tfrac{49}{5000}$, so by the reduction some translation by a vector of magnitude exactly $\tfrac1{100}$ subtracts area at least $\tfrac{49}{10000}$; equivalently, $c = 1- \tfrac{49}{10000}$ works.
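The combinatorial lemma is easy to stress-test; here is a tiny brute-force check (my own, purely for reassurance) over random subsets of $\mathbb Z^+$:

```python
import random

def good_pair_count(S, n):
    """n times the LHS of the lemma: total count of elements of S not in S + a, over a = 1..n."""
    return sum(len(S - {x + a for x in S}) for a in range(1, n + 1))

random.seed(0)
for _ in range(10000):
    n = random.randint(1, 8)
    S = set(random.sample(range(1, 30), random.randint(1, 10)))
    assert 2 * good_pair_count(S, n) >= n * min(len(S), n)
print("lemma verified on 10000 random instances")
```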
09.06.2021 09:22
This problem is really an analysis problem (as Evan noted above), so it is natural to tackle it using analytic methods. In particular, applying a Fourier transform gives a conceptually simple solution. Compared to the approaches presented above, this solution requires less cleverness at the expense of a bit more "technology." Let $\mathcal P\subseteq\mathbb R^2$ be of unit measure. It is enough to show that the expected size of the intersection of $\mathcal P$ with $\mathcal P + v$, for a uniformly random vector $v$ of length $\delta = 1/100$, is bounded above by some universal constant $c < 1$. The rest of our proof proceeds as follows: We first write this expected value as a cross-correlation. We then apply the Fourier transform to obtain an equivalent expression, but in a more convenient form. Specifically, the "change in perspective" offered by the Fourier transform reduces our original problem to the easier problem of analyzing a particular function. The expected value in question can be written as $\mathbb E_v((\chi_{\mathcal P}\star\chi_{\mathcal P})(v))$, where $\chi_{\mathcal P}$ is the indicator function for $\mathcal P$ (which is $1$ for $x\in\mathcal P$ and $0$ otherwise) and $\star$ is the cross-correlation operator. Define $\phi$ to be the function that takes as input a function $f$ and returns $\mathbb E_v(f(v))$. We may express \[ \mathbb E_v((\chi_{\mathcal P}\star\chi_{\mathcal P})(v)) = \phi(\chi_{\mathcal P}\star\chi_{\mathcal P}). \]You can think of $\phi$ as a ring of Dirac deltas on the circle of radius $\delta$ centered at the origin; $\phi$ is what is known as a tempered distribution. We introduce $\phi$ so that we may take a Fourier transform. Let $\hat f$ denote the Fourier transform of $f$. Applying the definition of the Fourier transform, we get \[ \phi(\chi_{\mathcal P}\star\chi_{\mathcal P}) = \hat\phi(|\widehat{\chi_{\mathcal P}}|^2), \]since $f\star f$ is the Fourier transform of $|\hat f|^2$. (To be perfectly formal, we should really apply the Fourier transform to a "smoothed" version of the continuous function $\chi_{\mathcal P}\star\chi_{\mathcal P}$, since tempered distributions act only on Schwartz functions, but this is a small detail.) We now write $\hat\phi(|\widehat{\chi_{\mathcal P}}|^2)$ in a concrete form: Because $\phi$ is a distribution with compact support, its Fourier transform is a smooth function $\mathbb{R}^2\to\mathbb{R}$. So we can evaluate $\hat\phi$ as a smooth function at $\xi\in\mathbb R^2$. Abusing notation a bit, we have \[ \hat\phi(\xi) = \phi(x\mapsto e^{-2\pi i\langle x, \xi\rangle}) = \phi(x\mapsto e^{-2\pi i \|\xi\|_2x_1 }) = \frac{1}{2\pi}\int_0^{2\pi} e^{-2\pi i \delta\|\xi\|_2\cos\theta}\,d\theta,\qquad(*) \]where the second equality comes from the rotational symmetry of $\phi$. Define \[ J_0(t) = \frac{1}{2\pi} \int_0^{2\pi} e^{it\cos\theta}\,d\theta. \]Then the right-hand side of $(*)$ is $J_0(2\pi \delta\|\xi\|_2)$. Therefore, \[ \hat\phi(|\widehat{\chi_{\mathcal P}}|^2) = \int_{\mathbb R^2} J_0(2\pi \delta\|\xi\|_2) |\widehat{\chi_{\mathcal P}}(\xi)|^2 \,d\xi_1\,d\xi_2. \] The desired result follows if we can upper bound the above integral, which boils down to understanding $J_0$. From the definition of $J_0$, we have $J_0(t)\le 1$, with equality if and only if $t = 0$. Moreover, $J_0(t)\to 0$ as $t\to\infty$: \[ J_0(t) = \frac{4}{2\pi}\operatorname{Re}\int_0^{\pi/2} e^{it\cos\theta}\,d\theta = \frac{4}{2\pi}\operatorname{Re}\int_0^1 e^{itu}\cdot\frac{1}{\sqrt{1 - u^2}}\,du. 
\]That the right-hand side goes to $0$ as $t\to\infty$ now follows from the Riemann-Lebesgue lemma, since $1/\sqrt{1-u^2}$ is integrable on $[0, 1]$. Thus, there exists a $c' < 1$ such that $J_0(2\pi \delta\|\xi\|_2) \le c'$ for all $\xi$ except for a measure $1/2$ ball around $0$. Because $|\widehat{\chi_{\mathcal P}}|^2$ has $L^1$-norm $1$ and $L^\infty$-norm at most $1$, we may bound \[ \int_{\mathbb R^2} J_0(2\pi \delta\|\xi\|_2) |\widehat{\chi_{\mathcal P}}(\xi)|^2 \,d\xi_1\,d\xi_2\le \frac 12\cdot 1 + \frac 12\cdot c' < 1. \]Our chain of equalities means the same bound applies to $\mathbb E_v((\chi_{\mathcal P}\star\chi_{\mathcal P})(v))$. Remark. One might recognize the function $J_0$ as a Bessel function of the first kind.
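To put a number on the last step (my own numerical sketch, assuming numpy/scipy; not part of the argument): with $\delta = \tfrac1{100}$, the ball of area $\tfrac12$ has radius $\sqrt{1/(2\pi)}$, and the resulting constants come out as follows.

```python
import numpy as np
from scipy.special import j0  # Bessel function of the first kind, J_0

delta = 1 / 100
r0 = np.sqrt(1 / (2 * np.pi))   # radius of the ball of area 1/2 around the origin
t0 = 2 * np.pi * delta * r0     # smallest argument of J_0 occurring outside that ball

# J_0 decreases from 1 near 0 and its later oscillations stay well below 1,
# so the supremum of J_0 over [t0, infinity) is attained at t0; check on a grid:
t = np.linspace(t0, 200, 1_000_000)
c_prime = j0(t).max()
print(c_prime, j0(t0))          # both about 0.99984
print(0.5 + 0.5 * c_prime)      # an explicit constant c < 1 bounding the expected overlap
```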
02.12.2021 21:39
I don't understand. If you make the polygon a really long but really thin rectangle, and then translate it parallel to the larger side, one can make the area of the intersection go to 1. Can someone please help clarify?
03.12.2021 00:41
The problem says to prove there's a constant $c$ so that there exists a translation of $\frac{1}{100}$ making the shared area less than $c$. For a long, thin rectangle, translating parallel to the shorter side will do this.
29.03.2023 09:29
I am not sure if this is valid but we show $c=0.99$ works. It should work, but I would be grateful if anyone could find a potential pitfall. Consider two parallel lines with the polygon between them, and move them closer until each touches the polygon. Repeat this with a pair of parallel lines perpendicular to the earlier ones. The distance between the lines of one of the two pairs must be at least $1$, due to the area condition. WLOG these are $x=0$ and $x=k$ with $k\ge 1$. For each $0\le t\le k$, mark the point having the highest $y$-coordinate among points of the form $(t,y)$. We claim that translating along the $y$-axis upwards (by $\tfrac1{100}$) works. This follows by Cavalieri's Principle, since the "new" area is at least $k/100 \ge 0.01$ (it can also be viewed as adding the areas of parallelograms with bases all equal to $0.01$ and sum of heights $k\ge 1$).
29.03.2023 10:08
Unfortunately, it doesn't imply the "new" area is at least 0.01, because the polygon may not be convex. Let the point with highest $y$ coordinate be $y(t).$ Your conclusion assumes that all the points between $(t,y(t))$ and $(t, y(t)+0.01)$ are "new", that is, they are hit by a translated point of $\mathcal{P}.$ It's not true even if $\mathcal{P}$ is convex, for example if it is very narrow.
29.03.2023 11:04
dgrozev wrote: Unfortunately, it doesn't imply the "new" area is at least 0.01, because the polygon may not be convex. Let the point with highest $y$ coordinate be $y(t).$ Your conclusion assumes that all the points between $(t,y(t))$ and $(t, y(t)+0.01)$ are "new", that is, they are hit by a translated point of $\mathcal{P}.$ It's not true even if $\mathcal{P}$ is convex, for example if it is very narrow. Could you give an example? I wrote the "highest $y$-coordinate" thing to avoid the intersection of points translated upwards. If there were a point of $\mathcal P$ above the chosen point, it would have been the chosen point. I have tested many different shapes, and it would be helpful if you could show one. EDIT: Maybe I get you: there can be regions of $\mathcal P$ that are thin ($<0.01$ in thickness), so that the translated piece has smaller area than estimated. Interestingly, this means they do not intersect $\mathcal P$ (which superficially looks good). I am leaving the original proof as it is until we can find some conclusive evidence which shows the approach cannot be fixed. It might help later solvers. EDIT2: How about defining "areal efficiency" as the ratio of new area to original area of a region in $\mathcal P$? For thin regions, this is $100\%$, and for thick regions it is at least $1\%$ because the $x$-direction thickness of $\mathcal P$ cannot exceed $1$ while that of the translated portion is $0.01$. Hopefully this could help... EDIT3: I just realized the last line didn't make sense. Oh well for one problem.
16.04.2023 07:38
I'm five years late, but here's a cleaned up version of the solution that I submitted at the time. It proves the stronger claim that a random translation works in expectation, though posts #4 and #14 do this as well. It can be thought of as a hybrid of posts #4 and #12. Let $[-]$ denote area. Scale up so that $[\mathcal{P}] = 10^4$, the translation is of distance $1$, and we want to find some $c < 10^4$. Let $\mathbb{T}$ be the unit circle, $B_\delta(p)$ be the open ball of radius $\delta$ around a point $p \in \mathbb{R}^2$, and $A_\delta(p) = [\mathcal{P} \cap B_\delta(p)]$. Finally, let $\Delta\colon \mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R}$ be such that $\Delta(x,y) = 1$ if exactly one of $x,y$ is in $\mathcal{P}$, and $\Delta(x,y) = 0$ otherwise. Note that $\Delta(x,z) \leq \Delta(x,y) + \Delta(y,z)$. Observe that for a given $r \in \mathbb{T}$, $\int_{\mathbb{R}^2} \Delta(y, y+r)\,dy = 2[\mathcal{P} \setminus (\mathcal{P} + r)]$, so by averaging over $r \in \mathbb{T}$ it suffices to show that $\int_{\mathbb{R}^2 \times \mathbb{T}} \Delta(y,y+r)\,dy\,dr$ is bounded below by some positive absolute constant. Now consider three points $p_1, p_2, q$ that form an equilateral triangle of side length $1$ in the plane. Consider the map $F \colon B_{1/2}(q) \times \mathbb{T} \times \mathbb{T} \to \mathbb{R}^2 \times \mathbb{R}^2$ given by $F(y,r_1,r_2) = (y+r_1,y+r_2)$. Note that $F(q,p_1-q,p_2-q) = (p_1,p_2)$, and moreover that the differential $dF$ is invertible at this point. Thus, by the inverse function theorem, there is a neighborhood $U$ of $(p_1,p_2)$ (which we will take to be $B_\delta(p_1) \times B_\delta(p_2)$ for some $\delta < 1/2$) on which an inverse $G = (G_0,G_1,G_2)$ of $F$ exists. We may now deduce that \begin{align*} \MoveEqLeft A_\delta(p_1)(\pi \delta^2 - A_\delta(p_2)) + A_\delta(p_2)(\pi \delta^2 - A_\delta(p_1)) \\ &= \int_{U} \Delta(x_1, x_2)\,dx_1 \,dx_2 \\ &\leq \int_{U} \Delta(G_0(x_1,x_2), x_1) + \Delta(G_0(x_1,x_2),x_2)\,dx_1\,dx_2 \\ &= \int_{G(U)} (\Delta(y, y+r_1) + \Delta(y, y+r_2)) \lvert \det dF(y,r_1,r_2) \rvert \,dy\,dr_1\,dr_2 \\ &\leq \max_{y,r_1,r_2} \lvert \det dF(y,r_1,r_2) \rvert \int_{B_{1/2}(q) \times \mathbb{T} \times \mathbb{T}} \Delta(y, y+r_1) + \Delta(y, y+r_2) \,dy\,dr_1\,dr_2 \\ &= 4\pi \max_{y,r_1,r_2} \lvert \det dF(y,r_1,r_2) \rvert \int_{B_{1/2}(q) \times \mathbb{T}} \Delta(y, y+r) \,dy\,dr. \end{align*}It's easy to see that $\max_{y,r_1,r_2} \lvert \det dF(y,r_1,r_2) \rvert$ is finite. Also, since everything we did was translation-invariant, we conclude the following: Lemma. There exist absolute constants $C > 0$ and $0 < \delta < 1/2$ such that if $p_1,p_2,q$ form an equilateral triangle of side length $1$, \[\int_{B_{1/2}(q) \times \mathbb{T}} \Delta(y, y+r) \,dy\,dr \geq C(A_\delta(p_1)(\pi \delta^2 - A_\delta(p_2)) + A_\delta(p_2)(\pi \delta^2 - A_\delta(p_1))).\] The finish from here is relatively easy. First, consider a random translation of $\mathcal{P}$ in $[0,1]^2$. Then, the expected value of $\sum_{p\in \mathbb{Z}^2} A_{\delta}(p)$ is $10^4 \pi \delta^2$, so we may choose some translation for which this sum is at least $10^4 \pi \delta^2$. There are now two cases: Case 1. There is some $p \in \mathbb{Z}^2$ such that $A_\delta(p) \geq \delta^2$. In this case, since $A_\delta(p') = 0$ for $p'$ far away from the origin, we may find $p, p' \in \mathbb{Z}^2$ at distance $1$ such that $A_\delta(p) \geq \delta^2$ and $A_\delta(p') \leq \delta^2$. 
Then, by applying the Lemma, we may find a $q$ such that \[\int_{B_{1/2}(q) \times \mathbb{T}} \Delta(y, y+r) \,dy\,dr \geq C\delta^4.\] Case 2. For all $p \in \mathbb{Z}^2$, $A_\delta(p) \leq \delta^2$. In this case, we have that for all $p_1,p_2 \in \mathbb{Z^2}$ \[C(A_\delta(p_1)(\pi \delta^2 - A_\delta(p_2)) + A_\delta(p_2)(\pi \delta^2 - A_\delta(p_1))) \geq C\delta^2 A_\delta(p_1).\]Therefore, by the Lemma applied to triangles of the form $p, p+(1,0), p+(\tfrac{1}{2},\tfrac{\sqrt{3}}{2})$ we find that \begin{align*} \int_{\mathbb{R}^2 \times \mathbb{T}} \Delta(y, y+r)\,dy\,dr &\geq \sum_{p \in \mathbb{Z}^2}\int_{B_{1/2}(p + (\frac{1}{2}, \frac{\sqrt{3}}{2})) \times \mathbb{T}} \Delta(y, y+r) \,dy\,dr \\ &\geq \sum_{p \in \mathbb{Z}^2} C\delta^2 A_\delta(p) \\ &\geq 10^4\pi C\delta^4. \end{align*}This concludes the proof. Note that by scaling back we get $c = 1 - \Theta(10^{-4})$, though if one is more careful with the casework we can get $1 - \Theta(10^{-2})$ as well.
22.09.2023 23:34
Solved using 10% ARCH hint. Nice problem, but somewhat hard for a TSTST problem as this is somewhat topological. However, using the scaling as in MarkBcc's solution, this can be avoided and the problem viewed as a more down-to-earth one.

For any vector $\vec v$ and polygon $\mathcal{P}$, let $f(\mathcal{P}, \vec v)$ denote $[\mathcal{P} \cap (\mathcal{P}+\vec v)]$. Fix $\mathcal{P}$ of area $1$, and let $\varepsilon = 1 - \min_{\|\vec v\| = 1/100} f(\mathcal{P}, \vec v)$, so that $f(\mathcal{P}, \vec v) \ge 1-\varepsilon$ for every $\vec v$ of magnitude $\tfrac{1}{100}$. We want a positive absolute lower bound on $\varepsilon$, since an estimate $\varepsilon \ge \varepsilon_0$ means that $c = 1-\varepsilon_0$ works.

Claim: [additive bound] For vectors $\vec u$ and $\vec v$ and polygon $\mathcal{P}$, \[ f(\mathcal{P}, \vec u + \vec v) \ge f(\mathcal{P}, \vec u) + f(\mathcal{P}, \vec v) - [\mathcal{P}]. \]Proof. Note that \[ f(\mathcal{P}, \vec u + \vec v) = [\mathcal{P} \cap (\mathcal{P}+ \vec u + \vec v)] \ge [\mathcal{P}] - [\mathcal{P} \setminus (\mathcal{P} + \vec u)] - [\mathcal{P} \setminus (\mathcal{P} + \vec v)] = f(\mathcal{P}, \vec u) + f(\mathcal{P}, \vec v) - [\mathcal{P}], \]as desired.

Since $f(\mathcal{P}, \vec v_i) \ge 1-\varepsilon$ whenever $\|\vec v_i\| = \tfrac1{100}$, inductively applying the claim gives \[ f(\mathcal{P}, \vec v_1+\dots+ \vec v_n) \ge 1-n\varepsilon \]for any such vectors. Moreover, any vector $\vec u$ with $\|\vec u\|\le 1$ can be written as a sum of $100$ vectors of magnitude $\tfrac1{100}$, so \[ f(\mathcal{P}, \vec u) \ge 1-100\varepsilon \qquad \text{whenever } \|\vec u\| \le 1. \]

Now we use the following averaging to obtain a lower bound on $\varepsilon$. For a point $x$, let \[ \Delta(x) = [\mathcal{P} \cap \{\vec v: \|\vec v - \vec x\| \le 1\}].\]We consider \[ \frac{1}{\pi\,[\mathcal{P}]} \iint_{\mathcal{P}} \Delta(x) \ dA, \]which is the probability that $x + \vec u \in \mathcal{P}$ when $x$ is chosen uniformly in $\mathcal{P}$ and $\vec u$ uniformly in the unit disc. On one hand, this is the average of $f(\mathcal{P}, \vec u)$ over the unit disc, hence at least $1-100\varepsilon$; on the other hand, since $\Delta(x) \le [\mathcal{P}]$, it is at most $\frac{[\mathcal{P}]}{1^2 \cdot \pi}=\pi^{-1}$. So \[ 1-100\varepsilon \le \frac{1}{\pi}. \]Thus, $\varepsilon \ge 0.01\left(1-\tfrac{1}{\pi}\right)$, which is enough.
09.07.2024 03:00
Let $X$ be a uniformly random point in $\mathcal P$, and $v$ a uniformly random vector of magnitude $\tfrac1{100}$. Then clearly Prob$(X + v \in \mathcal P) = \mathbb{E}[|\mathcal P \cap \mathcal Q|]$, where $\mathcal Q = \mathcal P + v$, by linearity of expectation. If $\mathbb{E}[|\mathcal P \cap \mathcal Q|] \le c$ for some fixed $c$, then clearly there exists some vector $v$ with $\mathcal Q = \mathcal P + v$ for which $|\mathcal P \cap \mathcal Q| \le c$. Thus, we wish to show that if Prob$(X + v \in \mathcal P) = 1 - \rho$, then $\rho$ cannot get arbitrarily close to $0$ (it has a positive lower bound).

Let $N$ be a sufficiently large integer, to be chosen at the end. We will double count the probability that $X + v_1 + v_2 + \dots + v_N \in \mathcal P$ for randomly chosen vectors $v_i$ of magnitude $\tfrac1{100}$, $i = 1, 2, \dots, N$.

We first estimate the probability by fixing $X$ and choosing the $v_i$. The probability that $X + v_1 + \dots + v_i$ stays inside $\mathcal P$ for $i = 1, 2, \dots, k$ but leaves at step $k+1$ is $(1 - \rho)^k \cdot \rho$. Summing over all $k$, the probability that the walk ever leaves $\mathcal{P}$ is $\sum_{j=0}^{N-1} (1-\rho)^j\rho \leq N\rho$, so the desired probability is at least $1 - N\rho$.

Now instead fix the $v_i$ and randomly choose $X$. The locus of possible landing points $X + \sum_i v_i$ is contained in the set $\mathcal R$ of points at most $\frac{N}{100}$ away from some point of $\mathcal P$. Then the probability of $X + \sum v_i$ landing in $\mathcal P$ is at most $\frac{[\mathcal P]}{[\mathcal R]}$ by comparing densities, and $\frac{[\mathcal P]}{[\mathcal R]}$ is at most $\frac{1}{\frac{N^2\pi}{10^4}}$, since $\mathcal R$ contains a disc of radius $\frac{N}{100}$. So now we get the bound $1 - N\rho \leq \frac{10^4}{N^2\pi}$, which rewrites to $\frac{1 - \frac{10^4}{N^2\pi}}{N} \leq \rho$. For sufficiently large $N$ the left-hand side is positive, so we are finished: $\rho$ is bounded below by a positive constant, equivalently $1 - \rho$ is bounded away from $1$.
14.09.2024 23:39
Let $\mathcal P (x, y)$ be equal to $1$ if $(x, y)$ is inside $\mathcal P$ and $0$ otherwise. Let \[S = \int_{0}^{100} \int_{0}^{100} \int_{-\infty}^\infty \int_{-\infty}^\infty \mathcal P(x, y)\mathcal P(x+a, y+b)\, dy\, dx\, db\, da.\]Then, \begin{align*} S &= \int_{-\infty}^\infty \int_{-\infty}^\infty \mathcal P(x, y) \int_{0}^{100} \int_{0}^{100} \mathcal P(x+a, y+b)\, db\, da\, dy\, dx\\ &\le \int_{-\infty}^\infty \int_{-\infty}^\infty \mathcal P(x, y)\, dy\, dx\\ &= 1, \end{align*}since the inner double integral is at most the area of $\mathcal P$. So, there exist $a, b \in (0, 100)$ such that \[\int_{-\infty}^\infty \int_{-\infty}^\infty \mathcal P(x, y)\mathcal P(x+a, y+b)\, dy\, dx\le \frac{1}{10^4}.\]So, if $\mathcal R$ is $\mathcal P$ shifted by $(-a, -b)$, then $[\mathcal P \cap \mathcal R] \le \frac{1}{10^4}$.

Let $N = 2\cdot 10^4$ and $c = 1-\frac{1-\frac{1}{10^4}}{N}$.

Claim: There is a way to shift $\mathcal{P}$ by $\frac 1{100}$ such that the intersection of $\mathcal P$ and the new shape has area at most $c$.

Proof: Assume toward a contradiction this is false. Let $(0, 0) = (a_0, b_0), (a_1, b_1),\dots, (a_N, b_N) = (a, b)$ be points such that the distance from $(a_i, b_i)$ to $(a_{i+1}, b_{i+1})$ is $\frac 1{100}$; such points exist because the distance from $(0,0)$ to $(a,b)$ is at most $100\sqrt 2 < \frac{N}{100} = 200$. Then, \begin{align*} [(\mathcal P - (a_i, b_i))\cap (\mathcal P - (a_{i+1}, b_{i+1}))] &> c\\ [(\mathcal P - (a_i, b_i))\setminus (\mathcal P - (a_{i+1}, b_{i+1}))] &< \frac{1-\frac{1}{10^4}}{N}\\ \sum_{i = 0}^{N-1} [(\mathcal P - (a_i, b_i))\setminus (\mathcal P - (a_{i+1}, b_{i+1}))] &< 1-\frac{1}{10^4}\\ [\mathcal P \setminus (\mathcal P - (a, b))] &< 1-\frac{1}{10^4}\\ [\mathcal P \cap \mathcal R] &> \frac{1}{10^4}, \end{align*}a contradiction.
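Quick arithmetic check of the constants here (mine, just for reassurance):

```python
from math import sqrt

N = 2 * 10**4
c = 1 - (1 - 1 / 10**4) / N
# A chain of N steps of length 1/100 has total length N/100 = 200, which is enough to
# reach any (a, b) with 0 < a, b < 100, since such a point is at distance <= 100*sqrt(2).
assert 100 * sqrt(2) <= N / 100
print(c)  # 0.999950005..., an admissible constant strictly less than 1
```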