Alice and Bob play a game. Bob starts by picking a set $S$ consisting of $M$ vectors of length $n$ with entries either $0$ or $1$. Alice picks a sequence of numbers $y_1\le y_2\le\dots\le y_n$ from the interval $[0,1]$, and a choice of real numbers $x_1,x_2,\dots,x_n\in \mathbb{R}$. Bob wins if he can pick a vector $(z_1,z_2,\dots,z_n)\in S$ such that $$\sum_{i=1}^n x_iy_i\le \sum_{i=1}^n x_iz_i,$$otherwise Alice wins. Determine the minimum value of $M$ so that Bob can guarantee a win. Proposed by DVDthe1st
Problem
Source: SMO open 2024 Q4
Tags: inequalities, Sequence
sarjinius
07.08.2024 07:51
$\boxed{n + 1}$
Let $\mathbf{v_i}$ denote the vector with $n - i$ zeros followed by $i$ ones, so $\mathbf{v_0} = (0, \dots, 0, 0, 0), \mathbf{v_1} = (0, \dots, 0, 0, 1), \mathbf{v_2} = (0, \dots, 0, 1, 1), \dots, \mathbf{v_n} = (1, \dots, 1, 1, 1)$, and let $T = \{\mathbf{v_0}, \mathbf{v_1}, \dots, \mathbf{v_n}\}$.
Claim 1: For Bob to win, we must have $T \subseteq S$.
Proof: For any $0 \le i \le n$, Alice can pick $x_1 = x_2 = \dots = x_{n-i} = -1$, $x_{n-i+1} = x_{n-i+2} = \dots = x_n = 1$, and $\mathbf{y} = \mathbf{v_i}$. If we let $\mathbf{x} = (x_1, x_2, \dots, x_n)$, for any $\mathbf{z} \in \{0, 1\}^n$, we have $\mathbf{x} \cdot \mathbf{y} = i \ge \mathbf{x} \cdot \mathbf{z}$, with equality if and only if $\mathbf{z} = \mathbf{y}$. Hence $\mathbf{v_i}$ must be included in $S$ for Bob to win.
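Not part of the proof, but as a quick sanity check one can brute-force a small $n$ in Python; the snippet below (a throwaway illustration, with ad hoc variable names) confirms that for Alice's choice of $\mathbf{x}$ and $\mathbf{y} = \mathbf{v_i}$, the only $\mathbf{z} \in \{0, 1\}^n$ with $\mathbf{x} \cdot \mathbf{z} \ge \mathbf{x} \cdot \mathbf{y}$ is $\mathbf{v_i}$ itself.

```python
# Brute-force sanity check of Claim 1 (illustration only, not part of the proof):
# for each i, with x = (-1,...,-1, 1,...,1) and y = v_i, the only z in {0,1}^n
# satisfying x.z >= x.y = i is z = v_i, so Bob's set S must contain v_i.
from itertools import product

n = 4  # any small n works for this check
for i in range(n + 1):
    x = [-1] * (n - i) + [1] * i
    v_i = tuple([0] * (n - i) + [1] * i)  # Alice plays y = v_i, so x.y = i
    winners = [z for z in product([0, 1], repeat=n)
               if sum(a * b for a, b in zip(x, z)) >= i]
    assert winners == [v_i], (i, winners)
print("Claim 1 verified by brute force for n =", n)
```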
Claim 2: Bob can guarantee a win with $S = T$.
Proof: Note that for any $\mathbf{x}$ and $\mathbf{y}$ Alice chooses, we have $\mathbf{y} = y_1\mathbf{v_n} + (y_2 - y_1)\mathbf{v_{n-1}} + \dots + (y_n - y_{n-1})\mathbf{v_1}$: the $j$-th coordinate of the right-hand side telescopes to $y_1 + (y_2 - y_1) + \dots + (y_j - y_{j-1}) = y_j$. Hence $\mathbf{x} \cdot \mathbf{y} = y_1(\mathbf{x} \cdot \mathbf{v_n}) + (y_2 - y_1)(\mathbf{x} \cdot \mathbf{v_{n-1}}) + \dots + (y_n - y_{n-1})(\mathbf{x} \cdot \mathbf{v_1})$, where the coefficients $y_1, y_2 - y_1, \dots, y_n - y_{n-1}$ are all nonnegative and sum to $y_n \le 1$.
If $\mathbf{x} \cdot \mathbf{v_1}, \mathbf{x} \cdot \mathbf{v_2}, \dots, \mathbf{x} \cdot \mathbf{v_n} \le 0$, then $\mathbf{x} \cdot \mathbf{y} \le 0$ by the nonnegativity of the coefficients, and Bob can pick $\mathbf{v_0}$ to win, since $\mathbf{x} \cdot \mathbf{v_0} = 0$. Otherwise, Bob picks the $\mathbf{v_i}$ ($1 \le i \le n$) maximizing $\mathbf{x} \cdot \mathbf{v_i}$, so that $\mathbf{x} \cdot \mathbf{v_i} > 0$. Then we have
\begin{align*}
\mathbf{x} \cdot \mathbf{y} &= y_1(\mathbf{x} \cdot \mathbf{v_n}) + (y_2 - y_1)(\mathbf{x} \cdot \mathbf{v_{n-1}}) + \dots + (y_n - y_{n-1})(\mathbf{x} \cdot \mathbf{v_1}) \\ &\le y_1(\mathbf{x} \cdot \mathbf{v_i}) + (y_2 - y_1)(\mathbf{x} \cdot \mathbf{v_i}) + \dots + (y_n - y_{n-1})(\mathbf{x} \cdot \mathbf{v_i}) \\
&= y_n(\mathbf{x} \cdot \mathbf{v_i}) \le \mathbf{x} \cdot \mathbf{v_i},
\end{align*}where the last inequality uses $0 \le y_n \le 1$ and $\mathbf{x} \cdot \mathbf{v_i} > 0$. So Bob wins by picking $\mathbf{v_i}$.
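The strategy from Claim 2 is also easy to test numerically. Here is a minimal sketch (the function name `bob_reply` and the random testing loop are ad hoc, not part of the solution): Bob answers with the $\mathbf{v_i}$ maximising $\mathbf{x} \cdot \mathbf{v_i}$ when that maximum is positive, and with $\mathbf{v_0}$ otherwise.

```python
# Minimal numerical sketch of Bob's strategy from Claim 2 (function name is ad hoc):
# Bob answers with the v_i (1 <= i <= n) maximising x.v_i if that maximum is positive,
# and with v_0 = (0,...,0) otherwise; the assert checks x.y <= x.z on random inputs.
import random

def dot(u, w):
    return sum(a * b for a, b in zip(u, w))

def bob_reply(x):
    n = len(x)
    vs = [[0] * (n - i) + [1] * i for i in range(n + 1)]  # v_0, v_1, ..., v_n
    best = max(vs[1:], key=lambda v: dot(x, v))
    return best if dot(x, best) > 0 else vs[0]

for _ in range(1000):
    n = random.randint(1, 6)
    x = [random.uniform(-5, 5) for _ in range(n)]
    y = sorted(random.uniform(0, 1) for _ in range(n))    # y_1 <= ... <= y_n in [0, 1]
    assert dot(x, y) <= dot(x, bob_reply(x)) + 1e-9
print("Claim 2 strategy verified on random inputs")
```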
By Claim 1, Bob needs at least $n + 1$ vectors to win, and by Claim 2, Bob can guarantee a win with $n + 1$ vectors. Therefore, the minimum value of $M$ such that Bob can guarantee a win is $\boxed{n + 1}$. $\Box$
jp62
25.08.2024 20:50
$M=n+1$.
Bob can win with the set of vectors $\mathbf v_i=(\underbrace{0,\dots,0}_i,\underbrace{1,\dots,1}_{n-i})$ for $0\leq i\leq n$.
This is because the $\mathbf y$ vector is a convex combination of the $\mathbf v_i$, i.e. a weighted average with nonnegative weights summing to $1$. In particular, the $\mathbf y$ vector can be expressed as $\sum_{0\leq i\leq n}(y_{i+1}-y_i)\mathbf v_i$, setting $y_0=0$ and $y_{n+1}=1$; these weights are nonnegative since the $y_i$ are nondecreasing and lie in $[0,1]$, and they telescope to $1$.
Therefore, by the linearity of the dot product, the left-hand side $\mathbf x\cdot \mathbf y$ is the same convex combination of the numbers $\mathbf x\cdot \mathbf v_i$, and a convex combination of real numbers is at most their maximum, so for some $i$ we have $\mathbf x\cdot\mathbf v_i\geq\mathbf x\cdot \mathbf y$. Choosing $\mathbf z=\mathbf v_i$ works.
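The decomposition above can be checked numerically as well; the short snippet below (illustration only, with ad hoc names) verifies coordinatewise that $\mathbf y=\sum_{0\leq i\leq n}(y_{i+1}-y_i)\mathbf v_i$ and that the weights sum to $1$.

```python
# Numerical check of y = sum_i (y_{i+1} - y_i) v_i with y_0 = 0, y_{n+1} = 1,
# where v_i has i leading zeros followed by n - i ones (illustration only).
import random

n = 5
y = sorted(random.uniform(0, 1) for _ in range(n))
ext = [0.0] + y + [1.0]                               # y_0, y_1, ..., y_n, y_{n+1}
vs = [[0] * i + [1] * (n - i) for i in range(n + 1)]  # v_0, ..., v_n
weights = [ext[i + 1] - ext[i] for i in range(n + 1)] # nonnegative, telescoping
combo = [sum(w * v[j] for w, v in zip(weights, vs)) for j in range(n)]
assert all(abs(combo[j] - y[j]) < 1e-9 for j in range(n))
assert abs(sum(weights) - 1.0) < 1e-9                 # weights sum to 1
print("Convex-combination decomposition verified")
```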
For the lower bound, suppose $\mathbf v_i=(\underbrace{0,\dots,0}_i,\underbrace{1,\dots,1}_{n-i})\not\in S$ for some $0\le i\le n$.
Let Alice pick $\mathbf x=(\underbrace{-1,\dots,-1}_i,\underbrace{1,\dots,1}_{n-i})$. (Be careful not to use zeros here!)
Then over all $\mathbf z\in\{0,1\}^n$, the product $\mathbf x\cdot \mathbf z$ is uniquely maximised at $\mathbf z=\mathbf v_i$: changing any of the first $i$ coordinates from $0$ to $1$ strictly decreases the value, and changing any of the last $n-i$ coordinates from $1$ to $0$ strictly decreases the value.
However, the vector $\mathbf z=\mathbf v_i$ is not available to Bob, yet $\mathbf y=\mathbf v_i$ is available to Alice; if she plays it, every $\mathbf z\in S$ satisfies $\mathbf x\cdot\mathbf z<\mathbf x\cdot\mathbf y$, and thus Bob cannot win.
Observe that the solution uses nothing beyond vectors and dot products, apart from recasting $\mathbf y$ in its more natural vector form.
After seeing that the possible space of $\mathbf y$ vectors is a simplex in $n$-dimensional space, all that remains is to find its $n+1$ vertices and express everything as a convex combination of these.
Both parts follow from the maximum of a linear function over the simplex being attained at one of these vertices, and in the necessity part we must also ensure that this maximum is unique.
DVDthe1st
22.10.2024 18:09
This problem comes from a trick you need for a learning-theoretic complexity bound. More details in this blogpost.