Nina and Tadashi play the following game. Initially, a triple $(a, b, c)$ of nonnegative integers with $a+b+c=2021$ is written on a blackboard. Nina and Tadashi then take moves in turn, with Nina first. A player making a move chooses a positive integer $k$ and one of the three entries on the board; then the player increases the chosen entry by $k$ and decreases the other two entries by $k$. A player loses if, on their turn, some entry on the board becomes negative. Find the number of initial triples $(a, b, c)$ for which Tadashi has a winning strategy.
Tags: combinatorics
13.03.2021 05:47
17.03.2021 21:14
I wrote this problem. I believe the following extension holds, though I don't know of a proof. Would be interested if anyone finds one!
18.03.2021 05:24
The rest of my solution is the same as superpi's, and here is an alternate way to show that if at least one of $x\&y$, $y\&z$, $z\&x$ is nonzero, then $x\ge (x+y)\&(x+z)$, $y\ge (y+x)\&(y+z)$, $z\ge (z+x)\&(z+y)$ cannot hold simultaneously. Suppose otherwise, and let $(a,b,c)$ be such a triple with $a+b+c$ minimal; at least two of $a,b,c$ share a bit. WLOG $a>b>c$ (if two are equal, we're trivially done). Note $a+c, b+c>2c$, so $a+c$ has bits too large to appear in $c$; moreover $a+c$ and $b+c$ can share none of those bits, since $(c+a)\&(c+b)\le c$. Let $k$ be the largest bit $c$ has and $l$ the largest bit $a+c$ has. I claim $2^l\le a,a+c,a+b<2^{l+1}$. Note $a+b>a+c\ge 2^l$. Also, $a+b=(a+c)+(b+c)-2c \le \sum\limits_{i=k+1}^{l} 2^i + 2\sum\limits_{i=0}^k 2^i - 2\cdot 2^k$. The $\sum\limits_{i=k+1}^{l} 2^i$ term comes from the fact that at most one of $a+c$, $b+c$ can have any given bit $2^m$ with $m>k$, so those bits contribute at most $\sum\limits_{i=k+1}^{l} 2^i$ in total. This bound equals $2^{l+1}-2^{k+1} + 2(2^{k+1}-1) - 2^{k+1} = 2^{l+1}-2 < 2^{l+1}$. It follows that $a+b$ and $a+c$ both have the $2^l$ bit on, so $(a+b)\&(a+c)\ge 2^l$ and the condition $a\ge (a+b)\&(a+c)$ gives $a\ge 2^l$; since $a<a+c<2^{l+1}$, the number $a$ must have the $2^l$ bit on as well. Now consider $(a-2^l,b,c)$. This simply removes the leading $2^l$ bit from each of $a,a+b,a+c$ (neither $b$ nor $c$ can have that bit, or else $a+b$ or $a+c$ would reach $2^{l+1}$). The overlapping bit still exists, since only $a$ has the bit $2^l$. Therefore $(a-2^l,b,c)$ satisfies the same conditions $(a,b,c)$ does, contradicting the minimality of $a+b+c$.
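Not part of the proof, but the lemma is easy to sanity-check by machine. Here is a small Python brute force (the bound and helper names are mine, purely for illustration) verifying that whenever two of $x,y,z$ share a bit, the three inequalities cannot all hold:

```python
# Brute-force sanity check of the lemma: if some pair of x, y, z shares a
# binary digit, then x >= (x+y)&(x+z), y >= (y+x)&(y+z), z >= (z+x)&(z+y)
# cannot all hold.  The bound LIMIT is arbitrary.
LIMIT = 64

def shares_a_bit(x, y, z):
    return ((x & y) | (y & z) | (z & x)) != 0

def all_inequalities_hold(x, y, z):
    return (x >= (x + y) & (x + z)
            and y >= (y + x) & (y + z)
            and z >= (z + x) & (z + y))

for x in range(LIMIT):
    for y in range(LIMIT):
        for z in range(LIMIT):
            if shares_a_bit(x, y, z):
                assert not all_inequalities_hold(x, y, z)
print("lemma checked for all x, y, z <", LIMIT)
```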
11.07.2021 15:07
Got occupied with this for almost 4-5 days. Looking back on how I did it, it's still not really clear whether more experimentation implies more clarity. Here's an implicit Solution (solved with the assistance of a senior from IMO 2015-2016) that works from the back of $a,b,c$ --- contrasting with the explicit construction that proves its validity from the front of $a,b,c$ (front meaning $\textsf{consider leftmost overlapping digits}$ or $\textsf{smallest sum which betrays the operator}$, and $\textsf{remove leftmost bit}$). $\rule{7.8cm}{2pt}$ $\clubsuit$ $\boxed{\textbf{Grand Claim and Preliminary Result.}}$ $\clubsuit$ $\rule{7.8cm}{2pt}$ Write $a,b,c$ in their binary form. In this Solution, we won't speak of $a,b,c$ decimally. Then, the triple $(a,b,c)$ is a $\textbf{\textit{losing position}}$
if and only if the addition $a+b+c$ produces no carry when performed in binary. $\rule{25cm}{0.2pt}$ $\spadesuit$ $\boxed{\textbf{Partial Proof.}}$ $\spadesuit$ Here, we prove that every move from a losing position leads to a winning one. For brevity, call the $(l+1)^{\text{th}}$ digit (from the right) of the numbers $a,b,c$ the $2^l$ base. By the Claim, the losing position in consideration has at most one $\textbf{ON}$ --- or one-valued --- digit in every $2^l$ base. Now suppose a move with some $k$ is made, and consider the position of the rightmost one-valued digit of $k$; call that the $2^d$ base. Since $k$ has only zero digits below the $2^d$ base, no carry or borrow reaches that base from below, so all three $2^d$ digits of $a,b,c$ are flipped to their opposites when they are $\textit{worked on}$ by adding or subtracting $k$. As a result, the new numbers $a',b',c'$ form a winning position: at most one of the three $2^d$ digits was $\textbf{ON}$ before the flip, so at least two are $\textbf{ON}$ after it, making the addition $a'+b'+c'$ carry at the $2^d$ base. The first part of the proof is done.
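As an aside, the Grand Claim itself can be tested by machine for small board sums. Below is a quick Python search (function names and the bound are mine, not part of the solution) that computes win/lose positions directly from the game rules and compares them with the no-carry criterion:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(a, b, c):
    """True if the player to move wins from (a, b, c) with optimal play."""
    # A move adds k to one entry and subtracts k from the other two; it is
    # only legal while no entry goes negative.  The value of a position
    # depends only on the multiset of entries, so argument order after a
    # move does not matter.
    for x, y, z in ((a, b, c), (b, a, c), (c, a, b)):
        for k in range(1, min(y, z) + 1):
            if not first_player_wins(x + k, y - k, z - k):
                return True
    return False  # no legal move, or every move hands the opponent a win

def no_carry(a, b, c):
    # a + b + c has no binary carry exactly when the bit sets are pairwise disjoint.
    return a & b == b & c == c & a == 0

LIMIT = 30
for s in range(LIMIT + 1):
    for a in range(s + 1):
        for b in range(s - a + 1):
            c = s - a - b
            assert first_player_wins(a, b, c) == (not no_carry(a, b, c))
print("Grand Claim verified for all a + b + c <=", LIMIT)
```

The search treats a move as illegal once a decreased entry would go negative, which matches the losing condition in the problem statement.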
We will prove the second part, emulating and adapting the ideas used to prove the first part. We start with an algorithm that tries to convert a crowded $a,b,c$ into a more scattered position. $\color{green} \rule{5.7cm}{2pt}$ $\color{green} \clubsuit$ $\boxed{\textbf{Describing An Algorithm.}}$ $\color{green} \clubsuit$ $\color{green} \rule{5.7cm}{2pt}$ Fix an $(a,b,c)$ which is a W (winning position). Call $a,b,c$ the $\textit{left number}$, $\textit{middle number}$ and $\textit{right number}$, respectively. From there, define the algorithm $\oplus_L$ as follows: the algorithm intends to add a number $k$ to the $\textit{left number}$ $a$ and subtract $k$ from both $b$ and $c$ (the non-left numbers). It scans the rightmost base (a.k.a. the $2^0$ base) and sees how many $\textbf{ON}$s it has. If it has two or three ONs, it adds one to $a$ and subtracts one from $b,c$, with carries if necessary; otherwise, it does nothing. The $2^0$ base is then demolished, and each $2^{i+1}$ base moves one space to the right, becoming the new $2^i$ base. Repeat this until one of $b,c$ turns negative or all bases have one or fewer ONs. Finally, if the latter condition is achieved, retrieve the original $a,b,c$ and let $k$ be the total value of the bases at which an addition was performed. Note that if a valid $k$ can be formed, then the algorithm outputs $a',b',c'$ all nonnegative, with every $2^l$ base of $a',b',c'$ having at most one ON. This renders $(a',b',c')$ a $\color{green} \textbf{\textit{losing position}}$. Define $\oplus_M$ and $\oplus_R$ similarly. In this section, we only claim that the algorithm terminates, without indicating whether or not it forms a valid $k$. $\color{green} \rule{25cm}{0.2pt}$ $\color{green} \spadesuit$ $\boxed{\textbf{Proof.}}$ $\color{green} \spadesuit$ Indeed, let $b$ have $d_b$ digits. Then the algorithm $\oplus_{L}$ can last at most \[ d_b \sim \log_2{(b)}+1 \]steps, since $b'$ would turn negative if $\oplus_L$ ever operated on the $2^{d_b}$ base or beyond. $\blacksquare$ Next, we note that the algorithm can $\color{red} \textbf{\textit{be extended}}$ to losing positions: in fact, it then yields a valid $k = 0$! So we can drop the requirement that $(a,b,c)$ be a W and instead focus on $\oplus_L$'s behavior. Define the assertion $(a,b,c; P_L)$ to mean that a valid $k$ can be formed with $\oplus_L$. Generalizing a bit, the problem can be restated as follows: $\color{cyan} \rule{5.95cm}{2pt}$ $\color{cyan} \diamondsuit$ $\boxed{\textbf{A More Inclusive Phrasing.}}$ $\color{cyan} \diamondsuit$ $\color{cyan} \rule{5.95cm}{2pt}$ Problem wrote: Prove that for every $(a,b,c)$ which is a Win, at least one of $(a,b,c;P_L)$, $(a,b,c;P_M)$, $(a,b,c;P_R)$ is true with some $k \ne 0$ (so the Win can be transferred to a Lose by a valid $k \ne 0$ move). Better-Visualized Statement of Problem wrote: Prove that for every $(a,b,c)$, at least one of $(a,b,c;P_L)$, $(a,b,c;P_M)$, $(a,b,c;P_R)$ is true ($k$ may be $0$ for a Lose). Note that no extra work is needed to pass from the second statement to the first: obviously, $k = 0$ will not work when attempting to transfer a W to a L --- so the $k$ produced for a W must be nonzero. 
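For concreteness, here is one possible reading of $\oplus_L$ in Python --- my own interpretation of the scan just described, so the bookkeeping details (when negativity is detected, when the scan stops) are assumptions rather than the exact formulation above --- together with a brute-force check of the more inclusive phrasing on small triples:

```python
def oplus(a, b, c, max_bits=64):
    """One reading of the scan: at each base 2^i, if two or three of the
    current numbers have an ON digit there, add 2^i to the first number and
    subtract 2^i from the other two (carries/borrows happen automatically).
    Returns the total k added once the numbers have pairwise disjoint digits
    while staying nonnegative, or None if b or c turns negative first."""
    k = 0
    for i in range(max_bits):
        if b < 0 or c < 0:
            return None                      # the scan ran b or c into the negatives
        if a & b == b & c == c & a == 0:
            return k                         # every base now has at most one ON digit
        if (a >> i & 1) + (b >> i & 1) + (c >> i & 1) >= 2:
            a += 1 << i
            b -= 1 << i
            c -= 1 << i
            k += 1 << i
    return None

# Check of the "more inclusive phrasing" for small triples: at least one of
# the three scans always produces a valid k (possibly 0 for a losing position).
LIMIT = 40
for a in range(LIMIT):
    for b in range(LIMIT):
        for c in range(LIMIT):
            results = [oplus(a, b, c), oplus(b, a, c), oplus(c, a, b)]
            assert any(k is not None for k in results)
print("more inclusive phrasing checked for all a, b, c <", LIMIT)
```

Here $\oplus_M$ and $\oplus_R$ are obtained simply by permuting the arguments, which is how the check calls the same function three times.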
$\color{cyan} \rule{25cm}{0.2pt}$ We generalize further by adding some additional must-be-true assertions: $\color{magenta} \rule{9.4cm}{2pt}$ $\color{yellow} \clubsuit$ $\boxed{\textbf{Diverse Family of Inter-Supportive Assertions.}}$ $\color{yellow} \clubsuit$ $\color{magenta} \rule{9.4cm}{2pt}$ For every $(a,b,c) \in (\mathbb{N}_0)^3$, all of the following hold: $(a,b,c;P_L) \vee (a,b,c;P_M) \vee (a,b,c;P_R)$ is true; $(a+1,b,c;P_L) \vee (a,b+1,c;P_M) \vee (a,b,c+1;P_R)$ is true; and $(a+1,b-1,c;P_L) \vee (a-1,b+1,c;P_M) \vee (a,b,c+1;P_R)$ is true.
$\color{yellow} \rule{25cm}{0.2pt}$ $\color{yellow} \spadesuit$ $\boxed{\textbf{Proof.}}$ $\color{yellow} \spadesuit$ Call the first, second and third assertions $(\bigstar_1)$, $(\bigstar_2)$ and $(\bigstar_3)$ for ease of writeup. We first explain $\textsf{why this works}$ before tackling the annoying base cases (when one of $a,b,c$ equals zero and cannot be inducted downwards). The general gist of the proof hinges on moving one space towards the front and seeing the effect that $\oplus_L, \oplus_M, \oplus_R$ have on the remaining chunk --- and proving that the remaining chunk has a $\textsf{similar shape}$ to the original block. In that spirit, write the original triple as $(A,B,C)$ (the bigger chunk) and assume that all three are natural numbers. Further, write them as \[ A = \overline{al_a}, \quad B = \overline{bl_b}, \quad C = \overline{cl_c} \]with $l_a,l_b,l_c \in \{0,1\}$ the last binary digits of $A,B,C$.
So, as long as $A,B,C \geq 1$, we can perform this reduction over and over again, possibly alternating between the different $\bigstar_i$. Since the process has been shown to terminate by $\color{green} \clubsuit$ $\boxed{\textbf{Describing An Algorithm}}$ $\color{green} \clubsuit$, at some point $(a,b,c)$ must contain a zero, at which stage the reduction cannot be performed in one of the 13 prescribed ways.
We are done. $\blacksquare$ $\blacksquare$ $\blacksquare$
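For completeness: granting the Grand Claim, the answer to the original problem reduces to counting triples $(a,b,c)$ with $a+b+c=2021$ whose binary addition has no carry, and each ON bit of $2021$ must then go to exactly one of $a,b,c$. The snippet below (mine, purely illustrative) cross-checks that count against direct enumeration for a few small totals and then evaluates the resulting formula $3^{\#\text{ON bits}}$ at $2021$:

```python
def count_no_carry_triples(n):
    """Triples (a, b, c) of nonnegative integers with a + b + c = n whose
    binary addition has no carry, i.e. the bit sets are pairwise disjoint."""
    total = 0
    for a in range(n + 1):
        for b in range(n - a + 1):
            c = n - a - b
            if a & b == b & c == c & a == 0:
                total += 1
    return total

# With no carries, every ON bit of n lands in exactly one of a, b, c,
# so the count should be 3 ** popcount(n); cross-check on a few small n.
for n in (5, 21, 100):
    assert count_no_carry_triples(n) == 3 ** bin(n).count("1")

# 2021 = 0b11111100101 has 8 ON bits.
print(3 ** bin(2021).count("1"))
```

Since $2021 = 11111100101_2$ has $8$ ON bits, this gives $3^8 = 6561$ starting triples from which Tadashi should win, under the claimed characterization.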
31.08.2021 13:41
Why is this still Nim? This is equivalent to
31.08.2021 14:43
superpi83 wrote: I wrote this problem. I believe the following extension holds, though I don't know of a proof. Would be interested if anyone finds one!
According to my solution, this is unlikely to be the correct generalization.