A polynomial $P$ in $n$ variables with real coefficients is called magical if $P(\mathbb{N}^n)\subset \mathbb{N}$, and moreover the map $P: \mathbb{N}^n \to \mathbb{N}$ is a bijection. Prove that for all positive integers $n$, there are at least \[n!\cdot (C(n)-C(n-1))\]magical polynomials, where $C(n)$ is the $n$-th Catalan number. Here $\mathbb{N}=\{0,1,2,\dots\}$.
Source: 2019 Israel Olympic Revenge P1
Tags: algebra, polynomial
17.06.2022 01:39
First we build a sequence of bijective polynomials $p_n$, $n\geq 2$, with $p_n$ having $n$ variables. Define an order relation on $n$-tuples of natural numbers: we say $(x_1,\dots, x_n)<(y_1, \dots, y_n)$ if either $\sum_{i=1}^n x_i<\sum_{i=1}^n y_i$, or the sums are equal and $y_i<x_i$ at the first index $i$ where the tuples differ. I claim that the function assigning to each $(x_1,\dots,x_n)$ the number of $n$-tuples smaller than it is a polynomial, which will be named $p_n$. Notice that the number of tuples $(t_1,\dots, t_n)$ of naturals with sum not greater than $k$ is a polynomial in $k$ (namely $\binom{n+k}{n}$). Thus the number of tuples with sum less than $\sum_{i}x_i$ is a polynomial in the $x_i$. It remains to count the tuples $(y_1,\dots, y_n)$ which are smaller than $(x_1, \dots, x_n)$ and have the same sum. This count splits into $n$ pieces according to the first index $i$ with $y_i\neq x_i$. The number of tuples with $y_j=x_j$ for all $j<i$, $y_i>x_i$ and $\sum_{j=1}^n y_j=\sum_{j=1}^n x_j$ is exactly the number of ways to choose $y_{i+1}, \dots, y_n$ so that \[ \sum_{j=i+1}^n y_j<\sum_{j=i+1}^n x_j,\] which is again a polynomial. In total $p_n$ is a sum of $n+1$ polynomials, so it is a polynomial. It is a bijection $\mathbb{N}^n\to\mathbb{N}$: it sends each tuple to its rank in the order, and each tuple has only finitely many tuples below it, so every natural number is attained exactly once.

Now notice that we can compose these $p_n$: for example, $p_2(p_3(x,y,z),p_2(w,r))$ is a bijective polynomial in $5$ variables. I claim that all of these compositions are pairwise distinct. To shorten notation, instead of $p_n$ we write $p$; the index can be recovered from the number of arguments. Assume \[ p(*_1, *_2, \dots, *_k)=p(\$_1,\$_2, \dots, \$_m )\quad (1)\]where each $*, \$$ is either a single variable or a composition of $p_i$'s. We have $k,m\geq 2$ because $p_n$ is only defined for $n\geq 2$. For each $n$ notice that $p_n(1,0,\dots, 0)=1$, as only $(0,0,\dots,0)$ is smaller than this tuple. In a similar manner $p_n(0,0,\dots, 1, \dots, 0)=i$, where the $1$ is in the $i$-th spot. Substitute values for all variables in (1) so that the LHS equals $1$. Then $*_1=1$ and $*_i=0$ for all $i>1$. The RHS is $1$ too, thus $\$_1=1$ and $\$_i=0$ for $i>1$. Since $*_i=0$ for all $i>1$, every variable appearing in those arguments must be $0$ (the only preimage of $0$ is the all-zero tuple). Thus the leftmost variable in $*_1$ equals the leftmost variable in $\$_1$: it is the unique variable set to $1$. Now substitute values so that both sides equal $2$. Again, this implies that the leftmost variable in $*_2$ equals the leftmost variable in $\$_2$. We can continue this up to the value $\min(m,k)$.

We now claim that $m=k$. Assume otherwise, WLOG $m>k$. There are two cases: $m\geq k+2$ and $m=k+1$.

Case 1: $m\geq k+2$. Substitute values in $(1)$ so that both sides equal $k+2$. Then \[(\$_1,\$_2, \dots, \$_m )=(0,0,\dots, 1, \dots,0),\]where the $1$ is in the $(k+2)$-nd position, and also \[(*_1, *_2, \dots, *_k)=(1,1,0,\dots,0).\]The first equation implies that only one variable can be nonzero (the leftmost variable in $\$_{k+2}$). This contradicts the second equation, which forces two distinct variables (one in $*_1$ and one in $*_2$) to be nonzero.

Case 2: $m= k+1$. Again substitute values in $(1)$ so that both sides equal $k+2$. This time we get \[(\$_1,\$_2, \dots, \$_m )=(2,0, \dots,0),\]\[(*_1, *_2, \dots, *_k)=(1,1,0,\dots,0).\]As the preimage of $2$ under any $p_n$ is $(0,1,0,\dots,0)$, the preimage of $2$ under any composition has exactly one nonzero variable. Thus $\$_1$ has exactly one nonzero variable and all other variables are $0$, which again contradicts the second equation. Thus $k=m$.

As $p_k=p_m$ is injective, we may cancel the outer $p$ on each side, and by induction we are finished (the base case, where both sides are single variables, is obvious). So distinct compositions of the $p_n$ give distinct polynomials.
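Not needed for the proof, but here is a small brute-force sanity check of the construction (a Python sketch; the helper names are mine). It computes $p_n$ directly by counting the tuples below a given one in the order above, verifies the values $p_n(e_i)=i$ used in the distinctness argument, checks the two-variable closed form $\binom{x+y+1}{2}+y$ that the counting argument gives (if I did the bookkeeping right), and confirms bijectivity on an initial segment.

```python
from itertools import product
from math import comb

def smaller(u, v):
    """True iff u < v under the order from the solution: compare sums first;
    if the sums agree, the tuple with the LARGER entry at the first
    differing coordinate is the smaller one."""
    if sum(u) != sum(v):
        return sum(u) < sum(v)
    for a, b in zip(u, v):
        if a != b:
            return b < a
    return False  # equal tuples

def p(x):
    """Brute-force value of p_n at x: the number of tuples below x.
    Every smaller tuple has sum <= sum(x), so a box search suffices."""
    S = sum(x)
    return sum(1 for y in product(range(S + 1), repeat=len(x)) if smaller(y, x))

# p_n(e_i) = i, the values used repeatedly in the distinctness argument.
n = 4
for i in range(1, n + 1):
    e = tuple(1 if j == i - 1 else 0 for j in range(n))
    assert p(e) == i

# Two-variable closed form from the counting argument (my computation):
# p_2(x, y) = C(x+y+1, 2) + y.
for x, y in product(range(6), repeat=2):
    assert p((x, y)) == comb(x + y + 1, 2) + y

# Bijectivity on an initial segment: the C(5,2) = 10 pairs with sum <= 3
# are mapped exactly onto {0, 1, ..., 9}.
vals = sorted(p((x, y)) for x, y in product(range(4), repeat=2) if x + y <= 3)
assert vals == list(range(comb(5, 2)))
print("all checks passed")
```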
We will now show that there are many ways to compose the polynomials; this yields many bijective polynomials, and the question is now purely combinatorial. Instead of writing $p(p(., .),.)$ we'll write $((.\ .)\ .)$: we erase all the $p$'s and keep only the brackets, with dots instead of variables. As there are $n!$ ways to order the variables, we'll show that there are at least $C(n)-C(n-1)$ distinct ways to place brackets around $n$ dots so that
1. no pair of brackets encloses exactly one other pair and nothing else, as in $((.\ .))$;
2. each pair of brackets contains at least $2$ dots;
3. there is a pair of brackets collecting everything (the outermost $p$).
Conditions 1 and 2 together say that every bracket has at least two arguments, and condition 3 provides the outermost $p$.

For each arrangement of $n$ pairs of brackets, we add $n$ dots by marking a dot after every opening bracket. For example, \[(()())()\]turns into \[(.(.)(.))(.)\]This will satisfy 1 automatically. To get 2 we delete every pair of brackets containing only one dot (keeping the dot), and for 3 we add an additional big pair of brackets that surrounds everything. Thus $(()())()$ turns into \[((.\ .\ .)\ .)\]You can check that this is an injective map from the set of legal arrangements of $n$ pairs of brackets (of which there are $C(n)$). The only problem: if the initial configuration had a pair of brackets containing everything, then adding the extra outer pair breaks 1. There are exactly $C(n-1)$ such configurations (the number of arrangements of the other $n-1$ pairs inside), so there are at most $C(n-1)$ "bad" configurations and at least \[C(n)-C(n-1)\]"good" bracket configurations for the original problem. This finishes the proof.
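As a final sanity check (again a Python sketch, not part of the argument, with helper names of my own choosing), one can count the bracket configurations satisfying 1–3 by recursing on the arguments of the outermost bracket, and check that the count is indeed at least $C(n)-C(n-1)$ for small $n$.

```python
from math import comb
from functools import lru_cache

def catalan(n):
    return comb(2 * n, n) // (n + 1)

@lru_cache(maxsize=None)
def good(n):
    """Bracket configurations on n ordered dots satisfying conditions 1-3:
    an outermost bracket with at least 2 arguments, each argument being a
    single dot or, recursively, such a configuration on >= 2 dots."""
    return blocks(n, 2) if n >= 2 else 0

@lru_cache(maxsize=None)
def blocks(n, need):
    """Ways to split n dots into at least `need` consecutive arguments."""
    if n == 0:
        return 1 if need <= 0 else 0
    total = 0
    for m in range(1, n + 1):            # size of the first argument
        rest = blocks(n - m, need - 1)
        if rest == 0:
            continue                     # also avoids a circular call to good(n)
        first = 1 if m == 1 else good(m)
        total += first * rest
    return total

for n in range(2, 10):
    assert good(n) >= catalan(n) - catalan(n - 1)
print([good(n) for n in range(2, 8)])                       # 1, 3, 11, 45, 197, 903
print([catalan(n) - catalan(n - 1) for n in range(2, 8)])   # 1, 3, 9, 28, 90, 297
```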