\( \def\<#1>{\left<#1\right>} \newcommand{\OR}[1]{\mathrm{OR}(#1)} \newcommand{\XOR}[1]{\mathrm{XOR}(#1)} \newcommand{\twoXOR}[1]{\mathrm{XOR}_2(#1)} \newcommand{\SUM}[1]{\mathrm{SUM}(#1)} \newcommand{\GEN}{\mathrm{GEN}} \newcommand{\Cov}{c} \newcommand{\Dec}{d} \newcommand{\tr}{\mathrm{tr}} \newcommand{\pr}{\mathrm{rk}_+} \newcommand{\br}{\mathrm{rk}_{\lor}} \newcommand{\Geq}{\succcurlyeq} \newcommand{\Leq}{\preccurlyeq} \)

Solution of Problem 7.7

This is an extension of the previous comment.

Recall that the direct sum of matrices $A$ and $B$ is the matrix $A\boxplus B=\left[\begin{smallmatrix} A & 0 \\ 0 & B\end{smallmatrix}\right]$. For SUM and OR complexities, we have (cf., e.g., Thm. 3.18): \[ \SUM{A\boxplus B}=\SUM{A}+\SUM{B}\ \ \mbox{ and }\ \ \OR{A\boxplus B}=\OR{A}+\OR{B}\,. \] But what about XOR complexity: does the analogous inequality \[ \XOR{A\boxplus B}\geq \XOR{A}+\XOR{B}\qquad (*) \] also hold?
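For concreteness, here is the direct-sum construction in a few lines of numpy (a sketch only: the helper name is illustrative, and nothing here computes the complexity measures themselves):

```python
import numpy as np

def direct_sum(A, B):
    """Return the block-diagonal matrix A ⊞ B = [[A, 0], [0, B]]."""
    mA, nA = A.shape
    mB, nB = B.shape
    out = np.zeros((mA + mB, nA + nB), dtype=int)
    out[:mA, :nA] = A
    out[mA:, nA:] = B
    return out
```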

Igor recently observed that (*) does not hold: the XOR complexity of a direct sum of matrices may be smaller than the sum of their complexities.

Take a boolean $n\times n$ matrix $A$ such that $\XOR{A}\sim n^2/(2\log n)$; by Thm. 2.2, we know that such matrices exist.

Let $M_n$ be the Kronecker product of the identity matrix $I_n$ with $A$, that is, \[ M_1=A\ \ \mbox{ and } \ \ M_{n+1}=\left[\begin{matrix} A & 0 \\ 0 & M_n\end{matrix}\right]. \] If (*) held, then we would have $\XOR{M_n}\geq n\cdot \XOR{A}=\Omega(n^3/\log n)$.

But the outputs of an XOR circuit for $y=M_n\cdot x$ are exactly the entries of the product matrix $Y=A\cdot X$, where $X$ is the vector $x$ written column-wise as an $n\times n$ matrix. Using Strassen's algorithm, all these entries can be computed over $GF(2)$ with only $O(n^{\omega})$ operations, where $\omega < 2.81$ (currently improved to $\omega < 2.4$, see here); since $A$ is a fixed $0/1$ matrix, all multiplications are by constants, so this computation is itself an XOR circuit. Its size $O(n^{\omega})$ is much smaller than the claimed lower bound $\Omega(n^3/\log n)$, a contradiction. $\Box$
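Here is a quick numpy sanity check of the reshaping step, with a random $0/1$ matrix standing in for the hard matrix $A$ (the identity being checked holds for every $A$):

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(n, n))      # random 0/1 stand-in for the hard matrix A
M = np.kron(np.eye(n, dtype=int), A)     # M_n = I_n (x) A: n diagonal copies of A
x = rng.integers(0, 2, size=n * n)

y = M @ x % 2                            # the map computed by an XOR circuit for M_n
X = x.reshape(n, n).T                    # x written column-wise as an n x n matrix
Y = A @ X % 2                            # a single matrix product over GF(2)
assert np.array_equal(y, Y.flatten(order="F"))   # same entries, column by column
```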

May 22, 2015


Solution of Problem 7.7 [ADDED on June 4, 2017]

Actually, the same observation also resolves (affirmatively) Problem 7.7. The point is that in depth $2$ the inequality (*) does hold: \[ \twoXOR{A\boxplus B}\geq \twoXOR{A}+\twoXOR{B}\qquad (**) \] Proof: Take an optimal XOR circuit of depth $2$ for $A\boxplus B$. Let $R_A$ and $C_A$ be the rows and the columns of $A$ in $A\boxplus B$, and similarly for $R_B$ and $C_B$. Take two copies $v_A$ and $v_B$ of each node $v$ on the middle level, and redirect the edges of $v$ as follows: $v_A$ keeps only the edges coming from inputs in $C_A$ and going to outputs in $R_A$, while $v_B$ keeps only the edges coming from inputs in $C_B$ and going to outputs in $R_B$.

Suppose now that $(i,j)$ is an entry of one of the two all-$0$ blocks of $A\boxplus B$, that is, $(i,j)\in R_A\times C_B$ or $(i,j)\in R_B\times C_A$. In the original circuit there could be some length-$2$ paths between $i$ and $j$, and we have destroyed all of them. But this does no harm: since this entry of $A\boxplus B$ is $0$, the number of such paths must have been even, and we have replaced them by an (also even) number of paths, namely zero. On the other hand, for every entry $(i,j)$ of $A$ itself, each length-$2$ path between $j\in C_A$ and $i\in R_A$ survives through the copies $v_A$, so the parity of the number of paths is unchanged; the same holds for $B$. So the new circuit computes both matrices $A$ and $B$, by two edge-disjoint depth-$2$ circuits. Since we have not added any new edges, (**) is proved. $\Box$
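In matrix language, a depth-$2$ XOR circuit with wire matrices $L$ (outputs versus middle nodes) and $R$ (middle nodes versus inputs) computes the matrix $L\cdot R$ over $GF(2)$, and its number of wires is the number of ones in $L$ plus that in $R$; the splitting above is then just the passage to sub-blocks of $L$ and $R$. A small numpy sketch of this bookkeeping (toy dimensions and random wire matrices, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sizes: |R_A|, |R_B|, |C_A|, |C_B| and the number of middle nodes.
rA, rB, cA, cB, t = 3, 2, 4, 3, 6
L = rng.integers(0, 2, size=(rA + rB, t))   # output-to-middle wires
R = rng.integers(0, 2, size=(t, cA + cB))   # middle-to-input wires
D = L @ R % 2                               # the matrix computed by the circuit

# Splitting each middle node v into v_A and v_B keeps, for v_A, only the wires
# meeting rows R_A and columns C_A (similarly for v_B); these are sub-blocks:
LA, RA = L[:rA, :], R[:, :cA]
LB, RB = L[rA:, :], R[:, cA:]

# The two new circuits compute exactly the diagonal blocks of D
# (which are A and B when D = A ⊞ B) ...
assert np.array_equal(LA @ RA % 2, D[:rA, :cA])
assert np.array_equal(LB @ RB % 2, D[rA:, cA:])
# ... and they are edge-disjoint, with no wires added:
assert LA.sum() + LB.sum() == L.sum() and RA.sum() + RB.sum() == R.sum()
```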
Theorem (Igor): There exist $N\times N$ matrices $M$ such that \[ \frac{\twoXOR{M}}{\XOR{M}}\succcurlyeq \frac{N^{0.3}}{\log N}\,. \]
Proof: Take the matrix $M_n$ defined above; this is an $N\times N$ matrix for $N=n^2$. By (**), and since $\twoXOR{A}\geq\XOR{A}=\Omega(n^2/\log n)$, we have $\twoXOR{M_n}\geq n\cdot \twoXOR{A}=\Omega(n^3/\log n)$, whereas fast matrix multiplication algorithms give $\XOR{M_n}=O(n^{\omega})=O(n^{2.4})$. So, the gap is about $n^{0.6}/\log n$, which is about $N^{0.3}/\log N$. $\Box$
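For the record, the parameter translation in the last step (with $N=n^2$, hence $\log N=2\log n$) is \[ \frac{\twoXOR{M_n}}{\XOR{M_n}}\succcurlyeq \frac{n^3/\log n}{n^{2.4}}=\frac{n^{0.6}}{\log n}=\frac{N^{0.3}}{\tfrac{1}{2}\log N}\succcurlyeq \frac{N^{0.3}}{\log N}\,. \]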

Note: Direct sums also yield the (apparently) simplest way to obtain an "explicit" super-linear lower bound on $\twoXOR{A}$. Let $m=2^{k^2}$, and let $B_1,\ldots,B_m$ be all $k\times k$ $0/1$ matrices. Let $A$ be the direct sum of all these matrices; this is an $n\times n$ matrix for $n=mk$. Almost all of the matrices $B_i$ have $\twoXOR{B_i}=\Omega(k^2/\log k)$. So (**) yields the lower bound $\twoXOR{A}=\Omega(mk^2/\log k)=\Omega(n\log^{1/2}n/\log\log n)$.
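For the record, the arithmetic behind the last step: since $n=mk=k\cdot 2^{k^2}$, we have $\log n=k^2+\log k\sim k^2$, so $k\sim\log^{1/2} n$ and $\log k\sim\tfrac{1}{2}\log\log n$, giving \[ \frac{mk^2}{\log k}=\frac{nk}{\log k}=\Theta\!\left(\frac{n\log^{1/2} n}{\log\log n}\right). \]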