\documentclass[runningheads]{llncs}
\usepackage[margin=1.35in]{geometry}
\usepackage{amssymb,amsmath,authblk}
\setcounter{tocdepth}{3}
\usepackage{graphicx}
\usepackage{tikz}
\usepackage{mathtools}
\usepackage{color}
\usepackage{url}
\urldef{\mailsa}\path|daniel.smith@nist.gov|
\urldef{\mailsb}\path|ray.perlner@nist.gov|
\urldef{\mailsc}\path|dustin.moody@nist.gov|
\newcommand{\keywords}[1]{\par\addvspace\baselineskip
\noindent\keywordname\enspace\ignorespaces#1}
%===============Definitions==========================
\newcommand{\bdf}{\begin{definition}}
\newcommand{\edf}{\end{definition}}
\newcommand{\beq}{\begin{equation}}
\newcommand{\eeq}{\end{equation}}
\newcommand{\bsp}{\begin{split}}
\newcommand{\esp}{\end{split}}
\newtheorem{Thm}{Theorem}
\newtheorem{Lem}{Lemma}
\newtheorem{Cor}{Corollary}
\newtheorem{Prop}{Proposition}
\newtheorem{Def}{Definition}
\newtheorem{Rem}{Remark}
\newcommand{\Z}{\mathbb{Z}}
\newcommand{\dc}{\textcolor{red}}
\def\mathbi#1{\textbf{\em #1}}
%===================End Definitions======================
%===============Perspective Definitions=====================
%\def\xa{108}
%\def\ya{-106}
%\def\za{80}
\def\xdirx{-0.996}
\def\xdiry{0.087}
\def\ydirx{0.614}
\def\ydiry{0.43}
\def\zdirx{0.0}
\def\zdiry{1}
%=============End Perspective Definitions====================
%==============Color Definitions=======================
\definecolor{lightgray}{rgb}{0.85,0.85,0.85}
\definecolor{mediumgray}{rgb}{0.75,0.75,0.75}
\definecolor{darkgray}{rgb}{0.45,0.45,0.45}
%============End Color Definitions======================
%===============Notation Macros=====================
\newcommand\restr[2]{{% we make the whole thing an ordinary symbol
\left.\kern-\nulldelimiterspace % automatically resize the bar with \right
#1 % the function
\vphantom{\big|} % pretend it's a little taller at normal size
\right|_{#2} % this is the delimiter
}}
%=============End Notation Macros====================
\begin{document}
\mainmatter
\title{Improved Attacks for Characteristic-2 Parameters of the Cubic ABC Simple Matrix Encryption Scheme}
\titlerunning{Improved Attacks on Cubic Simple Matrix Encryption}
\author[1]{Dustin Moody}
\author[1]{Ray Perlner}
\author[1,2]{Daniel Smith-Tone}
\affil[1]{National Institute of Standards and Technology,
Gaithersburg, Maryland, USA}
\affil[2]{Department of Mathematics, University of Louisville,
Louisville, Kentucky, USA}
\authorrunning{D.~Moody, R.~Perlner, \& D.~Smith-Tone}
\institute{\mailsc, \mailsb, \mailsa}
\toctitle{Improved Attacks for Characteristic-2 Parameters of the Cubic ABC Simple Matrix Encryption Scheme}
\tocauthor{Dustin Moody, Ray Perlner, and Daniel Smith-Tone}
\maketitle
\begin{abstract}
In the last few years multivariate public key cryptography has experienced an infusion of new ideas for encryption. Among these new strategies is the ABC Simple Matrix family of encryption schemes, which utilize the structure of a large matrix algebra to construct effectively invertible systems of nonlinear equations hidden by an isomorphism of polynomials. One promising approach to cryptanalyzing these schemes has been structural cryptanalysis, based on applying a strategy similar to MinRank attacks to the discrete differential. These attacks, however, have been significantly more expensive when applied to parameters using fields of characteristic 2, which have been the most common choice for published parameters. This disparity is especially great for the cubic version of the Simple Matrix Encryption Scheme.
In this work, we demonstrate a technique that can be used to implement a structural attack which is as efficient against parameters of characteristic 2 as are attacks against analogous parameters over fields of higher characteristic. This attack demonstrates not only that the Cubic Simple Matrix scheme is susceptible to structural attacks, but also that the published parameters claiming 80 bits of security are less secure than claimed (albeit only slightly). Similar techniques can also be applied to improve structural attacks against the original Simple Matrix Encryption scheme, but they represent only a modest improvement over previous structural attacks. This work therefore demonstrates that choosing a field of characteristic 2 for the Simple Matrix Encryption Scheme or its cubic variant provides no additional security.
\keywords{multivariate public key cryptography, differential invariant, MinRank, encryption}
\end{abstract}
\section{Introduction}
The National Institute of Standards and Technology (NIST) is currently engaged in an effort to update the public key infrastructure, providing alternatives to the classical public key schemes based on arithmetic constructions. The discovery by Peter Shor in the 1990s of efficient quantum algorithms for factoring and computing discrete logarithms, see \cite{Shor}, accelerated research towards building the necessary class of computers, those that Feynman famously suggested in \cite{Feynman:1981tf}: quantum computers. In the years since, there has been growing interest in our discipline in providing protocols and algorithms that are post-quantum, that is, secure in the quantum model of computing. The recent publication by NIST of a call for proposals for post-quantum standards, see \cite{CFP}, directly addresses the challenge of migrating towards a more diverse collection of tools for our public key infrastructure.
Public key schemes based on the difficulty of inverting nonlinear systems of equations provide one possibility for post-quantum security. Multivariate Public Key Cryptography (MPKC) is a reasonable option because the problem of solving systems of nonlinear equations, even if only quadratic, is known to be NP-complete; thus, the generic problem is likely beyond the reach of quantum adversaries. Furthermore, there are a variety of standard techniques to modify multivariate schemes and introduce new properties: to enhance security, to reduce power consumption, to resist side-channel analysis, etc.
There are numerous long-lived multivariate digital signature schemes. All of UOV \cite{uov}, HFE- \cite{Patarin2}, and HFEv- \cite{DBLP:conf/ctrsa/PatarinCG01} have been studied for around two decades. Moreover, some of the above schemes have optimizations which have strong theoretical support or have stood unbroken in the literature for some time. Notable among these are UOV, which has a cyclic variant \cite{DBLP:conf/indocrypt/PetzoldtBB10} that dramatically reduces the key size, and Gui \cite{DBLP:conf/asiacrypt/PetzoldtCYTD15}, an HFEv- scheme, that, due to tighter bounds on the complexity of algebraically solving the underlying system of equations, see \cite{DBLP:conf/pqcrypto/DingY13}, has much more aggressive parameters than QUARTZ, see \cite{DBLP:conf/ctrsa/PatarinCG01}.
Multivariate public key encryption, however, has a much rockier history. Several attempts at multivariate encryption, see \cite{DBLP:conf/asiacrypt/GoubinC00,DBLP:conf/pqcrypto/TsujiiGTF10} for example, have been shown to be weak based on rank or differential weaknesses. Recently, a new framework for developing secure multivariate encryption schemes has surfaced, drawing on the idea that a multivariate map which is merely injective into a much larger codomain, rather than essentially a permutation, may be subject to far fewer structural constraints. A few interesting attempts to achieve multivariate encryption have originated from this thought. ZHFE, see \cite{DBLP:conf/pqcrypto/PorrasBD14}, the quadratic and cubic variants of the ABC Simple Matrix Scheme, see \cite{DBLP:conf/pqcrypto/TaoDTD13} and \cite{DBLP:conf/pqcrypto/DingPW14}, and Extension Field Cancellation, see \cite{DBLP:conf/pqcrypto/SzepieniecDP16}, all use fundamentally new structures for the derivation of an encryption system.
A few of the above schemes have already suffered some setbacks. A questionable rank property in the public key of ZHFE, presented in \cite{DBLP:conf/pqcrypto/PerlnerS16}, makes this scheme appear dubious, while the quadratic Simple Matrix structure was shown to leave the signature of a differential invariant in the public key, which is exploited in \cite{DBLP:conf/pqcrypto/MoodyPS14} to effect an attack.
The case of the Cubic Simple Matrix encryption scheme is more interesting; the authors in \cite{DBLP:conf/pqcrypto/DingPW14} present a heuristic argument for security and suggest the possibility of provable security for the scheme. These provable security claims were undermined in \cite{conf/sac/MoodyPS16}, however, with the presentation of a key recovery attack on a full scale version of the Cubic Simple Matrix encryption scheme. The complexity of the attack was on the order of $q^{s+2}$ for characteristic $p>3$, $q^{s+3}$ for characteristic $3$, and $q^{2s+6}$ for characteristic $2$. Here $s$ is the dimension of the matrices in the scheme, and $q$ is the cardinality of the finite field used. This technique was an extension and augmentation of the technique of \cite{DBLP:conf/pqcrypto/MoodyPS14}, and similarly exploited a differential invariant property of the core map to perform a key recovery attack. Nonetheless, the much higher complexity of this attack for characteristic $2$ left open the possibility that there may be some security advantage to using a cubic ABC map over a field with characteristic 2.
In this paper, we present an attack whose complexity is on the order of $q^{s+2}$ for all characteristics. Similar techniques can also improve the complexity of attacks against characteristic $2$ parameters for the original quadratic version of the ABC cryptosystem, from $q^{s+4}$ (reported in \cite{DBLP:conf/pqcrypto/MoodyPS14}) to $q^{s+2}$.
Specifically, our technique improves the complexity of attacking CubicABC($q=2^8$,$s=7$), designed for $80$-bit security, from the horrendous value of $2^{177}$ in \cite{conf/sac/MoodyPS16} to approximately $2^{88}$ operations, the same as the direct algebraic attack complexity reported in \cite{DBLP:conf/pqcrypto/DingPW14}. More convincing is our attack on CubicABC($q=2^8$,$s=8$), designed for $100$-bit security. We break the scheme in approximately $2^{98}$ operations. Furthermore, the attack is fully parallelizable and requires very little memory; hence, our technique is asymptotically far more efficient than algebraic attacks, the basis for the original security estimation. Thus, the security claims in \cite{DBLP:conf/pqcrypto/DingPW14} not only fail to hold in the odd characteristic case, they fail to hold in characteristic two as well.
%In light of NIST's recommendation to migrate to 112 bit security, see \cite{SP800-131A}, and the fact that the attack is more effective with larger values of $s$, we are forced to conclude that the CubicABC is broken.
The paper is organized as follows. In the next section, we present the structure of the Cubic ABC Simple Matrix encryption scheme. In the following section, the fingerprint of the matrix algebra used in the construction of the ABC scheme is exposed. In the subsequent section, the effect of this structure on MinRank calculations is determined. We then calculate the complexity of the full attack, including the linear algebra steps required for full key recovery. Finally, we review these results and discuss the security of the Cubic ABC scheme and its quadratic counterpart moving forward.
\section{The Cubic ABC Matrix Encryption Scheme}
In \cite{DBLP:conf/pqcrypto/DingPW14}, the Cubic ABC Matrix encryption scheme is proposed. The motivation behind the scheme is to use a large matrix algebra over a finite field to construct an easily invertible cubic map. The construction uses matrix multiplication to combine random linear and quadratic formulae into cubic formulae in a way that allows a user with knowledge of the structure of the matrix algebra and the polynomial isomorphism used to compose the scheme to invert the map.
Let $k=\mathbb{F}_q$ be a finite field. Linear forms and variables over $k$ will be denoted with lower case letters. Vectors of any dimension over $k$ will be denoted with bold font, $\mathbf{v}$. Fix $s\in\mathbb{N}$ and set $n=s^2$ and $m=2s^2$. An element of a matrix ring $M_d(k)$, or the linear transformation it represents, will be denoted by an upper case letter, such as $M$. When the entries of the matrix are considered as functions of a variable, the matrix will be denoted $M(\mathbf{x})$. Let $\phi:M_{s\times 2s}(k)\rightarrow k^{2s^2}$ represent the vector space isomorphism sending a matrix to the column vector consisting of the concatenation of its rows. The output of this map, being a vector, will be written with bold font; however, to indicate the relationship to its matrix preimage, it will be denoted with an upper case letter, such as $\mathbf{M}$.
The scheme utilizes an isomorphism of polynomials to hide the internal structure. Let $\mathbf{x}=\left[\begin{matrix}x_1,x_2,\ldots,x_n\end{matrix}\right]^{\top}\in k^n$ denote plaintext while $\mathbf{y}=\left[\begin{matrix}y_1,\ldots,y_m\end{matrix}\right]^{\top}\in k^m$ denotes ciphertext. Fix two invertible linear transformations $T\in M_m(k)$ and $U\in M_n(k)$. (One may use affine transformations, but there is no security or performance benefit in doing so.) Denote the input and output of the central map by $\mathbf{u}=U\mathbf{x}$ and $\mathbf{v}=T^{-1}(\mathbf{y})$.
The construction of the central map is as follows. Define three $s\times s$ matrices $A$, $B$, and $C$ in the following way:
\[
A=\left[\begin{matrix}
p_1 & p_2 & \cdots & p_s\\
p_{s+1}& p_{s+2} & \cdots & p_{2s}\\
\vdots & \vdots & \ddots & \vdots\\
p_{s^2-s+1} & p_{s^2-s+2} & \cdots & p_{s^2}\\\end{matrix}\right], B=\left[\begin{matrix}b_1 & b_2 & \cdots & b_s\\
b_{s+1}& b_{s+2} & \cdots & b_{2s}\\
\vdots & \vdots & \ddots & \vdots\\
b_{s^2-s+1} & b_{s^2-s+2} & \cdots & b_{s^2}\\\end{matrix}\right],
\]
and
\[
C=\left[\begin{matrix}
c_1 & c_2 & \cdots & c_s\\
c_{s+1}& c_{s+2} & \cdots & c_{2s}\\
\vdots & \vdots & \ddots & \vdots\\
c_{s^2-s+1} & c_{s^2-s+2} & \cdots & c_{s^2}\\\end{matrix}\right].
\]
Here the $p_i$ are quadratic forms on $\mathbf{u}$ chosen independently and uniformly at random from among all quadratic forms and the $b_i$ and $c_i$ are linear forms on $\mathbf{u}$ chosen independently and uniformly at random from among all linear forms.
We define two $s\times s$ matrices $E_1=AB$ and $E_2=AC$. Since the entries of $A$ are quadratic and those of $B$ and $C$ are linear in the $u_i$, the entries of $E_1$ and $E_2$ are cubic in the $u_i$. The central map $\mathcal{E}$ is defined by
\[
\mathcal{E}=\phi\circ(E_1||E_2).
\]
Thus $\mathcal{E}$ is an $m$ dimensional vector of cubic forms in $\mathbf{u}$. Finally, the public key is given by $\mathcal{F}=T\circ\mathcal{E}\circ U$.
Encryption with this system is standard: given a plaintext $(x_1,\ldots,x_n)$, compute $(y_1,\ldots,y_m)=\mathcal{F}(x_1,\ldots,x_n)$. Decryption is somewhat more complicated.
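To make the construction concrete, the following toy sketch (not from the original paper) builds the central map with hypothetical small parameters $q=7$ and $s=2$ chosen for readability; the published parameters use $q=2^8$ with $s=7$ or $s=8$, but the structure is identical. The check at the end confirms that $\mathcal{E}$ is a homogeneous cubic map.

```python
# Toy model of the Cubic ABC central map E = phi o (E1 || E2).
# Assumed toy parameters: q = 7 (a small prime for readable arithmetic),
# s = 2; published parameters use q = 2^8 and s = 7 or 8.
import random

q, s = 7, 2
n, m = s * s, 2 * s * s

# Entries of A are random homogeneous quadratic forms p(u) = u^T Q u;
# entries of B and C are random linear forms.
P  = [[[random.randrange(q) for _ in range(n)] for _ in range(n)]
      for _ in range(n)]
Bf = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
Cf = [[random.randrange(q) for _ in range(n)] for _ in range(n)]

def central_map(u):
    quad = lambda Q: sum(Q[i][j] * u[i] * u[j]
                         for i in range(n) for j in range(n)) % q
    lin  = lambda L: sum(L[i] * u[i] for i in range(n)) % q
    A  = [[quad(P[i * s + j]) for j in range(s)] for i in range(s)]
    Bm = [[lin(Bf[i * s + j]) for j in range(s)] for i in range(s)]
    Cm = [[lin(Cf[i * s + j]) for j in range(s)] for i in range(s)]
    mul = lambda X, Y: [[sum(X[i][l] * Y[l][j] for l in range(s)) % q
                         for j in range(s)] for i in range(s)]
    E1, E2 = mul(A, Bm), mul(A, Cm)             # E1 = AB, E2 = AC: cubic in u
    return [e for row in E1 + E2 for e in row]  # phi: concatenate the rows

u = [random.randrange(q) for _ in range(n)]
v = central_map(u)
# E is homogeneous of degree 3: E(t*u) = t^3 * E(u).
t = 3
vt = central_map([t * ui % q for ui in u])
assert vt == [pow(t, 3, q) * vi % q for vi in v]
```

Composing this map with random invertible $T$ and $U$ yields the public key exactly as described above.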
To decrypt, one inverts each of the private maps in turn: apply $T^{-1}$, invert $\mathcal{E}$, and apply $U^{-1}$. To ``invert'' $\mathcal{E}$, one assumes that $A(\mathbf{u})$ is invertible, and forms a matrix
\[
A^{-1}(\mathbf{u})=\left[\begin{matrix}
w_1 & w_2 & \cdots & w_s\\
w_{s+1}& w_{s+2} & \cdots & w_{2s}\\
\vdots & \vdots & \ddots & \vdots\\
w_{s^2-s+1} & w_{s^2-s+2} & \cdots & w_{s^2}\\\end{matrix}\right],
\]
where the $w_i$ are indeterminates. Then, collecting the relations $A^{-1}(\mathbf{u})E_1(\mathbf{u})=B(\mathbf{u})$ and $A^{-1}(\mathbf{u})E_2(\mathbf{u})=C(\mathbf{u})$, we have $m=2s^2$ linear equations in the $2n=2s^2$ unknowns $w_i$ and $u_i$. %(We note here that it would be more correct to say $A^{-1}(\bar{u})E_1(\bar{u})=B(\bar{u})$ and $A^{-1}(\bar{u})E_2(\bar{u})=C(\bar{u})$, since the values of these matrices depend on $\bar{u}$.)
Using, for example, Gaussian elimination, one can eliminate all of the variables $w_i$ and most of the $u_i$. The resulting relations can be substituted back into $E_1(\mathbf{u})$ and $E_2(\mathbf{u})$ to obtain a large system of equations in very few variables, which can be solved efficiently in a variety of ways.
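The reason this inversion works can be checked numerically: for a plaintext $\mathbf{u}$ with $A(\mathbf{u})$ invertible, the matrix $W=A(\mathbf{u})^{-1}$ satisfies the linear relations exactly. The sketch below uses the same hypothetical toy parameters $q=7$, $s=2$, and takes $T=U=\mathrm{id}$, which does not affect this step.

```python
# Check the decryption relations A^{-1}(u) E1(u) = B(u) and
# A^{-1}(u) E2(u) = C(u) over GF(q).
# Assumed toy parameters: q = 7, s = 2; T = U = identity.
import random

q, s = 7, 2
n = s * s

P  = [[[random.randrange(q) for _ in range(n)] for _ in range(n)]
      for _ in range(n)]
Bf = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
Cf = [[random.randrange(q) for _ in range(n)] for _ in range(n)]

def mats(u):
    # Evaluate A(u), B(u), C(u) as s x s matrices of field elements.
    quad = lambda Q: sum(Q[i][j] * u[i] * u[j]
                         for i in range(n) for j in range(n)) % q
    lin  = lambda L: sum(L[i] * u[i] for i in range(n)) % q
    A  = [[quad(P[i * s + j]) for j in range(s)] for i in range(s)]
    Bm = [[lin(Bf[i * s + j]) for j in range(s)] for i in range(s)]
    Cm = [[lin(Cf[i * s + j]) for j in range(s)] for i in range(s)]
    return A, Bm, Cm

def mul(X, Y):
    return [[sum(X[i][l] * Y[l][j] for l in range(s)) % q
             for j in range(s)] for i in range(s)]

# Sample a plaintext u with A(u) invertible (true with high probability).
while True:
    u = [random.randrange(q) for _ in range(n)]
    A, Bm, Cm = mats(u)
    det = (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % q
    if det:
        break

E1, E2 = mul(A, Bm), mul(A, Cm)   # ciphertext blocks seen by the decryptor

# W = A(u)^{-1} via the 2x2 adjugate formula over GF(q).
dinv = pow(det, q - 2, q)
W = [[ A[1][1] * dinv % q, -A[0][1] * dinv % q],
     [-A[1][0] * dinv % q,  A[0][0] * dinv % q]]

# The relations W E1 = B(u) and W E2 = C(u) hold exactly.
assert mul(W, E1) == Bm and mul(W, E2) == Cm
```

In the real scheme the decryptor does not know $\mathbf{u}$, so the entries of $W$ and $\mathbf{u}$ are treated as the unknowns of the linear system described above.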
\section{The Structure of the Cubic ABC scheme} \label{sec:diffstructcubic}
\subsection{Column Band Spaces}\label{sec:bandspace}
Each component of the central map $\mathcal{E}(\mathbf{u})=E_1(\mathbf{u})||E_2(\mathbf{u})$ may be written as:
$$
%\label{E1comp}
\mathcal{E}_{(i-1)s+j}=\sum_{l=1}^{s}p_{(i-1)s+l}b_{(l-1)s+j},
$$
for the $E_1$ equations, and likewise, for the $E_2$ equations:
$$
%\beq\label{E2comp}
\mathcal{E}_{s^2+(i-1)s+j}=\sum_{l=1}^{s}p_{(i-1)s+l}c_{(l-1)s+j}
$$
where $i$ and $j$ run from 1 to $s$.
Consider the $s$ sets of $s$ polynomials that form the columns of $E_1$, i.e.\ for each $j\in\{1,\ldots,s\}$ consider $(\mathcal{E}_{ j}, \mathcal{E}_{s+j}, \ldots , \mathcal{E}_{s^2-s+ j})$. With high probability, the linear forms $b_{ j}, b_{s+j}, \ldots , b_{s^2-s+ j}$ are linearly independent, and if so the polynomials may be re-expressed using a linear change of variables to $(u'_1, \ldots, u'_{s^2})$, where
$u'_i = b_{(i-1)s+j}$ for $i = 1, \ldots, s$ (the remaining $u'_i$ are chosen arbitrarily to complete a basis). After the change of variables, the only cubic monomials contained in $(\mathcal{E}_{ j}, \mathcal{E}_{s+j}, \ldots , \mathcal{E}_{s^2-s+ j})$ will be those containing at least one factor of $u'_1, \ldots, u'_s$. We can make a similar change of variables to reveal structure in the $s$ sets of $s$ polynomials that form the columns of $E_2$: setting $u'_i = c_{(i-1)s+j}$ for $i = 1, \ldots, s$ and a fixed $j$, the only cubic monomials contained in $(\mathcal{E}_{s^2+ j}, \mathcal{E}_{s^2+s+j}, \ldots , \mathcal{E}_{2s^2-s+ j})$ will be those containing at least one factor of $u'_1, \ldots, u'_s$.
More generally, we can make a similar change of variables to reveal structure in any of a large family of $s$ dimensional subspaces of the span of the component polynomials of $E_1$ and $E_2$, which we will call column band spaces in analogy to the band spaces used to analyze the quadratic ABC cryptosystem in \cite{DBLP:conf/pqcrypto/MoodyPS14}. Each family is defined by a fixed linear combination, $(\beta, \gamma)$, of the columns of $E_1$ and $E_2$:
\begin{Def}\label{bandspacedef}
The column band space defined by the $2s$-dimensional vector $(\beta, \gamma)$ is the space of cubic maps, $\mathcal{B}_{\beta, \gamma}$, given by:
\[
\mathcal{B}_{\beta, \gamma}=\mbox{Span} (\mathcal{E}_{\beta, \gamma,1}, \ldots, \mathcal{E}_{\beta, \gamma,s}),
\]
where
\[
\mathcal{E}_{\beta, \gamma, i} = \sum_{j=1}^s (\beta_j \mathcal{E}_{(i-1)s+j}+ \gamma_j \mathcal{E}_{s^2+(i-1)s+j})\]
\[
=\sum_{l=1}^s\left(p_{(i-1)s+ l}\sum_{j=1}^s \left(\beta_j b_{(l-1)s+j}+ \gamma_j c_{(l-1)s+j}\right)\right).
\]
\end{Def}
Note that under a change of variables
\[
(x_1,\ldots,x_{s^2})\xmapsto{M}(u'_1, \ldots, u'_{s^2})\mbox{, where }u'_i = \sum_{j=1}^s\left(\beta_j b_{(i-1)s+j}+ \gamma_j c_{(i-1)s+j}\right)\mbox{ for }i = 1, \ldots, s,
\]
the only cubic monomials contained in the elements of $\mathcal{B}_{\beta, \gamma}$ will be those containing at least one factor of $u'_1, \ldots, u'_s$.
In such a basis, the third formal derivative, or the $3$-tensor of third partial derivatives
\[
D^3\mathcal{E}=\sum_{i,j,k}\frac{\partial^3\mathcal{E}}{\partial u'_i\partial u'_j\partial u'_k}du'_i\otimes du'_j\otimes du'_k,
\]
of any map $\mathcal{E}\in\mathcal{B}_{\beta,\gamma}$ has a special block form, see Figure \ref{fig:bsd}. This tensor is the same as the one used for the attack in \cite{conf/sac/MoodyPS16}, although in that case it was computed using the discrete differential. There are, however, a number of disadvantages to using this 3-tensor to represent the structural features of cubic ABC. In particular, when the scheme is defined over a field of characteristic 2, the symmetry of the 3-tensor results in the loss of any information about the coefficients of monomials of the form $x_i^2x_j$, since the third derivative of such a monomial is always 0 in characteristic 2. We will therefore use a different tool to express the structure of cubic ABC.
Using the same $u'$ basis as above, we see that the gradient $\nabla_{u'}\mathcal{E}$ produces a covector of quadratic forms, which can be thought of as a quadratic map that takes any vector $\mathbf{w}$ of the form
\[
(0, \ldots, 0, u'_{s+1}(\mathbf{w}), \ldots, u'_{s^2}(\mathbf{w}))^{\top},
\]
to a covector of the form
\[
(y(u'_1), \ldots, y(u'_s), 0, \ldots, 0).
\]
Note that, by the chain rule, we can relate $\nabla_{u'}\mathcal{E}=\left[\frac{\partial\mathcal{E}}{\partial u'_1},\ldots,\frac{\partial\mathcal{E}}{\partial u'_{s^2}}\right]$ to the formal derivative defined over the public basis:
\[
\nabla\mathcal{E} = \left[\frac{\partial\mathcal{E}}{\partial x_1},\ldots,\frac{\partial\mathcal{E}}{\partial x_{s^2}}\right] = \nabla_{u'}\mathcal{E}\left[\frac{du'_j}{dx_i}\right]_{i,j}
\]
using the nonsingular change of basis matrix whose entries are $\frac{du'_j}{dx_i}.$ We can therefore conclude that even defined over the public basis, the first formal derivative of any map $\mathcal{E}\in\mathcal{B}_{\beta,\gamma}$ is a quadratic map that takes an $s^2-s$ dimensional space of vectors to an $s$ dimensional space of covectors.
We will define the term ``band kernel'' to describe this $s^2-s$ dimensional space of vectors (including $\mathbf{w}$) which are mapped to an $s$ dimensional image space by the first formal derivative of $\mathcal{E}$.
\begin{Def}\label{bandkerneldef}
The band kernel of $\mathcal{B}_{\beta, \gamma}$, denoted $\mathcal{BK}_{\beta, \gamma}$, is the space of vectors $\mathbf{x}$ such that
\[u'_i = \sum_{j=1}^s\left(\beta_j b_{(i-1)s+j}(\mathbf{x})+ \gamma_j c_{(i-1)s+j}(\mathbf{x})\right) =0,\]
for $i=1, \ldots, s$.
\end{Def}
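The defining property tying these two notions together is that every map in $\mathcal{B}_{\beta,\gamma}$ has all of its cubic monomials divisible by some $u'_l$ with $l\le s$, and hence vanishes identically on $\mathcal{BK}_{\beta,\gamma}$. This can be verified numerically; the sketch below (again with hypothetical toy parameters $q=7$, $s=2$ and random $\beta,\gamma$) computes a basis of the band kernel by Gaussian elimination over $\mathbb{F}_q$ and checks that a band space map vanishes on every band kernel vector.

```python
# Verify that a column band space map vanishes on the whole band kernel.
# Assumed toy parameters: q = 7, s = 2 (published parameters use q = 2^8).
import itertools
import random

q, s = 7, 2
n = s * s

Pf = [[[random.randrange(q) for _ in range(n)] for _ in range(n)]
      for _ in range(n)]
bf = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
cf = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
beta  = [random.randrange(q) for _ in range(s)]
gamma = [random.randrange(q) for _ in range(s)]

# Row l holds the coefficients of u'_{l+1}(x) = sum_j (beta_j b + gamma_j c).
rows = [[sum(beta[j] * bf[l * s + j][i] + gamma[j] * cf[l * s + j][i]
             for j in range(s)) % q for i in range(n)]
        for l in range(s)]

def kernel_basis(M):
    """Basis of the null space of M over GF(q), by Gauss-Jordan elimination."""
    M = [row[:] for row in M]
    pivots, r = [], 0
    for col in range(n):
        piv = next((i for i in range(r, len(M)) if M[i][col]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][col], q - 2, q)
        M[r] = [x * inv % q for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][col]:
                f = M[i][col]
                M[i] = [(M[i][k] - f * M[r][k]) % q for k in range(n)]
        pivots.append(col)
        r += 1
    basis = []
    for free in (c for c in range(n) if c not in pivots):
        v = [0] * n
        v[free] = 1
        for i, pc in enumerate(pivots):
            v[pc] = -M[i][free] % q
        basis.append(v)
    return basis

def band_map(x, i=0):
    # E_{beta,gamma,i+1}(x) = sum_l p_{i*s+l}(x) * u'_l(x)
    quad = lambda Q: sum(Q[a][b] * x[a] * x[b]
                         for a in range(n) for b in range(n)) % q
    return sum(quad(Pf[i * s + l])
               * (sum(rows[l][k] * x[k] for k in range(n)) % q)
               for l in range(s)) % q

basis = kernel_basis(rows)            # dimension s^2 - s with high probability
checked = 0
for coeffs in itertools.product(range(q), repeat=len(basis)):
    x = [sum(t * bv[k] for t, bv in zip(coeffs, basis)) % q for k in range(n)]
    assert band_map(x) == 0           # E vanishes on the band kernel
    checked += 1
```

This vanishing behavior is exactly the rank defect that the MinRank-style portion of the attack detects.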
\begin{figure}[!ht]
\centering
\begin{tikzpicture}
\foreach \d in {4}{%
\pgfmathsetmacro\xxa{0}
\pgfmathsetmacro\yya{0}
\pgfmathsetmacro\zza{0}
\pgfmathsetmacro\xxb{\d}
\pgfmathsetmacro\yyb{0}
\pgfmathsetmacro\zzb{0}
\pgfmathsetmacro\xxc{\d}
\pgfmathsetmacro\yyc{0}
\pgfmathsetmacro\zzc{\d}
\pgfmathsetmacro\xxd{\d-1}
\pgfmathsetmacro\yyd{0}
\pgfmathsetmacro\zzd{\d}
\pgfmathsetmacro\xxe{\d-1}
\pgfmathsetmacro\yye{0}
\pgfmathsetmacro\zze{1}
\pgfmathsetmacro\xxf{0}
\pgfmathsetmacro\yyf{0}
\pgfmathsetmacro\zzf{1}
\pgfmathsetmacro\firstx{\xxa*\xdirx+\yya*\ydirx+\zza*\zdirx}
\pgfmathsetmacro\secondx{\xxb*\xdirx+\yyb*\ydirx+\zzb*\zdirx}
\pgfmathsetmacro\thirdx{\xxc*\xdirx+\yyc*\ydirx+\zzc*\zdirx}
\pgfmathsetmacro\fourthx{\xxd*\xdirx+\yyd*\ydirx+\zzd*\zdirx}
\pgfmathsetmacro\fifthx{\xxe*\xdirx+\yye*\ydirx+\zze*\zdirx}
\pgfmathsetmacro\sixthx{\xxf*\xdirx+\yyf*\ydirx+\zzf*\zdirx}
\pgfmathsetmacro\firsty{\xxa*\xdiry+\yya*\ydiry+\zza*\zdiry}
\pgfmathsetmacro\secondy{\xxb*\xdiry+\yyb*\ydiry+\zzb*\zdiry}
\pgfmathsetmacro\thirdy{\xxc*\xdiry+\yyc*\ydiry+\zzc*\zdiry}
\pgfmathsetmacro\fourthy{\xxd*\xdiry+\yyd*\ydiry+\zzd*\zdiry}
\pgfmathsetmacro\fifthy{\xxe*\xdiry+\yye*\ydiry+\zze*\zdiry}
\pgfmathsetmacro\sixthy{\xxf*\xdiry+\yyf*\ydiry+\zzf*\zdiry}
\fill [fill=mediumgray] (\firstx,\firsty) -- (\secondx,\secondy) -- (\thirdx,\thirdy) -- (\fourthx,\fourthy) -- (\fifthx,\fifthy) -- (\sixthx,\sixthy) -- (\firstx,\firsty);
}
\foreach \d in {4}{%
\pgfmathsetmacro\xxa{\d-1}
\pgfmathsetmacro\yya{0}
\pgfmathsetmacro\zza{\d}
\pgfmathsetmacro\xxb{\d}
\pgfmathsetmacro\yyb{0}
\pgfmathsetmacro\zzb{\d}
\pgfmathsetmacro\xxc{\d}
\pgfmathsetmacro\yyc{\d}
\pgfmathsetmacro\zzc{\d}
\pgfmathsetmacro\xxd{0}
\pgfmathsetmacro\yyd{\d}
\pgfmathsetmacro\zzd{\d}
\pgfmathsetmacro\xxe{0}
\pgfmathsetmacro\yye{\d-1}
\pgfmathsetmacro\zze{\d}
\pgfmathsetmacro\xxf{\d-1}
\pgfmathsetmacro\yyf{\d-1}
\pgfmathsetmacro\zzf{\d}
\pgfmathsetmacro\firstx{\xxa*\xdirx+\yya*\ydirx+\zza*\zdirx}
\pgfmathsetmacro\secondx{\xxb*\xdirx+\yyb*\ydirx+\zzb*\zdirx}
\pgfmathsetmacro\thirdx{\xxc*\xdirx+\yyc*\ydirx+\zzc*\zdirx}
\pgfmathsetmacro\fourthx{\xxd*\xdirx+\yyd*\ydirx+\zzd*\zdirx}
\pgfmathsetmacro\fifthx{\xxe*\xdirx+\yye*\ydirx+\zze*\zdirx}
\pgfmathsetmacro\sixthx{\xxf*\xdirx+\yyf*\ydirx+\zzf*\zdirx}
\pgfmathsetmacro\firsty{\xxa*\xdiry+\yya*\ydiry+\zza*\zdiry}
\pgfmathsetmacro\secondy{\xxb*\xdiry+\yyb*\ydiry+\zzb*\zdiry}
\pgfmathsetmacro\thirdy{\xxc*\xdiry+\yyc*\ydiry+\zzc*\zdiry}
\pgfmathsetmacro\fourthy{\xxd*\xdiry+\yyd*\ydiry+\zzd*\zdiry}
\pgfmathsetmacro\fifthy{\xxe*\xdiry+\yye*\ydiry+\zze*\zdiry}
\pgfmathsetmacro\sixthy{\xxf*\xdiry+\yyf*\ydiry+\zzf*\zdiry}
\fill [fill=lightgray] (\firstx,\firsty) -- (\secondx,\secondy) -- (\thirdx,\thirdy) -- (\fourthx,\fourthy) -- (\fifthx,\fifthy) -- (\sixthx,\sixthy) -- (\firstx,\firsty);
}
\foreach \d in {4}{%
\pgfmathsetmacro\xxa{0}
\pgfmathsetmacro\yya{0}
\pgfmathsetmacro\zza{0}
\pgfmathsetmacro\xxb{0}
\pgfmathsetmacro\yyb{0}
\pgfmathsetmacro\zzb{1}
\pgfmathsetmacro\xxc{0}
\pgfmathsetmacro\yyc{\d-1}
\pgfmathsetmacro\zzc{1}
\pgfmathsetmacro\xxd{0}
\pgfmathsetmacro\yyd{\d-1}
\pgfmathsetmacro\zzd{\d}
\pgfmathsetmacro\xxe{0}
\pgfmathsetmacro\yye{\d}
\pgfmathsetmacro\zze{\d}
\pgfmathsetmacro\xxf{0}
\pgfmathsetmacro\yyf{\d}
\pgfmathsetmacro\zzf{0}
\pgfmathsetmacro\firstx{\xxa*\xdirx+\yya*\ydirx+\zza*\zdirx}
\pgfmathsetmacro\secondx{\xxb*\xdirx+\yyb*\ydirx+\zzb*\zdirx}
\pgfmathsetmacro\thirdx{\xxc*\xdirx+\yyc*\ydirx+\zzc*\zdirx}
\pgfmathsetmacro\fourthx{\xxd*\xdirx+\yyd*\ydirx+\zzd*\zdirx}
\pgfmathsetmacro\fifthx{\xxe*\xdirx+\yye*\ydirx+\zze*\zdirx}
\pgfmathsetmacro\sixthx{\xxf*\xdirx+\yyf*\ydirx+\zzf*\zdirx}
\pgfmathsetmacro\firsty{\xxa*\xdiry+\yya*\ydiry+\zza*\zdiry}
\pgfmathsetmacro\secondy{\xxb*\xdiry+\yyb*\ydiry+\zzb*\zdiry}
\pgfmathsetmacro\thirdy{\xxc*\xdiry+\yyc*\ydiry+\zzc*\zdiry}
\pgfmathsetmacro\fourthy{\xxd*\xdiry+\yyd*\ydiry+\zzd*\zdiry}
\pgfmathsetmacro\fifthy{\xxe*\xdiry+\yye*\ydiry+\zze*\zdiry}
\pgfmathsetmacro\sixthy{\xxf*\xdiry+\yyf*\ydiry+\zzf*\zdiry}
\fill [fill=darkgray] (\firstx,\firsty) -- (\secondx,\secondy) -- (\thirdx,\thirdy) -- (\fourthx,\fourthy) -- (\fifthx,\fifthy) -- (\sixthx,\sixthy) -- (\firstx,\firsty);
}
\foreach \d in {3}{%
\pgfmathsetmacro\xxa{0}
\pgfmathsetmacro\yya{0}
\pgfmathsetmacro\zza{1}
\pgfmathsetmacro\xxb{\d}
\pgfmathsetmacro\yyb{0}
\pgfmathsetmacro\zzb{1}
\pgfmathsetmacro\xxc{\d}
\pgfmathsetmacro\yyc{\d}
\pgfmathsetmacro\zzc{1}
\pgfmathsetmacro\xxd{0}
\pgfmathsetmacro\yyd{\d}
\pgfmathsetmacro\zzd{1}
\pgfmathsetmacro\firstx{\xxa*\xdirx+\yya*\ydirx+\zza*\zdirx}
\pgfmathsetmacro\secondx{\xxb*\xdirx+\yyb*\ydirx+\zzb*\zdirx}
\pgfmathsetmacro\thirdx{\xxc*\xdirx+\yyc*\ydirx+\zzc*\zdirx}
\pgfmathsetmacro\fourthx{\xxd*\xdirx+\yyd*\ydirx+\zzd*\zdirx}
\pgfmathsetmacro\firsty{\xxa*\xdiry+\yya*\ydiry+\zza*\zdiry}
\pgfmathsetmacro\secondy{\xxb*\xdiry+\yyb*\ydiry+\zzb*\zdiry}
\pgfmathsetmacro\thirdy{\xxc*\xdiry+\yyc*\ydiry+\zzc*\zdiry}
\pgfmathsetmacro\fourthy{\xxd*\xdiry+\yyd*\ydiry+\zzd*\zdiry}
\fill [fill=lightgray] (\firstx,\firsty) -- (\secondx,\secondy) -- (\thirdx,\thirdy) -- (\fourthx,\fourthy) -- (\firstx,\firsty);
}
\foreach \d in {3}{%
\pgfmathsetmacro\xxa{0}
\pgfmathsetmacro\yya{\d}
\pgfmathsetmacro\zza{1}
\pgfmathsetmacro\xxb{\d}
\pgfmathsetmacro\yyb{\d}
\pgfmathsetmacro\zzb{1}
\pgfmathsetmacro\xxc{\d}
\pgfmathsetmacro\yyc{\d}
\pgfmathsetmacro\zzc{\d+1}
\pgfmathsetmacro\xxd{0}
\pgfmathsetmacro\yyd{\d}
\pgfmathsetmacro\zzd{\d+1}
\pgfmathsetmacro\firstx{\xxa*\xdirx+\yya*\ydirx+\zza*\zdirx}
\pgfmathsetmacro\secondx{\xxb*\xdirx+\yyb*\ydirx+\zzb*\zdirx}
\pgfmathsetmacro\thirdx{\xxc*\xdirx+\yyc*\ydirx+\zzc*\zdirx}
\pgfmathsetmacro\fourthx{\xxd*\xdirx+\yyd*\ydirx+\zzd*\zdirx}
\pgfmathsetmacro\firsty{\xxa*\xdiry+\yya*\ydiry+\zza*\zdiry}
\pgfmathsetmacro\secondy{\xxb*\xdiry+\yyb*\ydiry+\zzb*\zdiry}
\pgfmathsetmacro\thirdy{\xxc*\xdiry+\yyc*\ydiry+\zzc*\zdiry}
\pgfmathsetmacro\fourthy{\xxd*\xdiry+\yyd*\ydiry+\zzd*\zdiry}
\fill [fill=mediumgray] (\firstx,\firsty) -- (\secondx,\secondy) -- (\thirdx,\thirdy) -- (\fourthx,\fourthy) -- (\firstx,\firsty);
}
\foreach \d in {3}{%
\pgfmathsetmacro\xxa{\d}
\pgfmathsetmacro\yya{0}
\pgfmathsetmacro\zza{1}
\pgfmathsetmacro\xxb{\d}
\pgfmathsetmacro\yyb{0}
\pgfmathsetmacro\zzb{\d+1}
\pgfmathsetmacro\xxc{\d}
\pgfmathsetmacro\yyc{\d}
\pgfmathsetmacro\zzc{\d+1}
\pgfmathsetmacro\xxd{\d}
\pgfmathsetmacro\yyd{\d}
\pgfmathsetmacro\zzd{1}
\pgfmathsetmacro\firstx{\xxa*\xdirx+\yya*\ydirx+\zza*\zdirx}
\pgfmathsetmacro\secondx{\xxb*\xdirx+\yyb*\ydirx+\zzb*\zdirx}
\pgfmathsetmacro\thirdx{\xxc*\xdirx+\yyc*\ydirx+\zzc*\zdirx}
\pgfmathsetmacro\fourthx{\xxd*\xdirx+\yyd*\ydirx+\zzd*\zdirx}
\pgfmathsetmacro\firsty{\xxa*\xdiry+\yya*\ydiry+\zza*\zdiry}
\pgfmathsetmacro\secondy{\xxb*\xdiry+\yyb*\ydiry+\zzb*\zdiry}
\pgfmathsetmacro\thirdy{\xxc*\xdiry+\yyc*\ydiry+\zzc*\zdiry}
\pgfmathsetmacro\fourthy{\xxd*\xdiry+\yyd*\ydiry+\zzd*\zdiry}
\fill [fill=darkgray] (\firstx,\firsty) -- (\secondx,\secondy) -- (\thirdx,\thirdy) -- (\fourthx,\fourthy) -- (\firstx,\firsty);
}
\foreach \e in {0,1}{%
\foreach \d in {4}{%
\pgfmathsetmacro\xxa{0}
\pgfmathsetmacro\yya{0}
\pgfmathsetmacro\zza{\e}
\pgfmathsetmacro\xxb{\d}
\pgfmathsetmacro\yyb{0}
\pgfmathsetmacro\zzb{\e}
\pgfmathsetmacro\firstx{\xxa*\xdirx+\yya*\ydirx+\zza*\zdirx}
\pgfmathsetmacro\secondx{\xxb*\xdirx+\yyb*\ydirx+\zzb*\zdirx}
\pgfmathsetmacro\firsty{\xxa*\xdiry+\yya*\ydiry+\zza*\zdiry}
\pgfmathsetmacro\secondy{\xxb*\xdiry+\yyb*\ydiry+\zzb*\zdiry}
\draw [thick] (\firstx, \firsty) -- (\secondx, \secondy);
}
}
\foreach \e in {0,1}{%
\foreach \d in {4}{%
\pgfmathsetmacro\xxa{0}
\pgfmathsetmacro\yya{0}
\pgfmathsetmacro\zza{\e}
\pgfmathsetmacro\xxb{0}
\pgfmathsetmacro\yyb{\d}
\pgfmathsetmacro\zzb{\e}
\pgfmathsetmacro\firstx{\xxa*\xdirx+\yya*\ydirx+\zza*\zdirx}
\pgfmathsetmacro\secondx{\xxb*\xdirx+\yyb*\ydirx+\zzb*\zdirx}
\pgfmathsetmacro\firsty{\xxa*\xdiry+\yya*\ydiry+\zza*\zdiry}
\pgfmathsetmacro\secondy{\xxb*\xdiry+\yyb*\ydiry+\zzb*\zdiry}
\draw [thick] (\firstx, \firsty) -- (\secondx, \secondy);
}
}
\foreach \e in {3,4}{%
\foreach \d in {3}{%
\pgfmathsetmacro\xxa{\e}
\pgfmathsetmacro\yya{0}
\pgfmathsetmacro\zza{1}
\pgfmathsetmacro\xxb{\e}
\pgfmathsetmacro\yyb{0}
\pgfmathsetmacro\zzb{1+\d}
\pgfmathsetmacro\firstx{\xxa*\xdirx+\yya*\ydirx+\zza*\zdirx}
\pgfmathsetmacro\secondx{\xxb*\xdirx+\yyb*\ydirx+\zzb*\zdirx}
\pgfmathsetmacro\firsty{\xxa*\xdiry+\yya*\ydiry+\zza*\zdiry}
\pgfmathsetmacro\secondy{\xxb*\xdiry+\yyb*\ydiry+\zzb*\zdiry}
\draw [thick] (\firstx, \firsty) -- (\secondx, \secondy);
}
}
\foreach \e in {3,4}{%
\foreach \d in {3}{%
\pgfmathsetmacro\xxa{0}
\pgfmathsetmacro\yya{\e}
\pgfmathsetmacro\zza{1}
\pgfmathsetmacro\xxb{0}
\pgfmathsetmacro\yyb{\e}
\pgfmathsetmacro\zzb{1+\d}
\pgfmathsetmacro\firstx{\xxa*\xdirx+\yya*\ydirx+\zza*\zdirx}
\pgfmathsetmacro\secondx{\xxb*\xdirx+\yyb*\ydirx+\zzb*\zdirx}
\pgfmathsetmacro\firsty{\xxa*\xdiry+\yya*\ydiry+\zza*\zdiry}
\pgfmathsetmacro\secondy{\xxb*\xdiry+\yyb*\ydiry+\zzb*\zdiry}
\draw [thick] (\firstx, \firsty) -- (\secondx, \secondy);
}
}
\foreach \e in {3,4}{%
\foreach \d in {3}{%
\pgfmathsetmacro\xxa{\e}
\pgfmathsetmacro\yya{0}
\pgfmathsetmacro\zza{4}
\pgfmathsetmacro\xxb{\e}
\pgfmathsetmacro\yyb{4}
\pgfmathsetmacro\zzb{4}
\pgfmathsetmacro\firstx{\xxa*\xdirx+\yya*\ydirx+\zza*\zdirx}
\pgfmathsetmacro\secondx{\xxb*\xdirx+\yyb*\ydirx+\zzb*\zdirx}
\pgfmathsetmacro\firsty{\xxa*\xdiry+\yya*\ydiry+\zza*\zdiry}
\pgfmathsetmacro\secondy{\xxb*\xdiry+\yyb*\ydiry+\zzb*\zdiry}
\draw [thick] (\firstx, \firsty) -- (\secondx, \secondy);
}
}
\foreach \e in {3,4}{%
\foreach \d in {3}{%
\pgfmathsetmacro\xxa{0}
\pgfmathsetmacro\yya{\e}
\pgfmathsetmacro\zza{4}
\pgfmathsetmacro\xxb{4}
\pgfmathsetmacro\yyb{\e}
\pgfmathsetmacro\zzb{4}
\pgfmathsetmacro\firstx{\xxa*\xdirx+\yya*\ydirx+\zza*\zdirx}
\pgfmathsetmacro\secondx{\xxb*\xdirx+\yyb*\ydirx+\zzb*\zdirx}
\pgfmathsetmacro\firsty{\xxa*\xdiry+\yya*\ydiry+\zza*\zdiry}
\pgfmathsetmacro\secondy{\xxb*\xdiry+\yyb*\ydiry+\zzb*\zdiry}
\draw [thick] (\firstx, \firsty) -- (\secondx, \secondy);
}
}
\foreach \e in {1,2,3}{%
\foreach \d in {1,2,3}{%
\pgfmathsetmacro\xxa{3}
\pgfmathsetmacro\yya{\e}
\pgfmathsetmacro\zza{\d}
\pgfmathsetmacro\xxb{3}
\pgfmathsetmacro\yyb{\e-1}
\pgfmathsetmacro\zzb{\d}
\pgfmathsetmacro\xxc{3}
\pgfmathsetmacro\yyc{\e}
\pgfmathsetmacro\zzc{\d+1}
\pgfmathsetmacro\firstx{\xxa*\xdirx+\yya*\ydirx+\zza*\zdirx}
\pgfmathsetmacro\secondx{\xxb*\xdirx+\yyb*\ydirx+\zzb*\zdirx}
\pgfmathsetmacro\thirdx{\xxc*\xdirx+\yyc*\ydirx+\zzc*\zdirx}
\pgfmathsetmacro\firsty{\xxa*\xdiry+\yya*\ydiry+\zza*\zdiry}
\pgfmathsetmacro\secondy{\xxb*\xdiry+\yyb*\ydiry+\zzb*\zdiry}
\pgfmathsetmacro\thirdy{\xxc*\xdiry+\yyc*\ydiry+\zzc*\zdiry}
\draw [thick] (\thirdx,\thirdy) -- (\firstx, \firsty) -- (\secondx, \secondy);
}
}
\foreach \e in {1,2}{%
\foreach \d in {1,2,3}{%
\pgfmathsetmacro\xxa{\e}
\pgfmathsetmacro\yya{3}
\pgfmathsetmacro\zza{\d}
\pgfmathsetmacro\xxb{\e-1}
\pgfmathsetmacro\yyb{3}
\pgfmathsetmacro\zzb{\d}
\pgfmathsetmacro\xxc{\e}
\pgfmathsetmacro\yyc{3}
\pgfmathsetmacro\zzc{\d+1}
\pgfmathsetmacro\firstx{\xxa*\xdirx+\yya*\ydirx+\zza*\zdirx}
\pgfmathsetmacro\secondx{\xxb*\xdirx+\yyb*\ydirx+\zzb*\zdirx}
\pgfmathsetmacro\thirdx{\xxc*\xdirx+\yyc*\ydirx+\zzc*\zdirx}
\pgfmathsetmacro\firsty{\xxa*\xdiry+\yya*\ydiry+\zza*\zdiry}
\pgfmathsetmacro\secondy{\xxb*\xdiry+\yyb*\ydiry+\zzb*\zdiry}
\pgfmathsetmacro\thirdy{\xxc*\xdiry+\yyc*\ydiry+\zzc*\zdiry}
\draw [thick] (\thirdx,\thirdy) -- (\firstx, \firsty) -- (\secondx, \secondy);
}
}
\foreach \d in {1,2,3}{%
\pgfmathsetmacro\xxa{2}
\pgfmathsetmacro\yya{3}
\pgfmathsetmacro\zza{\d}
\pgfmathsetmacro\xxb{3}
\pgfmathsetmacro\yyb{3}
\pgfmathsetmacro\zzb{\d}
\pgfmathsetmacro\firsty{\xxa*\xdiry+\yya*\ydiry+\zza*\zdiry}
\pgfmathsetmacro\secondy{\xxb*\xdiry+\yyb*\ydiry+\zzb*\zdiry}
\pgfmathsetmacro\firstx{\xxa*\xdirx+\yya*\ydirx+\zza*\zdirx}
\pgfmathsetmacro\secondx{\xxb*\xdirx+\yyb*\ydirx+\zzb*\zdirx}
\draw [thick] (\firstx, \firsty) -- (\secondx, \secondy);
}
\foreach \e in {1,2}{%
\foreach \d in {1,2}{%
\pgfmathsetmacro\xxa{\e}
\pgfmathsetmacro\yya{\d}
\pgfmathsetmacro\zza{1}
\pgfmathsetmacro\xxb{\e-1}
\pgfmathsetmacro\yyb{\d}
\pgfmathsetmacro\zzb{1}
\pgfmathsetmacro\xxc{\e}
\pgfmathsetmacro\yyc{\d-1}
\pgfmathsetmacro\zzc{1}
\pgfmathsetmacro\firstx{\xxa*\xdirx+\yya*\ydirx+\zza*\zdirx}
\pgfmathsetmacro\secondx{\xxb*\xdirx+\yyb*\ydirx+\zzb*\zdirx}
\pgfmathsetmacro\thirdx{\xxc*\xdirx+\yyc*\ydirx+\zzc*\zdirx}
\pgfmathsetmacro\firsty{\xxa*\xdiry+\yya*\ydiry+\zza*\zdiry}
\pgfmathsetmacro\secondy{\xxb*\xdiry+\yyb*\ydiry+\zzb*\zdiry}
\pgfmathsetmacro\thirdy{\xxc*\xdiry+\yyc*\ydiry+\zzc*\zdiry}
\draw [thick] (\thirdx,\thirdy) -- (\firstx, \firsty) -- (\secondx, \secondy);
}
}
\foreach \d in {1,2,3}{%
\pgfmathsetmacro\xxa{\d}
\pgfmathsetmacro\yya{2}
\pgfmathsetmacro\zza{1}
\pgfmathsetmacro\xxb{\d}
\pgfmathsetmacro\yyb{3}
\pgfmathsetmacro\zzb{1}
\pgfmathsetmacro\firsty{\xxa*\xdiry+\yya*\ydiry+\zza*\zdiry}
\pgfmathsetmacro\secondy{\xxb*\xdiry+\yyb*\ydiry+\zzb*\zdiry}
\pgfmathsetmacro\firstx{\xxa*\xdirx+\yya*\ydirx+\zza*\zdirx}
\pgfmathsetmacro\secondx{\xxb*\xdirx+\yyb*\ydirx+\zzb*\zdirx}
\draw [thick] (\firstx, \firsty) -- (\secondx, \secondy);
}
\foreach \d in {1,2}{%
\pgfmathsetmacro\xxa{2}
\pgfmathsetmacro\yya{\d}
\pgfmathsetmacro\zza{1}
\pgfmathsetmacro\xxb{3}
\pgfmathsetmacro\yyb{\d}
\pgfmathsetmacro\zzb{1}
\pgfmathsetmacro\firsty{\xxa*\xdiry+\yya*\ydiry+\zza*\zdiry}
\pgfmathsetmacro\secondy{\xxb*\xdiry+\yyb*\ydiry+\zzb*\zdiry}
\pgfmathsetmacro\firstx{\xxa*\xdirx+\yya*\ydirx+\zza*\zdirx}
\pgfmathsetmacro\secondx{\xxb*\xdirx+\yyb*\ydirx+\zzb*\zdirx}
\draw [thick] (\firstx, \firsty) -- (\secondx, \secondy);
}
\foreach \d in {0,1,2,3,4}{%
\pgfmathsetmacro\xxa{0+\d}
\pgfmathsetmacro\yya{0}
\pgfmathsetmacro\zza{0}
\pgfmathsetmacro\xxb{0+\d}
\pgfmathsetmacro\yyb{0}
\pgfmathsetmacro\zzb{1}
\pgfmathsetmacro\firstx{\xxa*\xdirx+\yya*\ydirx+\zza*\zdirx}
\pgfmathsetmacro\secondx{\xxb*\xdirx+\yyb*\ydirx+\zzb*\zdirx}
\pgfmathsetmacro\firsty{\xxa*\xdiry+\yya*\ydiry+\zza*\zdiry}
\pgfmathsetmacro\secondy{\xxb*\xdiry+\yyb*\ydiry+\zzb*\zdiry}
\draw [thick] (\firstx, \firsty) -- (\secondx, \secondy);
}
\foreach \d in {0,1,2,3,4}{%
\pgfmathsetmacro\xxa{0}
\pgfmathsetmacro\yya{0+\d}
\pgfmathsetmacro\zza{0}
\pgfmathsetmacro\xxb{0}
\pgfmathsetmacro\yyb{0+\d}
\pgfmathsetmacro\zzb{1}
\pgfmathsetmacro\firstx{\xxa*\xdirx+\yya*\ydirx+\zza*\zdirx}
\pgfmathsetmacro\secondx{\xxb*\xdirx+\yyb*\ydirx+\zzb*\zdirx}
\pgfmathsetmacro\firsty{\xxa*\xdiry+\yya*\ydiry+\zza*\zdiry}
\pgfmathsetmacro\secondy{\xxb*\xdiry+\yyb*\ydiry+\zzb*\zdiry}
\draw [thick] (\firstx, \firsty) -- (\secondx, \secondy);
}
\foreach \d in {2,3,4}{%
\pgfmathsetmacro\xxa{3}
\pgfmathsetmacro\yya{0}
\pgfmathsetmacro\zza{\d}
\pgfmathsetmacro\xxb{4}
\pgfmathsetmacro\yyb{0}
\pgfmathsetmacro\zzb{\d}
\pgfmathsetmacro\firstx{\xxa*\xdirx+\yya*\ydirx+\zza*\zdirx}
\pgfmathsetmacro\secondx{\xxb*\xdirx+\yyb*\ydirx+\zzb*\zdirx}
\pgfmathsetmacro\firsty{\xxa*\xdiry+\yya*\ydiry+\zza*\zdiry}
\pgfmathsetmacro\secondy{\xxb*\xdiry+\yyb*\ydiry+\zzb*\zdiry}
\draw [thick] (\firstx, \firsty) -- (\secondx, \secondy);
}
\foreach \d in {2,3,4}{%
\pgfmathsetmacro\xxa{0}
\pgfmathsetmacro\yya{3}
\pgfmathsetmacro\zza{\d}
\pgfmathsetmacro\xxb{0}
\pgfmathsetmacro\yyb{4}
\pgfmathsetmacro\zzb{\d}
\pgfmathsetmacro\firstx{\xxa*\xdirx+\yya*\ydirx+\zza*\zdirx}
\pgfmathsetmacro\secondx{\xxb*\xdirx+\yyb*\ydirx+\zzb*\zdirx}
\pgfmathsetmacro\firsty{\xxa*\xdiry+\yya*\ydiry+\zza*\zdiry}
\pgfmathsetmacro\secondy{\xxb*\xdiry+\yyb*\ydiry+\zzb*\zdiry}
\draw [thick] (\firstx, \firsty) -- (\secondx, \secondy);
}
\foreach \d in {1,2}{%
\pgfmathsetmacro\xxa{3}
\pgfmathsetmacro\yya{\d}
\pgfmathsetmacro\zza{4}
\pgfmathsetmacro\xxb{4}
\pgfmathsetmacro\yyb{\d}
\pgfmathsetmacro\zzb{4}
\pgfmathsetmacro\firstx{\xxa*\xdirx+\yya*\ydirx+\zza*\zdirx}
\pgfmathsetmacro\secondx{\xxb*\xdirx+\yyb*\ydirx+\zzb*\zdirx}
\pgfmathsetmacro\firsty{\xxa*\xdiry+\yya*\ydiry+\zza*\zdiry}
\pgfmathsetmacro\secondy{\xxb*\xdiry+\yyb*\ydiry+\zzb*\zdiry}
\draw [thick] (\firstx, \firsty) -- (\secondx, \secondy);
}
\foreach \d in {1,2}{%
\pgfmathsetmacro\xxa{\d}
\pgfmathsetmacro\yya{3}
\pgfmathsetmacro\zza{4}
\pgfmathsetmacro\xxb{\d}
\pgfmathsetmacro\yyb{4}
\pgfmathsetmacro\zzb{4}
\pgfmathsetmacro\firstx{\xxa*\xdirx+\yya*\ydirx+\zza*\zdirx}
\pgfmathsetmacro\secondx{\xxb*\xdirx+\yyb*\ydirx+\zzb*\zdirx}
\pgfmathsetmacro\firsty{\xxa*\xdiry+\yya*\ydiry+\zza*\zdiry}
\pgfmathsetmacro\secondy{\xxb*\xdiry+\yyb*\ydiry+\zzb*\zdiry}
\draw [thick] (\firstx, \firsty) -- (\secondx, \secondy);
}
\foreach \d in {3}{%
\pgfmathsetmacro\xxa{0}
\pgfmathsetmacro\yya{0}
\pgfmathsetmacro\zza{1}
\pgfmathsetmacro\xxb{0}
\pgfmathsetmacro\yyb{0}
\pgfmathsetmacro\zzb{\d+1}
\pgfmathsetmacro\xxc{\d}
\pgfmathsetmacro\yyc{0}
\pgfmathsetmacro\zzc{\d+1}
\pgfmathsetmacro\xxd{0}
\pgfmathsetmacro\yyd{\d}
\pgfmathsetmacro\zzd{\d+1}
\pgfmathsetmacro\firstx{\xxa*\xdirx+\yya*\ydirx+\zza*\zdirx}
\pgfmathsetmacro\secondx{\xxb*\xdirx+\yyb*\ydirx+\zzb*\zdirx}
\pgfmathsetmacro\thirdx{\xxc*\xdirx+\yyc*\ydirx+\zzc*\zdirx}
\pgfmathsetmacro\fourthx{\xxd*\xdirx+\yyd*\ydirx+\zzd*\zdirx}
\pgfmathsetmacro\firsty{\xxa*\xdiry+\yya*\ydiry+\zza*\zdiry}
\pgfmathsetmacro\secondy{\xxb*\xdiry+\yyb*\ydiry+\zzb*\zdiry}
\pgfmathsetmacro\thirdy{\xxc*\xdiry+\yyc*\ydiry+\zzc*\zdiry}
\pgfmathsetmacro\fourthy{\xxd*\xdiry+\yyd*\ydiry+\zzd*\zdiry}
\draw [thick, dashed] (\firstx,\firsty) -- (\secondx,\secondy) -- (\thirdx,\thirdy);
\draw [thick, dashed] (\secondx,\secondy) -- (\fourthx,\fourthy);
}
\foreach \d in {4}{%
\pgfmathsetmacro\xxa{0}
\pgfmathsetmacro\yya{0}
\pgfmathsetmacro\zza{0}
\pgfmathsetmacro\xxb{0}
\pgfmathsetmacro\yyb{0}
\pgfmathsetmacro\zzb{-1}
\pgfmathsetmacro\xxc{\d}
\pgfmathsetmacro\yyc{0}
\pgfmathsetmacro\zzc{\d}
\pgfmathsetmacro\xxd{\d+1}
\pgfmathsetmacro\yyd{0}
\pgfmathsetmacro\zzd{\d}
\pgfmathsetmacro\xxe{0}
\pgfmathsetmacro\yye{\d}
\pgfmathsetmacro\zze{\d}
\pgfmathsetmacro\xxf{0}
\pgfmathsetmacro\yyf{\d+1}
\pgfmathsetmacro\zzf{\d}
\pgfmathsetmacro\firstx{\xxa*\xdirx+\yya*\ydirx+\zza*\zdirx}
\pgfmathsetmacro\secondx{\xxb*\xdirx+\yyb*\ydirx+\zzb*\zdirx}
\pgfmathsetmacro\thirdx{\xxc*\xdirx+\yyc*\ydirx+\zzc*\zdirx}
\pgfmathsetmacro\fourthx{\xxd*\xdirx+\yyd*\ydirx+\zzd*\zdirx}
\pgfmathsetmacro\fifthx{\xxe*\xdirx+\yye*\ydirx+\zze*\zdirx}
\pgfmathsetmacro\sixthx{\xxf*\xdirx+\yyf*\ydirx+\zzf*\zdirx}
\pgfmathsetmacro\firsty{\xxa*\xdiry+\yya*\ydiry+\zza*\zdiry}
\pgfmathsetmacro\secondy{\xxb*\xdiry+\yyb*\ydiry+\zzb*\zdiry}
\pgfmathsetmacro\thirdy{\xxc*\xdiry+\yyc*\ydiry+\zzc*\zdiry}
\pgfmathsetmacro\fourthy{\xxd*\xdiry+\yyd*\ydiry+\zzd*\zdiry}
\pgfmathsetmacro\fifthy{\xxe*\xdiry+\yye*\ydiry+\zze*\zdiry}
\pgfmathsetmacro\sixthy{\xxf*\xdiry+\yyf*\ydiry+\zzf*\zdiry}
\draw [->] (\firstx,\firsty) -- (\secondx,\secondy);
\draw [->] (\thirdx,\thirdy) -- (\fourthx,\fourthy);
\draw [->] (\fifthx,\fifthy) -- (\sixthx,\sixthy);
\node [above right] at (\fourthx,\fourthy) {$x_1$};
\node [left] at (\sixthx,\sixthy) {$x_2$\phantom{.}};
\node [above right] at (\secondx,\secondy) {$x_3$};
}
\end{tikzpicture}
\caption{$3$-tensor structure of the third formal derivative of a band space map. Solid regions correspond to nonzero coefficients. Transparent regions correspond to zero coefficients.}\label{fig:bsd}
\end{figure}
\section{A Variant of MinRank Exploiting the Column Band Space Structure}\label{sec:modminrank}
A MinRank-like attack may be used to locate the column band space maps defined in the previous section. The attack proceeds by selecting two $s^2$-dimensional vectors $\mathbf{w}_1$ and $\mathbf{w}_2$, setting
\begin{align}\label{CubicPubRankEq}
\bsp
\sum_{i=1}^{2s^2}t_i\nabla\mathcal{E}_i(\mathbf{w}_1) = 0,\\
\sum_{i=1}^{2s^2}t_i\nabla\mathcal{E}_i(\mathbf{w}_2) = 0,\\
\esp
\end{align}
and then solving for the $t_i$. The attack succeeds when $\sum_{i=1}^{2s^2}t_i\mathcal{E}_i \in \mathcal{B}_{\beta,\gamma}$, and $\mathbf{w}_1$ and $\mathbf{w}_2$ are within the corresponding band kernel. If these conditions are met, then the 2-tensors
\[
\sum_{i=1}^{2s^2}t_i\mathbf{H}(\mathcal{E}_i)(\mathbf{w}_1)\mbox{ and }\sum_{i=1}^{2s^2}t_i\mathbf{H}(\mathcal{E}_i)(\mathbf{w}_2),
\]
will have rank at most $2s$ (see Figure~\ref{fig:inducedForm}), and this will be easily detectable. Here $\mathbf{H}(\mathcal{E}_i)$ is the Hessian matrix
\[
\mathbf{H}(\mathcal{E}_i):=\left[\begin{matrix}
\frac{\partial^2\mathcal{E}_i}{\partial x_1^2} & \frac{\partial^2\mathcal{E}_i}{\partial x_1\partial x_2} & \cdots & \frac{\partial^2\mathcal{E}_i}{\partial x_1\partial x_{n}}\\
\frac{\partial^2\mathcal{E}_i}{\partial x_1\partial x_2} & \frac{\partial^2\mathcal{E}_i}{\partial x_2^2} & \cdots & \frac{\partial^2\mathcal{E}_i}{\partial x_2\partial x_{n}}\\
\vdots & \vdots & \ddots & \vdots\\
\frac{\partial^2\mathcal{E}_i}{\partial x_{n}\partial x_1} & \frac{\partial^2\mathcal{E}_i}{\partial x_{n}\partial x_2} & \cdots & \frac{\partial^2\mathcal{E}_i}{\partial x_{n}^2}\\
\end{matrix}\right].
\]
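Since the attack manipulates $\nabla\mathcal{E}_i$ and $\mathbf{H}(\mathcal{E}_i)$ purely through formal derivatives, both objects can be computed with elementary polynomial arithmetic. The following Python sketch is illustrative only (a toy prime field and dimension stand in for the scheme's $\mathbb{F}_{2^8}$ and $n=s^2$; the monomial-dictionary representation is our own choice): it evaluates the gradient and Hessian of a random cubic over $\mathbb{F}_q$ and checks that the Hessian matrix is symmetric.

```python
import random

def formal_derivative(poly, j, q):
    """Formal partial derivative d/dx_j over F_q.
    poly: dict mapping exponent tuples to coefficients mod q."""
    out = {}
    for exps, c in poly.items():
        e = exps[j]
        if e == 0:
            continue
        new = list(exps); new[j] = e - 1
        key = tuple(new)
        out[key] = (out.get(key, 0) + c * e) % q
    return {k: v for k, v in out.items() if v}

def evaluate(poly, w, q):
    """Evaluate poly at the point w, coordinate-wise mod q."""
    total = 0
    for exps, c in poly.items():
        term = c
        for x, e in zip(w, exps):
            term = term * pow(x, e, q) % q
        total = (total + term) % q
    return total

def gradient(poly, w, q):
    """The vector of first formal derivatives, evaluated at w."""
    return [evaluate(formal_derivative(poly, j, q), w, q) for j in range(len(w))]

def hessian_at(poly, w, q):
    """The matrix of second formal derivatives, evaluated at w."""
    n = len(w)
    return [[evaluate(formal_derivative(formal_derivative(poly, i, q), j, q), w, q)
             for j in range(n)] for i in range(n)]

if __name__ == "__main__":
    random.seed(1)
    q, n = 31, 4  # toy field size and number of variables
    # random dense cubic: all monomials of total degree exactly 3
    def cubics(n):
        for i in range(n):
            for j in range(i, n):
                for k in range(j, n):
                    e = [0] * n; e[i] += 1; e[j] += 1; e[k] += 1
                    yield tuple(e)
    E = {m: random.randrange(1, q) for m in cubics(n)}
    w = [random.randrange(q) for _ in range(n)]
    H = hessian_at(E, w, q)
    assert all(H[i][j] == H[j][i] for i in range(n) for j in range(n))
    print(gradient(E, w, q))
```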
\begin{Thm}\label{BandKernelProb}
The probability that 2 randomly chosen vectors, $\mathbf{w}_1$ and $\mathbf{w}_2$, are both in the band kernel of some band space $\mathcal{B}_{\beta,\gamma}$ is approximately $\frac{1}{q-1}$.
%The probability that 3 randomly chosen vectors, $\mathbf{w}_1$, $\mathbf{w}_2$, and $\mathbf{w}_3$, are all in the band kernel of some band space $\mathcal{B}_{\beta,\gamma}$ is
%approximately $\frac{1}{(q-1)q^s}$.
\end{Thm}
\begin{proof}
The condition that $\mathbf{w}_1$ and $\mathbf{w}_2$ are contained within a band kernel is that there be a nontrivial linear combination of the columns of the following matrix equal to zero (i.e., that the matrix has nonzero column corank):
\begin{equation*}
\left[ \begin{matrix}
b_1(\mathbf{w}_1) & b_2(\mathbf{w}_1) & \ldots & b_s(\mathbf{w}_1) & \vline &c_1(\mathbf{w}_1) & c_2(\mathbf{w}_1) & \ldots & c_s(\mathbf{w}_1) \\
b_{s+1}(\mathbf{w}_1)&b_{s+2}(\mathbf{w}_1)&\ldots&b_{2s}(\mathbf{w}_1)& \vline &c_{s+1}(\mathbf{w}_1)&c_{s+2}(\mathbf{w}_1)&\ldots&c_{2s}(\mathbf{w}_1)\\
\vdots& \vdots & \ddots & \vdots & \vline & \vdots& \vdots & \ddots & \vdots\\
b_{s^2-s+1}(\mathbf{w}_1) & b_{s^2-s+2}(\mathbf{w}_1) & \ldots &b_{s^2}(\mathbf{w}_1) & \vline &c_{s^2-s+1}(\mathbf{w}_1) &c_{s^2-s+2}(\mathbf{w}_1) & \ldots &c_{s^2}(\mathbf{w}_1)\\
\hline
b_1(\mathbf{w}_2) & b_2(\mathbf{w}_2) & \ldots & b_s(\mathbf{w}_2) & \vline &c_1(\mathbf{w}_2) & c_2(\mathbf{w}_2) & \ldots & c_s(\mathbf{w}_2) \\
b_{s+1}(\mathbf{w}_2)&b_{s+2}(\mathbf{w}_2)&\ldots&b_{2s}(\mathbf{w}_2)& \vline &c_{s+1}(\mathbf{w}_2)&c_{s+2}(\mathbf{w}_2)&\ldots&c_{2s}(\mathbf{w}_2)\\
\vdots& \vdots & \ddots & \vdots & \vline & \vdots& \vdots & \ddots & \vdots\\
b_{s^2-s+1}(\mathbf{w}_2) & b_{s^2-s+2}(\mathbf{w}_2) & \ldots &b_{s^2}(\mathbf{w}_2) & \vline &c_{s^2-s+1}(\mathbf{w}_2) &c_{s^2-s+2}(\mathbf{w}_2) & \ldots &c_{s^2}(\mathbf{w}_2)\\
%\hline
%\vdots&\vdots& & \vdots & \vline & \vdots&\vdots& & \vdots\\
\end{matrix}\right].
\end{equation*}
The matrix is a uniformly random $2s \times 2s$ matrix, which has nonzero column corank with probability approximately $\frac{1}{q-1}$.
\qed
\end{proof}
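The corank estimate in this proof can be checked empirically. The sketch below is a Monte Carlo experiment under stated assumptions (a small prime field $q=7$ and $s=2$, chosen so plain Gaussian elimination suffices; real parameters use fields like $\mathbb{F}_{2^8}$): it measures how often a uniformly random $2s\times 2s$ matrix over $\mathbb{F}_q$ has nonzero column corank and compares against the $\frac{1}{q-1}$ heuristic.

```python
import random

def rank_mod_q(rows, q):
    """Rank of a matrix over the prime field F_q via Gaussian elimination."""
    m = [r[:] for r in rows]
    rank, col, nrows, ncols = 0, 0, len(m), len(m[0])
    while rank < nrows and col < ncols:
        pivot = next((r for r in range(rank, nrows) if m[r][col] % q), None)
        if pivot is None:
            col += 1
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        inv = pow(m[rank][col], q - 2, q)  # Fermat inverse, valid for prime q
        m[rank] = [x * inv % q for x in m[rank]]
        for r in range(nrows):
            if r != rank and m[r][col]:
                f = m[r][col]
                m[r] = [(a - f * b) % q for a, b in zip(m[r], m[rank])]
        rank += 1
        col += 1
    return rank

def corank_frequency(q, n, trials, seed=0):
    """Fraction of uniformly random n x n matrices over F_q with rank < n."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        mat = [[rng.randrange(q) for _ in range(n)] for _ in range(n)]
        if rank_mod_q(mat, q) < n:  # nonzero column corank
            hits += 1
    return hits / trials

if __name__ == "__main__":
    q, s = 7, 2  # toy values; the scheme uses e.g. q = 2^8
    freq = corank_frequency(q, 2 * s, trials=20000)
    print(freq, 1 / (q - 1))  # empirical frequency vs. the 1/(q-1) heuristic
```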
\begin{Thm}\label{BandSpaceSolutionProb}
If $\mathbf{w}_1$ and $\mathbf{w}_2$ are chosen in such a way that
they are both in the band kernel of a column band space $\mathcal{B}_{\beta,\gamma}$, and they are linearly independent from one another and statistically
independent from the private quadratic forms, $p_{(i-1)s+ j}$ in the matrix $A$,
then $\mathbf{w}_1$ and $\mathbf{w}_2$ are both in the kernel of the first formal derivative of some column band space map, $\mathcal{E} = \sum_{\mathcal{E}_{\beta,\gamma,i}\in\mathcal{B}_{\beta,\gamma}}\tau_i \mathcal{E}_{\beta,\gamma,i}$ with probability approximately $\frac{1}{(q-1)q^s}$.
\end{Thm}
\begin{proof}
An $\mathcal{E}$ meeting the above condition exists iff there is a nontrivial solution to the following system of equations
\begin{align} \label{BandRankEq}
\bsp
\sum_{\mathcal{E}_{\beta,\gamma,i}\in\mathcal{B}_{\beta,\gamma}}\tau_i \nabla\mathcal{E}_{\beta,\gamma,i}(\mathbf{w}_1)&=0,\\
\sum_{\mathcal{E}_{\beta,\gamma,i}\in\mathcal{B}_{\beta,\gamma}}\tau_i \nabla\mathcal{E}_{\beta,\gamma,i}(\mathbf{w}_2)&=0.\\
\esp
\end{align}
We may express our band space maps in a basis (e.g. the $u'_i$ basis used in Definition~\ref{bandkerneldef}) where the first $s$ basis vectors are chosen to be outside the band kernel, and the remaining $s^2-s$ basis vectors are chosen from within the band kernel. Combining this with Definition 1, we see that the band space maps can be written as
\[
\mathcal{E}_{\beta,\gamma,i} = \sum_{j=1}^s{p_{(i-1)s+ j}u'_j}.
\]
Note that $\mathbf{w}_1$ and $\mathbf{w}_2$ are band kernel vectors, and so for both vectors we have that $u'_j=0$ for $j = 1, \ldots, s$. Therefore, in such a basis, the only formal derivatives of $\mathcal{E}_{\beta,\gamma,i}$ that can be nonzero at these points are $\frac{\partial \mathcal{E}_{\beta,\gamma,i}}{\partial u'_j} = p_{(i-1)s+ j}$ for $j = 1, \ldots, s$. Thus in order for there to be a nontrivial solution to Equation \eqref{BandRankEq}, it is necessary and sufficient that $\sum_{i=1}^s\tau_ip_{(i-1)s+ j}(\mathbf{w}_k)=0$ for $j = 1, \ldots, s$ and $k = 1,2$.
This condition will be satisfied if and only if the following $2s\times s$ matrix has nonzero column corank:
$$
\left[\begin{matrix}
p_1(\mathbf{w}_1) & p_{s+1} (\mathbf{w}_1) & \cdots & p_{s^2-s+1}(\mathbf{w}_1)\\
p_2(\mathbf{w}_1) & p_{s+2} (\mathbf{w}_1) & \cdots & p_{s^2-s+2}(\mathbf{w}_1)\\
\vdots & \vdots & \ddots & \vdots\\
p_s(\mathbf{w}_1) & p_{2s}(\mathbf{w}_1) & \cdots & p_{s^2}(\mathbf{w}_1)\\
\hline
p_1(\mathbf{w}_2) & p_{s+1}(\mathbf{w}_2) & \cdots & p_{s^2-s+1}(\mathbf{w}_2)\\
p_2(\mathbf{w}_2) & p_{s+2} (\mathbf{w}_2)& \cdots & p_{s^2-s+2}(\mathbf{w}_2)\\
\vdots & \vdots & \ddots & \vdots\\
p_s(\mathbf{w}_2) & p_{2s}(\mathbf{w}_2) & \cdots & p_{s^2}(\mathbf{w}_2)\\
\end{matrix}\right].
$$
This matrix is a random matrix over $k=\mathbb{F}_q$, which has nonzero column corank with
probability approximately $\frac{1}{(q-1)q^s}$, for practical parameters.
\qed
\end{proof}
Combining the results of Theorems \ref{BandKernelProb} and \ref{BandSpaceSolutionProb}, we find that for a random choice of the vectors $\mathbf{w}_1$ and $\mathbf{w}_2$, there is a column band space map among the solutions of Equation \eqref{CubicPubRankEq} with probability approximately $\frac{1}{(q-1)^2q^s}$. It may be somewhat undesirable to choose $\mathbf{w}_1$ and $\mathbf{w}_2$ completely randomly, however. The na\"ive algorithm for constructing the coefficients of Equation \eqref{CubicPubRankEq} for a random choice of $\mathbf{w}_1$ and $\mathbf{w}_2$ requires on the order of $s^8$ field operations. This can be reduced to $s^6$ operations if we make sure that each new choice of $\mathbf{w}_1$ and $\mathbf{w}_2$ differs from the previous choice at only a single coordinate. Then, rather than recomputing Equation \eqref{CubicPubRankEq} from scratch, we can reuse the previous values of the coefficients and only include corrections for the monomials that contain the variable that was changed from the previous iteration. Over a large number of iterations, the distribution of $\mathbf{w}_1$ and $\mathbf{w}_2$ should still be sufficiently close to random that the probability of success for the attack will not be meaningfully altered.
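The single-coordinate update can be validated on toy instances: when only coordinate $k$ of $\mathbf{w}$ changes, every monomial not involving $x_k$ contributes the same amount to each entry of $\nabla\mathcal{E}_i(\mathbf{w})$, so only monomials containing $x_k$ need reprocessing. A hedged Python sketch (toy field, sizes, and polynomial representation of our own choosing) verifying that the incremental update agrees with a full recomputation:

```python
import random

def d(poly, j, q):
    """Formal partial derivative d/dx_j of a monomial-dict polynomial mod q."""
    out = {}
    for e, c in poly.items():
        if e[j]:
            k = e[:j] + (e[j] - 1,) + e[j + 1:]
            out[k] = (out.get(k, 0) + c * e[j]) % q
    return out

def ev(poly, w, q):
    """Evaluate a monomial-dict polynomial at w mod q."""
    s = 0
    for e, c in poly.items():
        t = c
        for x, p in zip(w, e):
            t = t * pow(x, p, q) % q
        s = (s + t) % q
    return s

def grad(poly, w, q):
    return [ev(d(poly, j, q), w, q) for j in range(len(w))]

def grad_update(poly, old_grad, w_old, w_new, k, q):
    """Update grad(poly, .) when only coordinate k of the point changed:
    subtract the old contribution of monomials containing x_k, add the new one."""
    touched = {e: c for e, c in poly.items() if e[k]}
    old_part = grad(touched, w_old, q)
    new_part = grad(touched, w_new, q)
    return [(g - o + n) % q for g, o, n in zip(old_grad, old_part, new_part)]

if __name__ == "__main__":
    random.seed(2)
    q, n = 31, 5
    # random cubic polynomial with ~40 monomials
    poly = {}
    for _ in range(40):
        e = [0] * n
        for _ in range(3):
            e[random.randrange(n)] += 1
        poly[tuple(e)] = random.randrange(1, q)
    w = [random.randrange(q) for _ in range(n)]
    g = grad(poly, w, q)
    k = 3
    w2 = w[:]; w2[k] = random.randrange(q)
    assert grad_update(poly, g, w, w2, k, q) == grad(poly, w2, q)
    print("incremental update matches full recomputation")
```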
One final factor which may increase the cost of attacks is the expected dimension of the solution space of Equation \eqref{CubicPubRankEq}. If this space has a high dimension, then the attack will be slowed down, since the attacker must search through a large number of spurious solutions to find a real solution (i.e. one where $\sum_{i=1}^{2s^2}t_i\mathbf{H}(\mathcal{E}_i)(\mathbf{w}_l)$ has rank at most $2s$ for $l = 1, 2$). Fortunately, Equation \eqref{CubicPubRankEq} is a system of $2s^2$ equations in $2s^2$ variables and it generally has a $0$-dimensional space of solutions. The lone exception occurs for characteristic 3. In this case, there are two linear dependencies among the equations, given by $\mathbf{w}_1\left[\nabla\mathcal{E}_i(\mathbf{w}_1)\right]^\top = 0$ and $\mathbf{w}_2\left[\nabla\mathcal{E}_i(\mathbf{w}_2)\right]^\top = 0$. In this situation we would therefore expect a 2-dimensional solution space. We can, however, recover two additional linear constraints on the $t_i$'s by also requiring:
\[
\sum_{i=1}^{2s^2}t_i \mathcal{E}_i(\mathbf{w}_l) = 0,\mbox{ for }l=1,2.\]
When these additional linear constraints are added to those given by Equation \eqref{CubicPubRankEq}, the expected dimension of the solution space drops back to 0. We can therefore assess the cost of the above attack at approximately $s^6q^{s+2}$, regardless of the characteristic.
\section{Application to the Quadratic ABC Scheme}
A similar technique was used to attack the original quadratic version of the ABC cryptosystem in \cite{DBLP:conf/pqcrypto/MoodyPS14}. While this technique was expressed in terms of the discrete differential, it can also be expressed using the formal derivative. In that case, the attack proceeds by selecting two random vectors $\mathbf{w}_1$ and $\mathbf{w}_2$, and solving an equation identical to Equation \eqref{CubicPubRankEq} for $t_i$, where the $\mathcal{E}_i$ are quadratic rather than cubic. The attack succeeds when $\sum_{i=1}^{2s^2}t_i\mathbf{H}(\mathcal{E}_i)$ has low rank.
When this attack is applied to parameters chosen over a field with characteristic 2, it is less efficient for the same reason as the basic attack given in the previous section is less efficient for the characteristic 3 parameters: the $2s^2$ linear equations given by Equation \eqref{CubicPubRankEq} have three linear dependencies given by $\mathbf{w}_1\left[\nabla\mathcal{E}_i(\mathbf{w}_1)\right]^\top = 0,$ $\mathbf{w}_2\left[\nabla\mathcal{E}_i(\mathbf{w}_2)\right]^\top = 0,$ and $\mathbf{w}_1\left[\nabla\mathcal{E}_i(\mathbf{w}_2)\right]^\top+ \mathbf{w}_2\left[\nabla\mathcal{E}_i(\mathbf{w}_1)\right]^\top = 0,$ and the attacker must generally search through a 3-dimensional solution space of spurious solutions in order to find a 1-dimensional space of useful solutions. As a result, the complexity of the attack for characteristic 2 is $s^{2\omega}q^{s+4},$ instead of $s^{2\omega}q^{s+2},$ as it is for all other characteristics. ($\omega \approx 2.373$ is the linear algebra constant.)
However, just as with cubic ABC parameters of characteristic 3, we can add two additional linear constraints and reduce the expected dimension of the solution space to 1:
\[
\sum_{i=1}^{2s^2}t_i \mathcal{E}_i(\mathbf{w}_l) = 0, \mbox{ for } l=1,2.
\]
Thus, we can also reduce the attack complexity for quadratic ABC parameters with characteristic 2 to $s^{2\omega}q^{s+2}.$
%Equation \eqref{CubicPubRankEq} is a system of $2s^2$ equations in $2s^2$ variables; one might expect it to generally have a $0$-dimensional space of solutions. In some cases,
%however, there are linear dependencies among the equations, due to the fact that the $D^2\mathcal{E}_i$ are symmetric tensors. In even characteristic, we get 4 linear dependencies
%$D^2\mathcal{E}_i(\mathbf{x}_1,\mathbf{x}_2)(\mathbf{x}_1) = 0$, $D^2\mathcal{E}_i(\mathbf{x}_1,\mathbf{x}_2)(\mathbf{x}_2)=0$, $D^2\mathcal{E}_i(\mathbf{x}_3
%\mathbf{x}_4)(\mathbf{x}_3)=0$, and $D^2\mathcal{E}_i(\mathbf{x}_3,\mathbf{x}_4)(\mathbf{x}_4)=0$, and an additional linear dependency when we reduce the number of
%independent vectors to $3$ by setting $\mathbf{x}_1=\mathbf{x}_4$: $D^2\mathcal{E}_i(\mathbf{x}_1,\mathbf{x}_2)(\mathbf{x}_3) + D^2\mathcal{E}_i(\mathbf{x}_3,\mathbf{x}_4
%(\mathbf{x}_2)=0$, resulting in a $5$-dimensional space of solutions. In characteristic 3, reducing the number of independent vectors to $2$ results in $2$ linear dependencies among the
%equations: e.g. setting $\mathbf{x}_1 = \mathbf{x}_2$ and $\mathbf{x}_3=\mathbf{x}_4$, we have $D^2\mathcal{E}_i(\mathbf{x}_1,\mathbf{x}_2)(\mathbf{x}_1)=0$ and
%$D^2\mathcal{E}_i(\mathbf{x}_3,\mathbf{x}_4)(\mathbf{x}_3)=0$. In higher characteristic, there are no linear dependencies imposed on the equations by setting $\mathbf{x}_1 =
%\mathbf{x}_2$ and $\mathbf{x}_3=\mathbf{x}_4$.
%For characteristic 2, finding the expected $1$-dimensional space of band space solutions in a $5$-dimensional space costs $q^4+q^3+q^2+q+1$ rank operations, which in turn cost %$(s^2)^{\omega}$ field operations, where $\omega\approx 2.373$ is the linear algebra constant. Likewise, for characteristic 3, finding the expected $1$-dimensional space of band space %solutions in a $2$-dimensional space costs $q+1$ rank operations. Thus the total cost of finding a column band space map using our variant of MinRank is approximately %$q^{2s+6}s^{2\omega}$ for charactersitic 2, $q^{s+3}s^{2\omega}$ for characteristic 3, and $q^{s+2}s^{2\omega}$ for higher characteristic.
\section{Completing the Key Recovery}
%The detection of a low rank induced bilinear form $D^2\mathcal{E}(x)$ already constitutes a distinguisher from a random system of equations. Extending this calculation to a full key recovery requires further use of the differential invariant structure of the public key.
Once the MinRank instance is solved, key extraction proceeds in a manner similar to \cite[Section 6]{conf/sac/MoodyPS16} for the cubic case and \cite[Section 6]{DBLP:conf/pqcrypto/MoodyPS14} for the quadratic case. Here we discuss the cubic version.
First, note that $U$ is not a critical element of the scheme. If $A$ is a random matrix of quadratic forms and $B$ and $C$ are random matrices of linear forms, then so are $A\circ U$, $B\circ U$ and $C\circ U$ for any full rank map $U$. Thus, since $T\circ\phi(AB||AC)\circ U = T\circ\phi((A\circ U)(B\circ U)||(A\circ U)(C\circ U))$, we may absorb the action of $U$ into $A$, $B$, and $C$, and consider the public key to be of the form
\[
P(\mathbf{x})=T\circ\phi(AB||AC)(\mathbf{x}).
\]
%Next, consider a trilinear form $D^2\mathcal{E}$ in the band space generated by $\mathcal{B}_{\beta, \gamma}$. Since the coefficients of $D^2\mathcal{E}$ are products of coefficients of $A$ and coefficients of an element of $Im(B||C)$, both of which are uniform i.i.d., there is a change of basis $M$ in which $D^2\mathcal{E}$ has the form in Figure \ref{fig:bsd} and the nonzero coefficients are uniform i.i.d.
Let $\mathcal{E}\in\mathcal{B}_{\beta,\gamma}$, and consider $\mathbf{H}(\mathcal{E})$. For $\mathbf{w}_1$ and $\mathbf{w}_2$ in the band kernel corresponding to $\mathcal{B}_{\beta,\gamma}$, there is a basis in which both $\mathbf{H}(\mathcal{E})(\mathbf{w}_1)$ and $\mathbf{H}(\mathcal{E})(\mathbf{w}_2)$ have the form illustrated in Figure \ref{fig:inducedForm}. Thus, for $s \geq 3$, with high probability the kernels of both maps are contained in the corresponding band kernel, and $\mathrm{span}\left\{\mbox{ker}(\mathbf{H}(\mathcal{E})(\mathbf{w}_1)),\mbox{ker}(\mathbf{H}(\mathcal{E})(\mathbf{w}_2))\right\}$ is the full band kernel.
%Consider $D^2\mathcal{E}(\mathbf{x}_1)$ and $D^2\mathcal{E}(\mathbf{x}_2)$ for $\mathbf{x}_1,\mathbf{x}_2$ in the band kernel corresponding to $\mathcal{B}_{\beta, \gamma}$. Being maps from the same band space, there is a basis in which both $D^2\mathcal{E}(\mathbf{x}_1)$ and $D^2\mathcal{E}(\mathbf{x}_2)$ have the form in Figure \ref{fig:inducedForm}. Thus, with high probability for $s \geq 2$, the kernels of both maps are contained in the corresponding band kernel, $\mathcal{B}_{\beta, \gamma}$, and $\mbox{span}(\mbox{ker}(D^2\mathcal{E}(\mathbf{x}_1)\cup \mbox{ker}(D^2\mathcal{E}(\mathbf{x}_2)) = \mathcal{B}_{\beta, \gamma}$.
\begin{figure}[!ht]
\centering
\begin{tikzpicture}
\fill [fill=mediumgray] (0,0) -- (0,4) -- (4,4) -- (4,3) -- (1,3) -- (1,0) -- (0,0);
\foreach \d in {0,1,2,3,4}{%
\draw (\d,0) -- (\d,4);
\draw (0,\d) -- (4,\d);
}
\end{tikzpicture}
\caption{Structure of $\mathbf{H}(\mathcal{E})(\mathbf{w})$ when $\mathcal{E}\in\mathcal{B}_{\beta,\gamma}$ and $\mathbf{w}$ is in the band kernel corresponding to the band space $\mathcal{B}_{\beta,\gamma}$. The shaded region corresponds to nonzero coefficients.}\label{fig:inducedForm}
\end{figure}
%\begin{Rem}
%Here we have utilized a property which explicitly distinguishes differential invariant structure from rank structure.
%\end{Rem}
Given the basis for an $s^2-s$ dimensional band kernel $\mathcal{BK}$, we may choose a basis $\{v_1,\ldots, v_s\}$ for the subspace of the dual space vanishing on $\mathcal{BK}$. We can also find a basis $\mathcal{E}_{v_1},\ldots,\mathcal{E}_{v_s}$ for the band space itself by solving the linear system
\begin{align*}
\bsp
\sum_{\mathcal{E}_i}\tau_i \mathcal{E}_i(\mathbf{w}_{1})&=0,\\
\sum_{\mathcal{E}_i}\tau_i \mathcal{E}_i(\mathbf{w}_{2})&=0,\\
\vdots &= \vdots\\
\sum_{\mathcal{E}_i}\tau_i \mathcal{E}_i(\mathbf{w}_{t})&=0,\\
\esp
\end{align*}
where $t\approx 2s^2$ and $\mathbf{w}_i$ is in the band kernel.
%Given the basis for an $s^2-s$ dimensional band kernel $\mathcal{BK}$, we may choose a basis $\{v_1,\ldots, v_s\}$ for the subspace of the dual space vanishing on $\mathcal{BK}$. We can also find a basis $\mathcal{E}_{v_1},\ldots,\mathcal{E}_{v_s}$ for the band space itself by solving the linear system
%\begin{align*}
%\bsp
%\sum_{\mathcal{E}_i}\tau_i D^2\mathcal{E}_i(\mathbf{x}_{11},\mathbf{x}_{12},\mathbf{x}_{13})&=0,\\
%\sum_{\mathcal{E}_i}\tau_i D^2\mathcal{E}_i(\mathbf{x}_{21},\mathbf{x}_{22},\mathbf{x}_{23})&=0,\\
%\vdots &= \vdots\\
%\sum_{\mathcal{E}_i}\tau_i D^2\mathcal{E}_i(\mathbf{x}_{t1},\mathbf{x}_{t2},\mathbf{x}_{t3})&=0,\\
%\esp
%\end{align*}
%where $t\approx 2s^2$ and $\mathbf{x}_{ij}$ is in the band kernel.
Since the basis $\mathcal{E}_{v_1},\ldots,\mathcal{E}_{v_s}$ is in a single band space, there exists an element $\left[\begin{matrix}b_1' & \cdots & b_s'\end{matrix}\right]^{\top}$ in ColumnSpace$(B||C)$, and two matrices $\Omega_1$ and $\Omega_2$ such that
\[
\Omega_1A\left(\Omega_2\left[\begin{matrix}b_1' \\ \vdots \\ b_s'\end{matrix}\right]\right)=:A'\left(\left[\begin{matrix}v_1 \\ \vdots \\ v_s\end{matrix}\right]\right)=\left[\begin{matrix}\mathcal{E}_{v_1} \\ \vdots \\\mathcal{E}_{v_s}\end{matrix}\right].
\]
Solving the above system of equations over $\mathbb{F}_q[x_1,\ldots,x_{s^2}]$ uniquely determines $A'$ in the quotient $\mathbb{F}_q[x_1,\ldots,x_{s^2}]/\langle v_1,\ldots,v_s\rangle$. To recover all of $A'$, note that the above system is part of an equivalent key
\[
\mathcal{F}=T'\circ A'(B'||C')
\]
where $\left[\begin{matrix}v_1 & \cdots & v_s\end{matrix}\right]^{\top}$ is the first column of $B'$.
Applying $T'^{-1}$ to both sides and inserting the information we know, we may construct the system
\beq \label{SolveForBCT}
A'(B'||C')=T'^{-1}\mathcal{F}.
\eeq
Solving this system of equations modulo $\langle v_1,\ldots,v_s\rangle$ for $B'$, $C'$ and $T'^{-1}$, we can recover a space of solutions, which we will restrict by arbitrarily fixing the value of $T'^{-1}$. Note that the elements of $T'^{-1}$ are constant polynomials, and therefore $T'^{-1} \pmod{\langle v_1,\ldots,v_s\rangle}$ is the same as $T'^{-1}$. Thus, for any choice of $T'^{-1}$ in this space, the second column of $T'^{-1}\mathcal{F}$ is a basis for a band space. Moreover, the elements $v'_{s+1},\ldots,v'_{2s}$ of the second column of $B' \pmod{\langle v_1,\ldots,v_s\rangle}$ are the images, modulo $\langle v_1,\ldots,v_s\rangle$, of linear forms vanishing on the corresponding band kernel. Therefore, we obtain the equality
\[
\left(\bigcap_{i=1}^{s}\mbox{ker}(v_i)\right) \bigcap \left(\bigcap_{i=s+1}^{2s}\mbox{ker}(v'_i)\right)=\mathcal{BK}_2 \cap \mathcal{BK}_1,
\]
the intersection of the band kernels of our two band spaces.
We can reconstruct the full band kernel of this second band space using the same method we used to obtain our first band kernel. We take a map $\mathcal{E}_2$ from the second column of $T'^{-1}\mathcal{F}$, and two vectors $\mathbf{w}_a$ and $\mathbf{w}_b$ from $\mathcal{BK}_2 \cap \mathcal{BK}_1$, and we compute
$\mathcal{BK}_2 = \mathrm{span}\left\{\mbox{ker}(\mathbf{H}(\mathcal{E}_2)(\mathbf{w}_a))\cup \mbox{ker}(\mathbf{H}(\mathcal{E}_2)(\mathbf{w}_b))\right\}$. We can now solve for the second column of $B'$,
$\left[\begin{matrix}v_{s+1} & \cdots & v_{2s}\end{matrix}\right]^{\top}$, uniquely over $\mathbb{F}_q[x_1,\ldots,x_{s^2}]$ (not merely modulo $\langle v_1,\ldots,v_s\rangle$) by solving the following system of linear equations:
\begin{align*}
v_i &\equiv v'_i \pmod{\langle v_1,\ldots,v_s\rangle},\\
v_i(\mathbf{w}_1)&=0,\\
v_i(\mathbf{w}_2)&=0,\\
&\ \,\vdots\\
v_i(\mathbf{w}_{s^2-s})&=0,\\
\end{align*}
where $ i=s+1, \ldots, 2s$, and $\left\{\mathbf{w}_1, \ldots, \mathbf{w}_{s^2-s}\right\}$ is a basis for $\mathcal{BK}_2$. We can now solve for $A'$ (again, uniquely over $\mathbb{F}_q[x_1,\ldots,x_{s^2}]$) by solving:
\[
A'\left(\left[\begin{matrix}v_1 \\ \vdots \\ v_s\end{matrix}\right]\right) \equiv \left[\begin{matrix}\mathcal{E}_{v_1} \\ \vdots \\ \mathcal{E}_{v_s}\end{matrix}\right]
\pmod{\langle v_1,\ldots,v_s\rangle},
\]
\[
A'\left(\left[\begin{matrix}v_{s+1} \\ \vdots \\ v_{2s}\end{matrix}\right]\right) \equiv \left[\begin{matrix}\mathcal{E}_{v_{s+1}} \\ \vdots \\ \mathcal{E}_{v_{2s}}\end{matrix}\right]
\pmod{\langle v_{s+1},\ldots,v_{2s}\rangle},
\]
where $\left[\begin{matrix}\mathcal{E}_{v_{s+1}} & \cdots & \mathcal{E}_{v_{2s}}\end{matrix}\right]^{\top}$ is the second column of $T'^{-1}\mathcal{F}$. This allows us to solve Equation \eqref{SolveForBCT} for the rest of $B'$ and $C'$, completing the attack.
The primary cost of the attack lies in finding the band space map; the remaining key recovery steps are additive in complexity and dominated by this step, so the total complexity of the attack is of the same order as the band space map recovery. Hence, the cost of private key extraction is approximately $q^{s+2}s^{6}$ for all characteristics.
The original parameters of Cubic ABC were designed for security levels of $80$ and $100$ bits. Since NIST has recommended a minimum security level of $112$ bits since 2015, see \cite{SP800-131A}, these figures are somewhat out of date. In fact, our attack appears more effective for larger parameter sets than for smaller ones.
We note that our attack breaks CubicABC($q=2^8,s=7$), designed for $80$-bit security, in approximately $2^{88}$ operations. More convincingly, our attack breaks CubicABC($q=2^8,s=8$), designed for $100$-bit security, in approximately $2^{98}$ operations, indicating that for parameters as small as these, we have already crossed the threshold of algebraic attack efficiency. Furthermore, the attack is fully parallelizable and requires very little memory. Hence, this technique is asymptotically far more efficient than algebraic attacks, the basis for the original security estimation in \cite{DBLP:conf/pqcrypto/DingPW14}.
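As a numeric sanity check (assuming nothing beyond the $q^{s+2}s^{6}$ estimate above), the following snippet evaluates the base-2 logarithm of the operation count for the two Cubic ABC parameter sets:

```python
# Evaluate the q^(s+2) * s^6 cost estimate for the cubic parameter sets.
from math import log2

def attack_log2(q, s):
    """Base-2 log of the q^(s+2) * s^6 operation count."""
    return (s + 2) * log2(q) + 6 * log2(s)

for s in (7, 8):
    print(f"CubicABC(q=2^8, s={s}): about 2^{attack_log2(2**8, s):.1f} operations")
```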
In the case of the quadratic ABC scheme, the original parameters, ABC$(q=2^8,s=8)$, were designed for $86$-bit security. The attack complexity with the new methodology presented here is $2^{87}$, just above the claimed level. We note, however, that the authors of \cite{DBLP:conf/pqcrypto/TaoDTD13} supplied additional odd-characteristic parameters in their presentation at PQCRYPTO 2013, see \cite{abcpres}, with a claimed security level of $108$ bits. This scheme, ABC$(q=127,s=8)$, offers resistance only to the level of $2^{77}$ against our slight improvement in technique over that of \cite{DBLP:conf/pqcrypto/MoodyPS14}. Thus, our attack definitively breaks these parameters.
\section{Comparison with Minors Methods}
The MinRank problem has been a central computational challenge related to the security of various multivariate schemes since the beginning of the century and, as discussed in the previous section, is the primary bottleneck of our attack. There are two main techniques for solving MinRank.
The first technique, which we employ here, can be called ``linear algebra search.'' The linear algebra search technique randomly selects vectors $\mathbf{x}_1,\ldots,\mathbf{x}_\ell\in k^n$ in an attempt to solve a system of equations of the form:
\[
\left(\sum_{i=1}^mt_i\mathbf{M}_i \right) \mathbf{x}_j=\mathbf{0}\mbox{ for }j\in\{1,\ldots,\ell\}.
\]
The technique is essentially free in terms of memory, but its running time is exponential in $q$, the size of $k$. The linear algebra search can benefit from certain exponential speedups depending on the structure of the equations. In particular, it is exponentially faster in the case of ``interlaced kernels,'' as specified in \cite{DBLP:journals/dcc/BettaleFP13}, or in the case of differential invariants, as for the original ABC scheme, see \cite{DBLP:conf/pqcrypto/MoodyPS14}.
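To make the search concrete, the sketch below runs it on a tiny planted GF(2) instance: guess a vector $\mathbf{x}$, solve $\left(\sum_i t_i\mathbf{M}_i\right)\mathbf{x}=\mathbf{0}$ for the $t_i$, and keep a nonzero solution whose matrix combination has low rank. The instance ($\mathbf{M}_1$ of rank 1, $\mathbf{M}_2$ invertible) and all dimensions are illustrative assumptions, not ABC parameters:

```python
# Linear algebra search on a tiny planted MinRank instance over GF(2).
import random

def mat_vec(M, x):
    """Matrix-vector product over GF(2)."""
    return [sum(a & b for a, b in zip(row, x)) & 1 for row in M]

def rank_gf2(M):
    """Rank of a GF(2) matrix given as a list of rows."""
    A = [row[:] for row in M]
    r, ncols = 0, len(M[0])
    for c in range(ncols):
        piv = next((i for i in range(r, len(A)) if A[i][c]), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(len(A)):
            if i != r and A[i][c]:
                A[i] = [a ^ b for a, b in zip(A[i], A[r])]
        r += 1
    return r

def nullspace_gf2(M, ncols):
    """Basis of the right kernel of a GF(2) matrix with ncols columns."""
    A = [row[:] for row in M]
    pivots, r = [], 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(A)) if A[i][c]), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(len(A)):
            if i != r and A[i][c]:
                A[i] = [a ^ b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    basis = []
    for f in (c for c in range(ncols) if c not in pivots):
        v = [0] * ncols
        v[f] = 1
        for row, p in zip(A[:r], pivots):
            v[p] = row[f]
        basis.append(v)
    return basis

def combo(t, mats):
    """GF(2) linear combination sum_i t_i * M_i."""
    n = len(mats[0])
    C = [[0] * n for _ in range(n)]
    for ti, M in zip(t, mats):
        if ti:
            C = [[c ^ m for c, m in zip(cr, mr)] for cr, mr in zip(C, M)]
    return C

def linear_algebra_search(mats, n, target_rank, trials=1000):
    """Sample x, solve the linear system in t; checking basis vectors
    of the solution space suffices for this toy example."""
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        if not any(x):
            continue
        # One equation per coordinate k: sum_i t_i (M_i x)_k = 0.
        system = [[mat_vec(Mi, x)[k] for Mi in mats] for k in range(n)]
        for t in nullspace_gf2(system, len(mats)):
            if rank_gf2(combo(t, mats)) <= target_rank:
                return t, x
    return None

random.seed(0)
M1 = [[1, 0, 1, 0]] * 4                                        # rank 1 (planted)
M2 = [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]  # invertible
mats = [M1, M2]
found = linear_algebra_search(mats, 4, target_rank=1)
print(found)
```

The search succeeds as soon as a sampled $\mathbf{x}$ lies in the kernel of the planted low-rank combination, which illustrates why the expected number of trials grows with $q$.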
The second technique is known as minors modeling. Given a MinRank instance $\mathbf{M}_1,\ldots,\mathbf{M}_m$ with target rank $r$, construct the matrix
\[
\sum_{i=1}^my_i\mathbf{M}_i,
\]
with entries in $k[y_1,\ldots,y_m]$. Since there is an assignment of values in $k$ to the $y_i$ such that the resulting matrix has rank $r$, the system of $(r+1)\times (r+1)$ minors of this matrix, together with the field equations $y_i^q-y_i$, forms a positive-dimensional ideal by the finite field Nullstellensatz. Fixing a variable to a nonzero value by adding another equation, say $y_1-1$, still statistically results in a nonempty variety containing solutions to the MinRank problem.
The complexity of the minors modeling technique depends on the degree of regularity of the minors system, which for large systems can easily be seen to be $r+1$, since for sufficiently large schemes the application of a Gr\"obner basis algorithm is equivalent to linearization. Thus the time complexity is $\mathcal{O}\left({m+r+1 \choose r+1}^\omega\right)$, where $\omega$ is the linear algebra constant. A serious drawback of this technique is memory usage, which also nontrivially complicates the practical time complexity. The space complexity of the minors approach can be roughly estimated as $\mathcal{O}\left({m+r\choose r+1}^2\right)$.
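These binomial estimates are straightforward to evaluate numerically. The sketch below computes the base-2 logarithms of the time and space costs; the value $\omega\approx 2.373$ for the linear algebra constant is an assumption made for illustration:

```python
# Base-2 logs of the minors-modeling cost estimates:
# time ~ C(m+r+1, r+1)^omega, space ~ C(m+r, r+1)^2.
from math import comb, log2

def minors_cost_log2(m, r, omega=2.373):
    """Return (log2 time, log2 space) for the minors modeling estimates."""
    time_log2 = omega * log2(comb(m + r + 1, r + 1))
    space_log2 = 2 * log2(comb(m + r, r + 1))
    return time_log2, space_log2

t_log2, s_log2 = minors_cost_log2(128, 16)  # m = 2s^2, r = 2s for s = 8
print(f"time ~ 2^{t_log2:.0f}, space ~ 2^{s_log2:.0f}")
```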
To make a direct comparison of these techniques for the MinRank portion of the attack, we use the parameters $q=2^8$ and $s=8$ discussed in the previous section. Recall that the linear algebra search requires memory on the order of $s^4q=2^{20}$ and has time complexity about $2^{87}$. For the minors modeling method, the space complexity computed from the above estimates with $m=2s^2=128$ and $r=2s=16$ is about $2^{144}$, roughly the square root of the number of subatomic particles in the universe, and the time complexity is $2^{172}$. We thus conclude that for such small values of $q$ the linear algebra search, due to the interlaced nature of the kernels, is far more efficient. Furthermore, for ABC schemes, it is questionable whether the memory requirements of the minors approach can ever be realistic.
\section{Experiments}
Using SAGE \cite{sagemath}, we performed experiments as a sanity check to confirm the efficiency of our ideas on small-scale variants of the Cubic ABC scheme. The computer used has a 64-bit quad-core Intel i7 processor with a clock speed of 2.8 GHz. Rather than considering the full attack, we were most interested in confirming our complexity estimates for the most costly step of the attack, the MinRank instance. Given as input the finite field size $q$ and the scheme parameter $s$, we computed the average number of vectors $v$ that must be sampled before the rank of the $2$-tensor $\mathbf{H}(\mathcal{E})(v)$ falls to $2s$. As explained in Section 4, when the rank falls to this level, we have identified the subspace differential invariant structure of the scheme, which can then be exploited to attack the scheme.
As this paper is only concerned with binary fields, we ran experiments with $q=2$, $4$, and $8$. We found that for $s=3$ and $q=2$, $4$, or $8$, with high probability only a single vector was needed before the rank fell to $2s$. For $s=4$ and $s=5$, the computations were only feasible in SAGE for $q=2$ and $q=4$. The average values obtained are presented in Table \ref{table:1}. Note that for $q=4$ and $s=5$ the average is based on a small number of samples, as the computation time was quite lengthy.
\begin{table}
\centering
\begin{tabular}{|c|c c|c c|}
\hline
 & $s=4$ & $(q-1)^2q^s$ & $s=5$ & $(q-1)^2 q^s$ \\
\hline
$q=2$ & 24 & 16 & 35 & 32 \\ \hline
$q=4$ & 1962 & 2304 & 7021 & 9216 \\
\hline
\end{tabular}
\caption{Average number of vectors needed for the rank to fall to $2s$ versus the predicted values.}
\label{table:1}
\end{table}
In comparison, our previous experiments \cite{conf/sac/MoodyPS16} were only able to obtain data for $q=2$ and $s=4,5$. The average number of vectors needed in the $s=4$ case was 244, while for $s=5$ the average was 994 (the predicted values being 256 and 1024, respectively).
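The prediction columns are immediate to reproduce. The script below evaluates the characteristic-2 estimate $(q-1)^2q^s$ for the entries of Table \ref{table:1}, together with a formula of the form $(q-1)^2q^{2s}$, which matches the predicted values 256 and 1024 quoted above for the earlier experiments:

```python
# Predicted numbers of sampled vectors before the rank falls to 2s.
def predicted(q, s):
    """Characteristic-2 estimate (q-1)^2 * q^s from this section."""
    return (q - 1) ** 2 * q ** s

def predicted_old(q, s):
    """Estimate of the form (q-1)^2 * q^(2s), matching 256 and 1024 above."""
    return (q - 1) ** 2 * q ** (2 * s)

for q in (2, 4):
    for s in (4, 5):
        print(f"q={q}, s={s}: predicted {predicted(q, s)}")
print(predicted_old(2, 4), predicted_old(2, 5))  # 256 1024
```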
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Conclusion}
The ABC schemes offer an interesting new technique for the construction of multivariate public key schemes. Previous multivariate schemes have used the multiplicative structure of an extension field to generate an efficiently invertible map; schemes built on such a construct are known as ``big field'' schemes. The ABC framework is instead a ``large structure'' or perhaps ``large algebra'' scheme, relying on multiplication in a matrix algebra over the base field. Since the only simple algebras are matrix algebras and field extensions, we seem to have exhausted the possibilities. Interestingly, MinRank techniques seem optimal in this setting, at least asymptotically in the dimension of the extension.
It is also interesting to note that the authors of \cite{DBLP:conf/pqcrypto/DingPW14} present a heuristic argument for the provable security of the scheme, and reinforced this notion of provable security when the scheme was presented at PQCRYPTO 2014 \cite{DBLP:conf/pqcrypto/2014}. Unfortunately, this analysis does not support a sound conclusion, as demonstrated by the methodology of \cite{conf/sac/MoodyPS16}. With our improved attack, we rule out the possibility that the cubic variant of ABC offers any security advantage over the original quadratic scheme. Likewise, our improved attack on quadratic ABC eliminates any security benefit associated with characteristic-2 parameters in the quadratic case.
% ===========================================================================
\bibliographystyle{splncs}
\bibliography{References}
\end{document}