Tuesday, May 5, 2015

Eurocrypt 2015: Threshold Implementations

On Thursday morning, Vincent Rijmen gave an invited talk on threshold implementations, which are designed to be more secure against side-channel attacks. He started off discussing Shannon's product cipher [1] (much like a modern SP-network). From a cryptanalysis point of view, this can be very difficult to break due to its high diffusion: if one character of plaintext is changed then several characters of ciphertext change, and similarly, changing a character of the ciphertext changes several characters of the plaintext. However, this says nothing about side-channel attacks [2]: attacks based on gaining information from the physical implementation of a cryptosystem, such as timing information, power consumption, or, in the example given by Vincent, electromagnetic radiation.

Changing the value of a logic cell from `0' to `1' consumes energy. How much energy the hardware consumes can depend on many things, such as transistor design, process variations, the length of connection lines, crosstalk between lines, and more (a toy illustration of this switching-based leakage appears after the list below). In order to counter these attacks, a few methods have been devised: 
  • The first is to try to balance the power consumption, which limits the amount of information that can be gained from it. 
  • The next is leakage-resilient cryptography, such as the use of ephemeral keys [3]. 
  • Finally, the method of masking, which splits the sensitive information into $d+1$ shares, where $d$ is called the masking order. 
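
As promised above, here is a toy sketch (my own illustration, not from the talk) of the kind of leakage model that motivates all of this: dynamic power is often approximated by the number of logic cells that toggle, i.e. the Hamming distance between the old and new register contents, which is exactly why power traces depend on the data being processed.

```python
def hamming_distance(old: int, new: int) -> int:
    """Number of bits that change when a register goes from `old` to `new`."""
    return bin(old ^ new).count("1")

# Updating a register from 0x00 to 0xFF toggles 8 cells, while 0x00 -> 0x01
# toggles only 1, so the two updates draw visibly different amounts of power.
print(hamming_distance(0x00, 0xFF))  # 8
print(hamming_distance(0x00, 0x01))  # 1
```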
The talk concentrated on the latter technique. In their 2003 paper, Ishai et al. [4] use masking to build private circuits, and in the same year Trichina [5] described a masked multiplier. On its own, however, this is not enough, since there are transient effects in hardware: changing values takes time, and we call this a transition period. These delays depend on the circuit layout. Transient effects account for almost all of the power consumption of CMOS [6] circuits, their propagation depends on the inputs of the combinational circuit, and these dependencies are non-linear. Thus, we have a security breakdown.
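
Before moving on, a minimal sketch of the masking idea itself (my own illustration, assuming first-order-style Boolean masking of a byte; the function names are not from the talk): the sensitive value is never handled directly, but as $d+1$ shares whose XOR equals the secret.

```python
import secrets

def mask(x: int, d: int) -> list[int]:
    """Split a byte x into d+1 Boolean shares whose XOR equals x."""
    shares = [secrets.randbelow(256) for _ in range(d)]  # d uniformly random shares
    last = x
    for s in shares:
        last ^= s                                        # last share fixes the sum
    return shares + [last]

def unmask(shares: list[int]) -> int:
    """Recombine the shares by XORing them together."""
    x = 0
    for s in shares:
        x ^= s
    return x

secret = 0x3A
shares = mask(secret, d=2)       # masking order d = 2, i.e. 3 shares
assert unmask(shares) == secret  # any d of the shares alone reveal nothing about x
```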

Vincent then introduced threshold implementations, which don't rely on the behaviour of hardware implementations of combinational logic; the setup instead assumes that combinational logic leaks information on all of its inputs. This is achieved by secret sharing. As an example, suppose we have a function $f$ and an input $x$ giving the output $y$, i.e. $y = f(x)$. We can split the function, and likewise the input, into 3 separate parts $f_1, f_2, f_3$ and $x_1, x_2, x_3$, respectively, and set things up so that each split function sees only 2 of the split inputs. So we have
$$
f_1(x_2, x_3) = y_1, \\
f_2(x_1, x_3) = y_2, \\
f_3(x_1, x_2) = y_3.
$$

where
$$
y_1+y_2+y_3 = y, \\
x_1+x_2+x_3 = x.
$$
This looks extremely similar to multi-party computation. The main theorem that follows from this is: $\textit{the average power consumption of the circuit is independent of } x$. An extremely informal proof runs as follows. For each $i$, since $f_i$ does not take the input $x_i$, the power consumption of $f_i$ is independent of $x_i$; and if $(x_{i-1}, x_{i+1})$ is independent of $x$, then the power consumption of $f_i$ is independent of $x$. The average power consumption of the circuit is the sum of the average power consumptions of the $f_i$, and is hence independent of $x$. A few assumptions were necessary for this: the $x_i$'s need to be uniformly random, the implementation of each $f_i$ must depend only on $(x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_n)$, and, of course, suitable $f_i$ need to exist! For linear $f$ it is easy to have the property
$$
f_1(x_2, x_3, \ldots, x_n) +  \ldots + f_n(x_1, x_2, \ldots, x_{n-1}) = f(x_1, x_2, \ldots, x_n)
$$
however, for most interesting $f$, finding such a sharing is a research problem. A simple non-linear example is the multiplier. We have
$$
z = f(x,y) = x \cdot y \\
z_1 = f_1(x_2,x_3,y_2,y_3) = x_2 y_2 + x_2 y_3 + x_3 y_2, \\
z_2 = f_2(x_1,x_3,y_1,y_3) = x_3 y_3 + x_1 y_3 + x_3 y_1, \\
z_3 = f_3(x_1,x_2,y_1,y_2) = x_1 y_1 + x_1 y_2 + x_2 y_1.
$$
This is secure, even with transient effects. For arbitrary functions, however, a few facts hold. The size of the hardware increases with the number of shares, a function of algebraic degree $n$ requires at least $n+1$ shares, and strong ciphers have high algebraic degree. In other words, the stronger the cipher, the higher its algebraic degree, the more shares are required, and hence the larger the hardware. 
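
Going back to the shared multiplier, a quick sketch of a machine check (my own, working over GF(2), where $+$ is XOR and multiplication is AND) confirms that the three output shares above recombine to $x \cdot y$ for every sharing of the inputs:

```python
from itertools import product

# Component functions of the three-share multiplier (over GF(2)).
def f1(x2, x3, y2, y3): return (x2 & y2) ^ (x2 & y3) ^ (x3 & y2)
def f2(x1, x3, y1, y3): return (x3 & y3) ^ (x1 & y3) ^ (x3 & y1)
def f3(x1, x2, y1, y2): return (x1 & y1) ^ (x1 & y2) ^ (x2 & y1)

# Correctness: z1 + z2 + z3 == x . y for every sharing of x and of y.
for x1, x2, x3, y1, y2, y3 in product((0, 1), repeat=6):
    x, y = x1 ^ x2 ^ x3, y1 ^ y2 ^ y3
    z = f1(x2, x3, y2, y3) ^ f2(x1, x3, y1, y3) ^ f3(x1, x2, y1, y2)
    assert z == (x & y)
print("three-share multiplier recombines to x.y")
```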

A potential solution to this is to use registers: the combinational logic between registers has lower algebraic degree, and registers limit the propagation of transient effects. For this to work we need a few more assumptions. The inputs of each stage need to be uniformly distributed, but the input of the second stage equals the output of the first, so the outputs of the first stage must themselves be uniformly distributed. This can be achieved by remasking, or by requiring an extra property of the $f_i$: every $y_i$-tuple for the same $y$ must get an equal number of "hits". The original multiplier does not satisfy this extra property, as some sharings of $y$ are "hit" more often than others. Changing the definition of the multiplier to the following, with four shares, solves this.

$$
z_1 = (x_3 + x_4)(y_2 + y_3) + x_2 + x_4, \\
z_2 = (x_1+ x_3)(y_1 + y_4) + y_1 + x_4, \\
z_3 = (x_2 + x_4)(y_1 + y_4) + x_2, \\
z_4 = (x_1 + x_2)(y_2 + y_3)  + y_1.
$$
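As before, a short sketch of a check (mine, not from the talk; over GF(2)) can confirm both that the four output shares still recombine to $x \cdot y$ and that the extra uniformity property holds: for each unshared input $(x, y)$, every output-share tuple is hit equally often as the input sharings range over all possibilities.

```python
from collections import Counter
from itertools import product

# Four-share multiplier (over GF(2): + is XOR, . is AND).
def shares_of_z(x1, x2, x3, x4, y1, y2, y3, y4):
    z1 = ((x3 ^ x4) & (y2 ^ y3)) ^ x2 ^ x4
    z2 = ((x1 ^ x3) & (y1 ^ y4)) ^ y1 ^ x4
    z3 = ((x2 ^ x4) & (y1 ^ y4)) ^ x2
    z4 = ((x1 ^ x2) & (y2 ^ y3)) ^ y1
    return z1, z2, z3, z4

for x, y in product((0, 1), repeat=2):
    hits = Counter()
    # Enumerate every valid sharing of x and y and record the output tuple.
    for x1, x2, x3, y1, y2, y3 in product((0, 1), repeat=6):
        x4, y4 = x ^ x1 ^ x2 ^ x3, y ^ y1 ^ y2 ^ y3
        z = shares_of_z(x1, x2, x3, x4, y1, y2, y3, y4)
        assert z[0] ^ z[1] ^ z[2] ^ z[3] == (x & y)  # correctness
        hits[z] += 1
    # Uniformity: every output-share tuple for this (x, y) is hit equally often.
    assert len(set(hits.values())) == 1
print("four-share multiplier is correct and uniform")
```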
Further work suggested by Vincent includes security against fault attacks, which induce hardware faults while measuring signals, and security against attacks that use non-linear combinations of signals measured at different times.

[1] - http://en.wikipedia.org/wiki/Product_cipher
[2] - http://en.wikipedia.org/wiki/Side-channel_attack
[3] - http://en.wikipedia.org/wiki/Ephemeral_key
[4] - http://www.cs.berkeley.edu/~daw/papers/privcirc-crypto03.pdf
[5] - https://eprint.iacr.org/2003/236.pdf
[6] - http://en.wikipedia.org/wiki/CMOS
