Tuesday, July 10, 2012

Study Group: Another Look at HMAC

Today's study group was given by Joop and Enrique on Koblitz and Menezes's paper "Another Look at HMAC" [pdf].  This was also part of the invited talk given by Menezes at Eurocrypt this year [video].  Like several of the papers in the "Another look" series this paper has provoked much debate in the crypto community.  Here we will give an overview of the paper.

A Message Authentication Code (MAC) is a function H which, given a key K and a message M, outputs a tag t=H(K,M).  The tag is sent along with the message and allows the recipient to verify that the message is genuine and has not been altered in any way.  HMAC is the most widely used hash-based MAC and was first proposed by Bellare, Canetti and Krawczyk (BCK) in 1996 [pdf].  In the same paper they introduced the NMAC, or Nested MAC, construction.

Let f be the compression function of the iterated hash function h_IV with initialisation vector IV.  Then NMAC is defined as H(K,M) = h_K2(h_K1(M)^0), where K = {K1, K2}, each of c bits.  We assume that the size of M is a multiple of the block size b, and we let the superscript 0 denote padding with zeros to a multiple of b.  Despite having no obvious security issues, this construction drew objections from some engineers because (1) it needs two keys and (2) it requires that the IV changes.  As a result HMAC was proposed, where H(K,M) = h_IV(K2^0 || h_IV(K1^0 || M)^0).  Here K1 and K2 are simply obtained by XORing two fixed strings, ipad and opad, with the master key K.
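To make the HMAC construction concrete, here is a minimal sketch in Python with SHA-1 as the underlying hash (an arbitrary choice for illustration); the 64-byte block size and the 0x36/0x5c ipad/opad bytes are those of RFC 2104.  NMAC itself cannot be written on top of hashlib, because the library does not let us replace the IV with a key -- which is precisely the engineering constraint that motivated HMAC.

```python
import hashlib
import hmac  # only used for the sanity check at the end

BLOCK_SIZE = 64  # SHA-1 block size in bytes (b = 512 bits)

def hmac_sha1(key: bytes, msg: bytes) -> bytes:
    """H(K,M) = h_IV(K2 || h_IV(K1 || M)), with K1 = K^0 XOR ipad and K2 = K^0 XOR opad."""
    if len(key) > BLOCK_SIZE:            # RFC 2104: keys longer than one block are hashed first
        key = hashlib.sha1(key).digest()
    k0 = key.ljust(BLOCK_SIZE, b'\x00')  # master key padded with zeros to b bits
    k1 = bytes(b ^ 0x36 for b in k0)     # inner key: K^0 XOR ipad
    k2 = bytes(b ^ 0x5c for b in k0)     # outer key: K^0 XOR opad
    inner = hashlib.sha1(k1 + msg).digest()
    return hashlib.sha1(k2 + inner).digest()

# Sanity check against the standard library implementation
assert hmac_sha1(b'key', b'message') == hmac.new(b'key', b'message', hashlib.sha1).digest()
```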

BCK give a proof for NMAC which can be extended to prove the security of HMAC.  The proof demonstrates that NMAC is secure when (i) the compression function f is a secure MAC and (ii) the hash function h_IV is (weakly) collision resistant.  But in 2005 Wang et al. found collision attacks on both SHA-1 and MD5 [pdf] [pdf], which meant that the existing proof no longer applied to the hash functions used in practice.  As a result, the following year Bellare [pdf] presented a new proof for NMAC/HMAC which showed that if the compression function f is a PRF then NMAC is also a PRF.

Let us now describe the results presented by Koblitz and Menezes (KM).  KM consider three different versions of HMAC: 
  1. HMAC_Bell, the version defined by Bellare. 
  2. HMAC_std, the version from the standard [RFC2104], where the key length is 128 or 160 bits.
  3. HMAC_NIST, the NIST-standardised version [pdf], where the key length is 80 bits.
KM's first result is to show a separation between HMAC_Bell and HMAC_std.  Let f: {0,1}^c x {0,1}^b -> {0,1}^c be a compression function which meets the conditions necessary for Bellare's proof to hold.  We can now construct a new compression function f* which is the same as f except when the last b-c bits of the input agree with the last b-c bits of the padding values opad or ipad, in which case it outputs 0.  For HMAC_Bell this has no effect, since the key still masks the padding values.  In HMAC_std, since the key is only 128 or 160 bits, it is padded with zeros before being XORed with opad and ipad.  The final bits of K1 and K2 are therefore the same as those of ipad and opad, so the compression function outputs 0 and the tag is now independent of the keys.
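The following toy sketch (with made-up sizes c = 32 and b = 64, not the real SHA-1 parameters) illustrates the separation: f* agrees with a stand-in compression function f everywhere except on the 'bad' blocks, and with a zero-padded key the very first block processed by HMAC_std already falls into the bad set.

```python
import hashlib

C_BITS, B_BITS = 32, 64                     # toy sizes, purely for illustration
IPAD = int.from_bytes(b'\x36' * (B_BITS // 8), 'big')
OPAD = int.from_bytes(b'\x5c' * (B_BITS // 8), 'big')
IV = 0

def f(chain: int, block: int) -> int:
    """Stand-in for a 'good' compression function f: {0,1}^c x {0,1}^b -> {0,1}^c."""
    data = chain.to_bytes(C_BITS // 8, 'big') + block.to_bytes(B_BITS // 8, 'big')
    return int.from_bytes(hashlib.sha256(data).digest()[:C_BITS // 8], 'big')

def f_star(chain: int, block: int) -> int:
    """KM's modification: behave like f unless the last b-c bits of the block
    match the last b-c bits of ipad or opad, in which case output 0."""
    low = (1 << (B_BITS - C_BITS)) - 1
    if block & low in (IPAD & low, OPAD & low):
        return 0
    return f(chain, block)

# HMAC_std zero-pads a c-bit key before XORing with ipad, so the last b-c bits of
# K1 coincide with those of ipad and the degenerate branch fires on the first block:
key = 0xDEADBEEF                            # toy c-bit key
k1 = (key << (B_BITS - C_BITS)) ^ IPAD      # K^0 XOR ipad, the first block processed
assert f_star(IV, k1) == 0                  # the tag no longer depends on the key
```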

The second result is that HMAC_NIST is not secure in the multi-user setting.  Assume we have 2^a users.  The adversary chooses a random message M and asks all users for a tag on this message.  Next the adversary chooses random keys K and computes H(K,M), searching for a collision with one of the users' tags.  If the adversary finds a collision it is likely she has determined the key of that user (in some cases the collision may instead be due to a small tag length).  The expected number of keys the adversary must check before finding a collision is 2^(80-a).  Security in the multi-user setting is a practical concern that needs more attention from a provable-security perspective, since many proofs of security only apply in the single-user setting.
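A scaled-down simulation of this attack, using HMAC-SHA1 as a stand-in MAC and a 20-bit toy key space instead of 80 bits so that it finishes instantly; the structure -- collect 2^a tags on a fixed message, then guess keys until one reproduces a collected tag -- is as described above.

```python
import hashlib
import hmac
import random

KEY_BITS = 20                      # toy key length (HMAC_NIST would use 80 bits)
A = 8                              # 2^A users
M = b'fixed message'

def tag(key_int: int) -> bytes:
    return hmac.new(key_int.to_bytes(3, 'big'), M, hashlib.sha1).digest()

# Every user tags the same message; the adversary collects the 2^A tags.
user_keys = [random.getrandbits(KEY_BITS) for _ in range(1 << A)]
seen = {tag(k): i for i, k in enumerate(user_keys)}

# Guess random keys until one reproduces some user's tag; on average this
# takes about 2^(KEY_BITS - A) guesses.
guesses = 0
while True:
    guesses += 1
    guess = random.getrandbits(KEY_BITS)
    if tag(guess) in seen:
        print(f"hit user {seen[tag(guess)]} after {guesses} guesses "
              f"(expected about {1 << (KEY_BITS - A)})")
        break
```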

The final sections of the paper present KM's concerns with Bellare's proof of security and their own new proof of security.  Their main concern relates to the final "coin fixing" step in Bellare's proof, where he maximises the advantage of the adversary by hard-wiring two messages M1* and M2* into its code.  It is mathematically true that such a pair exists; the problem is how this pair of messages can be found.  KM's objection arises because Bellare's proof is in the non-uniform model of complexity.  In this model a proof can use an algorithm which exists mathematically but may not be easy to find.  This has implications for our ability to provide concrete security results.  Further discussion on the non-uniform model and its implications for provable security can be found here by Koblitz and Menezes, and here by Bernstein and Lange.

Tuesday, July 3, 2012

Automatic Analysis of One-Encryption Key Exchange

On the last day of the CSF conference at Harvard, Bruno Blanchet gave a talk on the automatic proof of the password-based protocol One-Encryption Key Exchange (OEKE). The proof was carried out using Blanchet's tool CryptoVerif.

In order to prove the security of the OEKE protocol, CryptoVerif was extended to handle proofs based on the computational Diffie-Hellman assumption and on Shoup's lemma.

In particular, for applications of Shoup's lemma, he showed how to improve on the standard computation of probabilities. The idea is to avoid counting probabilities that correspond to the same execution more than once. This technique resulted in a proof of OEKE with a better reduction factor than a previous manual proof [Bresson, Chevassut, Pointcheval, Security proofs for an efficient password-based key exchange, CCS'03].

Retroactive Security

The second invited talk at the CSF conference was given by Butler Lampson of Microsoft Research.
The talk offered an interesting perspective on retroactive security, as opposed to the pursuit of perfect security, which tries to prevent every possible malicious attack in advance.

The audience listened with a fair degree of astonishment!

Butler's main point was that fifty years of security based mainly on access control have been a failure. He therefore proposes to learn from real life and apply retroactive security: "burglars are not scared of locks but of punishments", he remarked. In real life, the financial system is 'secure' mainly because almost any transaction can be undone and intruders are likely to go to jail.

In Butler's opinion access control should become much more coarse-grained, while authentication and auditing are very much needed. To those who asked how to handle particular critical scenarios, Butler replied that retroactive security will not be perfect, but it will lead to a better situation than the one we have now.

Study group: Algebraic side-channel analysis

Today's study group was presented by Carolyn and Valentina on the subject of Algebraic Side Channel Analysis (ASCA).

The goal of algebraic cryptanalysis is to express a cryptosystem as a set of equations in its (known) inputs and outputs and its (unknown) secret key, and to solve that system. Take, for example, the block cipher PRESENT:
64-bit blocks: P1,...,P64 (plaintexts), C1,...,C64 (ciphertexts).
80-bit key: K1,...,K80.
31 rounds of nonlinear substitutions.
Of course, theoretically one can write (and try to solve):
C1 = f1(P1,...,P64,K1,...,K80)
C2 = f2(P1,...,P64,K1,...,K80)
...
C64 = f64(P1,...,P64,K1,...,K80)
Obviously, though, any such cipher is specifically designed to make this hard to solve. Diffusion in the system ensures that each of the above polynomial expressions involves every single bit of the plaintext and key, and the nonlinear S-Box substitutions (repeated over a large number of rounds) ensure that there are lots of high-degree monomials.

The strategy for algebraic cryptanalysis is therefore to attempt to simplify this system into a large number of small, low degree polynomial equations. This can be helped by introducing dummy variables for intermediate values:
x_i -- each input bit of each S-Box (in each round);
y_i -- each output bit of each S-Box (in each round);
k_i -- each bit of each roundkey.
The authors of the first paper we looked at (Renauld and Standaert, 'Algebraic Side-Channel Attacks' [link]) thus transformed the PRESENT cipher into a system of 40,000 equations in 7,000 variables (50,000 monomials). Of course, it was very sparse -- in matrix form only 0.0155% of entries were non-null.
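As a small illustration of where these low-degree equations come from, the sketch below (plain Python, with an arbitrary bit-ordering convention of my own) computes the algebraic normal form of each output bit of the PRESENT S-Box; in the attack, one such set of four equations is written for every S-Box in every round, over fresh dummy variables.

```python
# Derive the algebraic normal form (ANF) of each PRESENT S-Box output bit, i.e. the
# GF(2) equations y_i = p_i(x_0,...,x_3) that are instantiated once per S-Box per round.
PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def anf(truth_table):
    """Moebius transform: 4-bit truth table -> ANF coefficients (one per monomial)."""
    coeffs = list(truth_table)
    for i in range(4):
        for mask in range(16):
            if mask & (1 << i):
                coeffs[mask] ^= coeffs[mask ^ (1 << i)]
    return coeffs      # coeffs[m] = 1 iff the monomial prod_{i in m} x_i appears

def monomial(mask):
    return '*'.join(f'x{i}' for i in range(4) if mask & (1 << i)) or '1'

for bit in range(4):   # bit i of the S-Box output, with bit 0 the least significant
    tt = [(PRESENT_SBOX[x] >> bit) & 1 for x in range(16)]
    terms = [monomial(m) for m, c in enumerate(anf(tt)) if c]
    print(f'y{bit} =', ' + '.join(terms))
```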

To help solve this system more efficiently they propose introducing side-channel leakage measurements as extra constraints to make the system more overdefined. Whereas side-channel analysis (SCA) is normally used to recover the key directly, they aim at simpler targets -- for example, the Hamming weights of all the S-Box inputs and outputs in each round -- and express them as additional simultaneous equations.
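A quick way to see how much information such a leak carries: the sketch below (for a single, noise-free observation) simply enumerates the S-Box inputs consistent with an observed pair of input/output Hamming weights. In the attack proper these observations are written as extra equations alongside the cipher equations rather than used for enumeration.

```python
# How much does one Hamming-weight pair reveal about a 4-bit S-Box input?
PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
hw = lambda v: bin(v).count('1')

def candidates(hw_in, hw_out):
    """All S-Box inputs consistent with the leaked Hamming weights."""
    return [x for x in range(16) if hw(x) == hw_in and hw(PRESENT_SBOX[x]) == hw_out]

for hw_in, hw_out in [(0, 2), (1, 2), (2, 2)]:
    print(f'HW(in)={hw_in}, HW(out)={hw_out}:', candidates(hw_in, hw_out))
```

For many weight pairs the input is pinned down to a handful of values, sometimes uniquely, which is what makes the augmented system so much more overdefined.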

There are substantial differences between ASCA and SCA: The latter requires predicting intermediate values and so typically focuses on only the first or last round, as diffusion in the system makes intermediate rounds problematic to predict. ASCA does not need to do this and so can use measurements from all the rounds. SCA is very data heavy, and exploits only part of the cipher structure, while ASCA exploits the cipher structure more thoroughly, using fewer trace measurements (often only one) but more points within a trace. Lastly, SCA takes a divide-and-conquer strategy, targeting small bits of the key in succession, whilst ASCA is aimed at recovering the entire key in one go.

Because an S-Box is naturally expressed as a vector of Boolean polynomials, an intuitive approach to solving the system (as described in this first paper) is to treat it as a satisfiability problem and feed it to a SAT solver (converting it into conjunctive normal form along the way). They demonstrate good results in the PRESENT case, and even find attacks which work against masking or without knowledge of the plaintext/ciphertext pairs.
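A minimal version of this pipeline, assuming the pycosat bindings to PicoSAT are available: a single S-Box relation plus one Hamming-weight leak is encoded with naive blocking clauses (far cruder than the CNF conversion used in the paper) and handed to the solver.

```python
import pycosat  # assumed installed: Python bindings to the PicoSAT solver

PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
hw = lambda v: bin(v).count('1')

def lits(value, first_var):
    """Literals asserting that variables first_var..first_var+3 spell out `value`."""
    return [(first_var + i) if (value >> i) & 1 else -(first_var + i) for i in range(4)]

clauses = []
# Variables 1..4 are the S-Box input bits, 5..8 the output bits.
# Block every (input, output) pair that is not on the graph of the S-Box.
for x in range(16):
    for y in range(16):
        if y != PRESENT_SBOX[x]:
            clauses.append([-l for l in lits(x, 1) + lits(y, 5)])
# Leakage: suppose we observed HW(input) = 1 and HW(output) = 2; block the rest.
for v in range(16):
    if hw(v) != 1:
        clauses.append([-l for l in lits(v, 1)])
    if hw(v) != 2:
        clauses.append([-l for l in lits(v, 5)])

solution = pycosat.solve(clauses)
assert solution not in ("UNSAT", "UNKNOWN")
assignment = {abs(l): l > 0 for l in solution}
x = sum(assignment[1 + i] << i for i in range(4))
print('consistent input:', hex(x), '-> output', hex(PRESENT_SBOX[x]))
```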

However, such an approach is extremely intolerant to errors in the side-channel measurements, which produce conflicts in the system so that the SAT solver will reject the problem as unsatisfiable. In practice it is very hard to recover perfect side-channel information due to the presence of electronic and switching noise in the targeted device, and errors produced by the measurement set-up. The second paper we looked at (Oren et al, 'Algebraic Side-Channel Analysis in the Presence of Errors' [link]) is focused on developing a more error-tolerant methodology.

They propose framing the problem as a pseudo-Boolean optimisation (PBOPT) task rather than a SAT instance. The primary advantage of such an approach is that rather than a hard yes/no decision problem we simply seek a solution which minimises an objective function. Moreover, constraints involving integers (for example, Hamming weights) become much easier to express than in a SAT instance, and extra variables can be assigned to represent error quantities. Since there is a yearly competition for PBOPT solvers, a wide range of increasingly efficient implementations is available. Indeed, their strategy produces realistic attacks against KeeLoq and up to 3 rounds of AES.
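As a sketch of what such an instance might look like, the snippet below writes a single noisy Hamming-weight observation in (my reading of) the OPB input format used by the pseudo-Boolean competition: two error variables absorb a possible measurement error, and the objective asks the solver to explain the observation with as little error as possible. This is only illustrative, not the encoding actually used by Oren et al.

```python
# One noisy Hamming-weight observation as a tiny pseudo-Boolean optimisation instance.
# x1..x4 are the S-Box input bits, x5/x6 are error variables (e_pos/e_neg); the real
# attack would add the linearised cipher equations as further constraints.
observed_hw = 2
opb_lines = [
    "* #variable= 6 #constraint= 1",
    "min: +1 x5 +1 x6 ;",                                      # minimise the total error
    f"+1 x1 +1 x2 +1 x3 +1 x4 +1 x6 -1 x5 = {observed_hw} ;",  # HW(x) - e_pos + e_neg = observed
]
with open("leak.opb", "w") as fh:
    fh.write("\n".join(opb_lines) + "\n")                      # feed this file to a PBOPT solver
```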

The third paper we looked at (Carlet et al, 'Analysis of the Algebraic Side-Channel Attack' [link]) was concerned with the problem of quantifying resistance to ASCA (in the error-free scenario for now). The previous two papers largely focus on experimental, proof-of-concept type results. Indeed, the very nature of SAT problems makes theoretical analysis problematic: SAT is NP-complete in the general case, and although efficient solutions exist for large subsets of instances, there is no reliable way of determining whether a given instance can be efficiently solved without trying it.

So, in the absence of concrete statements about attack complexity, Carlet et al propose a criterion for ASCA resistance which they argue is meaningfully indicative of complexity (an argument supported by their experimental results). They revisit the notion of algebraic immunity (AI) of an S-Box: the degree of the lowest degree polynomial in the ideal generated by the set of equations representing the S-Box (and the field equations). The smaller this is, the more vulnerable an S-Box (and a cipher which uses it) is to algebraic cryptanalysis. The AES S-Box, for example, has an AI of 2 -- there are no linear relations in the associated ideal. Also of importance is the number of independent equations of that lowest degree.
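The sketch below makes this counting concrete for a 4-bit S-Box: it builds the GF(2) evaluation matrix of all monomials of degree at most d in the input and output bits over the 16 points of the S-Box graph, and reports the kernel dimension, i.e. the number of independent relations of that degree (a pure-Python toy, for illustration only).

```python
from itertools import combinations

PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def relation_count(sbox, degree):
    """Number of independent relations of degree <= `degree` that vanish on the
    graph {(x, sbox(x))} of a 4-bit S-Box, counted over GF(2)."""
    bits = lambda v: [(v >> i) & 1 for i in range(4)]
    monomials = [m for d in range(degree + 1) for m in combinations(range(8), d)]
    # One row per point of the graph, one column per monomial in (x0..x3, y0..y3).
    rows = []
    for x in range(16):
        point = bits(x) + bits(sbox[x])
        rows.append([int(all(point[i] for i in m)) for m in monomials])
    # GF(2) Gaussian elimination to compute the rank of the evaluation matrix.
    rank, ncols = 0, len(monomials)
    for col in range(ncols):
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return ncols - rank    # dimension of the kernel = number of independent relations

print('linear relations   :', relation_count(PRESENT_SBOX, 1))  # expect 0 for a nonlinear S-Box
print('quadratic relations:', relation_count(PRESENT_SBOX, 2))
```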

They then look at what happens when side-channel information is added, and define the 'AI with leakage' as the degree of the lowest degree polynomial in the ideal generated by the S-Box, the field equations, and the equations representing the leakage information. They show that Hamming weight and Hamming distance leakage functions both reduce the 'AI with leakage' to 1 for any S-Box, although the number of independent linear equations varies depending (inversely) on the size of the pre-images of the leakage values. (For example, these pre-images are smaller -- and the number of linear relations larger -- in the case that input and output Hamming weight pairs are known than in the case that only the Hamming distances between the input and output pairs are known). They show that any optimally resistant S-Box (i.e. one minimising the number of linear relations) has a nonlinearity of zero and is therefore vulnerable to linear cryptanalysis. This concords with previous results about resistance to SCA and resistance to classical cryptanalysis: it seems that there is an inevitable trade-off (see, for example, Prouff 'DPA Attacks and S-Boxes' [pdf]). In the end, the authors conclude that the best way to defend against ASCA is simply to increase the bus size, as this produces exponential growth in the cardinality of the leakage pre-images.

Monday, July 2, 2012

FlipIt: "A model of the British weather"

In the latest study group, Nigel and Theo talked about FlipIt: The Game of "Stealthy Takeover", the subject of the 2011 IACR Distinguished Lecture given by Ron Rivest at CRYPTO. FlipIt is a two-player game that aims to model various computer security scenarios, allowing these scenarios to be analysed using a game-theoretic framework. Van Dijk, Juels, Oprea and Rivest formally define FlipIt and show that there are many possible interesting variants of the game that correspond to different applications. They also show that even in simple cases the analysis can become quite complex.

To understand the FlipIt game in its most simple form, consider the following example implementation. There are two players in separate rooms, as well as a 'resource' that neither of the players can see. This resource can be 'controlled' or 'owned' by either of the players. Both players have a control panel with a light and a single button which they can press at any time. When a player moves, i.e., presses their button, they are given ownership of the resource. If they did not own the resource before they moved, the light on their control panel will flash. This is called a 'takeover'. When the light does not flash after a player moves, this means the resource was already under their control.

An important feature of this game is that neither player can see who is currently in control of the resource, nor can they see when the other player presses their button to take control, hence the description "Stealthy Takeover". The players can only find out who owns the resource by pressing the button. Players gain 'points' for every second that they own the resource. However, making a move comes with a certain cost. The goal of the players is to maximize their own score, which consists of the total time they owned the resource minus the cost of the player's combined moves. To do this, both players implement some sort of strategy that decides when they should press the button.
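A toy simulator for this basic game, covering only nonadaptive play (each player hands in their move times up front); the scoring is exactly as described above -- time in control minus the per-move cost times the number of moves -- and all the concrete numbers are illustrative assumptions.

```python
def flipit_score(moves0, moves1, horizon, cost0=1.0, cost1=1.0):
    """Score the basic FlipIt game for two nonadaptive players.

    moves0/moves1 are the times at which each player presses their button;
    player 0 owns the resource at time 0.  Returns (score0, score1), where a
    score is the total time in control minus cost * number of moves."""
    events = sorted([(t, 0) for t in moves0] + [(t, 1) for t in moves1])
    owner, last_t = 0, 0.0
    owned = [0.0, 0.0]
    for t, player in events:
        owned[owner] += t - last_t      # credit the elapsed interval to the current owner
        owner, last_t = player, t       # the mover now controls the resource
    owned[owner] += horizon - last_t    # final interval up to the end of the game
    return (owned[0] - cost0 * len(moves0),
            owned[1] - cost1 * len(moves1))

# Example: player 0 moves at times 2, 5 and 9, player 1 at times 3 and 8.
print(flipit_score(moves0=[2, 5, 9], moves1=[3, 8], horizon=10))
```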

What makes this game interesting is that it models the behaviour of several applications in the area of computer security. For example, FlipIt can be used as a macro-level game to model Advanced Persistent Threats, where one player (the 'defender') tries to keep his sensitive assets (which are modeled as an aggregate FlipIt resource) secure from the other player (the 'attacker'). Here, a move by the attacker corresponds to a campaign that results in the attacker breaching the system and gaining 'control' of the sensitive assets. A move by the defender corresponds to a system-wide remediation campaign, such as patching all machines, resetting passwords, reinstalling servers and so on.

There are many more applications, especially when moving away from the simple implementation mentioned before and introducing additional elements. For example, it is possible that players can make several kinds of moves, each of which has a different cost and efficacy in gaining ownership of the resource. An attacker might have access to a known exploit but also a zero-day exploit. If he chooses to take control of the resource using the known exploit, the defender can regain control by applying a security patch that fixes this known exploit. However, if the attacker chooses to use the more costly zero-day exploit, the defender is forced to reinstall the software of the resource, which is also more costly than applying a patch. This gives the players several options to consider when choosing their strategy.

It is also interesting to consider variations of the game where at least one of the players gets additional feedback from pressing the button, which can be incorporated in his strategy. For example, the player could learn the last time that his opponent played ('Last Move') or he could learn the complete history of moves made by both players so far ('Full History'). When a player gets no additional feedback, his strategy must be fixed at the start of the game ('Nonadaptive'). Another interesting variation is where one of the players receives feedback before the first move, such as the strategy of his opponent.

In the paper, the authors analyse the most simple form of the game, where there is only one type of move for both players. The strategy of the first player, the defender, is always nonadaptive, whereas the strategy of the second player, the attacker, is possibly adaptive, up to the point where he gets information on the last move of the defender. They model everything formally and apply game theory to analyse the strategies of both players. Formally, both players have a 'view', which encodes their knowledge of the history and includes all their previous moves and the feedback they received for these moves. Then, their strategy consists of a function that maps a view to a positive real number, telling the player how long to wait until their next move. This strategy function is possibly randomized. Another important concept is the rate of play, which is the average number of moves a player makes per unit of time as the game goes on.

The simplest class of strategies arises when both players fix their strategy before the first move. Here the authors consider a so-called 'periodic strategy', where each player selects a random phase p and a period d (giving a rate of play of 1/d) and then makes a move at time p + k*d for k = 0,1,2,... In this case, the authors find a Nash equilibrium, which is a choice of strategies for both players such that neither player stands to gain by changing his strategy if the strategy of the other player remains unchanged.
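Plugging periodic strategies into the flipit_score sketch above: each schedule is just the arithmetic progression p + k*d, and the periods and costs below are arbitrary choices rather than the equilibrium values derived in the paper.

```python
import random

def periodic_moves(phase, period, horizon):
    """Move times p, p + d, p + 2d, ... up to the end of the game."""
    t, moves = phase, []
    while t < horizon:
        moves.append(t)
        t += period
    return moves

horizon = 1000.0
defender = periodic_moves(random.uniform(0, 4.0), 4.0, horizon)  # rate of play 1/4
attacker = periodic_moves(random.uniform(0, 5.0), 5.0, horizon)  # rate of play 1/5
print(flipit_score(defender, attacker, horizon, cost0=1.0, cost1=2.0))
```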

They also consider 'renewal strategies', where players move according to a renewal process. Such a process 'renews' itself after every move, i.e., the time until the next move is independent of the previous waiting time, but identically distributed. Here, the authors look for 'strongly dominant' strategies. A strategy is strongly dominant if it nets the player a better score than all his other strategies would, given that the strategy of his opponent can be any one from some set of strategies. They show that when a player has a fixed rate of play, the periodic strategy with that rate strongly dominates all non-arithmetic renewal strategies of the same rate. This result holds for both players and even transfers to the case where the players get some information on the rate of play of the other player before the game.

The analysis of FlipIt teaches some valuable lessons to the security community. First of all, systems should be designed under the assumption that they can be completely compromised at times. This includes the theft of cryptographic keys. "Many times, attackers cross the line of secrecy cryptographers assume in their protocol designs." Secondly, the defender can force the attacker out of the game in some cases, by playing very aggressively. This is especially interesting when the move cost for the defender is quite low. The authors mention the example of Virtual Machines that can easily be refreshed from clean images and whose data can be restored from a centralized server. Finally, the analysis shows that any extra amount of feedback will benefit the player during the game. Thus, defenders should monitor their systems frequently to gain insight into the strategy of the attacker.

The work also raises many open questions, especially as the strategy classes get bigger. Most of them are of the form "If player A uses a strategy from class X, which strategy from class Y would be strongly dominant for player B?" The authors state that the most interesting open questions arise when one of the players uses an adaptive strategy, but that the most challenging instances are when both players can use adaptive strategies.