Wednesday, March 29, 2017

PKC 2017: Kenny Paterson accepting bets on breaking TLS 1.3

Kenny, a member of the TLS 1.3 working group, is willing to bet a beer that the 0-RTT handshake of TLS 1.3 will be broken within its first two years.

In his invited talk, Kenny managed to fill a whole hour with the history of SSL/TLS without mentioning symmetric cryptography beyond keywords, thus staying within the topic of the conference. Despite every version of SSL being broken to at least some degree, its standardised successor TLS became the most important security protocol on the Internet.

The core of TLS is the handshake protocol, which establishes the choice of ciphers and the session key. Kenny highlighted the high complexity stemming from the many choices (e.g., whether or not to use a dedicated key-exchange protocol) and the possible interactions with other protocols in TLS. Together with further weaknesses of the specification, this created the space for the many attacks we have seen. On the upside, these attacks reflect increased attention from academics, which comes together with increased attention from society as a whole. Both have laid the ground for improvements in both the deployment and future versions of TLS. For example, support for forward secrecy has increased from 12 percent to 86 percent according to SSL Pulse.
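To make the key-exchange core concrete, here is a minimal sketch of an ephemeral Diffie-Hellman exchange of the kind a handshake builds on, in Python with toy (insecure) parameters of my choosing; all negotiation, framing and authentication are omitted.

    import hashlib
    import secrets

    # Toy parameters: a Mersenne prime and a small generator. Far too
    # weak for real use; illustration only.
    P = 2**127 - 1
    G = 3

    def keygen():
        x = secrets.randbelow(P - 2) + 1   # fresh ephemeral exponent
        return x, pow(G, x, P)             # (secret, public share)

    # Each side generates a new key pair per session; this per-session
    # freshness is what gives forward secrecy.
    c_priv, c_pub = keygen()
    s_priv, s_pub = keygen()

    # Both sides derive the same shared secret from the other's share.
    c_shared = pow(s_pub, c_priv, P)
    s_shared = pow(c_pub, s_priv, P)
    assert c_shared == s_shared

    # Stand-in for the TLS key schedule: hash into a session key.
    session_key = hashlib.sha256(c_shared.to_bytes(16, "big")).digest()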

Turning to concrete attacks, the most important in the area of PKC is the Bleichenbacher attack, published as early as Crypto 1998 (a human born then would be considered a full adult at the conference venue now). Essentially, it exploits that RSA with the padding used in TLS is not CCA-secure, and it recovers the session key after roughly $2^{20}$ interactions with a server. Nevertheless, the TLS 1.0 specification published shortly after Bleichenbacher's publication incorporates the problematic padding (recommending mitigation measures), and later versions retain it for compatibility. The DROWN attack shows the danger of this by exploiting the fact that many servers still offer SSLv2 (about 8% of the Alexa top 200k) and that it is common to use the same key for several protocol versions. An attacker can recover the session key of a TLS session by replaying a part of it in an SSLv2 session that uses the same key.
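A hedged sketch of the principle, with the server's padding check simulated locally: the conformance test and RSA's multiplicative malleability are the two ingredients the attack combines; the interval-narrowing bookkeeping of the full attack is omitted, and all names here are illustrative.

    def pkcs1_conformant(block: bytes) -> bool:
        # The single bit a vulnerable server leaks: does the decrypted
        # block start with the PKCS#1 v1.5 prefix 00 02?
        return block[:2] == b"\x00\x02"

    def oracle(c: int, d: int, n: int, k: int) -> bool:
        # Simulates the server: decrypt and report only conformance
        # (k is the byte length of the modulus n).
        return pkcs1_conformant(pow(c, d, n).to_bytes(k, "big"))

    def probe(c: int, s: int, e: int, n: int, d: int, k: int) -> bool:
        # Attacker step: c * s^e mod n decrypts to m * s mod n, so each
        # oracle answer constrains m. Repeating this with well-chosen
        # values of s (roughly 2^20 of them) pins down the plaintext.
        return oracle((c * pow(s, e, n)) % n, d, n, k)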

On a more positive note, Kenny presented the upcoming TLS 1.3, which has been under development since 2014. It addresses a lot of the weaknesses of previous versions by involving academics from an early stage and doing away with a lot of the complexity (including reducing options and removing ciphers). It furthermore aims to decrease the latency of the handshake by allowing the parties to send encrypted data as early as possible, reducing the handshake to a single round trip in many cases. The goal of low latency has also led to the inclusion of a zero-round-trip-time (0-RTT) mode in the style of QUIC, that is, the client can send data already in its first message when resuming a session. However, 0-RTT data is not forward-secure and is therefore confined to a separate API. Nevertheless, Kenny predicts that its mere availability will be too tempting for developers, hence the bet offered.
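A toy illustration of the 0-RTT weakness, with made-up derivation labels rather than the real TLS 1.3 key schedule: the early-data key depends only on a secret carried over from the previous session, so a recorded first flight can be replayed, and anyone who later learns that secret can decrypt it.

    import hashlib

    def early_data_key(resumption_secret: bytes) -> bytes:
        # Illustrative label, not the real TLS 1.3 key schedule. Note
        # that no fresh server randomness enters the derivation.
        return hashlib.sha256(b"early" + resumption_secret).digest()

    resumption_secret = b"secret-from-a-previous-session"
    key = early_data_key(resumption_secret)

    # The client's very first flight already carries application data.
    request = b"POST /buy"
    ciphertext = bytes(a ^ b for a, b in zip(request, key))
    first_flight = (b"ClientHello+early_data", ciphertext)

    # An eavesdropper who recorded the flight can resend it unchanged;
    # the server derives the same key and accepts the request again.
    replayed_flight = first_flight

    # No forward secrecy either: whoever later obtains the resumption
    # secret can re-derive the key and decrypt the recorded flight.
    recovered = bytes(a ^ b for a, b in
                      zip(ciphertext, early_data_key(resumption_secret)))
    assert recovered == request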

Concluding, he sees three major shifts in TLS thus far: from RSA to elliptic-curve Diffie-Hellman, then to Curve25519, and away from SHA-1 in certificates. A fourth shift might happen with the introduction of post-quantum algorithms such as New Hope, which Google has deployed experimentally. Less optimistically, he expects that implementation vulnerabilities will continue to surface.
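As an aside, the Curve25519 shift is visible in today's libraries; for instance, assuming the pyca/cryptography package is installed, an X25519 key exchange is a few lines:

    # Requires the pyca/cryptography package (pip install cryptography).
    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey,
    )

    alice = X25519PrivateKey.generate()
    bob = X25519PrivateKey.generate()

    # Each side combines its private key with the peer's public key.
    shared_alice = alice.exchange(bob.public_key())
    shared_bob = bob.exchange(alice.public_key())
    assert shared_alice == shared_bob   # both derive the same secret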

Update: An earlier version of this post mentioned the non-existent Curve255199 instead of Curve25519, and it attributed New Hope to Google.

Monday, March 27, 2017

Tools for proofs


Security proofs for even simple cryptographic systems are dangerous and ugly beasts. Luckily, they are only rarely seen: they are usually kept safely in the confines of "future full versions" of papers, or appear only in cartoonish form, generically labelled as... "proof sketch".


The following two quotes frame the problem in less metaphorical terms. 

"In our opinion, many proofs in cryptography have become essentially unverifiable. Our field may be approaching a crisis of rigor."

    -- Bellare and Rogaway (ca. 2004)

"Do we have a problem with cryptographic proofs? Yes, we do [...] We generate more proofs than we carefully verify (and as a consequence some of our published proofs are incorrect)."

    -- Halevi (ca. 2005)

Solutions developed by cryptographers, e.g. compositional reasoning and the game-hopping technique, help to structure proofs and reduce their complexity, and therefore alleviate to some extent the pain of having to develop rigorous proofs. Yet, more often than not, proofs are still sketchy and shady.
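As a toy illustration of game hopping (my example, not one from the quotes above), consider two games that differ in a single step: replacing a PRF-derived pad by a truly random one. Bounding the adversary's advantage in the second game and the distance between the two games yields the overall bound.

    import hashlib
    import hmac
    import secrets

    def prf(key: bytes, nonce: bytes) -> bytes:
        # HMAC-SHA256 standing in for an abstract PRF.
        return hmac.new(key, nonce, hashlib.sha256).digest()

    def game0(message: bytes, nonce: bytes, key: bytes) -> bytes:
        pad = prf(key, nonce)                  # pseudorandom pad
        return bytes(m ^ p for m, p in zip(message, pad))

    def game1(message: bytes, nonce: bytes, key: bytes) -> bytes:
        pad = secrets.token_bytes(32)          # the one changed step
        return bytes(m ^ p for m, p in zip(message, pad))

    # The hop: any adversary distinguishing game0 from game1 yields a
    # PRF distinguisher, and game1 is a one-time pad, so
    #   Adv(game0) <= Adv_PRF + Adv(game1) = Adv_PRF + 0.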

Help comes from the programming-languages community, which has long experience in developing tools for proving that programs work correctly, and... cryptographic systems are just programs. Recent progress, e.g. the automated verification of parts of TLS and fully verified security proofs of masking schemes that protect implementations against leakage, is impressive and exciting. More work is under way.
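For a flavour of what such tools verify, here is a toy sketch (again my own example, not taken from the cited work) of first-order Boolean masking, the kind of leakage countermeasure whose security proofs have been machine-checked:

    import secrets

    def mask(secret: int) -> tuple[int, int]:
        # Split a secret byte into two shares; each share on its own
        # is uniformly random, so a single probe learns nothing.
        r = secrets.randbelow(256)
        return r, secret ^ r

    def xor_shares(x, y):
        # XOR is linear, so it can be computed share-wise without ever
        # recombining the secrets.
        return x[0] ^ y[0], x[1] ^ y[1]

    a, b = mask(0x2A), mask(0x17)
    c = xor_shares(a, b)
    assert c[0] ^ c[1] == 0x2A ^ 0x17   # unmask only to check the result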

If you want to learn more about how you can get someone else to do the proofs for you or, more realistically, learn what existing tools can currently do and what they could do in the future, and discuss what is needed and which way to go, then you should attend the

Workshop on Models and Tools for Security Analysis and Proofs
-- Paris, 29th of April; co-located with EuroS&P and Eurocrypt --

which the Bristol Crypto group helps organize. The workshop features as speakers some of the most prominent researchers contributing to this direction. You can register for the workshop HERE. Early registration ends March 31st!


But wait... there is more. If you want to explore this area beyond what a one-day workshop allows, then you should consider attending the

-- Nancy, France, July 10th - 13th --


See you all in Paris and/or Nancy!