Friday, December 16, 2011

IMA Cryptography & Coding - Day 2

The invited talk on Day 2 of IMACC was given by Ivan Damgård on secure multiparty computation (MPC). The goal of MPC is to jointly compute some function on a number of players' secret inputs. Ivan began with a nice explanation of how to achieve this with information theoretic security. He described how MACs can be used to verify the computations of each player, thus achieving security against active adversaries.

The talk then went on to describe recent results concerning the case of a dishonest majority, where the adversary may corrupt more than half of the players. Clearly this scenario is very important to focus on if MPC schemes are to be deployed in the real world. Early efforts to obtain this level of security were very inefficient, but recent work has proposed using a homomorphic encryption scheme to improve this. This was first suggested at Eurocrypt 2011 by Bendlin, Damgård, Orlandi and Zakarias (pdf). By dividing the computation into two phases ('offline' and 'online'), they manage to perform a secure multiplication in around 8ms. These results are improved upon in a recent paper by Damgård, Pastro, Smart and Zakarias (pdf), which reduces the complexity and storage requirements of the scheme even further. Multiparty computation is an active area of research in Bristol, and it was nice to see it represented in Oxford with Ivan's excellent talk.
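To make these ideas a little more concrete, here is a toy two-party sketch in the spirit of this line of protocols (it is not the actual BDOZ or SPDZ scheme): values are additively shared modulo a prime, every shared value carries an information-theoretic MAC under a global key α so that cheating is caught when values are opened, linear operations are purely local, and a secure multiplication would consume preprocessed data from the 'offline' phase. The prime modulus and the two-party setting are illustrative assumptions.

```python
import secrets

P = 2**61 - 1          # prime modulus for the toy example (an arbitrary choice)
N_PARTIES = 2

def share(value):
    """Split value into additive shares modulo P."""
    shares = [secrets.randbelow(P) for _ in range(N_PARTIES - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

alpha = secrets.randbelow(P)            # global MAC key (itself shared in a real protocol)

def authenticate(value):
    """Return additive shares of value and of its MAC alpha*value."""
    return share(value), share(alpha * value % P)

x_sh, x_mac = authenticate(12)
y_sh, y_mac = authenticate(34)

# Addition is local: each party just adds its own shares (of values and of MACs).
z_sh  = [(a + b) % P for a, b in zip(x_sh, y_sh)]
z_mac = [(a + b) % P for a, b in zip(x_mac, y_mac)]
# A secure multiplication would instead consume a multiplication triple
# produced in the 'offline' phase.

# To open z, the parties reveal their shares and check the MAC relation.
z = sum(z_sh) % P
assert sum(z_mac) % P == (alpha * z) % P, "MAC check failed: someone cheated"
print("opened z =", z)                  # 46
```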

One session that interested me in the afternoon was on cryptanalysis and security analysis. This began with Martin Albrecht presenting joint work with Kenny Paterson on breaking a recent IBE scheme of Chen et al. (pdf). The basic attack involved exploiting the relationship between the master secret key and the private keys for each identity that are generated from the master key. Specifically, each private key is given by a quadratic function in the master key components. So given enough of these keys, an attacker has a system of multivariate quadratic equations in the master key that can be solved using a Gröbner basis computation.
Whilst this attack was considered in the original paper, the authors severely underestimated the efficiency with which Gröbner bases can be computed. Apparently this problem is surprisingly common in the literature, and stems from using an analysis of the XL algorithm to obtain security estimates, whereas in practice the F4 and F5 algorithms are far more efficient. Although the original paper proves the security of the scheme assuming the security of DHIES, this attack does not allow you to break DHIES. The key flaw is that the proof assumes a specific function can be modelled by a random oracle, when in fact the output of this function depends on the secret key. So the use of a random oracle here is completely inappropriate, and it invalidates the security proof.
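Returning to the Gröbner-basis step of the attack, here is a toy sympy sketch (the equations are made up and have nothing to do with the actual Chen et al. scheme): pretend the master key is the pair (3, 7) and that each leaked private key yields one quadratic equation in the master-key components; a lex-ordered Gröbner basis then collapses the system and reveals the key.

```python
from sympy import symbols, groebner

x1, x2 = symbols('x1 x2')

# Four quadratic equations in the master-key components, as an attacker might
# assemble from four extracted private keys (toy system; common root is (3, 7)).
equations = [
    x1*x2 - 6*x1 - 3,
    x1*x2 - 2*x2 - 7,
    x1**2 - 3*x1 + 2*x2 - 14,
    x2**2 - 7*x2 + 7*x1 - 21,
]

# A Groebner basis with lexicographic ordering triangularises the system;
# for this toy instance it reduces to x1 - 3 and x2 - 7, i.e. the master key.
G = groebner(equations, x1, x2, order='lex')
print(G.exprs)
```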

The third talk in the session, given by Julia Borghoff, followed in a similar vein, attacking the lightweight stream cipher A2U2. The attack also involved creating a system of multivariate quadratic equations and using Gröbner basis computations to solve this. Firstly an efficient chosen plaintext attack was presented, followed by a more complex 'guess and determine' attack requiring only a known plaintext. However, some doubts were raised afterwards as to how feasible this attack would be in practice due to the high number of Gröbner basis applications required.

In between these talks on cryptanalysis, our recent PhD graduate Steve Williams presented his work on the security analysis of the SSH key exchange protocol. Although the underlying application layer of SSH has been thoroughly analysed in the past, no-one had yet analysed the vital key exchange protocol used to establish the shared secret. Since the initial messages transmitted in the protocol contain a signature over data derived from the shared secret, it is not possible to obtain indistinguishability of the key generated in this stage; its security can only be shown to be one-way, which Steve successfully proves. A transformation is then applied to this shared key, and Steve went on to show that this transformation in fact means that the final keys generated in SSH are indistinguishable.

Thursday, December 15, 2011

IMA Coding and Crypto, Day 1 Morning

This year is the 25th anniversary of the conference colloquially known as Cirencester, which is traditionally held every two years at the agricultural college in Cirencester. It's a rather remote location, which has the effect that in the evening everybody cosies up in the college bar by the big wood fire. A few years back I remember having to keep climbing a big mound of whatever just to get telephone reception.

Anyway, this year's Cirencester is not actually in Cirencester; instead it is in Oxford, at Lady Margaret Hall. This college used to be women only, but these days it is mixed (just like Royal Holloway, University of London). It is located a bit north of the centre, about a 10-15 minute walk, and the buildings are all rather new (certainly by Oxford college standards). So it's not quite as remote as the real Cirencester, but it is a decent emulation (including some of the agricultural whiffs that occasionally manage to make it into the lecture theatre).

The first talk of the conference was an invited talk by David Naccache, consisting of three parts. The first part dealt with code obfuscation. The goal of code obfuscation is simple: you take a piece of code and modify it in such a way that it behaves exactly the same as the original, yet all other information about the original code is hidden. For instance, a programmer might want to hide the control flow of a program, or a cryptographer might want to hide a key that is a fixed input to the program. Obfuscation is a rather hard problem cryptographically, rife with negative results. The main negative result is due to Barak et al., who showed that there exist programs that cannot be obfuscated at all: any obfuscation will invariably leak a predicate of the original. In his talk, Naccache described programs with a much stronger unobfuscatability property: any functionally equivalent obfuscation will necessarily leak the original source code. The main tool behind this remarkable result is the concept of quines, that is, programs that print their own source code. While this may sound rather negative, Naccache also pointed out a positive consequence: their transformation takes any program and turns it into a fully unobfuscatable one, which creates a potential mechanism for detecting forgeries of some original piece of code.
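For readers who have not met quines before, here is the classic minimal example in Python; when run, the two code lines below print exactly their own source.

```python
# A minimal quine: the string holds a template of the program, and printing it
# with itself substituted in reproduces the two code lines exactly.
s = 's = %r\nprint(s %% s)'
print(s % s)
```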

The second part of the talk was dedicated to identifying fingerprints. While there are many algorithms for this already, most boil down to comparing a fingerprint found at a crime scene against a database, checking for a match one entry at a time. These matches are quite costly, and Naccache showed how relatively lightweight statistical methods can be used to precompute an ordering on the database that increases the probability of finding a hit sooner rather than later (thus reducing the overall search time).

Finally, Naccache discussed the problem of figuring out which computation was performed based only on its inputs and outputs. Past work had dealt with the partial extraction of an exponent based on the projective representation of a scalar multiple; here the focus was on arithmetic circuits of small depth over a finite field. As this is still work in progress, the details were left for later.

After a very short break, the first contributed talk was delivered by our very own Peter Scholl. I think it was his first conference presentation and he did a sterling job! The topic under investigation was improved key generation for Gentry's fully homomorphic encryption scheme. When Gentry proposed his breakthrough scheme, the efficiency left much to be desired; in fact, it was completely impractical to deploy the scheme for a realistic security setting. Much progress has been made in the meantime, but this progress does not always blend well: Gentry and Halevi presented a vastly improved key generation method, and Smart and Vercauteren presented several significant improvements to other parts of the scheme, yet the Gentry-Halevi key generation sadly does not apply to the parameters Smart and Vercauteren need for their key ideas. In this work, Peter (jointly with Nigel) adapted the ideas of Gentry and Halevi to the Smart-Vercauteren setting. It turned out that by introducing several neat new techniques and optimizations, key generation suddenly becomes feasible. I would especially like to draw attention to one of the conclusions, namely that you really need to implement these more complicated schemes to see where the challenges are and what can be done about the bottlenecks. This point was also brought home in later talks, for instance the excellent invited talk by Ivan Damgård and the inspiring talk by Mike Scott.

After Peter, Frederik Armknecht (fresh from presenting at Asiacrypt) gave a talk on constructing homomorphic encryption schemes from coding theory. The advantages of using coding theory are obvious: the computations involved are almost guaranteed to be simple, and linear codes naturally support an additive homomorphism. The hope was that this simplicity could be leveraged to create an efficient scheme supporting both additions and multiplications. As it turns out, this is not as easy as one might hope. The schemes presented by Armknecht still have some limitations that reduce their overall usability, chief amongst these a fairly strict restriction on the number of ciphertexts allowed (which also implies a secret-key-only setting) and a limit on the number of multiplications that can be handled. Nonetheless, one can imagine that in some scenarios these schemes deliver exactly what is needed in a relatively cheap and painless way.

Tuesday, December 13, 2011

HB+ and its variants

Today's study group, led by Luke and Gareth, was on the topic of HB+ and variants thereof, with particular focus on the following three papers:
  • Efficient Authentication from Hard Learning Problems (Kiltz et al, Eurocrypt 2011) [pdf]
  • Parallel and Concurrent Security of the HB and HB+ Protocols (Katz et al, Eurocrypt 2006) [pdf]
  • HB#: Increasing the Security and Efficiency of HB+ (Gilbert et al, Eurocrypt 2008) [pdf]

The motivating problem for HB-related schemes is secure authentication for low-cost devices such as RFID tags, whose power and storage constraints necessitate lightweight protocols. The motivation for the original HB protocol (Hopper and Blum, Asiacrypt 2001) [pdf] was actually secure human-executable identification, performed using only such undemanding computations as dot products and XORs -- so, although their proposal does not fulfill all the desired security requirements, it is nevertheless an excellent starting point for developing (very) low complexity schemes.

HB authentication derives its security from the 'Learning Parity with Noise' (LPN) problem, which corresponds to the task of decoding random linear codes (a problem that is NP-hard in the worst case):
  • Let A be a (q × k) binary matrix; x a random k-bit vector; η ∈ (0,½) a (fixed) noise parameter; ν a random q-bit vector such that HW(ν) ≤ ηq. Given A, η and z = (Ax)⊕ν, the LPN problem is to find a k-bit vector x′ s.t. HW((Ax′)⊕z) ≤ ηq.
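As a quick illustration, the snippet below generates a toy LPN instance with numpy (the parameters are arbitrarily small, and the noise is sampled per bit with probability η rather than constrained to weight exactly ηq).

```python
import numpy as np

rng = np.random.default_rng(0)
q, k, eta = 128, 32, 0.125                             # toy parameters

A  = rng.integers(0, 2, size=(q, k), dtype=np.uint8)   # public random matrix
x  = rng.integers(0, 2, size=k, dtype=np.uint8)        # secret k-bit vector
nu = (rng.random(q) < eta).astype(np.uint8)            # Bernoulli(eta) noise
z  = (A @ x + nu) % 2                                  # noisy labels z = (Ax) xor nu

# The (search) LPN problem: given A, eta and z, find x' with HW((Ax') xor z) <= eta*q.
print("noise weight:", int(nu.sum()), "out of", q)
```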

The basic HB protocol is as follows:
  1. Tag and reader share a key x ∈ {0,1}^n.
  2. For rounds j = 1,...r:
    1. Reader draws a challenge aj ←R {0,1}^n uniformly at random and sends it to the tag
    2. Tag computes uj = aj · x and draws noise εj ←η {0,1} (i.e. εj = 1 with probability η)
    3. Tag sends zj = uj ⊕ εj to reader
    4. If zj = aj · x for a clear majority of j ∈ {1,...,r} then the reader accepts the tag as authentic.
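A toy simulation of the basic protocol (the parameters and the acceptance threshold are illustrative choices, not values from the papers):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, eta = 32, 64, 0.125                      # key length, rounds, noise rate (toy values)
x = rng.integers(0, 2, size=n, dtype=np.uint8) # shared secret key

def tag_response(a):
    """Tag: z = <a, x> xor eps, with eps = 1 with probability eta."""
    eps = int(rng.random() < eta)
    return (int(a @ x) % 2) ^ eps

def authenticate():
    errors = 0
    for _ in range(r):
        a = rng.integers(0, 2, size=n, dtype=np.uint8)   # reader's random challenge
        errors += tag_response(a) ^ (int(a @ x) % 2)      # reader checks against <a, x>
    return errors <= r // 4      # accept if errors stay well below the r/2 of a random guesser

print("legitimate tag accepted:", authenticate())
```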

The basic protocol is clearly attractive as the only computations performed on the tag are bit-wise computable dot products and single-bit XORs. However, it turns out to be vulnerable to an active attack in which the adversary retrieves x bit by bit by sending challenges of the form ajk = 1, aji = 0 for i ≠ k, that is: (0,0,...,1,...,0,0). Then zj = xk ⊕ εj, so the majority over r rounds reveals xk.
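Continuing in the same toy setting, the attack takes only a few lines: repeatedly sending the unit-vector challenge e_k makes every response equal x_k xor noise, and a majority vote strips the noise away.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, eta = 32, 64, 0.125
x = rng.integers(0, 2, size=n, dtype=np.uint8)   # the tag's secret key (unknown to the attacker)

def tag_response(a):
    return (int(a @ x) % 2) ^ int(rng.random() < eta)

def recover_bit(k):
    """Query the tag with the unit vector e_k; each reply is x[k] xor noise,
    so the majority over r queries reveals x[k]."""
    a = np.zeros(n, dtype=np.uint8)
    a[k] = 1
    return int(sum(tag_response(a) for _ in range(r)) > r // 2)

print("full key recovered:", [recover_bit(k) for k in range(n)] == x.tolist())
```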

To resist the above attack Juels and Weis (Crypto 2005) [pdf] proposed HB+:
  1. Tag and reader share two keys x, y ∈ {0,1}^n.
  2. For rounds j = 1,...r:
    1. Tag draws a blinding vector bj ←R {0,1}^n and sends it to the reader
    2. Reader draws a challenge aj ←R {0,1}^n and sends it to the tag
    3. Tag computes uj = (aj · x) ⊕ (bj · y) and draws noise εj ←η {0,1}
    4. Tag sends zj = uj ⊕ εj to reader
    5. If zj = (aj · x) ⊕ (bj · y) for a clear majority of j ∈ {1,...,r} then the reader accepts the tag as authentic.
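Again as a toy sketch (same illustrative parameters as before), one run of HB+ looks like this:

```python
import numpy as np

rng = np.random.default_rng(2)
n, r, eta = 32, 64, 0.125
x = rng.integers(0, 2, size=n, dtype=np.uint8)   # shared key x
y = rng.integers(0, 2, size=n, dtype=np.uint8)   # shared key y

def hb_plus_round():
    b = rng.integers(0, 2, size=n, dtype=np.uint8)    # tag's blinding vector (sent first)
    a = rng.integers(0, 2, size=n, dtype=np.uint8)    # reader's challenge
    eps = int(rng.random() < eta)
    z = (int(a @ x) % 2) ^ (int(b @ y) % 2) ^ eps       # tag's noisy response
    return z == ((int(a @ x) % 2) ^ (int(b @ y) % 2))   # reader's per-round check

matches = sum(hb_plus_round() for _ in range(r))
print("tag accepted:", matches >= 3 * r // 4)            # 'clear majority' threshold (toy choice)
```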

HB+ is provably secure in the 'detection-based' adversarial model, and therefore no longer vulnerable to the active attack described above. However, several problems remain:
  • The revised protocol introduces the problem of generating random vectors on the tag.
  • The proof supposes that the rounds are performed sequentially and does not extend to a parallelised implementation (desirable as a strategy to reduce communication complexity).
  • The false rejection rates (under the proposed acceptance thresholds) are very high -- as much as 44% for 80 rounds with a noise level of ¼. (See table 1 of (Gilbert et al, Eurocrypt 2008)).
  • The adequacy of the adversarial model (which supposes the adversary can interact only with the tag before impersonating it to the reader) has been questioned.

In fact, HB+ is vulnerable to a man-in-the-middle attack in which the adversary is able to interact with the reader as well as the tag before attempting to impersonate (Gilbert et al, 2005) [pdf]. We discussed the fact that such an attack poses a material threat: whilst it would be difficult to carry out an opportunistic MIM on a nearby tag carried by a passer-by, an attacker in possession of a tag with incentive to clone it could very well use a reader of his own to do so, with comparatively unconstrained time and resources.

The same authors solve this problem, and also address the parallelisation challenge, with RANDOM-HB#, in which the secrets become (nX × m) and (nY × m) binary matrices X and Y rather than nX- and nY-bit vectors x and y, and the protocol operates (in one go) in matrix form rather than round-by-round, so that the final verification consists of the comparison of two m-bit vectors a · X ⊕ b · Y and z. This adaptation is secure against the above MIM (known as GRS-MIM after the authors), as well as having a much smaller false rejection rate. However, when the adversary is afforded more power (i.e. he is allowed to modify more of the elements of the protocol) another MIM attack has been found, though with increased complexity.
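A matrix-form toy sketch of one RANDOM-HB# run (the dimensions and the acceptance threshold are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
nX, nY, m, eta = 32, 32, 48, 0.125
X = rng.integers(0, 2, size=(nX, m), dtype=np.uint8)   # secret binary matrix X
Y = rng.integers(0, 2, size=(nY, m), dtype=np.uint8)   # secret binary matrix Y

# One-shot exchange: tag sends b, reader sends a, tag answers with a noisy m-bit vector z.
b  = rng.integers(0, 2, size=nY, dtype=np.uint8)
a  = rng.integers(0, 2, size=nX, dtype=np.uint8)
nu = (rng.random(m) < eta).astype(np.uint8)
z  = (a @ X + b @ Y + nu) % 2

# The reader accepts if z is close in Hamming distance to a.X xor b.Y.
distance = int(np.sum(z != (a @ X + b @ Y) % 2))
print("Hamming distance:", distance, "-> accept:", distance <= m // 4)
```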

The last paper we looked at was concerned with constructing MACs based on the hardness of LPN (Kiltz et al, Eurocrypt 2011). The protocol can be informally described as follows:
  1. Tag and reader share a key x ∈ {0,1}n.
  2. Reader selects a random subset of bits of x and sends to the tag.
  3. Tag computes a noisy inner product z of the selected bits and sends to the reader.
  4. Reader verifies that z matches the inner product (without noise) 'most of the time'.

This achieves 2-round authentication with active security. The proof derives from the hardness of the subspace LPN problem (Pietrzak 2010 [pdf]) and, because it does not use rewinding techniques, it remains valid against quantum adversaries.
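The sketch below follows the informal description above rather than the exact Kiltz et al. construction (which encodes the subset differently and fixes concrete thresholds); it is only meant to show the two-round shape of the protocol.

```python
import numpy as np

rng = np.random.default_rng(4)
l, reps, eta = 32, 80, 0.125
x = rng.integers(0, 2, size=2 * l, dtype=np.uint8)       # shared secret of length 2l

# Round 1 -- reader: pick a random size-l subset of the key bits and send it.
subset = rng.permutation(2 * l)[:l]

# Round 2 -- tag: reply with many noisy inner products of the selected key bits,
# using fresh randomness R that is sent along with the responses.
R = rng.integers(0, 2, size=(reps, l), dtype=np.uint8)
e = (rng.random(reps) < eta).astype(np.uint8)
z = (R @ x[subset] + e) % 2

# Reader: accept if z agrees with R * x|subset 'most of the time'.
disagreements = int(np.sum(z != (R @ x[subset]) % 2))
print("disagreements:", disagreements, "-> accept:", disagreements <= reps // 4)
```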

Single-sample DPA (aka horizontal DPA)

This week's study group, ranging from 'old' papers by Colin Walter (the Big Mac attack) to current work of Mike's, focused on applying DPA techniques to a single trace. More specifically, we discussed the following papers:

  • Colin D. Walter “Sliding Windows Succumbs to Big Mac Attack” [pdf]
  • Colin D. Walter “Longer Keys May Facilitate Side Channel Attacks” [pdf]
  • Jean-Christophe Courrège et al. “Simple Power Analysis on Exponentiation Revisited” [pdf]
  • Werner Schindler and Kouichi Itoh “Exponent Blinding Does Not Always Lift (Partial) SPA Resistance to Higher-Level Security” [pdf]

Stefan kicked off the study group by explaining why longer keys may facilitate side-channel attacks (SCAs). Common sense would suggest that longer keys should be used to provide more security. This is true for the mathematical strength of a cipher; however, Colin Walter shows in his work that for embedded RSA cryptosystems the increase in leaked data that comes with longer keys can in fact lower security. We discussed two types of attack that benefit from increased key length, namely a timing attack and a power analysis attack. The implementation under attack uses a variant of the binary "square-and-multiply" algorithm to compute the RSA exponentiation. As so often when attacking such implementations, the goal is to distinguish between squares and multiplications in order to extract secret key material.

One way to distinguish these two core operations is to look at the timing variations that occur when computing the modular products. Typically, a form of Montgomery modular multiplication (MMM) is used to compute these products. From a side-channel perspective, the interesting aspect of MMM is that it includes a final conditional subtraction which reduces the output to less than the modulus. This final subtraction occurs with higher probability for squares than for multiplications, and this forms a timing side channel that can be exploited by an attacker who observes a number of exponentiations. The data leaked through this side channel is proportional to the square of the key length, and thus longer keys end up being less safe. The simplest countermeasure against this kind of attack is to perform the subtraction unconditionally and select the appropriate result afterwards.
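A toy sketch of where the conditional subtraction comes from, and of the bias itself (the 64-bit modulus is an arbitrary choice; real RSA moduli are of course much larger, but the effect is the same):

```python
import secrets

def montgomery_product(a, b, n, R, n_neg_inv):
    """Return a*b*R^{-1} mod n and a flag saying whether the final
    conditional subtraction was needed -- the timing side channel."""
    t = a * b
    m = (t * n_neg_inv) % R
    u = (t + m * n) // R            # u = a*b*R^{-1} (mod n) and 0 <= u < 2n
    return (u - n, True) if u >= n else (u, False)

n = 0xC96B0A6D5EDA4A2F              # odd 64-bit toy modulus
R = 1 << 64
n_neg_inv = (-pow(n, -1, R)) % R    # -n^{-1} mod R (needs Python 3.8+)

def subtraction_rate(square, trials=20000):
    hits = 0
    for _ in range(trials):
        a = secrets.randbelow(n)
        b = a if square else secrets.randbelow(n)
        hits += montgomery_product(a, b, n, R, n_neg_inv)[1]
    return hits / trials

# Squarings trigger the extra subtraction noticeably more often than
# multiplications of independent operands -- the bias the attack exploits.
print("squares :", subtraction_rate(True))
print("products:", subtraction_rate(False))
```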

The second attack that Stefan described exploits the power leakage of the internal hardware multiplier. For MMM a number of single-precision multiplications need to be computed. The idea here is to fix one of the multiplier inputs while the second input is more or less random, and then to average the power traces over a number of multiplications. The averaged trace then depends on the Hamming weight of the fixed operand. If we apply this technique to all words of a long-integer multiplication we can build templates during the pre-computation, which we can then use during the actual exponentiation to guess which operation has been computed. The nice property of this attack is that a single power trace of an exponentiation suffices both to characterise the different operations and to mount the actual attack.
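A simulated illustration of the averaging step (the Hamming-weight-plus-noise leakage model is an assumption made for this sketch, not a measured model):

```python
import numpy as np

rng = np.random.default_rng(5)
W = 32                                     # word size of the hardware multiplier

def hamming_weight(v):
    return bin(int(v)).count("1")

def leakage(a, b, sigma=2.0):
    """Toy model: one single-precision multiplication leaks HW(a) + HW(b) + noise."""
    return hamming_weight(a) + hamming_weight(b) + rng.normal(0, sigma)

def average_trace(fixed_word, samples=256):
    """Average the leakage of many multiplications sharing one fixed operand;
    the random operand averages out, leaving a value driven by HW(fixed_word)."""
    return float(np.mean([leakage(fixed_word, rng.integers(0, 2**W)) for _ in range(samples)]))

# Words of different Hamming weight give clearly separated averages -- these
# averages are the 'templates' later matched against the actual exponentiation.
print("average for a HW=8 word :", round(average_trace(0x000000FF), 2))
print("average for a HW=31 word:", round(average_trace(0x7FFFFFFF), 2))
```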

In the second part, Mike talked a bit about the work he is doing at the moment. For example, on an ARM chip it is apparently easy to identify the occurrence of the different single-precision multiplication operations in the power trace while executing an exponentiation. Using a horizontal DPA, the idea is to recover in each iteration the i-th operation (whether a square or a multiplication has been executed) under the assumption that the first i − 1 operations have already been recovered successfully. The attacker then predicts (e.g., using the Hamming weight) how the power consumption should look for each of the two core operations; whichever of the two power profiles correlates best with the actual power trace reveals another operation of the exponentiation. Collecting one power trace of the device under attack is enough to mount such an attack.
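A toy version of the distinguishing step, again under an assumed Hamming-weight leakage model: the attacker predicts the word-level Hamming weights for both hypotheses and keeps the one that correlates best with the observed trace segment.

```python
import numpy as np

rng = np.random.default_rng(6)

def hw(words):
    return np.array([bin(int(w)).count("1") for w in words], dtype=float)

def observe(words, sigma=1.0):
    """Simulated trace segment: word-level Hamming weights plus measurement noise."""
    return hw(words) + rng.normal(0, sigma, len(words))

# Operand words the attacker can predict for each hypothesis, because the input
# and the first i-1 operations are already known (toy random values here).
square_words   = rng.integers(0, 2**32, size=64)
multiply_words = rng.integers(0, 2**32, size=64)

trace = observe(square_words)                      # the device actually squared
corr_square   = np.corrcoef(trace, hw(square_words))[0, 1]
corr_multiply = np.corrcoef(trace, hw(multiply_words))[0, 1]
print("square hypothesis  :", round(corr_square, 3))
print("multiply hypothesis:", round(corr_multiply, 3))   # much lower -> guess 'square'
```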

Mike also described another attack on the exponentiation that exploits a bias in the Hamming weight of the output of single-precision multiplication and squaring operations. The attack rests on the fact that the second least significant bit of the output of a single-precision squaring is always zero. With this information we can build templates characterising the squaring and multiplication operations and use them to distinguish between the two. Mike reports a 95% success rate in picking the right operation using this technique. The advantage of this attack is that the bias is caused by the operation itself, so it remains present irrespective of the message input.
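The bias itself is easy to check: every square is congruent to 0, 1 or 4 modulo 8, so bit 1 of x·x is always zero, whereas for a product of two independent random words that bit is set a constant fraction of the time (3/8 for uniform operands).

```python
import random

trials = 10**5
squares_ok = all((((x * x) >> 1) & 1) == 0
                 for x in (random.getrandbits(32) for _ in range(trials)))
products_set = sum(((random.getrandbits(32) * random.getrandbits(32)) >> 1) & 1
                   for _ in range(trials))

print("bit 1 of a square is always zero:", squares_ok)
print("fraction of random products with bit 1 set:", products_set / trials)  # ~0.375
```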

Finally, Mike briefly discussed some techniques to attack implementations that use exponent blinding as countermeasure.

Thursday, December 8, 2011

Dagstuhl: Cloud Computing

This week I have been at Schloss Dagstuhl as a co-organizer of a workshop on security for cloud computing. Dagstuhl is a computer science research centre in Germany, and every week they invite a bunch of computer scientists to the old Schloss to discuss a particular research topic.

We discussed various topics: securing data on the cloud, computing on that data, verifying the computation and, most interestingly, the economics of whether any of it actually makes sense. On this last topic there were two interesting talks, by Ari Juels and Radu Sion. Both essentially contended that it may make no economic sense to perform computation on encrypted data in the cloud. Indeed, Radu went even further and suggested that the so-called economic benefits of general cloud computing may be illusory, especially if the cost of data transfer to and from the cloud outweighs the benefit of the outsourced computation. This was argued by measuring the cost of various operations in terms of pico-cents.

A related question, debated quite hotly, was whether any cryptographic method for computing on encrypted data could achieve widespread deployment and usage. As usual the "Danish sugar beet" auction was mentioned as the application of multi-party computation technology; but this is a niche application, and none of us could come up with a commercially compelling large-scale application of such techniques. In all cases there seemed to be impediments to deployment, often of a non-technical nature, such as: in whose interest would it be to make such systems commercially available?

Dagstuhl is a great place to visit; if you ever get an invite you need to go. The organization is a bit chaotic, but that is the charm. We made the programme up as we went along, and so could respond to the interesting topics that were arising. However, it is probably best not to go in the bleak mid-winter.

Asiacrypt 2011 Day 4

One talk I was interested in today was on the paper "Short signatures from weaker assumptions". In the paper, the authors first propose several new constructions of (m,1)-programmable hash functions (PHFs) for any m ≥ 1. They then show how to use the new (m,1)-PHFs to generically construct short yet efficient hash-and-sign signatures based on weaker hardness assumptions, namely the q-DH and RSA assumptions (note that before this work, the existing PHF-based short signatures relied on the stronger strong q-DH and strong RSA assumptions). The resulting signature schemes are secure in the standard model.

The concrete q-DH signature schemes are the first hash-and-sign schemes based on the q-DH assumption, and the proposed RSA signature schemes offer a considerable efficiency improvement over previous standard-model RSA-based signatures. Interestingly, the resulting schemes offer different trade-offs between signature size, efficiency and public-key size: the bigger the parameter m in the (m,1)-PHF, the larger the public key and the shorter the signatures. Therefore, to obtain extremely short signatures, the public key can get quite large.

Wednesday, December 7, 2011

Asiacrypt 2011 Day 3

There were two talks discussing secret sharing at Asiacrypt day 3: one on the paper "Computational verifiable secret sharing revisited" and the other on the paper "Natural generalization of threshold secret sharing".

In the former paper, the authors demonstrate that homomorphism of the commitments is not necessary for computational verifiable secret sharing (computational VSS), in either the synchronous or the asynchronous communication model. The authors consider computational VSS as a standalone primitive and propose new VSS schemes, based only on the definitional properties of commitments, that are almost as good as the existing VSS schemes built from homomorphic commitments.

In the latter paper, the authors search for new families of ideal access structures that naturally generalise threshold secret sharing, and they analyse the efficiency of the methods used to construct ideal secret sharing schemes for them. Using integer polymatroids, the authors propose a number of new families of ideal multipartite access structures.


Tuesday, December 6, 2011

ASIACRYPT

The first two days of Asiacrypt 2011 were really great. I met lots of people and discussed various topics beyond my own research area. Yesterday there were two talks that I liked very much. One of them was on deniable encryption, about which Ming has already blogged. The other talk was in the same session, on lossy encryption and selective opening attacks (SOA). The motivation for SOA comes from computationally secure multiparty computation (MPC) tolerating an adaptive adversary. In such protocols the adversary can see all the messages in the protocol (the messages exchanged between the honest parties are protected because they are encrypted) and, over the course of time, can decide which parties to corrupt (this strategy is called adaptive corruption). Ideally, if the messages of the individual parties are completely independent and the underlying encryption scheme is IND-CPA secure, then the security of the MPC protocol is preserved. However, in a typical MPC protocol the messages exchanged between the parties are not completely independent. In that case, the big question is the following:

suppose the adversary "selectively" obtains the messages, and the corresponding randomness used to encrypt them, of some of the parties in the protocol. Does the scheme then still protect the messages and randomness of the remaining honest parties (bearing in mind that the messages and randomness of the honest parties need not be independent of the opened ones)?

If an encryption scheme satisfies this property then it is said to be secure against "selective opening attacks" (SOA), and using such encryption schemes we can design computationally secure MPC protocols that tolerate adaptive adversaries.

The speaker (Benoit Libert) showed how to design lossy encryption schemes secure against SOA. Let me briefly describe what a lossy encryption scheme is. It can be viewed as a generalization of a "dual-mode encryption scheme", with two modes of encryption: a normal (injective) mode, in which there is a one-to-one correspondence between plaintexts and ciphertexts, and a "lossy" mode, in which there is no such association, so that the ciphertext is essentially independent of the message.

The speaker briefly outlined in his talk how to design lossy encryption schemes secure against SOA from general assumptions such as re-randomizable encryption, homomorphic encryption and oblivious transfer (OT). I really enjoyed the talk and found the concepts of lossy encryption and SOA very interesting. Actually, just a few days ago I asked Nigel whether it is possible to efficiently compute a message m' and corresponding randomness r' which result in a given ciphertext C, where C is also an encryption of a different message m under different randomness r. That is, given C = Enc(m, r), m and r, can one efficiently find another m' and r' such that C = Enc(m', r') = Enc(m, r)? He said that in general this is not possible to do efficiently, unless the encryption scheme is non-committing. I was not aware of this notion of non-committing encryption, and after hearing about it I started to read up on it. Listening to yesterday's talk on lossy encryption has further widened my interest in this topic. I think it will be a good topic for a study group.

Asiacrypt 2011 Day 2

The first talk at Asiacrypt day 2 was on "Bridging broadcast encryption and group key agreement". The work bridges two well-studied primitives, broadcast encryption and group key agreement, with a new hybrid primitive referred to as contributory broadcast encryption (CBE). In CBE, a group of members contribute to the public group encryption key, and a sender can securely broadcast to any subset of the group members, chosen in an ad hoc way. CBE incorporates the ideas of both broadcast encryption and group key agreement. In the set-up phase, a group of members interact over open networks to negotiate a common encryption key while each member keeps a different secret decryption key. Anyone can encrypt a message to any subset of the group members using the common encryption key, and only the intended receivers can decrypt the ciphertext. CBE allows the sender to exclude some group members from reading the ciphertext (in contrast to group key agreement) and does not need a fully trusted third party to set up the system (in contrast to broadcast encryption). Finally, a CBE construction proven semi-adaptively secure under the decisional BDHE assumption in the standard model is also proposed.

Monday, December 5, 2011

Asiacrypt 2011 Day 1

One of the interesting talks today was given by Peter Sebastian Nordholt on "Lower and upper bounds for deniable public-key encryption". Deniable cryptosystems allow a sender or receiver to deny a message exchange and thereby combat coercion: even if the adversary can threaten the communicating parties into revealing their internal states or parameters after the execution of the protocol, the cryptosystem remains secure under this kind of coercion. According to which parties can be coerced by the adversary, we can distinguish three kinds of deniability: sender deniability, receiver deniability and bi-deniability. A deniable public-key encryption scheme is a public-key encryption scheme which is deniable. The main contribution of this paper is to derive upper and lower bounds on how secure a deniable public-key encryption scheme can be as a function of its key size.

For the lower bounds, the authors have the following results:

  1. Receiver deniability: a public-key encryption scheme with l-bit keys can be at most 1/(2(l+1))-receiver-deniable

  2. Sender deniability: the authors do not know a non-trivial lower bound

  3. Bi-deniability: at most 1/(2(l+1))-bi-deniable.

For the upper bounds, the authors have the following results:

  1. Receiver deniability: let k be the length of the secret key; there exists a (1/n)-receiver-deniable public-key encryption scheme with key length l = O(n^2 · k).

  2. Sender deniability: let k be the length of the secret key; there exists a (1/n)-sender-deniable public-key encryption scheme with key length l = O(n · k).

  3. Bi-deniability: let k be the length of the secret key; there exists a (1/n)-bi-deniable public-key encryption scheme with key length l = O(n^4 · k).

Thursday, December 1, 2011

ICISC 2011: Day 1

Yesterday was Day 1 of ICISC 2011. The conference is making its 14th appearance this year, and this time it received the largest number of submissions in its history (128 in total), with an acceptance ratio of 25.2%. The emphasis on Day 1 was mostly on practical crypto: one session on side channels, one on network security and one invited talk on lightweight crypto. In addition, there was one session on hash functions and one on public-key cryptography. I was asked to chair the session on hash functions; this was the first time I have chaired a session and, interestingly, I was given $100 for it :)

I could not get much out of yesterday's talks, as most of them were on topics that are not my cup of tea. However, I did like the invited talk on lightweight cryptography by Thomas Peyrin; this was actually the first time I have listened to a talk on this topic. The speaker gave a very nice introduction to what lightweight cryptography means and what the major goals and challenges are. What I took from his talk is that the main goal is to design lightweight crypto primitives, specifically lightweight block ciphers and lightweight hash functions, which provide an appropriate level of security and speed while using very few resources (chiefly area), so that they can be used in applications such as RFID. Various guidelines for the design of primitives meeting these requirements were also discussed. The speaker also talked about one of his latest lightweight hash functions and a lightweight block cipher, published at CHES 2011 and CRYPTO 2011. However, I could not get much out of these parts as they became too technical for me. Moreover, I was suffering from severe jet lag (Seoul is around 9 hours ahead of Bristol time) and even now I am finding it difficult to adjust to the local time.

Overall, the first-day experience at ICISC 2011 was OK, but I am particularly looking forward to the last day's talks, as there are several related to protocols and theoretical cryptography and I am hopeful that they will be interesting.