Thursday, February 27, 2014

To be fair, go geometric

There have been many topics at TCC this year. Some 'deobfuscated' their complex meaning in nice presentations, others proudly 'leaked' some FHE on their way, inferences from coding theory dared to appear, and, not surprisingly, fans of zero knowledge were fed appropriately. But for me, without a doubt, the best of them all, with three dedicated sessions, no less (more than ZK got), were the talks on secure computation.

In this post I commit myself to one of them, which I think was very well presented and, most importantly, came with an ingenious idea. It was given by Gilad Asharov from Bar-Ilan, and dealt with an important aspect of secure function evaluation: fairness. Informally, this property says that if any one party is going to learn the output, then every party should learn it. When a majority of the parties are honest, well-known fair protocols exist, but the situation is different when a majority of the parties behaves dishonestly, e.g., in 2PC: Cleve showed that unbiased coin tossing cannot be achieved with fairness, and hence neither can the XOR function (a fair XOR on uniformly random inputs would give you a fair coin toss). Of course, in practice this is not really an issue: the fact that you don't get to learn the intended outcome doesn't mean you are unable to certify the validity of whatever comes out of the protocol; just abort if you are not convinced. Nevertheless, the question of which functions can, and which cannot, be computed fairly still stands. It seems that almost no functionality can be realized fairly. You reason like this: to implement it you are likely to design a round-based protocol, but by that mere fact you are introducing asymmetry in the parties' knowledge, because it is hard to imagine how you could make all parties gain the same information simultaneously within each round.
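To see that asymmetry concretely, here is a toy sketch, entirely my own illustration and not any real protocol: two parties naively exchange their input bits for XOR in turns, and whoever receives first learns the output and can abort before replying.

```python
# Toy illustration (my own, not a real protocol): a naive round-based
# XOR exchange. Whoever receives the other's bit first learns the output
# and can abort before replying -- the asymmetry described above.
import random

def naive_xor_exchange(p2_aborts=False):
    b1, b2 = random.randint(0, 1), random.randint(0, 1)
    # Round 1: party 1 sends its bit to party 2.
    p2_output = b1 ^ b2          # P2 already knows the result...
    if p2_aborts:
        return None, p2_output   # ...and may quit before round 2.
    # Round 2: party 2 sends its bit back to party 1.
    p1_output = b1 ^ b2
    return p1_output, p2_output

print(naive_xor_exchange())                # both parties learn b1 ^ b2
print(naive_xor_exchange(p2_aborts=True))  # only P2 learns it
```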

This belief changed in 2008, when it was shown that some functions (other than the easy ones, i.e., constant functions or those with input from, or output to, only one party) can be computed with fairness. But wait a moment, didn't we just argue that fairness is hard? How do they do it then? Well, let's stick to what Cleve told us: you cannot do fair XOR with a dishonest majority, ok, but can you achieve fairness for something which has an embedded XOR gate? It turns out you can, and the key to seeing this lies in observing that, depending on the structure of the function, the simulation can be made to go through: it is possible whenever the simulator can choose the input sent to the trusted party. In particular, they gave a concrete function which can be computed fairly. Too few, though.
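To make the simulator's freedom concrete, here is a toy illustration of my own (I don't claim this is the exact function from the 2008 paper; it is just the simplest table of that flavor): a 3-by-2 truth table that embeds XOR in its first two rows. For every output bit and every input of party 2, the simulator has at least one input of party 1 it could send to the trusted party to explain that output.

```python
# Toy illustration (my own reconstruction; the exact function in the
# 2008 paper may differ): a 3x2 truth table that embeds XOR.
# Rows are party 1's inputs x1, x2, x3; columns are party 2's y1, y2.
f = {
    ("x1", "y1"): 0, ("x1", "y2"): 1,  # rows x1 and x2 alone
    ("x2", "y1"): 1, ("x2", "y2"): 0,  # are exactly XOR...
    ("x3", "y1"): 1, ("x3", "y2"): 1,  # ...row x3 breaks the symmetry.
}
# The simulator's freedom: for every output bit b and every input y of
# party 2, there is at least one input x of party 1 that "explains" b.
for b in (0, 1):
    explaining = {y: [x for x in ("x1", "x2", "x3") if f[(x, y)] == b]
                  for y in ("y1", "y2")}
    print(b, explaining)
```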

The wit of this paper is that it identifies which functions are suitable for this simulation strategy. In more detail, the authors show that, provided oblivious transfer exists, any boolean function taking two inputs, on domains of size l and m respectively, can be computed fairly if its truth table defines a full-dimensional convex hull in R^m. This result also extends to the non-binary case. Additionally, they show that boolean functions sampled at random from those whose two inputs come from domains of different sizes define a full-dimensional convex hull with overwhelming probability, and are therefore fairly computable. There are many such functions, including those for set membership or secure evaluation of a private (boolean) function.
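Here is a minimal sketch of how one might test the geometric criterion, assuming (as the statement above suggests) that a "full-dimensional convex hull in R^m" means the l rows of the truth table, viewed as points in R^m, affinely span the whole space, which reduces to a rank computation; the sample tables are my own illustrations.

```python
# A minimal sketch, assuming the criterion reduces to checking that the
# rows of the truth table, seen as l points in R^m, affinely span R^m
# (i.e., the differences from one fixed row have rank m). The sample
# tables below are my own illustrations.
import numpy as np

def hull_is_full_dimensional(truth_table):
    """Rows index party 1's inputs, columns party 2's; entries are f(x, y)."""
    T = np.asarray(truth_table, dtype=float)
    m = T.shape[1]
    diffs = T[1:] - T[0]                      # vectors from the first row
    return np.linalg.matrix_rank(diffs) == m  # full-dimensional iff rank m

xor_table    = [[0, 1],
                [1, 0]]       # plain XOR: the hull is a segment in R^2
embedded_xor = [[0, 1],
                [1, 0],
                [1, 1]]       # one extra row turns the hull into a triangle

print(hull_is_full_dimensional(xor_table))     # False, matching Cleve
print(hull_is_full_dimensional(embedded_xor))  # True: fairly computable
```

Note how plain XOR fails the test, exactly as Cleve's impossibility demands, while adding a single extra row already makes the hull full-dimensional.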
