[Cryptech Tech] fyi: papers from COST-IACR School on Randomness in Cryptography
=JeffH
Jeff.Hodges at KingsMountain.com
Sat Dec 3 00:05:01 UTC 2016
of possible interest...
Date: Sat, 26 Nov 2016 15:42:04 +0000
From: <dj at deadhat.com>
To: Ron Garret <ron at flownet.com>
CC: "Salz, Rich" <rsalz at akamai.com>, <dj at deadhat.com>, Cryptography
<cryptography at metzdowd.com>, Carl Ellison <cme at acm.org>
Subject: Re: [Cryptography] Is Ron right on randomness
>> No, the above #2 is not accurate. It does matter how good your
>> entropy source is. The leftover hash lemma gives you the
>> expression for the amount of entropy you can extract from entropy
>> sources - but it doesn't tell you how, and for the real constructions
>> the answer is worse. Subsequent papers give bounds for certain
>> specific extractors. This can be summarized as the 0.5 limit. If
>> your input data has less than 0.5 bits of entropy per bit of data,
>> your extractor is swimming upstream slower than the stream is
>> moving downstream.
>
> Reference please? Because this would be news to me (and, I think, a
> lot of other people as well).
This is extractor theory, which I'm by no means an expert in, but I did
just come back from the COST-IACR School on Randomness in Cryptography
(https://eventum.upf.edu/5484/speakers/cost-iacr-school-on-randomness-in-cryptography.html),
which was a week-long set of lectures, mostly on the current state of
extractor theory, from the professors who developed it, so I'm feeling
pretty clued in right now - that will pass.
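Since the leftover hash lemma is the bound being referenced above, here is
the statement I have in mind, written informally (my notation, the usual
constants):

    % min-entropy of the source
    H_\infty(X) = -\log_2 \max_x \Pr[X = x]

    % leftover hash lemma: hashing X with a randomly chosen 2-universal
    % H : \{0,1\}^n \to \{0,1\}^m gives an output within statistical
    % distance \varepsilon of uniform
    \Delta\!\left( (H, H(X)),\ (H, U_m) \right) \le \varepsilon
    \quad \text{whenever} \quad m \le H_\infty(X) - 2\log_2(1/\varepsilon)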
I've posted these before on this list, but the previous discussions
suggest they are not widely read. In particular:
[a]
Start at the start: you can't encrypt even one bit in the presence of a
weak source.
https://www.cs.nyu.edu/courses/spring06/G22.3220-001/pinkas.pdf
So you're going to have to do something else.
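One way to see why you have to do something else (this little argument is
mine, not taken from the paper): no single deterministic function can
extract even one nearly-unbiased bit from every source of high min-entropy.

    % take any candidate one-bit extractor
    f : \{0,1\}^n \to \{0,1\}
    % one of the two preimages covers at least half the cube
    \max\left( |f^{-1}(0)|,\ |f^{-1}(1)| \right) \ge 2^{n-1}
    % the flat source X, uniform on that preimage, has nearly full
    % min-entropy, yet f(X) is a constant
    H_\infty(X) \ge n - 1, \qquad f(X) \equiv \text{const}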
[b] But you can still do crypto:
https://people.csail.mit.edu/ronitt/COURSE/S08/drafts/22.pdf
[c]
The utility of your extracted random numbers depends on what you plan to
do with them:
https://www.cs.nyu.edu/~dodis/ps/weak.pdf
[d]
Bounds for CBC-MAC and HMAC as extractors - a.k.a. the 0.5 limit. Note that
those bounds are stated in terms of computational bounds on the adversary,
not min-entropy:
https://www.cs.nyu.edu/~dodis/ps/hmac.pdf
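As a back-of-the-envelope illustration of the swimming-upstream point - this
is just the generic leftover-hash-lemma sizing arithmetic from above, not the
CBC-MAC/HMAC bounds in the paper - the ratio of raw input to extracted output
blows up as the entropy rate drops:

    import math

    def raw_bits_needed(m_out, entropy_rate, eps=2.0**-64):
        """Toy sizing only: raw source bits needed to extract m_out
        nearly-uniform bits with a universal hash, using the leftover
        hash lemma requirement m_out <= k - 2*log2(1/eps)."""
        k_needed = m_out + 2 * math.log2(1 / eps)     # min-entropy required
        return math.ceil(k_needed / entropy_rate)     # raw bits at that rate

    print(raw_bits_needed(256, 0.5))   # 768 raw bits for 256 output bits
    print(raw_bits_needed(256, 0.1))   # 3840 raw bits - swimming upstream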
[e]
Last week's randomness school had a strong focus on developing chain
rules for conditional entropy under various entropy models (Min,
Shannon, Guessing, Collision, HILL, metric, metric*), but since the
authors of these papers were lecturing, I think I actually understood it.
There are a lot of papers to read, but this survey paper is a good place
to start:
https://www.cs.bu.edu/~reyzin/papers/entropy-survey.pdf
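To give a flavour of what a chain rule looks like in this setting, here is
the standard average-case min-entropy one (Dodis-Ostrovsky-Reyzin-Smith, my
paraphrase): leaking a lambda-bit value Z costs you at most lambda bits of
min-entropy.

    % average conditional min-entropy
    \tilde H_\infty(X \mid Z) =
      -\log_2 \mathbb{E}_{z \leftarrow Z}\!\left[ \max_x \Pr[X = x \mid Z = z] \right]

    % chain rule: if Z takes at most 2^\lambda values, then
    \tilde H_\infty(X \mid Z) \ \ge\ H_\infty(X, Z) - \lambda \ \ge\ H_\infty(X) - \lambda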
[f]
Multiple input extractors help you get past the 0.5 limit:
https://www.cs.nyu.edu/~dodis/ps/2-ext.pdf
http://www.boazbarak.org/Papers/msamples.pdf
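The simplest concrete example here is the Chor-Goldreich inner-product
extractor: given two independent n-bit sources whose min-entropies sum to
comfortably more than n, the GF(2) inner product of the two samples is close
to an unbiased bit. A toy one-bit sketch (mine, purely illustrative):

    import os

    def ip_extract_bit(x: bytes, y: bytes) -> int:
        """Inner product of x and y over GF(2): the parity of the bitwise AND.
        Chor-Goldreich: a two-source extractor when the inputs are independent
        and their min-entropies sum to (roughly) more than n bits.
        Toy sketch only - one output bit, no error accounting."""
        assert len(x) == len(y)
        acc = 0
        for xb, yb in zip(x, y):
            acc ^= xb & yb               # XOR of bytes preserves overall bit parity
        return bin(acc).count("1") & 1   # parity of the surviving bits

    # demo with two (here: already uniform) 256-bit samples
    print(ip_extract_bit(os.urandom(32), os.urandom(32)))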
[g]
Extractors have different bounds when you consider the entropy against
computationally bounded adversaries (that's us!), the general case being
HILL entropy.
https://eprint.iacr.org/2012/466.pdf
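For anyone who hasn't run into it, HILL entropy is just min-entropy up to
computational indistinguishability (informal statement, my notation):

    % X has HILL entropy at least k against size-s distinguishers
    % with advantage \varepsilon
    H^{\mathrm{HILL}}_{\varepsilon, s}(X) \ge k \iff
      \exists\, Y \text{ with } H_\infty(Y) \ge k \text{ such that }
      \left| \Pr[D(X) = 1] - \Pr[D(Y) = 1] \right| \le \varepsilon
      \text{ for every circuit } D \text{ of size at most } s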
Is there a chain rule from which you can derive the conditional entropy in
that setting? It's an open question, but this paper has counterexamples under
some simple assumptions, which is a vote for no:
https://eprint.iacr.org/2014/678.pdf
There were other cryptographers present who were not so sure.
There is a lot more, but these are good starting points.
DJ