[Cryptech Tech] Hardware entropy
Bernd Paysan
bernd at net2o.de
Sat May 17 12:33:02 UTC 2014
On Saturday, 17 May 2014 at 08:27:32, Joachim Strömbergson wrote:
> Yes external. PN avalanche noise sources are uncontroversial, are used
> in a lot of security and embedded settings, and will therefore be our
> base entropy source.
Yes, the disadvantage of external circuits: they are dead easy to tamper with
(replace the part with a pattern generator chip that produces no entropy, but
still passes the health check).
> Do you think that access to raw entropy should always be possible - even
> raw entropy used for mixing and by the CSPRNG to generate production values?
Yes, at least partly. You should be able to monitor the entropy quality with
external programs, which can do better than the internal health check, and you
should provide raw entropy to seed other PRNGs. The "no" part is that fully
monitoring the complete entropy input makes it possible to clone the state of
the CSPRNG, and that would be a bad thing. Suggestion: you can monitor one
entropy source in full, but all the other entropy sources go in unmonitored.
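A minimal C sketch of that policy (read_source(), monitor_fifo_push() and
pool_absorb() are made-up placeholders, not the actual Cryptech interface):

#include <stdint.h>
#include <stddef.h>

/* Hypothetical register glue -- names are invented for illustration. */
extern uint8_t read_source(int which);            /* raw byte from source 'which'  */
extern void    monitor_fifo_push(uint8_t sample); /* raw tap, readable by the host */
extern void    pool_absorb(const uint8_t *buf, size_t len); /* cryptographic mixer */

#define N_SOURCES 4

/* One gathering round: source 0 is additionally exposed raw, so external
 * programs can run better statistical tests than the internal health check.
 * The remaining sources are absorbed unobserved, so recording the monitor
 * tap alone is not enough to reconstruct the pool state. */
void gather_round(void)
{
    uint8_t buf[N_SOURCES];

    for (int i = 0; i < N_SOURCES; i++) {
        buf[i] = read_source(i);
        if (i == 0)
            monitor_fifo_push(buf[i]);   /* the one monitored source */
    }
    pool_absorb(buf, N_SOURCES);
}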
> There have been discussions about allowing injection of entropy from a
> separate source. I can accept that if it is treated as just an entropy
> source that is mixed with the other sources. I really am wary of RNGs that
> allow mixing just before the RNG or at the random number output stage.
The injected entropy potentially comes from an attacker, so you have to make
sure it can't do harm. It can be just another entropy source, but it should not
be possible to gain control over the entire entropy pool.
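In code, using the same made-up pool_absorb() mixer as in the sketch above,
the policy is just:

#include <stdint.h>
#include <stddef.h>

/* Same hypothetical mixer as in the earlier sketch. */
extern void pool_absorb(const uint8_t *buf, size_t len);

void inject_entropy(const uint8_t *data, size_t len)
{
    /* Host-injected "entropy" goes through the same absorb path as the
     * hardware sources.  Since a hostile host does not know the pool
     * state, even fully chosen input only stirs the pool further; in the
     * worst case it adds zero entropy.  What must never exist is a path
     * that lets host data replace the pool or the stretcher key directly. */
    pool_absorb(data, len);
}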
> > ... SRAM as entropy source also works for embedded controllers: You
> > can gather the entropy of the SRAM at startup, and then run a DRNG
> > with enough internal state for the rest of the operation (make sure
> > it is a history-keeping DRNG, see below).
>
> ;-) You should check the @tech archives - we had some lively
> discussions on this a few months back. I really think using SRAM as an
> entropy source is an interesting idea; few seem to agree, though there
> are some papers that show some promising results.
>
> The solution I want to test is to have a small, external SRAM outside
> the FPGA. Using a MOSFET switch, toggle the power to the SRAM and, between
> power cycles, read out the contents of the memory. The contents are then
> hashed to get a value out. Due to bias there will be memory cells that
> always end up in the same state, but hopefully enough of them will not.
The problem with switching the power is that you have to wait quite some
time until the cells toggle. It depends on leakage, of course, but if you use a
small SRAM, it is usually made in an ultra-low-leakage process (small SRAMs are
optimized for power consumption), and that means you have to wait ~15
seconds before you restart it. The smaller the geometry, the faster the decay,
of course (less charge on the gates, higher leakage).
The modified SRAM I suggest is only viable for an ASIC, but it puts the
SRAM cells into a metastable state right on the spot (i.e. nanoseconds instead
of seconds).
> The good points with this entropy source are that it is cheap, easy to
> build, has a simple, digital interface and should be possible to toggle
> fairly frequently, yielding good capacity. Also, as long as the interface
> is protected against probing and/or injection, it is fairly robust
> against manipulation.
Yes. That is the actual problem with external SRAMs: the interface is
difficult to protect. Same problem as with the external diode. There is a
reason why secure chips are ASICs, not FPGAs. The internal source therefore
is very important, because it is the only one that is difficult for an
attacker to get under control.
> A variant of this is to actually write random patterns into the memory
> before powering it off. Basically, have a fairly good PRNG (SipHash for
> example), seeded with some of the values generated by the main CSPRNG,
> generate the data written into the memory.
That might be a good idea during operation, but for debugging, it just
obfuscates potential problems with the SRAM. Therefore, I don't think you
should do that. Filling with a known pattern and checking for the toggled bits
is healthier.
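A hedged sketch of what I mean (the sram_* and delay_ms functions and the
size/timing constants are invented placeholders):

#include <stdint.h>
#include <stddef.h>

/* Invented glue to the external SRAM and its MOSFET power switch. */
extern void    sram_power(int on);
extern void    sram_write(size_t addr, uint8_t val);
extern uint8_t sram_read(size_t addr);
extern void    delay_ms(unsigned ms);

#define SRAM_SIZE   2048u     /* bytes -- made-up figure            */
#define OFF_TIME_MS 20000u    /* long enough for the cells to decay */

/* Fill with a known pattern, power-cycle, count toggled bits.
 * Too few flips means the cells did not decay (or the power switch was
 * bypassed); a flip count far outside the expected window is suspicious
 * either way, and real SRAM problems stay visible while debugging. */
unsigned sram_health_check(void)
{
    unsigned flips = 0;

    for (size_t a = 0; a < SRAM_SIZE; a++)
        sram_write(a, 0xAA);

    sram_power(0);
    delay_ms(OFF_TIME_MS);
    sram_power(1);

    for (size_t a = 0; a < SRAM_SIZE; a++)
        flips += __builtin_popcount(sram_read(a) ^ 0xAA);  /* GCC/Clang builtin */

    return flips;   /* caller compares against the expected window */
}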
> > All the Merkle-Damgård constructs have known structural weaknesses;
> > that was the reason for starting the SHA-3 competition... so they are
> > all broken, at least in theory. There's no practical attack on
> > SHA-512, though, but SHA-3 doesn't even have the theoretical
> > weakness. The comfort level with "old and proven to have a
> > structural weakness" should be lower...
>
> We are probably reading the results from the SHA-3 compo a little bit
> differently. I followed the compo quite closely (in fact I actually
> supplied one of the papers that tested implementations of Keccak in
> different FPGA devices as part of the compo).
>
> If I understand the big guys correctly, the worries that triggered the
> compo have been dampened during the compo. And based on the knowledge we
> have today, the trust in SHA-256 and esp. SHA-512 is higher today than
> before the compo. Several of the finalists, Blake and Keccak, seem
> really good. The way NIST has handled Keccak after the completion of
> the compo, less so.
Hm, the NIST fnord after the competition is something that significantly
reduced my trust in Bruce Schneier. There was this leaked presentation where
they were talking about security level (128 and 256 bits), and even Bruce
Schneier confused it with hash value length (which needs to be twice as long,
due to collisions).
This happened right after Bullrun was disclosed, so I suppose people were
ultra-nervous and got things wrong. The Keccak authors had to straighten this
out, and even Bruce Schneier had to correct himself. But apparently nobody
read these clarifications. And that makes me wonder whether I should trust
people who only read the headlines, and don't read the follow-ups and
corrections that appear in the days after.
What the NIST did propose (and what does affect security) was to halve the
residual state size, which reduces the security against preimage attacks to
the same level as against collision attacks under the Keccak security proof
assumptions (see the last paragraph of this reply).
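(For concrete numbers, assuming I read the flat sponge claim correctly: a
sponge with capacity c claims a generic security level of c/2 bits against
all attacks. SHA3-512 as submitted has c = 1024, i.e. a 512-bit preimage
level; cutting c to 512 drops that to 256 bits, which is the collision level
you get anyway from the 512-bit output.)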
If the NSA leaked this presentation on purpose to ruin SHA-3's reputation,
they had a big success. And as this was an internal NIST presentation, they
are the most likely source of the leak. Keccak is a significant advance in
hashes and encryption (it is a universal primitive), so ruining its reputation
is a good thing for the NSA.
About the strength of SHA-256 and SHA-512: the structural weakness of
Merkle-Damgård didn't go away. However, there has been no progress in
crafting an attack against them. That doesn't mean anything. Two things speak
against using them anyway: the way the NSA breaks MD5, and the way they break
RC4. They disclosed their ability to create preimage attacks on MD5 (which
nobody else knew how to do at that time) in one of these NSA worms, and they
break RC4 in real time without big server farms (that's the semi-official
statement). Compare this to the RC4 attacks from Bernstein et al.: those
attacks are impractical.
This sort of discrepancy in attack power has to be kept in mind: even if we
don't know how to break SHA-256 but do know a theoretical weakness, we should
stop using it right now. We were in that situation with RC4 for years, and
didn't stop using it.
To put it straight: if you apply the proof of Keccak's security to any
Merkle-Damgård construct, the security level is 0 bits. Anybody who is fussing
about NIST trying to reduce the residual size for SHA3-512 to 512 bits
(security level 256 bits, i.e. "requires a deliberately constructed parallel
universe to be broken") should not even consider SHA2-512 as an option,
because under this proof's assumptions, SHA2-512 has a security level of 0
bits.
> Yes and no. If you don't have good, unpredictable seed values after
> mixing then you are in dire straits anyway.
One thing you absolutely must do is feed back the output of the mixer as one
entropy source. That way you aren't in dire straits later, as long as you had
enough entropy at startup.
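A hedged sketch of that feedback, with SHA-256 standing in for whatever
conditioner is actually used (the names are mine, not the real design):

#include <stdint.h>
#include <string.h>
#include <openssl/sha.h>   /* SHA-256 as a stand-in conditioner */

/* Conditioner with output feedback: each call hashes the previous output
 * together with the fresh raw samples.  Once the state was unpredictable
 * (good entropy at startup), a later stretch of weak or even
 * attacker-known input cannot make the seeds predictable again. */
void condition(const uint8_t *fresh, size_t len,
               uint8_t seed[SHA256_DIGEST_LENGTH])
{
    static uint8_t state[SHA256_DIGEST_LENGTH];  /* carried across calls */
    SHA256_CTX ctx;

    SHA256_Init(&ctx);
    SHA256_Update(&ctx, state, sizeof state);    /* feed back last output */
    SHA256_Update(&ctx, fresh, len);             /* new entropy samples   */
    SHA256_Final(seed, &ctx);

    memcpy(state, seed, sizeof state);           /* remember for next round */
}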
> This mirrors the discussions
> Matthew Green and IanG have had, and it basically boils down to this: if one
> has a key stretcher (CSPRNG) like most modern RNGs in OSes today do, then as
> long as you trust that the seed is good enough, using a good stream
> cipher should generate good, unpredictable random numbers.
Yes, but there are ways you can still be fine even if you can't trust the
seeds for long (only for long enough to generate initial entropy or a device-
specific secret fingerprint).
> What DJB has been arguing is that if the stream cipher is trusted,
> you really never need to reseed either, and in fact it might be
> counterproductive.
Yes.
> For me it is also important to try and innovate within the bounds of
> what a lot of people think are reasonable and good, mature solutions.
> Not to stray too far from common practice.
The common practice is from a NIST standard that was written by the NSA. Stop
using it (or at least read it through with the assumption that the NSA put in
more than just Dual EC DRBG). The separation of conditioner and stretcher
looks like it has been deliberately made to obfuscate weak entropy sources.
You can make this approach healthy by feeding back the mixer output into the
mixer (that's what Linux does), or by combining the two parts and using a
sponge function for mixing and stretching.
> That is why I want to have multiple sources, having a two-stage DRBG
> with a mixer/conditioner followed by a stream-cipher-based key stretcher
> that generates the values. Using ChaCha/XChaCha is a bit controversial
> and AES-CTR has been suggested. I think there is enough trust in
> Salsa20/ChaCha to use it. And what I like about ChaCha is that I can
> turn the knob, increase the number of rounds to get better security and
> still get good performance even with many more rounds than even DJB
> thinks is conservative.
Also, it is fairly trivial to modify ChaCha so that it becomes a sponge
function with a residual capacity (by taking only half of the output and
copying the other half into the input). Remember: the security proof of
Keccak's sponge function makes an (unrealistic) assumption which renders the
security level of all these other approaches exactly 0 bits (they have no
unpredictable, changing internal state - all state changes either predictably
or not at all). And halving the number of these bits (from 1024 to 512) in an
internal NIST slide caused people to go crazy. Or maybe that wasn't even
realized, as people had already gone crazy at being reminded that a 256-bit
hash has only 128 bits of collision security, and apparently don't understand
the value of this internal state.
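To illustrate, here is a hedged C sketch of that "half out, half fed back"
idea on top of the unmodified ChaCha double round. The construction and the
names are mine, it has not been analyzed as a sponge, and it is only meant to
show the structure: 8 of the 16 state words are the exposed rate, the other 8
stay hidden as the residual capacity.

#include <stdint.h>

#define ROTL32(v, n) (((v) << (n)) | ((v) >> (32 - (n))))

/* Standard ChaCha quarter round (unchanged). */
#define QR(a, b, c, d) do {                    \
        a += b; d ^= a; d = ROTL32(d, 16);     \
        c += d; b ^= c; b = ROTL32(b, 12);     \
        a += b; d ^= a; d = ROTL32(d,  8);     \
        c += d; b ^= c; b = ROTL32(b,  7);     \
    } while (0)

/* The ChaCha core as a plain permutation of the 16-word state
 * (no feed-forward addition here -- that is part of the modification). */
static void chacha_permute(uint32_t x[16], int rounds)
{
    for (int i = 0; i < rounds; i += 2) {
        QR(x[0], x[4], x[ 8], x[12]); QR(x[1], x[5], x[ 9], x[13]);
        QR(x[2], x[6], x[10], x[14]); QR(x[3], x[7], x[11], x[15]);
        QR(x[0], x[5], x[10], x[15]); QR(x[1], x[6], x[11], x[12]);
        QR(x[2], x[7], x[ 8], x[13]); QR(x[3], x[4], x[ 9], x[14]);
    }
}

#define RATE_WORDS 8   /* 256-bit exposed half; the other 256 bits stay hidden */

typedef struct { uint32_t s[16]; } cc_sponge;   /* seed/initialize s[] before use */

/* Absorb one block: XOR it into the exposed half, then permute. */
void cc_absorb(cc_sponge *sp, const uint32_t blk[RATE_WORDS])
{
    for (int i = 0; i < RATE_WORDS; i++)
        sp->s[i] ^= blk[i];
    chacha_permute(sp->s, 20);     /* turn the knob up if you want more rounds */
}

/* Squeeze one block: permute, then reveal only the exposed half.
 * The hidden half is the residual capacity: even someone who sees every
 * output block cannot reconstruct or clone the full internal state. */
void cc_squeeze(cc_sponge *sp, uint32_t out[RATE_WORDS])
{
    chacha_permute(sp->s, 20);
    for (int i = 0; i < RATE_WORDS; i++)
        out[i] = sp->s[i];
}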
After the SHA-3 competition started, I looked at the submissions and said "oh
my god, even though Merkle-Damgård is broken, most of them are just MD
variants!", and decided to make my own hash function as a fallback (with
residual state and a sponge function design). I was surprised that the SHA-3
winner then was Keccak, because I had thought they were too conservative to
choose that design. But in this case, being conservative is simply wrong.
Merkle-Damgård is broken by design; you either have to use it in wide-pipe
mode (which creates the necessary residual capacity), or switch to another
approach.
> Regarding FPGA internal entropy sources (did you read the report?)
Yes. I'm not convinced that this works well on all FPGAs; free-running
ring oscillators that don't stop are more likely to work everywhere (you'll
have to change the readout frequency depending on jitter; unfortunately, the
relation is that with half the jitter, you have to wait 4 times as long).
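(The square comes from jitter accumulating like a random walk: assuming
roughly independent per-period jitter, the accumulated phase uncertainty
after N oscillator periods is about sigma_N = sigma_period * sqrt(N), so
reaching the same sigma_N with half the per-period jitter takes 4 times as
many periods, i.e. a quarter of the readout rate.)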
> I've decided to do a simple test run. Something to do in a weekend. ;-)
--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://bernd-paysan.de/