[Cryptech Core] Fwd: chongo background
Basil Dolmatov
dol at reedcat.net
Thu Jul 23 14:44:11 UTC 2015
Glad to see that my voice about all this crap proudly called "random sources" is not a lonely one in the desert ;)
dol@ from iPad
> On 23 Jul 2015, at 15:49, Randy Bush <randy at psg.com> wrote:
>
> From: Rich Salz <rsalz at akamai.com>
> To: Randy Bush <randy at psg.com>
> Subject: chongo background
> Date: Thu, 23 Jul 2015 12:07:25 +0000
>
> Here's some background on random numbers so you can see if you want an
> intro to him.
>
> --
> Senior Architect, Akamai Technologies
> IM: richsalz at jabber.at Twitter: RichSalz
>
>
> -----Original Message-----
> From: Landon Curt Noll (chongo) [mailto:chongo at cisco.com]
> Sent: Saturday, August 09, 2014 4:39 AM
> To: John Serafini; Steve Marquess; Matt Thoms
> Cc: Salz, Rich; Claudio DeSanti (cds)
> Subject: Re: QRNG
>
> Hello John and Matt and Steve,
>
> ... and Hi R$! Thanks for the introduction.
>
> These topics are certainly of interest, either with or without my Cisco
> hat. Let me put my Cisco hat both on and off, quantum mechanically, at
> the same time. Just don't measure my hat state. :-)
>
> ---
>
> A few general thoughts:
>
> We have devoted some reasonable computational resources to testing the
> cryptographic quality of the output of PRBGs and NDBGs. Some (but not
> enough) attention has been paid to testing entropy sources. Threat
> models where cryptography plays an important role often fail when you
> trace data/info back to its origin.
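
As a minimal sketch of the kind of first-pass output testing meant here (a NIST SP 800-22 style frequency/monobit test; the "generators" below are stand-ins, not any product we tested):

```python
import math
import random

def monobit_pvalue(bits):
    """NIST SP 800-22 style frequency (monobit) test: p-value for the
    hypothesis that ones and zeros are balanced in the bit stream."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc((abs(s) / math.sqrt(n)) / math.sqrt(2))

random.seed(1)
# Stand-in "generators": one badly biased (75% ones), one fair.
biased = [1 if random.random() < 0.75 else 0 for _ in range(10000)]
fair   = [random.getrandbits(1) for _ in range(10000)]

print(monobit_pvalue(biased))  # effectively 0.0: the bias is glaring
print(monobit_pvalue(fair))    # typically a non-tiny p-value
```

A real evaluation runs a whole battery of such tests over large samples; a single test like this only catches gross defects.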
>
> In quite a few cases (keys, IVs, nonces, etc.) one encounters a less than
> ideal generator, seed or source of entropy. And while an enormous
> amount of computational analysis might go into a hash function, block
> cypher or cryptographic mode, the amount of attention paid to PRBGs and
> NDBGs is embarrassingly trivial. This needs to be fixed.
>
> Lack of testing, lack of a threat model, APIs that don't report failure,
> lack of real-time QA, lack of sound entropy measurement, improper reuse
> of entropy, failure to respond to potential problems in design, ego
> getting the way of examining bug reports, lack of independent testing,
> lack of an entropy rate budget: these are just some of the challenges
> applications that depend on cryptography face today.
>
> Paying attention to the warning signs of PRBGs and NDBGs has been a
> significant problem.  While slight concerns about a theoretical attack
> on a hash function might doom use of that hash function, showing that
> the output of a PRBG is not cryptographically sound is too often met
> with indifference.  For example:
>
> [[Disclaimer: Most of these examples were NOT tested with my Cisco hat on]]
>
> On RSA Dual_EC_DRBG:
>
> We found conditions under which it was not cryptographically sound
> ... years ago.  Some debate whether RSA allowed flaws to be introduced
> into Dual_EC_DRBG or whether RSA was encouraged to keep those flaws in
> Dual_EC_DRBG.  On that debate we have no hard data.  What we do know
> is that our conversation with RSA was about how we failed to show
> that Dual_EC_DRBG was cryptographically sound. RSA's response to us
> was "well, don't use it then if you don't like it" and "it is NIST
> certified and it passes CC, you must have made a mistake". In
> hindsight we should have pressed the point.
>
> On recent FIPS certification and CC:
>
> We will say that to this day, someone playing the FIPS or Common
> Criteria card is a BIG red card in our book. :-) FIPS testing, as
> implemented by several testing labs a few years ago, actually forced
> the manufacturer to reduce the quality of a generator due to an
> improper reading of a test criterion.  And the flaws we have found
> that passed CC suggest that even very high Common Criteria allows
> for fatally flawed services to be approved.
>
> While such certifications might be nice for marketing purposes, they
> should be IGNORED when it comes to testing the quality of the data
> produced.
>
> On a certain unnamed Swiss quantum random generator:
>
> We tested a so-called "quantum random generator" from a Swiss
> company that claimed "quantum perfection".  Our tests showed it
> wasn't.  We chose not to consider it for our products.  They chose to
> ignore us because their perfection was "quantum guaranteed" (their
> words). [[To our knowledge, they are no longer a company. Perhaps
> that is a good thing]].
>
> On Yarrow and Fortuna:
>
> I would raise a BIG caution with regards to Yarrow in any of its
> forms. Yarrow has never been found to be cryptographically sound.
> And while it may be possible to fix it, those responsible for the
> various incarnations of Yarrow have failed to incorporate any
> changes that allow it to pass. Yarrow was promoted for a very long
> time while they were aware of our results.
>
> Now a team is promoting Fortuna. We were asked by the BSD folks to
> evaluate Fortuna and raised a few concerns with the architecture.
> We have not yet evaluated a Fortuna implementation. We plan to.
> The BSD folks plan to pre- and post-process Fortuna to harden it
> against certain architectural concerns.  We plan to test their
> results when they are ready to provide us with data.  While we remain
> hopeful that Fortuna will prove better than Yarrow, we don't know if
> it will be sufficient to be cryptographically sound.
>
> On services in Linux:
>
> And we need not mention that Linux's /dev/random should not be used
> for cryptographic purposes. It is a long and sad history of flaws.
> They get credit for attempting to fix things, and demerits for the
> overall design.
>
> And it goes without saying that /dev/urandom is even worse and
> should not be used except perhaps to play a kids game. :-)
>
> On data produced by Microsoft .NET cryptographic module:
>
> Without violating the NDA: we can say that the "punch line" included
> a response from the development team that included "we did not
> intend for anyone to actually test the output".  We hope they
> eventually stick our feedback where the sun does shine. :-)
>
> On Intel's RdRand:
>
> We plan to take a long hard look at Intel's RdRand.  Intel failed to
> implement our requested inspection points into their service.  And from
> what we have seen under NDA, we remain in a "we shall just have to see
> about that" frame of mind.  Intel chose to not implement what was
> needed for anyone to perform white-box real-time inspection for
> RdRand. Part of that failure may have been due to a management
> change during the design time. We hope for the best and plan to
> test for the worst.
>
> Yes, we have found services that appear to be both cryptographically
> sound and fit within a given threat model. Just not so in the general
> case, I'm sorry to say.
>
> ---
>
> Random number generation is hard. Hardware random number generation is
> hard and tricky. Maintaining a good entropy source is tricky. Real
> time testing of entropy sources is tricky. Converting entropic data
> into cryptographically sound data requires care. However for most
> threat models where cryptography plays an important role, ALL of these
> are vital to the model.
>
> Depending on your threat model, some PRBGs have been shown to be
> surprisingly good. And while their small seed size dooms them for most
> cryptographic use, they do show that complexity does not always
> correlate well with soundness.
>
> ---
>
> One area where PRBGs, NDBGs, and entropy sources fail is in the API.
> The lack of the ability to return an error condition is too often a
> fatal API flaw that dooms an implementation.  Doing something reasonable
> when these services are able to return a failure is too often lacking in
> applications that depend on PRBGs, NDBGs, and entropy sources. Too many
> APIs have no way to indicate the data returned is garbage. Too many
> apps blindly use the output without checking for errors even when the
> API offers such conditions.
>
> We recommend a great deal of attention be paid to ensuring that
> application code can make use of error conditions and fail properly
> (without letting the attacker understand that they, for example, may
> have been able to compromise the integrity of the source). We recommend
> that considerable attention be paid to QA of the API. We recommend that
> teams not involved in the implementation develop and test the
> application's reaction to API errors.
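
To make the recommendation concrete, here is a minimal sketch (hypothetical names, in Python) of a source API that can report failure, and an application that fails closed when it does:

```python
import os

class EntropySourceError(Exception):
    """Raised when the source cannot vouch for its own output."""

def read_entropy(nbytes: int, health_ok: bool) -> bytes:
    """Hypothetical wrapper: hand back data only if the source's
    real-time health check passed.  Returning garbage with no error
    indication is exactly the API flaw being criticized."""
    if not health_ok:
        raise EntropySourceError("entropy source failed its health test")
    return os.urandom(nbytes)

# The application must be written (and independently tested) to react:
try:
    key = read_entropy(32, health_ok=False)
except EntropySourceError:
    key = None  # fail closed: refuse to start the session

assert key is None
```

The point is not the particular names, but that failure is expressible, the caller handles it, and the failure path itself gets tested.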
>
> ---
>
> Another area where we recommend attention be paid is not letting ego
> get in the way of improvement.  In a number of the above-mentioned
> cases, an ego or egos reacted poorly to reports that were privately
> given to them.
> Denial and defensive reaction appeared to get in the way of improving or
> fixing a flaw.
>
> The BSD folks were probably the least ego-bound to their design.  Some
> of them seemed to be very interested in fixing and improving their code.
>
> For our part, we tended to err on the side of being nice.  For example,
> the bad reaction by RSA to our Dual_EC_DRBG report caused us to simply
> do as they suggested: not use their code.  In hindsight, we should not
> have walked away from pressing the issue given how widely used BSAFE
> was at the time.
>
> ---
>
> We think a group of independent people should devote resources to
> continual testing and examination of PRBGs, NDBGs, and entropy sources.
> We do it as a sideline part-time service for internal use.
>
> A CERT-like service needs to be a clearinghouse for those willing to
> test.  Developers of PRBGs, NDBGs, and entropy sources should be
> encouraged to make available a significant amount of generated data to
> allow for extensive inspection.  Reports of concern should be conveyed
> to the developers and published in a responsible way, especially if the
> concern is confirmed and not properly corrected.
>
> ---
>
> On entropy sources:
>
> Computers, while they make great deterministic state machines, are poor
> at being "spontaneous".  They need entropic sources for nearly all
> applications that involve cryptography. In the case of communications,
> each end-point needs their independent and secure channel to a quality
> source.
>
> Looped entropy is a problem for nearly all threat models that involve
> communication. That is, a high quality secure entropy source at one
> endpoint may be of little help if another end point has only poor or
> insecure sources. For example, a cryptographic key exchange between two
> nodes where one side is highly deterministic will likely fail under most
> threat models.
>
> While some random sources may be cryptographically unsound, they MIGHT
> be reasonable sources of entropy provided that the data they produce is
> properly whitened before being used.
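
One simple way to do such whitening (a sketch only, assuming the raw source delivers far more entropy than the output length; not an endorsement of any particular construction) is to compress raw samples through a cryptographic hash:

```python
import hashlib

def whiten(raw: bytes, out_len: int = 32) -> bytes:
    """Compress raw, possibly biased entropic samples with SHA-256.
    The input must carry substantially more than out_len * 8 bits of
    real entropy; hashing removes bias but never creates entropy."""
    if len(raw) < 4 * out_len:
        raise ValueError("refusing: too little raw input for the output")
    return hashlib.sha256(raw).digest()[:out_len]

whitened = whiten(bytes(range(128)))  # 128 raw bytes -> 32 whitened bytes
```

Note the refusal path: if the source cannot deliver enough raw material, the whitener should fail rather than stretch what little it has.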
>
> Quality sources of entropy are extremely important to nearly all threat
> models that involve cryptography.  Providing quality entropy from a
> source that is statistically sampled in real time, at the rate required
> by the application, is very important.
>
> Something for the OpenSSL folk to consider: how many bits of randomness
> are required / consumed by each end when establishing an OpenSSL session
> with reasonable parameters. That in turn will drive the entropy budget
> requirements of each end.
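
As a back-of-the-envelope illustration of such a budget (byte counts for a TLS 1.2 RSA key exchange, per RFC 5246; other versions and cipher suites differ, and this ignores session IDs, IVs, etc.):

```python
# Randomness consumed per TLS 1.2 handshake with RSA key exchange
# (figures from RFC 5246; treat them as illustrative only):
client_random_bits = 28 * 8  # 28 random bytes in ClientHello.random
server_random_bits = 28 * 8  # 28 random bytes in ServerHello.random
premaster_bits     = 46 * 8  # 46 random bytes in the premaster secret

client_budget = client_random_bits + premaster_bits  # client makes both
server_budget = server_random_bits

print(client_budget, server_budget)  # 592 224: hundreds of bits per end
```

Multiply by the connection rate and you have the minimum entropy rate each end's source must sustain.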
>
> One common flaw we encounter is when entropic data is improperly reused.
> Too often an architecture will build a pool, dribble several questionable
> bits of entropy into the pool, extract data, re-mix (in a deterministic
> fashion), extract more data, mix again (more deterministic processing)
> and extract even more data. The resulting data that is extracted is
> highly correlated to the original pool state and the few quasi-entropic
> bits dumped into it.
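
A toy model of that failure (hypothetical pool, for illustration): deterministic remixing cannot stretch a few entropic bits into more, so an attacker who can afford to guess the small amount of real entropy recovers the pool state and every later extraction.

```python
import hashlib

def pool_outputs(seed_bits: int, n: int):
    """Toy pool: state starts from a small entropic seed, then is
    'remixed' deterministically between extractions."""
    state = seed_bits.to_bytes(2, "big")  # only 16 bits of real entropy
    outs = []
    for _ in range(n):
        state = hashlib.sha256(state).digest()  # deterministic remix
        outs.append(state[:8])                  # extraction
    return outs

victim = pool_outputs(seed_bits=12345, n=3)

# Attacker: brute-force the 16-bit seed from ONE observed output...
recovered = next(s for s in range(1 << 16)
                 if pool_outputs(s, 1)[0] == victim[0])
# ...and now predicts every subsequent extraction.
assert pool_outputs(recovered, 3) == victim
```

Real pools are larger, but the principle scales: output entropy is bounded by the real entropy that went in, no matter how much mixing happens.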
>
> In a fair number of threat models, entropic data should never be reused.
> Entropic data should be treated like a "one time pad": held in a secure
> container, used once and destroyed. While the entropic data may be
> whitened by some algorithm to satisfy statistical requirements, once
> used the entropic data should NEVER be reused.  This principle is
> violated by too many operating system based services.  We suspect that
> violating this principle may be one of the reasons why services such as
> Yarrow and Linux /dev/random are too often cryptographically unsound for
> many threat models.
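
A sketch of that discipline (hypothetical container, for illustration): each entropic chunk is released exactly once, then wiped, and exhaustion is an error rather than a silent remix.

```python
class OneShotEntropy:
    """Hold entropic chunks; release each exactly once, then wipe it."""
    def __init__(self, chunks):
        self._chunks = [bytearray(c) for c in chunks]

    def take(self) -> bytes:
        if not self._chunks:
            raise RuntimeError("entropy exhausted; refusing to reuse")
        chunk = self._chunks.pop()
        out = bytes(chunk)
        for i in range(len(chunk)):  # best-effort zeroization
            chunk[i] = 0
        return out

store = OneShotEntropy([b"\x01" * 16, b"\x02" * 16])
a, b = store.take(), store.take()
assert a != b
try:
    store.take()
except RuntimeError:
    pass  # correct: block rather than silently reuse
```

Contrast this with a pool that, once drained, keeps handing out deterministically remixed state.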
>
> Your "j-random" Linux host with your typical /dev/random so-called
> entropy pool, especially where the measurement of the entropic bits is
> suspect, is a real problem.  It may be OK to protect against most "kid
> brothers"; it is hardly a match for "big brother".  Misapplication or
> misunderstanding of so-called "security proofs" claiming it is "OK to
> remix and reuse" may, in some fashion, be the reason why a number of
> architectures fail to be cryptographically sound.
>
> The problem with that "j-random" Linux host is that if you forced a
> more conservative approach to the use of entropy, its secure
> communication performance would suffer.  When we measured the complete
> budget of a certain type of HTTPS connection, several hundred bits were
> required at each end just to start!  When you looked at the rate at
> which real entropy was introduced to the pool, those several hundred
> bits took a long time to be generated.  When this was applied to, say,
> an Apache server where a bunch of HTTPS sessions were required, the
> service really sucked.
>
> The sources of entropy feeding your "j-random" host are a problem.
> And as the joke says, entropy is not what it used to be.  Code that
> assumed a wiggly Winchester disk drive of the 1980s was entropic will
> be sadly mistaken by today's spinning drives.  Look into how the kernel
> thinks it receives entropy from a disk driver.  Then attach an SSD.
> Then sigh.  Keyboards, mice, network cards, etc.  The naive measurement
> by the kernel code is a big problem.  The real-time auditing of this
> so-called entropic data is
> almost non-existent. Then put the system under a load where network
> packets are arriving back-to-back and where the drives are delivering
> data at near maximum rate because the demand for cryptographic sessions
> is peaking due to people who want to use the service. As the load on
> the server approaches saturation and the demand for entropy reaches a
> peak, the system becomes uncomfortably deterministic and the entropy
> available via the naive entropy pool code actually drops way down.  Too
> often when the need is greatest, the kernel's ability to provide
> cryptographically sound data drops way down.  *SIGH*
>
> And woe is the case where an OS is running under a hypervisor where
> connection to the true hardware is often indirect! Crypto in the cloud
> is very tricky!
>
> Really know your entropy budget.  Have a secure channel to sources
> that can deliver cryptographically sound entropy at a rate that
> satisfies the entropy budget, FROM SOURCES THAT ARE BEING AUDITED IN
> REAL TIME, and use that entropy ONCE via a cryptographically sound
> whitening function.
> Code your application to react when the API reports that the source
> failed. Test the code that handles the API failure cases. Etc. etc.
> Do that on all end points.  If you do, you will be several standard
> deviations of cryptographic soundness beyond most situations in the
> field AND, most importantly, you might just be secure enough to meet the
> requirements of your threat model.
>
> ---
>
> Regarding Cipherstream:
>
> There is not enough data in the attached PDF for us to comment on.
> Looks interesting at first glance, however.
>
> ---
>
> Regarding OpenSSL:
>
> OpenSSL has a significant challenge: to operate across a wide variety of
> hardware, firmware, and OS services while attempting to work within an
> amazingly wide variety of threat models. Our hats go off to you for
> attempting to succeed. We believe the effort to try is worth it.
>
> ---
>
> I know I just dumped a lot of issues into this Email reply.  Sorry if
> this is an overload, but you did touch on a loaded subject! :-)
>
> How may we (or I) be of service?
>
> chongo (Landon Curt Noll) /\oo/\
>
> ---
>
> _______________________________________________
> Core mailing list
> Core at cryptech.is
> https://lists.cryptech.is/listinfo/core