[Cryptech Tech] RNG test tools wiki page
Bernd Paysan
bernd at net2o.de
Fri Aug 1 20:58:04 UTC 2014
On Thursday, 31 July 2014, 13:15:18, Benedikt Stockebrand wrote:
> Hi Bernd and list,
>
> Bernd Paysan <bernd at net2o.de> writes:
> > A note for the dieharder tests: Non-deterministic inputs (that's what we
> > are interested in!) still give non-deterministic outputs. That means,
> > all those test results do vary, and must vary. Dieharder's tests are
> > calibrated so that the chance of a spurious failure with genuinely random
> > input is about 1/1000. Given that dieharder has a lot of tests, and you
> > are going to test it with a lot of data (again and again), you should
> > simply expect to see that failure rate.
> correct.
>
> > IMHO, dieharder -a should collect all the results and do a chi-square
> > test on them, because if the data is random, and the weighting is
> > correct, the results should all follow a known distribution.
>
> No, that's not a good idea. If any single test fails, it means that it
> has, with high likelihood, discovered some sort of pattern or
> nonrandomness in the input. But that doesn't mean it will also be
> discovered by other tests.
If a single test fails at the 0.1% level, and the total number of tests you
run is already >100, that is nothing to worry about too much. Take an
independent data sample and test again: if the same test now fails a second
time, the likelihood of that happening by chance is about 1 ppm, and you
should start to worry.
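A minimal sketch of that arithmetic (in Python; the total of 114 tests is a
hypothetical number, and the 0.1% threshold depends on how dieharder is
configured):

    # Assumed: each test fails spuriously with probability 0.001 on truly
    # random input, and the tests are independent of each other.
    p_fail = 0.001     # spurious failure probability per test
    n_tests = 114      # hypothetical number of tests in one "dieharder -a" run

    # Expected number of spurious failures in one full run:
    print("expected spurious failures per run:", n_tests * p_fail)

    # Probability that at least one test fails spuriously somewhere in the run:
    print("P(at least one spurious failure):", 1 - (1 - p_fail) ** n_tests)

    # Probability that the *same* test fails again on an independent sample:
    print("P(same test failing twice by chance):", p_fail ** 2)  # ~1e-6, i.e. 1 ppm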
> As a simplified example: If you had one test for bit bias (what ent
> calls "entropy") and a second for correlation between bits 8 bits apart
> (which might be interesting for a bytewise generator) then it takes two
> different kinds of patterns overlaid in the input to make both tests
> fail.
When you have tests which check independent properties, you would expect
their results to be stochastically independent as well. I.e. a 0.1%
statistical failure in a bias test will not correlate with failures in an
8-bits-apart correlation test (more generally: if you transform random data
with an FFT, the resulting components are stochastically independent, and
therefore themselves follow a known expected distribution).
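To illustrate the FFT remark with a small numpy sketch (my own illustration,
not anything dieharder does): for white Gaussian noise the FFT bins come out
approximately independent, so their normalized powers follow a known
(exponential) distribution and neighbouring bins are uncorrelated:

    import numpy as np

    rng = np.random.default_rng(42)
    x = rng.standard_normal(1 << 16)     # white Gaussian noise
    X = np.fft.rfft(x)

    # Drop the DC and Nyquist bins, which are real-valued special cases.
    bins = X[1:-1]

    # Each remaining bin is approximately an independent complex Gaussian,
    # so its normalized power should be exponentially distributed with mean 1.
    power = np.abs(bins) ** 2 / len(x)
    print("mean normalized power (expect ~1):", power.mean())

    # Neighbouring bins should be uncorrelated for white-noise input.
    print("lag-1 correlation of bin powers (expect ~0):",
          np.corrcoef(power[:-1], power[1:])[0, 1])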
There are tests in dieharder which are sensitive to very similar things, and
will therefore have correlated results. Lumping those together into an overly
strict meta-test would not be a good idea. Combining independent tests (tests
which probe different properties of randomness) into a meta-test IMHO is good;
a sketch of such a meta-test follows below.
You would also expect the same tests run on an independent data sample to give
independent results.
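For completeness, a hedged sketch of the meta-test I have in mind: collect
the per-test p-values from a dieharder run (here just a stand-in array) and
do a chi-square test against the uniform distribution they should follow for
genuinely random input. The function name and the use of scipy are my own
choices, not existing dieharder functionality:

    import numpy as np
    from scipy.stats import chisquare

    def pvalue_uniformity_check(pvalues, nbins=10):
        # Chi-square check that a set of p-values is consistent with the
        # uniform distribution expected from truly random input.
        counts, _ = np.histogram(pvalues, bins=nbins, range=(0.0, 1.0))
        expected = np.full(nbins, len(pvalues) / nbins)
        return chisquare(counts, expected)

    # Stand-in for the p-values collected from "dieharder -a"; with genuinely
    # random input these should be uniform on [0, 1].
    rng = np.random.default_rng(1)
    stat, meta_p = pvalue_uniformity_check(rng.uniform(size=200))
    print("chi-square statistic:", stat, "meta p-value:", meta_p)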
--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://bernd-paysan.de/