[Cryptech-Commits] [wiki] branch master updated: Clean up one page whose formatting errors were in my face
git at cryptech.is
Thu Mar 17 04:26:11 UTC 2022
This is an automated email from the git hooks/post-receive script.
sra at hactrn.net pushed a commit to branch master
in repository wiki.
The following commit(s) were added to refs/heads/master by this push:
new 10a65c8 Clean up one page whose formatting errors were in my face
10a65c8 is described below
commit 10a65c848750486cad556c9180abdd8d74604105
Author: Rob Austein <sra at hactrn.net>
AuthorDate: Thu Mar 17 00:25:48 2022 -0400
Clean up one page whose formatting errors were in my face
---
content/RandomnessTesting.md | 29 ++++++++++++-----------------
1 file changed, 12 insertions(+), 17 deletions(-)
diff --git a/content/RandomnessTesting.md b/content/RandomnessTesting.md
index 6d8fdc7..29001c8 100644
--- a/content/RandomnessTesting.md
+++ b/content/RandomnessTesting.md
@@ -21,17 +21,17 @@ Dieharder is by far the most extensive blackbox test suite. However, it is orig
Generally the best approach to using `dieharder` is to first generate an output file, e.g. `random.out`, to run the tests on, so that `dieharder` can apply all its individual tests to the same data (a sketch of a full run follows the option list below). For a standard test, at least about 14 GB of data are needed; more if one of the tests needing large amounts of data returns a suspect result and `dieharder` retries the same test with more data.
The command line options I (bs) personally use are `dieharder -g 201 -f random.out -s 1 -Y 1 -k 2 -a`:
- -g 201 -f random.out:: Don't use a compiled-in pseudo RNG but the file `random.out` as input.
- -s 1:: Rewind the input after every test. Without this, successive tests use successive parts of the input file.
- -Y 1:: Keep testing until a definite (in probabilistic terms:-) test result is obtained.
- -k 2:: Use some high precision numerics for the KS test; recommended by the man page.
- -a:: Run all tests.
+* `-g 201 -f random.out`: Don't use a compiled-in pseudo-RNG; use the file `random.out` as input instead.
+* `-s 1`: Rewind the input after every test. Without this, successive tests use successive parts of the input file.
+* `-Y 1`: Keep testing until a definite (in probabilistic terms :-) test result is obtained.
+* `-k 2`: Use some high precision numerics for the KS test; recommended by the man page.
+* `-a`: Run all tests.
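A minimal sketch of a full run, assuming the RNG output can be captured from a character device such as `/dev/hwrng` (the device path is an assumption; substitute whatever actually extracts data from the device under test):

```sh
# Capture ~14 GB of raw RNG output into a file.
# /dev/hwrng is an assumed example source.
dd if=/dev/hwrng of=random.out bs=1M count=14336

# Run the complete dieharder suite against the captured file.
dieharder -g 201 -f random.out -s 1 -Y 1 -k 2 -a
```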
Additionally, these may be useful for more targeted testing (see the sketch after this list):
- -m <n>:: Multiply the `psamples` value by `n`; good for getting even more reliable results, at the expense of the additional data needed.
- -d <test name/number>:: Perform a specific test.
- -l:: List all available tests by name and number.
- -p <n>:: Set the `psamples` value. See below why you may need this.
+* `-m <n>`: Multiply the `psamples` value by `n`; good for getting even more reliable results, at the expense of the additional data needed.
+* `-d <test name/number>`: Perform a specific test.
+* `-l`: List all available tests by name and number.
+* `-p <n>`: Set the `psamples` value. See below why you may need this.
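A hedged example of such targeted re-testing, reusing the same capture (the test number and multiplier are illustrative):

```sh
# List all available tests with their names and numbers.
dieharder -l

# Re-run a single test (here number 0, diehard_birthdays) with
# four times the default psamples for a more reliable verdict.
dieharder -g 201 -f random.out -s 1 -k 2 -d 0 -m 4
```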
### Interpretation of Results
The way `dieharder` works, it returns a clear verdict (PASSED, WEAK, or FAILED) for each test, so interpretation should be immediately obvious.
@@ -56,12 +56,13 @@ They generally work on blocks of 20000 bits.
### Usage
The `rngtest` program reads data from its standard input and by default returns a statistics overview when it reaches EOF. This can be changed with these two options (among others):
- -c <n>:: Stop running after `n` blocks.
- -b <n>:: Give intermediate results every `n` blocks.
+* `-c <n>`: Stop running after `n` blocks.
+* `-b <n>`: Give intermediate results every `n` blocks.
Use at least one of these when running on a pipe or device that never reaches EOF; otherwise `rngtest` will run forever without reporting anything.
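As a sketch, assuming the same `random.out` capture and an example `/dev/hwrng` device (both assumptions):

```sh
# Test a captured file; rngtest prints its statistics at EOF.
rngtest < random.out

# Stream directly from a device that never reaches EOF: stop after
# 1000 blocks, with intermediate statistics every 100 blocks.
rngtest -c 1000 -b 100 < /dev/hwrng
```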
### Interpretation of Results
Since `rngtest` works on rather small sample sizes, it causes a significant number of false alarms:
+
| Test | Expected failure rate |
|---|---|
| Total | 800 ppm |
@@ -74,9 +75,3 @@ Since `rngtest` works on rather small sample sizes it causes a significant numbe
These failure rates were, however, measured experimentally rather than derived from the algorithms themselves, so caveat utilitor.
Seriously flawed inputs often show excessive failures even on very small inputs; it is generally a good idea to keep testing until at least about 100 failures in total have occurred before comparing the measured results against the expected failure rates in the table.
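For a rough sense of scale, taking the overall 800 ppm rate from the table: 100 failures correspond to about 100 / 0.0008 = 125,000 blocks, and at 20,000 bits per block that is 2.5 x 10^9 bits, i.e. roughly 300 MB of input.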
-
-
-
-
-
-