
Comments (44)

joshuaehill avatar joshuaehill commented on August 17, 2024

There were a few changes from 3.0.1 to 3.0.2 that would have the effect of slowing down the output: the default osr was increased from 1 to 3 in the default mode (specifically, when the JENT_CONF_DISABLE_LOOP_SHUFFLE macro is defined), and the optimizer is turned off (the current setting is -O0; it was previously -O2). There is also a change that fixes the number of loops performed at the previous minimum, which should make things a bit faster. I'd expect a slowdown on the order of 2-3x, which is consistent with your results with one core.

These changes were made after a few folks noticed that the entropy was overreported on some tested platforms (see Issue #21). If you wanted to figure out whether either of these contributes to the slowdown you are seeing, it's easy enough to test: you could remove the #define for JENT_CONF_DISABLE_LOOP_SHUFFLE and see how the timing changes, and then recompile with the optimizer enabled by changing the optimizer compiler flag in the Makefile (and removing the #error line that tries to prevent you from enabling the optimizer). I wouldn't suggest actually using the library in an optimized form, but it would help you determine where you are seeing the slowdown.

I don't know how to explain what is going on across multiple cores in rngd; perhaps Stephan has a better insight into that.

from jitterentropy-library.

smuellerDD avatar smuellerDD commented on August 17, 2024

from jitterentropy-library.

abelbeck avatar abelbeck commented on August 17, 2024

@smuellerDD : Keep in mind my measurements are "all things equal" except for the jitterentropy-library version, so I would prefer to focus on the jitterentropy-library change. As @joshuaehill mentioned, I'll look into JENT_CONF_DISABLE_LOOP_SHUFFLE.

from jitterentropy-library.

smuellerDD avatar smuellerDD commented on August 17, 2024

from jitterentropy-library.

abelbeck avatar abelbeck commented on August 17, 2024

Compiling with:

--- rng-tools-6.5/jitterentropy-library/jitterentropy.h.orig	2021-04-25 12:58:10.615690346 -0500
+++ rng-tools-6.5/jitterentropy-library/jitterentropy.h	2021-04-25 12:59:37.025192013 -0500
@@ -78,7 +78,7 @@
  *
  * By enabling this flag, the time of collecting entropy may be enlarged.
  */
-#define JENT_CONF_DISABLE_LOOP_SHUFFLE
+/* #define JENT_CONF_DISABLE_LOOP_SHUFFLE */
 
 /***************************************************************************
  * Jitter RNG State Definition Section

made things worse: 8.6x slower than v3.0.1.

The total rngd CPU time (across 4 cores) is now 92.96 seconds.

# /usr/sbin/rngd -t -x hwrng -x rdrand
...
Entropy rate:  12.83 Kbits/sec averaged over 5 iterations for  45.66 seconds

from jitterentropy-library.

joshuaehill avatar joshuaehill commented on August 17, 2024

That is unexpected!

That suggests that SHA3 is somewhat slower on your architecture than on my test machine. When JENT_CONF_DISABLE_LOOP_SHUFFLE is not defined, you have to do on average 1,156.5 hashes per jent_random_data call (which outputs a 256-bit block of conditioned entropy); that is 4.5 SHA3 hashes on average per jent_measure_jitter and 257 jent_measure_jitter per jent_random_data. When JENT_CONF_DISABLE_LOOP_SHUFFLE is defined, you instead need to perform a fixed 1 SHA3 per jent_measure_jitter, and 769 jent_measure_jitter per jent_random_data.
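
Spelling out that arithmetic per 256-bit jent_random_data output:

loop shuffle enabled  (macro not defined): 257 jent_measure_jitter x ~4.5 SHA3 each ≈ 1,156.5 SHA3 hashes
loop shuffle disabled (macro defined):     769 jent_measure_jitter x 1 SHA3 each    =   769   SHA3 hashes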

That being said, the rest of the processing overwhelms that difference on my machine, but clearly not on your test machine.

I guess try defining JENT_CONF_DISABLE_LOOP_SHUFFLE and turning optimization back on and see what happens.

from jitterentropy-library.

abelbeck avatar abelbeck commented on August 17, 2024

Compiling with: (previous patch removed)

--- rng-tools-6.5/jitterentropy-library/Makefile.orig	2021-04-25 13:30:54.336767045 -0500
+++ rng-tools-6.5/jitterentropy-library/Makefile	2021-04-25 13:31:29.240760050 -0500
@@ -3,7 +3,7 @@
 CC ?= gcc
 #Hardening
 CFLAGS ?= -fwrapv --param ssp-buffer-size=4 -fvisibility=hidden -fPIE -Wcast-align -Wmissing-field-initializers -Wshadow -Wswitch-enum
-CFLAGS +=-Wextra -Wall -pedantic -fPIC -O0 -fwrapv -Wconversion
+CFLAGS +=-Wextra -Wall -pedantic -fPIC -O2 -fwrapv -Wconversion
 LDFLAGS +=-Wl,-z,relro,-z,now
 
 GCCVERSIONFORMAT := $(shell echo `$(CC) -dumpversion | sed 's/\./\n/g' | wc -l`)
--- rng-tools-6.5/jitterentropy-library/jitterentropy-base.c.orig	2021-04-25 13:35:59.946249588 -0500
+++ rng-tools-6.5/jitterentropy-library/jitterentropy-base.c	2021-04-25 13:36:34.522711644 -0500
@@ -67,10 +67,6 @@
  * None of the following should be altered
  ***************************************************************************/
 
-#ifdef __OPTIMIZE__
- #error "The CPU Jitter random number generator must not be compiled with optimizations. See documentation. Use the compiler switch -O0 for compiling jitterentropy.c."
-#endif
-
 /*
  * JENT_POWERUP_TESTLOOPCOUNT needs some loops to identify edge
  * systems. 100 is definitely too little.

We are almost back to v3.0.1 performance. (BTW, this -O2 difference surprised me.)

The total rngd CPU time (across 4 cores) is now 19.5 seconds.

# /usr/sbin/rngd -t -x hwrng -x rdrand
...
Entropy rate:  106.7 Kbits/sec averaged over 39 iterations for  56.01 seconds

But sadly, there is now an intermittent startup issue I have never seen before, reported by rngd:

JITTER rng fails with code 10

Failed to init entropy source 5: JITTER Entropy generator

As such, rngd typically does not start at system boot, though it occasionally will. I never saw this failure with v3.0.1.

from jitterentropy-library.

abelbeck avatar abelbeck commented on August 17, 2024

@joshuaehill :

That is unexpected!
That suggests that SHA3 is somewhat slower on your architecture than on my test machine.

I'm testing on a 1 GHz, 4-core x86_64 box, acting as a Router/Firewall/PBX.

Flags:  fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good acc_power nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt topoext perfctr_nb bpext ptsc perfctr_llc cpb hw_pstate ssbd vmmcall bmi1 xsaveopt arat npt lbrv svm_lock nrip_save tsc_scale flushbyasid decodeassists pausefilter pfthreshold overflow_recov

The newer, faster boxes typically have rdrand, so the older, slower boxes are where we need a jitterentropy source.

from jitterentropy-library.

joshuaehill avatar joshuaehill commented on August 17, 2024

The optimizer makes a huge difference in run time performance (i.e., samples per second), but unfortunately recent testing shows that the optimizer also really decreases the variability of the process and tends to encourage repeating patterns (and thus yields a substantial decrease in entropy per sample).

So, your process completes much more quickly with the optimizer enabled, but you get much less entropy per request than was expected, so that's not a great trade-off. :-)

On the processor, that's again surprising, as that's architecturally fairly close to my test platform.

from jitterentropy-library.

joshuaehill avatar joshuaehill commented on August 17, 2024

On your startup issue, the only change between 3.0.1 and 3.0.2 that I can think of that might be related was commit f37124e8ca2005d9feb6eb5ed4bee68c3a85657e. If the calls were very regular (and the 2nd order delta was 1 or less on average) then this would trigger an initialization error. That said, such an error is desirable, as that suggests that the timer is too consistent to be a good source of entropy. This isn't the sort of error that I'd expect to see on a x86_64 system, as the TSC counter is very very fine grained.

from jitterentropy-library.

abelbeck avatar abelbeck commented on August 17, 2024

@smuellerDD :

  1. time ./jitterentropy-lfsroutput 10000 > out

The time command will tell you the amount of time required for the operation.
Now, calculate 320000 / <time returned by time> to get the kbytes / s rate.
Could you return that value?

Our project cross-compiles with a toolchain, but I cheated and natively used Debian 10 to build jitterentropy-lfsroutput and copied the binary to my test box. It ran.

The jitterentropy-lfsroutput test took 432.9 seconds -> 320000 / 432.9 = 739 Kbits/sec

Note this runs on a single core.

from jitterentropy-library.

abelbeck avatar abelbeck commented on August 17, 2024

@joshuaehill :

On your startup issue, the only change between 3.0.1 and 3.0.2 that I can think of that might be related was commit f37124e8ca2005d9feb6eb5ed4bee68c3a85657e.

Compiling with:

--- rng-tools-6.5/jitterentropy-library/jitterentropy-base.c.orig	2021-04-25 16:15:53.553476853 -0500
+++ rng-tools-6.5/jitterentropy-library/jitterentropy-base.c	2021-04-25 16:17:21.521700758 -0500
@@ -1402,7 +1402,7 @@
 	 * than 1 to ensure the entropy estimation
 	 * implied with 1 is preserved
 	 */
-	if ((delta_sum) <= JENT_POWERUP_TESTLOOPCOUNT) {
+	if ((delta_sum) <= 1) {
 		ret = EMINVARVAR;
 		goto out;
 	}

The intermittent startup issue is pretty much solved. Under normal boot conditions it now works.

But, while testing in the foreground, 1/10 times it still fails.

from jitterentropy-library.

abelbeck avatar abelbeck commented on August 17, 2024

The intermittent startup issue is pretty much solved. Under normal boot conditions it now works.

I spoke too soon, it failed on boot.

I would say it is more intermittent with the patch. :-)

from jitterentropy-library.

joshuaehill avatar joshuaehill commented on August 17, 2024

@abelbeck : Would it be possible to output the return value from jent_entropy_init when it fails? That should identify the issue that you are running into.

from jitterentropy-library.

joshuaehill avatar joshuaehill commented on August 17, 2024

As a general matter, it's good that we've identified the likely source of some of the problems that you are running into, but the prior value of 1 for the cutoff was clearly wrong (though it may be possible to argue that some value less than JENT_POWERUP_TESTLOOPCOUNT is more appropriate). More importantly, disabling the optimizer appears to have been a very important change, and if a choice needs to be made between returning quickly and returning the expected amount of entropy, then it seems to me that the only reasonable choice is to return the expected amount of entropy (even if that might take longer than it previously did).

from jitterentropy-library.

abelbeck avatar abelbeck commented on August 17, 2024

@abelbeck : Would it be possible to output the return value from jent_entropy_init when it fails? That should identify the issue that you are running into.

jent_entropy_init() returns 10

#define ERCT		10 /* RCT failed during initialization */

which corresponds to a jent_rct_failure(), which is more complex to track down.

So it seems the EMINVARVAR patch was a red herring.

from jitterentropy-library.

abelbeck avatar abelbeck commented on August 17, 2024

Is ec->rct_count initialized somewhere ?

from jitterentropy-library.

joshuaehill avatar joshuaehill commented on August 17, 2024

Yes, but not explicitly (I'm a bit surprised that static analysis doesn't flag this!)

In the jent_entropy_collector_alloc call, the space for the rand_data structure is allocated using jent_zalloc, which (in the last line prior to returning) memsets all the memory used by this structure to '0' bytes. The variable rct_count is an integer, so this will be the value 0.
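
A minimal sketch of that zeroing-allocation pattern (illustrative only; the real jent_zalloc may differ, e.g. it may also lock the memory):

#include <stdlib.h>
#include <string.h>

/* Illustrative only, not the library's actual jent_zalloc: allocate the
 * structure and zero it, so integer members such as rct_count implicitly
 * start at 0. */
static void *zalloc_sketch(size_t len)
{
	void *ptr = malloc(len);

	if (ptr)
		memset(ptr, 0, len);
	return ptr;
}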

For this failure, it looks like you are seeing an RCT failure on startup (meaning a run of "stuck" values). In this context "stuck" means that the delta, delta2 (difference of adjacent delta values) or delta3 (difference of adjacent delta2 values) values are 0. Based on the other failure you reported (EMINVARVAR), I'd guess that you are seeing a bunch of outputs that have a delta2 of 0. I've somewhat lost the thread of your testing: are these failures occurring with the optimizer enabled (-O2) or disabled (-O0)?

from jitterentropy-library.

abelbeck avatar abelbeck commented on August 17, 2024

The ERCT failures are with -O2, but I have not tested much with -O0.

I tested again v3.0.1 (which has -O2) and there is never a startup error.

BTW, I never saw an EMINVARVAR error; your suspicions convinced me I did :-)

from jitterentropy-library.

joshuaehill avatar joshuaehill commented on August 17, 2024

Got it. So we still don't know if we are seeing a bunch of delta == 0 (basically not possible on x86-64 using an invariant TSC in a bare metal environment), delta2 == 0 or delta3 == 0.
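
For reference, a minimal sketch of that classification (state handling is simplified and the names are illustrative, not the library's actual stuck-test code):

#include <stdint.h>

/* Illustrative only: a sample is "stuck" when the delta, the difference of
 * adjacent deltas (delta2), or the difference of adjacent delta2 values
 * (delta3) is zero. */
static int stuck_sketch(uint64_t delta, uint64_t *last_delta, uint64_t *last_delta2)
{
	uint64_t delta2 = delta - *last_delta;
	uint64_t delta3 = delta2 - *last_delta2;

	*last_delta = delta;
	*last_delta2 = delta2;

	return (delta == 0) || (delta2 == 0) || (delta3 == 0);
}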

To be clear, several people noted that with the -O2 flag, the library could fall into consistent patterns of behavior, and the sort of behavior that you are reporting is consistent with that failure mode.

The reason that this wasn't obvious in version 3.0.1 and earlier is that those versions had substantial pseudorandom variation from delta to delta, so historically this style of problem would have been masked unless the underlying hardware counter was so slow that the pseudorandom variation didn't tend to perturb the results. Pseudorandom variation doesn't contribute to entropy, but it does mask failures. You should turn off the optimizer when compiling the library. :-)

from jitterentropy-library.

abelbeck avatar abelbeck commented on August 17, 2024

@joshuaehill : I wonder if a few carefully chosen volatile data types in the inner loop would allow -O2 for the rest ?
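
Something roughly along these lines is what I have in mind (purely illustrative, with hypothetical names; not a tested patch against the library):

#include <stddef.h>

/* Purely illustrative: mark the memory touched by the timing-sensitive
 * inner loop volatile so that -O2 cannot collapse or reorder the accesses.
 * Names are hypothetical, not the library's actual variables. */
static void noise_loop_sketch(unsigned char *memblock, size_t len, size_t loops)
{
	volatile unsigned char *mem = memblock;
	size_t i, l;

	for (l = 0; l < loops; l++)
		for (i = 0; i < len; i++)
			mem[i] = (unsigned char)(mem[i] + 1);
}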

from jitterentropy-library.

smuellerDD avatar smuellerDD commented on August 17, 2024

from jitterentropy-library.

smuellerDD avatar smuellerDD commented on August 17, 2024

from jitterentropy-library.

abelbeck avatar abelbeck commented on August 17, 2024

Using rng-tools 6.5 (last version to support static linking of jitterentropy-library)

The startup time is in this loop:
https://github.com/nhorman/rng-tools/blob/0f525ec3a0260a149cf783ece1b104efb69fa752/rngd_jitter.c#L458

v3.0.1 startup: 4 seconds
v3.0.2 startup: 16 seconds

No entropy is added to the kernel until the /* Make sure all our threads are doing their jobs */ loop is completed.

BTW, the latest rng-tools has a similar startup loop:
https://github.com/nhorman/rng-tools/blob/54c9139b328203c4b8c014039e581fc461628b4e/rngd_jitter.c#L436

from jitterentropy-library.

abelbeck avatar abelbeck commented on August 17, 2024

Follow-up: the #36 (comment) startup times are with no patches, using default compile options.

from jitterentropy-library.

abelbeck avatar abelbeck commented on August 17, 2024

In more testing of v3.0.2 with defaults (i.e. -O0), I have never had a failed startup.

The slower entropy gathering of v3.0.2 is not a problem for our use, but the 16 second startup (vs. 4) is an issue we would like to solve, if possible.

from jitterentropy-library.

abelbeck avatar abelbeck commented on August 17, 2024

More testing of rngd: the jitter buffer size defaults to 16535, which is filled at startup. If I reduce the buffer by a factor of 4 ...

# /usr/sbin/rngd -t -x hwrng -x rdrand -O jitter:buffer_size:4133 -O jitter:refill_thresh:4133
...
Entropy rate:  28.05 Kbits/sec averaged over 43 iterations for  55.01 seconds

I'm back to a 4 second startup.

Does that sound like a valid solution ? Was there something special about the 16535 buffer_size ?

from jitterentropy-library.

smuellerDD avatar smuellerDD commented on August 17, 2024

from jitterentropy-library.

abelbeck avatar abelbeck commented on August 17, 2024

@smuellerDD : rngd captures a buffer of entropy and spits it into /dev/random, captures a new buffer, etc.
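
Illustratively, the "spit into /dev/random" step looks roughly like this (a minimal sketch using the standard RNDADDENTROPY ioctl; not rngd's actual code):

#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/random.h>

/* Illustrative only, not rngd's actual code: credit a buffer of gathered
 * entropy to the kernel pool via the RNDADDENTROPY ioctl on /dev/random. */
static int feed_kernel_sketch(const unsigned char *buf, size_t len, int entropy_bits)
{
	struct rand_pool_info *info;
	int fd, ret = -1;

	info = malloc(sizeof(*info) + len);
	if (!info)
		return -1;

	info->entropy_count = entropy_bits;
	info->buf_size = (int)len;
	memcpy(info->buf, buf, len);

	fd = open("/dev/random", O_WRONLY);
	if (fd >= 0) {
		ret = ioctl(fd, RNDADDENTROPY, info);
		close(fd);
	}
	free(info);
	return ret;
}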

from jitterentropy-library.

smuellerDD avatar smuellerDD commented on August 17, 2024

from jitterentropy-library.

smuellerDD avatar smuellerDD commented on August 17, 2024

from jitterentropy-library.

abelbeck avatar abelbeck commented on August 17, 2024

Thanks @smuellerDD for confirming a lower rngd jitter buffer size is a valid solution.

Keeping the same "character" of 16535, I chose ((16535 + 1) / 4) - 1 = 4133

For the v3.0.2 tests below, I patched rngd.c with new 4133 defaults:
(Alternatively add to the rngd command line -O jitter:buffer_size:4133 -O jitter:refill_thresh:4133)

 	[JITTER_OPT_BUF_SZ] = {
 		.key = "buffer_size",
-		.int_val = 16535,
+		.int_val = 4133,
 	},
 	[JITTER_OPT_REFILL] = {
 		.key = "refill_thresh",
-		.int_val = 16535,
+		.int_val = 4133,
 	},

These new rngd defaults with v3.0.2 keep things working much as before.

Here are some tests:

Proxmox VM 2-core @ 2096 MHz

# timeout 60 /usr/sbin/rngd -t -x hwrng -x rdrand
v3.0.1: Entropy rate:  253.9 Kbits/sec averaged over 58 iterations for  58.01 seconds
v3.0.2: Entropy rate:  58.58 Kbits/sec averaged over 58 iterations for  58.01 seconds
4.3x slower
v3.0.1: startup time: 0.67 seconds
v3.0.2: startup time: 0.70 seconds

AMD GX-412TC SOC 4-core @ 966 MHz

# timeout 60 /usr/sbin/rngd -t -x hwrng -x rdrand
v3.0.1: Entropy rate:  114.7 Kbits/sec averaged over 41 iterations for  55.01 seconds
v3.0.2: Entropy rate:  27.55 Kbits/sec averaged over 46 iterations for  56.01 seconds
4.2x slower
v3.0.1: startup time: 4.2 seconds
v3.0.2: startup time: 4.4 seconds

Intel Atom D525 2-core 4-thread @ 1800 MHz

# timeout 60 /usr/sbin/rngd -t -x hwrng -x rdrand
v3.0.1: Entropy rate:  112.9 Kbits/sec averaged over 50 iterations for  55.01 seconds
v3.0.2: Entropy rate:  22.42 Kbits/sec averaged over 42 iterations for     54 seconds
5.0x slower
v3.0.1: startup time: 4.4 seconds
v3.0.2: startup time: 5.5 seconds

Additionally, since we had some -O2 startup failures with a patched v3.0.2, would the default -O0 solve that problem ? I ran timeout 15 /usr/sbin/rngd -t -x hwrng -x rdrand in a loop for almost an hour (on two boxes) and searched the terminal screens for "fail" and there were no matches. Looks good.

from jitterentropy-library.

joshuaehill avatar joshuaehill commented on August 17, 2024

In my testing, the raw data variability from the library when compiled with -O2 was dramatically lower than it is when compiled with -O0. The raw data distribution is difficult to predict a priori, so I can't say "Failures will never happen when compiled with -O0", but I can say that I expect that failures will happen much much less frequently when compiled with -O0 than when compiled with -O2.

from jitterentropy-library.

smuellerDD avatar smuellerDD commented on August 17, 2024

from jitterentropy-library.

smuellerDD avatar smuellerDD commented on August 17, 2024

from jitterentropy-library.

joshuaehill avatar joshuaehill commented on August 17, 2024

@smuellerDD If you are referring to the introduction of ENTROPY_SAFETY_FACTOR, then it seems completely reasonable to me to limit this behavior to FIPS mode. I support the use of this "safety factor" within 90C, but the practical difference is small. I would think that the health tests have a more significant security impact.

from jitterentropy-library.

smuellerDD avatar smuellerDD commented on August 17, 2024

from jitterentropy-library.

abelbeck avatar abelbeck commented on August 17, 2024

@smuellerDD :

Besides, if you look into my considerations I posted at the rng-tools project, I personally think this buffer should be as small as possible.

Mathematically, I tend to agree, but let's test using this script:

#!/bin/sh

for size in 16535 8267 4133 2501 2067 1033 515 257 127 63; do
  echo "$(date +'[%m/%d/%y %H:%M:%S]')  Buffer_Size=$size"

  rstr="$(timeout 600 /usr/sbin/rngd -t -x hwrng -x rdrand -O jitter:buffer_size:$size -O jitter:refill_thresh:$size 2>&1)"

  echo "$rstr" | grep '^Entropy rate:'
  echo ""
done

Output: (Otherwise idle AMD GX-412TC SOC 4-core @ 966 MHz)
Note: I added 2501 later, after looking at the rngd.c code and noticing a #define FIPS_RNG_BUFFER_SIZE 2500.

[04/27/21 09:43:21]  Buffer_Size=16535
Entropy rate:  28.05 Kbits/sec averaged over 367 iterations for  580.1 seconds

[04/27/21 09:53:33]  Buffer_Size=8267
Entropy rate:  27.87 Kbits/sec averaged over 464 iterations for  592.1 seconds

[04/27/21 10:03:41]  Buffer_Size=4133
Entropy rate:  27.58 Kbits/sec averaged over 551 iterations for  594.1 seconds

[04/27/21 11:32:19]  Buffer_Size=2501
Entropy rate:  25.95 Kbits/sec averaged over 319 iterations for  596.1 seconds

[04/27/21 10:13:44]  Buffer_Size=2067
Entropy rate:  20.22 Kbits/sec averaged over 585 iterations for  597.1 seconds

[04/27/21 10:23:46]  Buffer_Size=1033
Entropy rate:  16.26 Kbits/sec averaged over 299 iterations for  598.1 seconds

[04/27/21 10:33:47]  Buffer_Size=515
Entropy rate:  15.64 Kbits/sec averaged over 478 iterations for  597.1 seconds

[04/27/21 10:43:48]  Buffer_Size=257
Entropy rate:  7.812 Kbits/sec averaged over 238 iterations for  595.1 seconds

[04/27/21 10:53:48]  Buffer_Size=127
Entropy rate:  3.906 Kbits/sec averaged over 118 iterations for  590.1 seconds

[04/27/21 11:03:48]  Buffer_Size=63
Entropy rate:  1.953 Kbits/sec averaged over 58 iterations for  580.1 seconds

So, rngd is more efficient with larger buffers (as Neil seemed to imply in response to your question).

It seems 2501 is optimal to my eye; I'll try this script on my faster Proxmox VM to make certain.

from jitterentropy-library.

abelbeck avatar abelbeck commented on August 17, 2024

Full set of tests using the shell script above #36 (comment)

My conclusion is to patch rngd.c with:
(Alternatively add to the rngd command line -O jitter:buffer_size:2501 -O jitter:refill_thresh:2501)

 	[JITTER_OPT_BUF_SZ] = {
 		.key = "buffer_size",
-		.int_val = 16535,
+		.int_val = 2501,
 	},
 	[JITTER_OPT_REFILL] = {
 		.key = "refill_thresh",
-		.int_val = 16535,
+		.int_val = 2501,
 	},

Using 2501 gives the smallest buffer size (quickest startup) that still maintains close to the top rngd entropy rate.

Here is the test output:

Proxmox VM 2-core @ 2096 MHz

[04/27/21 11:55:53]  Buffer_Size=16535
Entropy rate:  86.51 Kbits/sec averaged over 336 iterations for  596.1 seconds

[04/27/21 12:05:55]  Buffer_Size=8267
Entropy rate:  65.18 Kbits/sec averaged over 401 iterations for  598.1 seconds

[04/27/21 12:15:57]  Buffer_Size=4133
Entropy rate:  58.58 Kbits/sec averaged over 598 iterations for  598.1 seconds

[04/27/21 12:25:57]  Buffer_Size=2501
Entropy rate:  39.05 Kbits/sec averaged over 597 iterations for  597.1 seconds

[04/27/21 12:35:57]  Buffer_Size=2067
Entropy rate:  19.53 Kbits/sec averaged over 597 iterations for  597.1 seconds

[04/27/21 12:45:57]  Buffer_Size=1033
Entropy rate:  14.65 Kbits/sec averaged over 448 iterations for  597.1 seconds

[04/27/21 12:55:58]  Buffer_Size=515
Entropy rate:  7.811 Kbits/sec averaged over 238 iterations for  595.1 seconds

[04/27/21 13:05:58]  Buffer_Size=257
Entropy rate:  3.905 Kbits/sec averaged over 118 iterations for  590.1 seconds

[04/27/21 13:15:58]  Buffer_Size=127
Entropy rate:  1.953 Kbits/sec averaged over 58 iterations for  580.1 seconds

[04/27/21 13:25:58]  Buffer_Size=63
Entropy rate: 0.9763 Kbits/sec averaged over 28 iterations for  560.1 seconds

AMD GX-412TC SOC 4-core @ 966 MHz

[04/27/21 11:59:38]  Buffer_Size=16535
Entropy rate:  27.91 Kbits/sec averaged over 412 iterations for  580.1 seconds

[04/27/21 12:09:51]  Buffer_Size=8267
Entropy rate:  27.69 Kbits/sec averaged over 445 iterations for  591.2 seconds

[04/27/21 12:19:59]  Buffer_Size=4133
Entropy rate:  27.38 Kbits/sec averaged over 533 iterations for  594.1 seconds

[04/27/21 12:30:02]  Buffer_Size=2501
Entropy rate:  24.08 Kbits/sec averaged over 345 iterations for  596.1 seconds

[04/27/21 12:40:05]  Buffer_Size=2067
Entropy rate:  19.59 Kbits/sec averaged over 596 iterations for  597.1 seconds

[04/27/21 12:50:07]  Buffer_Size=1033
Entropy rate:  16.26 Kbits/sec averaged over 299 iterations for  598.1 seconds

[04/27/21 13:00:08]  Buffer_Size=515
Entropy rate:  15.64 Kbits/sec averaged over 478 iterations for  597.1 seconds

[04/27/21 13:10:09]  Buffer_Size=257
Entropy rate:  7.812 Kbits/sec averaged over 238 iterations for  595.1 seconds

[04/27/21 13:20:09]  Buffer_Size=127
Entropy rate:  3.906 Kbits/sec averaged over 118 iterations for  590.1 seconds

[04/27/21 13:30:09]  Buffer_Size=63
Entropy rate:  1.953 Kbits/sec averaged over 58 iterations for  580.1 seconds

Intel Atom D525 2-core 4-thread @ 1800 MHz

[04/27/21 12:00:46]  Buffer_Size=16535
Entropy rate:  23.32 Kbits/sec averaged over 305 iterations for    577 seconds

[04/27/21 12:11:01]  Buffer_Size=8267
Entropy rate:  23.45 Kbits/sec averaged over 398 iterations for    588 seconds

[04/27/21 12:21:08]  Buffer_Size=4133
Entropy rate:  22.94 Kbits/sec averaged over 484 iterations for    595 seconds

[04/27/21 12:31:12]  Buffer_Size=2501
Entropy rate:  19.46 Kbits/sec averaged over 151 iterations for    600 seconds

[04/27/21 12:41:18]  Buffer_Size=2067
Entropy rate:  22.34 Kbits/sec averaged over 516 iterations for  597.1 seconds

[04/27/21 12:51:20]  Buffer_Size=1033
Entropy rate:  16.72 Kbits/sec averaged over 507 iterations for  598.1 seconds

[04/27/21 13:01:22]  Buffer_Size=515
Entropy rate:  15.64 Kbits/sec averaged over 478 iterations for  597.1 seconds

[04/27/21 13:11:23]  Buffer_Size=257
Entropy rate:  7.812 Kbits/sec averaged over 238 iterations for  595.1 seconds

[04/27/21 13:21:23]  Buffer_Size=127
Entropy rate:  3.906 Kbits/sec averaged over 118 iterations for  590.1 seconds

[04/27/21 13:31:23]  Buffer_Size=63
Entropy rate:  1.953 Kbits/sec averaged over 58 iterations for  580.1 seconds

from jitterentropy-library.

smuellerDD avatar smuellerDD commented on August 17, 2024

from jitterentropy-library.

abelbeck avatar abelbeck commented on August 17, 2024

@joshuaehill :

In my testing, the raw data variability from the library when compiled with -O2 was dramatically lower than it is when compiled with -O0. The raw data distribution is difficult to predict a priori, so I can't say "Failures will never happen when compiled with -O0", but I can say that I expect that failures will happen much much less frequently when compiled with -O0 than when compiled with -O2.

Let's test using this script:

#!/bin/sh

N=0
good=0
failed=0

while true; do
  N=$((N+1))
  rstr="$(timeout 10 /usr/sbin/rngd -t -x hwrng -x rdrand 2>&1)"

  if echo "$rstr" | grep -q '^JITTER rng fails'; then
    failed=$((failed+1))
  elif echo "$rstr" | grep -q '^Enabling JITTER rng'; then
    good=$((good+1))
  fi
  echo "$(date +'[%m/%d/%y %H:%M:%S]')  N=$N  Success=$good  Failed=$failed"
  sleep 5
done

Using unmodified v3.0.2 and rngd with a 2501 buffer_size, I ran this continually for 20 hours, yielding a total of 12,279 rngd startups across 3 test boxes.

No failures, test data below.

Proxmox VM 2-core @ 2096 MHz

[04/27/21 14:17:20]  N=1  Success=1  Failed=0
...
[04/28/21 10:16:15]  N=4792  Success=4792  Failed=0

AMD GX-412TC SOC 4-core @ 966 MHz

[04/27/21 14:16:49]  N=1  Success=1  Failed=0
...
[04/28/21 10:16:37]  N=3656  Success=3656  Failed=0

Intel Atom D525 2-core 4-thread @ 1800 MHz

[04/27/21 14:16:45]  N=1  Success=1  Failed=0
...
[04/28/21 10:17:04]  N=3831  Success=3831  Failed=0

I'm satisfied to put v3.0.2 into production, together with rngd using 2501 buffer_size.

Many thanks to @smuellerDD and @joshuaehill for all your insights here and in jitterentropy-library.

from jitterentropy-library.

abelbeck avatar abelbeck commented on August 17, 2024

@smuellerDD : Reporting that 3.4.0 is ~20% slower vs. 3.3.1 (per the test hardware herein).
I suspect the new SHA-3 state integration is the reason.
No issue or problem, just reporting what I see.

from jitterentropy-library.

smuellerDD avatar smuellerDD commented on August 17, 2024

from jitterentropy-library.

abelbeck avatar abelbeck commented on August 17, 2024

@smuellerDD :

Thank you very much for the report. I noticed it is a bit slower (albeit not by 20%), but I first wanted to err on the safe side.

I found the startup (filling the rngd buffer of size 2501 and waiting for /proc/sys/kernel/random/entropy_avail to exceed 256) to be ~20% slower across all systems.

Interestingly, the ongoing entropy production rate (rngd -t ...) was reduced by much less than 20% on most systems, except the Intel Atom D525 2-core 4-thread @ 1800 MHz system, which slowed by 20%.

from jitterentropy-library.
