
jitterentropy-library's Introduction

Hardware RNG based on CPU timing jitter

The Jitter RNG provides a noise source using the CPU execution timing jitter. It does not depend on any system resource other than a high-resolution time stamp. It is a small-scale, yet fast entropy source that is viable in almost all environments and on a lot of CPU architectures.

The implementation of the Jitter RNG is independent of any operating system. As such, it could even run on bare metal without any operating system.

The design of the RNG is given in the documentation found at http://www.chronox.de/jent . This documentation also covers the full assessment of SP800-90B compliance as well as all required test code.

API

The API is documented in the man page jitterentropy.3.

To use the Jitter RNG, the header file jitterentropy.h must be included.

Build Instructions

To generate the shared library, run make followed by make install.

Android

To compile the code on Android, use the following Makefile:

arch/android/Android.mk -- NDK make file template that can be used to directly compile the CPU Jitter RNG code into Android binaries

Direct CPU instructions

If the timer function used in jent_get_nstime is not available on your platform, you can replace jitterentropy-base-user.h with one of the examples from the arch/ directory.

Testing

There are numerous tests around the Jitter RNG. However, they are too big to be loaded into the official repository. Email me if you want them.

Version Numbers

The version numbers for this library have the following schema: MAJOR.MINOR.PATCHLEVEL

Changes in the major number imply API- and ABI-incompatible changes, or functional changes that require consumers to be updated (as long as this number is zero, the API is not considered stable and can change without a bump of the major version).

Changes in the minor version are API-compatible, but the ABI may change. Only functional enhancements are added, so a consumer can be left unchanged if the enhancements are not needed. The consumer only needs to be recompiled.

Patchlevel changes are API- and ABI-compatible. No functional changes or enhancements are made; such a release contains bug fixes only. The consumer can be left unchanged and does not need to be recompiled.

Author

Stephan Mueller [email protected]

jitterentropy-library's People

Contributors

andrewhop, cipherboy, ffontaine, gktrk, idrassi, jc2a, joshuaehill, jpewdev, jvdsn, kanavin, mikhailnov, nefigtut, nhorman, orgads, orichtersec, rc-matthew-l-weber, smuellerdd, thillux, ynezz, yuyinw


jitterentropy-library's Issues

Running tests in user space is quite time consuming.

Hi there,

I just used invoked_testing.sh to perform a test in user space, but it gets stuck at the lfsroutput() step.
I receive no error message, so I believe the compile step is OK; I got the executable and a process running on my machine.
It seems abnormal for the test to take this long: it has run for more than 30 minutes and still has not finished the first step. So I just want to make sure: how much time should the whole test take? I am running this on a virtual machine; could that be the reason? If not, what could the reason be?

Thanks in advance.

Overestimation of entropy

I've been looking at the entropy collected by the library and how this is used in OpenWRT's urngd.

My understanding of the code is that jent_read_entropy() returns data in blocks of 256 bits (32 bytes). It internally calls jent_measure_jitter(). The code seems to assume that one call to jent_measure_jitter() collects 1 bit of entropy, so with the oversampling rate set to 1, as the documentation suggests, 256 calls to jent_measure_jitter() are made.

I've run some statistical tests, and it seems that I'm getting less than 0.01 bit of entropy per call of jent_measure_jitter() on a non-idle desktop system. I have no idea why you think you can get 1 bit of entropy out of such a function.

Question about linking jitterentropy with openssl

I have followed the process to link jitterentropy with openssl and successfully built a corresponding rpm with what I believe is a jitterentropy linked openssl rpm.

I would like to know if there is any way to verify that the link is successful, aside from the build process working.

[resolved] v3.0.2 regression, much slower gathering entropy vs. v3.0.1

Our project uses rngd and jitterentropy-library when needed. For example on a PC Engines APU2 board (AMD GX-412TC, 1 GHz 4x core).

Our rngd startup script monitors /proc/sys/kernel/random/entropy_avail and waits up to 10 seconds while entropy_avail is less than 256. It throws an error if this takes over 10 seconds.

Using jitterentropy-library v3.0.1, it takes 4-5 seconds for entropy_avail to reach over 256. The total rngd CPU time (across 4 cores) is 17.39 seconds.

# /usr/sbin/rngd -t -x hwrng -x rdrand
...
Entropy rate:  110.8 Kbits/sec averaged over 35 iterations for  55.01 seconds

Using jitterentropy-library v3.0.2, it takes over 10 seconds for entropy_avail to reach over 256. The total rngd CPU time (across 4 cores) is 72.71 seconds.

# /usr/sbin/rngd -t -x hwrng -x rdrand
...
Entropy rate:  27.15 Kbits/sec averaged over 13 iterations for     41 seconds

I find it interesting that it is 4x slower, with a 4 core CPU ... possibly just a coincidence ... or a clue.

Any guidance on which commit this regression may have occurred in?

[SP 800-90B] Having two distinct approaches for the timer source complicates evaluation

When JENT_CONF_ENABLE_INTERNAL_TIMER is defined (it is by default) a pthread-based timer may be used in the event that the initial testing in jent_time_entropy_init fails.

When compiling the library, one can disable this feature by removing this define. When the define was set at compilation time, the user can force the feature using the JENT_FORCE_INTERNAL_TIMER flag when calling jent_time_entropy_init. For a fixed architecture, however, one should probably consistently use or not use this feature (depending on how coarse the underlying timer is).

This is probably reasonable from a defensive-programming standpoint, but from an SP 800-90B assessment strategy (and presumably for any assessment scheme), it is much preferable to be able to force the internal timer feature to be used, or to force it not to be used.

In the current implementation, if JENT_CONF_ENABLE_INTERNAL_TIMER is set, then evaluating this library must effectively occur twice: once for the internal timer, and once for the hardware timer (and the resulting entropy claim would be the lesser of the two entropies). This doubling of the assessment work would not be necessary if there were a runtime flag to force use of the underlying hardware timer (analogous to the flag that forces use of the internal timer).

The interpretation of delta_sum in jent_time_entropy_init seems incorrect

delta_sum is a running sum of the absolute value of second-order-delta values. This sum is likely to be large in a correctly functioning system, and is never normalized by dividing it by the number of iterations. In jent_time_entropy_init, this value is interpreted in the block:

	if ((delta_sum) <= 1) {
		ret = EMINVARVAR;
		goto out;
	} 

This can never actually evaluate as true; absent integer rollover, delta_sum evaluates to 0 iff every delta is a fixed value, but in this case the source would have tested as "stuck" earlier, and exited as a result. In order to be 1 (again, absent integer rollover), it would have to be "stuck" for all values other than 1, which would again necessarily have triggered an early exit due to it being "stuck". I suspect that this should instead be compared against JENT_POWERUP_TESTLOOPCOUNT or the delta_sum value should be normalized by dividing it by this value prior to this test.

The meaning of the time delta values is unnecessarily complicated

The default hardware timer on most (non-x86) platforms is constructed using the results of the clock_gettime function. In such cases, the high-order 32 bits are a number of seconds, and the low-order 32 bits are the number of nanoseconds since the start of the current second.

The meaning of the difference of two such 64-bit values is opaque whenever the samples happen to include a "nanosecond wrap-around", and the low-order 32 bits of the older value are greater than the low-order 32 bits of the newer value. In such a situation, the standard borrowing behavior of subtraction yields an artificially large apparent value.

As it turns out, such values can be detected and unambiguously converted into a nanosecond count, but it seems much easier to simply make the internal format of the time sample in this case into a simple number of nanoseconds since the UNIX time epoch, at which point the deltas are also always a number of nanoseconds. As an additional benefit, this makes the interpretation of the meaning of delta2 and delta3 more natural, as well.

[SP 800-90B] Repeated deterministic output patterns as an anticipated failure mode

SP 800-90B Section 4.3 Requirement #8 states:

“The submitter shall provide documentation of any known or suspected noise source failure modes (e.g., the noise source starts producing periodic outputs like 101…01), and shall include developer-defined continuous tests to detect those failures. These should include potential failure modes that might be caused by an attack on the device.”

In the document describing this library, Section 7.2.41, it states that there are no known or suspected noise source failure modes. This may be misleading on some architectures, as the jent_entropy_init function includes start-up health tests for a number of conditions:

  1. The timer value is ever 0 (resulting in a ENOTIME return code).
  2. The delta value is ever 0 (resulting in a ECOARSETIME return code).
  3. The delta value is stuck (that is, the timer value, delta or second delta are fixed between two consecutive delta values) more than 90% of the time (resulting in a ESTUCK return code).
  4. The timer runs backwards more than 3 times over the JENT_POWERUP_TESTLOOPCOUNT iterations (resulting in a ENOMONOTONIC return code).
  5. The delta value is divisible by 100 more than 90% of the time (resulting in a ECOARSETIME return code). This test is generalized in PR #43.
  6. Consecutive delta values are, on average, less than 1 apart from each other (resulting in a EMINVARVAR return code).

On some architectures these failure modes may be profoundly unlikely to occur, and thus one could say that on such architectures these are not anticipated failure modes for that fixed hardware, though these tests continue to exist in any case.

During testing, another important failure mode was encountered several times, and should probably be considered “anticipated”: in certain circumstances (particularly when the optimizer has not been disabled), repeated deterministic patterns of various lengths can emerge. The apparent variation in such deterministic patterns does not contribute entropy, so large numbers of such strings should be detected and an error should be raised.

Failure to compile due to OS CFLAGS containing optimisation setting

Hi

I'm the Alpine Linux package maintainer for jitterentropy-library.

In preparation for the soon-to-be-released 3.1.0, I updated my Alpine packaging to test the Git "master".

The compile is failing with file src/jitterentropy-base.c:

src/jitterentropy-base.c:58:3: error: #error "The CPU Jitter random number generator must not be compiled with optimizations. See documentation. Use the compiler switch -O0 for compiling jitterentropy.c."
58 | #error "The CPU Jitter random number generator must not be compiled with optimizations. See documentation. Use the compiler switch -O0 for compiling jitterentropy.c."

I noticed that "gcc" was being called with both "-Os" (actually it appears twice in the gcc command, both at the start and at the end) and "-O0":

gcc -Os -fomit-frame-pointer -Wextra -Wall -pedantic -fPIC -O0 -fwrapv -Wconversion -fstack-protector-strong -I. -Isrc -Os -fomit-frame-pointer  -c -o src/jitterentropy-timer.o src/jitterentropy-timer.c
src/jitterentropy-base.c:58:3: error: #error "The CPU Jitter random number generator must not be compiled with optimizations. See documentation. Use the compiler switch -O0 for compiling jitterentropy.c."
   58 |  #error "The CPU Jitter random number generator must not be compiled with optimizations. See documentation. Use the compiler switch -O0 for compiling jitterentropy.c."

Looking at the Makefile, I changed line 5 from "CFLAGS +=" to "CFLAGS =" so that the Makefile replaced, rather than appended to, the existing CFLAGS settings, which resulted in this output:

gcc -Wextra -Wall -pedantic -fPIC -O0 -fwrapv -Wconversion -fstack-protector-strong -I. -Isrc -Os -fomit-frame-pointer  -c -o src/jitterentropy-timer.o src/jitterentropy-timer.c
src/jitterentropy-base.c:58:3: error: #error "The CPU Jitter random number generator must not be compiled with optimizations. See documentation. Use the compiler switch -O0 for compiling jitterentropy.c."
   58 |  #error "The CPU Jitter random number generator must not be compiled with optimizations. See documentation. Use the compiler switch -O0 for compiling jitterentropy.c."

Whilst this change prevented "-Os" from appearing at the start of the gcc command options, it still appeared towards the end. I'm confused as to how this is happening.

So I'm struggling to ensure that Alpine's (abuild) "-Os" setting does not leak into the Makefile. Potentially the same issue could occur with any distribution, so the jitterentropy-library Makefile should manage this somehow.

[SP 800-90B] The library user should be able to force the fips_enabled flag

When the fips_enabled flag is not set, the results of the health tests are never reported. This makes the setting of this flag hugely important on any system that must comply with SP 800-90B: if this flag is not set, the entropy source cannot comply with the SP 800-90B requirements.

The library establishes the setting of this flag within jent_fips_enabled function from either a linked library (openssl or libgcrypt), or by looking for a leading '1' character in "/proc/sys/crypto/fips_enabled". If this file is absent or does not start with a '1', then the flag is cleared.

In standalone use, this only works when run on an OS that provides this interface (modern Linux kernels whose proc fs is mounted in the standard location?) and requires that the underlying kernel must itself be running in FIPS mode. This isn't always the case.

It seems a much easier approach to allow the user to pass in a flag requesting that the APT results actually do something.

[SP 800-90B] The built-in APT and RCT cutoffs do not correspond to the stated false positive rate

The stated desired false positive rate is 2^(-30).

For a presumed entropy lower bound of H=1/osr, this translates to C=1+30*osr for the RCT test. The counting in this library starts at 0 rather than 1, so the corresponding cutoff for this library is 30*osr, rather than 31*osr.

Similarly, for osr=1, the cutoff for the APT is indeed C=325, but again this library starts counting at 0 rather than 1, so the corresponding cutoff for this library should be 324. If we wanted the APT cutoff to scale as the RCT does, then the cutoffs for other osr values should either be calculated at run time or hardcoded into a table.

Some additional notes: The APT cutoff formula in 90B has a small problem, so this formula should actually be slightly corrected; this is detailed in SP 800-90B comment 10b here. This does not impact most of these values (though there is a difference of 1 for osr=3 and osr=5).

Also, note that the desired cutoff of the implemented APT test is different than described above (due to issue #46), but I don't see much point in calculating the correct cutoff value for the current (erroneous) implementation.

Add a failure callback on health failure in FIPS mode

In certain applications, it may be desirable to have an action taken on health failure in FIPS mode. A function to create that callback would be helpful. I have a potential commit that I will be pushing up shortly.

restart test analysis script hard-codes the initial entropy estimate to 0.333

The processdata.sh script in validation-restart hard-codes the initial entropy estimate in the execution of ea_restart. H_I should be added as a variable with instructions to update it from earlier results.

Actually, it's probably more complicated than this. Different values of H_I should be used for different bit widths, but a single variable could not handle this.

'_asm' undeclared (first use in this function)

Hello,
My environment is Windows 10, and I use arch/jitterentropy-base-x86-windows.h.
When I compile with MinGW, it generates this error:

           jitterentropy-base-user.h:56:2: error: '_asm' undeclared (first use in this function)
      _asm {
      ^~~~
    jitterentropy-base-user.h:56:2: note: each undeclared identifier is reported only once for each function it appears in
    jitterentropy-base-user.h:56:6: error: expected ';' before '{' token
      _asm {

[SP 800-90B] Start-up health tests performed on data that isn't produced by the noise source used everywhere else

This library uses an approximation for the raw source in jent_time_entropy_init, so it is not clear if the testing performed in this function can be treated as start-up health testing.

Further, because ec.mem is not initialized within jent_time_entropy_init, jent_memaccess never produced significant variation within the noise source while conducting the start-up health tests. This is a problem for two reasons:

  1. The lack of variation produced by jent_memaccess would lead to more start-up failures than should occur.
  2. As jent_memaccess is normally used in the noise source, this makes the functionality tested in jent_time_entropy_init distinct from the normal noise source, so it's not clear that the functionality in jent_time_entropy_init can be treated as a start-up health test for the purposes of SP 800-90B.

If the data tested within jent_time_entropy_init is not deemed start-up health testing, then the JEnt Entropy Source is not compliant with the health testing requirements of SP 800-90B (in particular, Section 4.3 Requirement 4).

Questions on seeding, value of similar utilities

Hi Stephan. I was wondering whether adding more entropy-gathering daemons (haveged, twuewand and timer_entropyd) that also use CPU jitter as their source actually increases the real system entropy? Does the unique algorithm used by each help?

We are considering twuewand to combat the problem of distros enabling trust of hardware CPU RNGs, which in some cases are broken and output repeating numbers, if not malicious outright. We are especially worried about the quality of the initial seed. Does jitterentropy help in this case?

cc/ @adrelanos

[SP 800-90B] The implemented RCT test isn't the described SP 800-90B test

Under the assumption that your raw data is the 64-bit timer delta value, your paper correctly describes the condition under which the RCT counter should be incremented (the second-order delta is 0). This code also increments the RCT counter when current_delta is 0 (that is, the timer did not increment between timer reads), or when the third-order delta is 0.

This makes the implemented RCT test a developer-defined health test (and not an instance of the approved health test, which is how your paper describes it). In validation, this health test must be shown to satisfy SP 800-90B Section 4.5 requirement (a) (which it likely does). This variation also makes the calculated false-reject rate (alpha) for this test incorrect.

[SP 800-90B] The variable number of invocations of the conditioner make it questionable if the conditioner is a vetted conditioner

The number of iterations of the SHA-3 primitive is not fixed; for the current code, this varies between 1 and 8 (inclusive), determined mostly pseudo-randomly, but with some non-deterministic component. Notably, by FIPS 140-2 IG 7.19 Resolution 7, this entire construction cannot be described as a single vetted conditioning component.

First, note that even if we fix the number of iterations at 1, we have to rely on IG 7.19 Resolution #6 to describe this construction as a vetted conditioning component (as both the loop count and the prior output are included as inputs to the function).

There are several possible ways of interpreting this functionality in the language of SP 800-90B and IG7.19, and none of them yield an unambiguous and apparently durable conclusion that this conditioning is vetted.

As a general principle, in SP 800-90B, “conditioning function” is deterministic (see SP 800-90B Sections 2.2.2 and 3.1.5). The count of hashes is dependent on a non-deterministic input from jent_loop_shuffle, and the decision as to how to describe this non-deterministic input effectively establishes how this is described in this framework.

Interpretation 1: lfsr_loop_cnt is used to choose one of a set of parameterized conditioning functions (one for each of 1 to 8 cycles of SHA-3 hashing). In this case, the 2-8 iteration functions would not be vetted conditioning functions (by IG 7.19 Resolution #7). Further, it is not clear that such a non-deterministic "selection from a family of conditioning functions" is actually allowed by SP 800-90B.

Interpretation 2: In SP 800-90B Resolution 3, there is no specific requirement that the conditioning chain length be fixed, so it is perhaps possible to treat this as a chain of 1-8 stages of a vetted conditioning function. It is not clear if this is allowed by SP 800-90B.

Interpretation 3: Any prior hashing (to the extent that it exists) is only generating "supplemental data" (as per IG 7.19 Resolution #6) for the last iteration of SHA-3. This yields a single vetted conditioning function, so I guess that it's the way to go, but I'm pretty sure that this wasn't one of the anticipated uses of the "supplemental data" in IG 7.19, so it isn't clear how durable this interpretation really is.

From the perspective of SP 800-90B, it would be dramatically easier to deal with this conditioning function if the number of iterations was fixed.

[SP 800-90B] The implemented APT test is not quite the SP 800-90B APT test

The flow of the implemented APT test is roughly:

  • The APT state is implicitly initialized when the rand_data structure is zeroed on allocation.
  • The first symbol input causes apt_base_set to be set, and sets apt_count and apt_observations to 0.
  • Once apt_base_set is set, further insertions increment apt_count if the new symbol equals the apt_base symbol, and apt_observations is incremented each time. AFTER apt_observations is incremented, the total number of symbols examined within the window is apt_observations+1 (counting the initial symbol, as directed in 90B). Once apt_observations is greater than or equal to 512, the APT is reset, and the current symbol is made the new reference symbol.

I think that there are two issues here:

  1. The base symbol isn't ever counted toward the window size. This isn't what 90B directs, and the resulting window size is actually (in 90B terms) 513, rather than the desired 512.
  2. For windows other than the first, the last value seen in the prior window establishes the base value in the current window. The 90B APT test consistently uses the first value of the window as the reference value, not the last value from the prior window.

From a technical perspective, I don't think that this really has much of an effect, as the test still basically works the same way. From a compliance perspective, this isn't the APT test described in 90B, so it has to instead be dealt with as a developer-defined health test.

I have a question about PDF Section 7.2.6 (Non-vetted Conditioning Components)

Hello,
I see the PDF Section 7.2.6 non-vetted conditioning component calculation process
uses these parameters: h_in=64, n_in=4096, nw=64, n_out=64.

But when I use the SP800-90B_EntropyAssessment tools, they print the result nan:
              yy@localhost cpp]$ ./ea_conditioning -v -n 4096 64  64 64 1
              n_in: 4096.000000
              n_out: 64.000000
              nw: 64.000000
              h_in: 64.000000
              h': 1.000000
              
              (Non-vetted) h_out: nan

jitterentropy-hashtime in 3.0.2

Hi Stephan,

Just wanted to confirm something. Starting with 3.0.2, if the flag JENT_CONF_DISABLE_LOOP_SHUFFLE is enabled (and it is by default), the raw output generated by the jitterentropy-hashtime test is no longer separated into a regular case and a lower-bound case. In both cases, the OSR is set to the minimum value, 3, and the loop count is set to 1. Is that correct?

If that's true, should the lower bound case force an OSR of 1 to replicate the pre-3.0.2 behavior?

Thank you,
Dragos

[SP 800-90B] The implemented APT test isn't the described SP 800-90B test

In jent_stuck, the APT test is provided with only the lower 4 bits of the raw data rather than the full raw timer delta sample. If the full raw sample is considered to be the full 64-bit timer delta, then this isn't the behavior described in SP 800-90B.

This makes the implemented APT test a developer-defined health test (and not an instance of the approved health test, which is how your paper describes it). In validation, this health test must be shown to satisfy SP 800-90B Section 4.5 requirement (b) (which it likely does). This variation also makes the calculated false-reject rate (alpha) for this test incorrect.

If the full raw sample is considered to be the low order nibble of the 64-bit timer delta, then the RCT is instead (additionally) non-compliant for this same reason.

GCD: analysis code misses first time delta?

jent_gcd_analyze seems to miss the analysis of delta_history[0] for the GCD calculation. Thus, shouldn't the following line be added before the for loop?

running_gcd = jent_gcd64(delta_history[0], running_gcd);

rng-tools + jitterentropy always hang if use the internal timer

On some boards with a coarse timer, the internal timer will be used instead. But this makes rngd hang (rngd -f -x hwrng -x rdrand). We can reproduce this issue on any board by forcing the use of the internal timer as below.

diff --git a/jitterentropy-base.c b/jitterentropy-base.c
index 6dbb484..07397cb 100644
--- a/jitterentropy-base.c
+++ b/jitterentropy-base.c
@@ -1387,7 +1387,8 @@ int jent_entropy_init(void)
        if (sha3_tester())
                return EHASH;
 
-       ret = jent_time_entropy_init(0);
+       //ret = jent_time_entropy_init(0);
+       ret = 1;
 
 #ifdef JENT_CONF_ENABLE_INTERNAL_TIMER
        jent_force_internal_timer = 0;
-- 
2.17.1

Restart test: why collect two columns of data?

Hello,
I have a question and need your help. I see that two time values (duration, duration_min) are collected in the function jent_one_test (jitterentropy_lfsrtime.c). Why use two timestamps? 90B describes (1000 * 1000) samples, and I understand we would only need to collect one time value per sample.

thank you

v3.0.0 and JENT_CONF_ENABLE_INTERNAL_TIMER

@smuellerDD : I tested v3.0.0 on two boxes that rely on jitterentropy-library as a rngd (rng-tools) source.

Question, should JENT_CONF_ENABLE_INTERNAL_TIMER be disabled if the kernel has CONFIG_HIGH_RES_TIMERS=y ?

Entropy rate test: rngd -t -x hwrng -x rdrand

CPU: Intel(R) Atom(TM) CPU D525 @ 1.80GHz
Linux pbx 4.19.160-astlinux #1 SMP PREEMPT Sat Nov 28 00:12:07 CST 2020 x86_64 GNU/Linux
Kernel Config: CONFIG_HIGH_RES_TIMERS=y

jitterentropy-library: 2.2.0
Entropy rate:  104.6 Kbits/sec averaged over 107 iterations for    155 seconds

jitterentropy-library: 3.0.0 (default w/JENT_CONF_ENABLE_INTERNAL_TIMER)
Entropy rate:  55.63 Kbits/sec averaged over 157 iterations for    185 seconds

jitterentropy-library: 3.0.0 (patched w/o JENT_CONF_ENABLE_INTERNAL_TIMER)
Entropy rate:  113.6 Kbits/sec averaged over 112 iterations for    130 seconds
CPU: AMD GX-412TC SOC @ 1.0GHz (PC Engines APU2)
Linux pbx4 4.19.160-astlinux #1 SMP PREEMPT Fri Nov 27 14:02:58 CST 2020 x86_64 GNU/Linux
Kernel Config: CONFIG_HIGH_RES_TIMERS=y

jitterentropy-library: 2.2.0
Entropy rate:  51.69 Kbits/sec averaged over 66 iterations for    102 seconds

jitterentropy-library: 3.0.0 (default w/JENT_CONF_ENABLE_INTERNAL_TIMER)
Entropy rate:  58.45 Kbits/sec averaged over 123 iterations for    142 seconds

jitterentropy-library: 3.0.0 (patched w/o JENT_CONF_ENABLE_INTERNAL_TIMER)
Entropy rate:  115.1 Kbits/sec averaged over 82 iterations for    103 seconds

Nice improvement for the APU2 box.

Very low raw entropy (runtime) results

Using jitterentropy-library 3.0.2 on x86_64 running Linux, I collected raw data with recording_userspace (unmodified) and got these results:

0F_4bits.single: min(H_original, 4 X H_bitstring): 0.085401
0F_4bits.var: min(H_original, 4 X H_bitstring): 0.056432
FF_8bits.single: min(H_original, 8 X H_bitstring): 0.085450
FF_8bits.var: min(H_original, 8 X H_bitstring): 0.056459

Is it just me, or do these results seem a bit low? What puzzles me is that the conditioned results (with ea_non_iid) look good:
h': 0.876988

Unsigned arithmetic in jent_delta

In the code:

static inline uint64_t jent_delta(uint64_t prev, uint64_t next)
{
	return (prev < next) ? (next - prev) : (UINT64_MAX - prev + 1 + next);
}

the two branches are equivalent.

In C, arithmetic on unsigned types is done modulo one plus the maximum value representable by that type. See the relevant ISO standard for detail, but here's some relevant text from ISO/IEC 9899:2011 Section 6.2.5:

A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type.

A public reference is here

Such language goes back to at least C89, so it seems likely that this behavior is consistently available.

Questions on raw entropy

Hi Stephan,

I have a question about the raw entropy tests described in section 7.1.1 (first dot point) of your paper. The continuous raw test and raw restart tests produce data in two columns that look like this:

20595 15070
59930 15085
...

As I understand it, the first column comes from 2⁰ to 2⁴ iterations of the LFSR, and the second is fixed at 2⁰ iterations of the LFSR; and these are the upper and lower boundaries that you describe in your paper.

I want to analyze this data using the SP800-90B EntropyAssessment tools.

Should I be analyzing these files with two columns as-is, or should I be splitting the upper and lower bounds and analyzing them separately?

Thanks so much for your help Stephan.
Regards,
Andrew

CPU Jitter - Windows Implementation

Hello - Is it possible to implement CPU Jitter in a Windows OS environment e.g. Windows 10? If not at present, are there any plans to implement in Windows in the near future? Rationale would be to provide a NIST SP 800-90B compliant entropy source option in the Windows environment.
Thanks!

Version 3.2.0 fails to compile on Alpine / musl

Hi.

Previous versions of jitterentropy-library compiled fine on Alpine Linux (I'm the maintainer for this package on Alpine) which uses musl as its C library (rather than glibc).

Unfortunately, when building jitterentropy-library 3.2.0 I get build failures, because _SC_LEVEL1_DCACHE_SIZE, _SC_LEVEL2_CACHE_SIZE, and _SC_LEVEL3_CACHE_SIZE are not currently defined in musl's header files:

gcc -Wextra -Wall -pedantic -fPIC -O0 -fwrapv -Wconversion -fstack-protector-strong -I. -Isrc   -c -o src/jitterentropy-timer.o src/jitterentropy-timer.c
In file included from ./jitterentropy.h:98,
                 from src/jitterentropy-base.c:32:
./jitterentropy-base-user.h: In function 'jent_cache_size_roundup':
In file included from ./jitterentropy.h:98,
                 from src/jitterentropy-noise.h:23,
                 from src/jitterentropy-noise.c:21:
./jitterentropy-base-user.h: In function 'jent_cache_size_roundup':
./jitterentropy-base-user.h:234:20: error: '_SC_LEVEL1_DCACHE_SIZE' undeclared (first use in this function)
  234 |  long l1 = sysconf(_SC_LEVEL1_DCACHE_SIZE);
      |                    ^~~~~~~~~~~~~~~~~~~~~~
./jitterentropy-base-user.h:234:20: error: '_SC_LEVEL1_DCACHE_SIZE' undeclared (first use in this function)
  234 |  long l1 = sysconf(_SC_LEVEL1_DCACHE_SIZE);
      |                    ^~~~~~~~~~~~~~~~~~~~~~
./jitterentropy-base-user.h:234:20: note: each undeclared identifier is reported only once for each function it appears in
./jitterentropy-base-user.h:234:20: note: each undeclared identifier is reported only once for each function it appears in
./jitterentropy-base-user.h:235:20: error: '_SC_LEVEL2_CACHE_SIZE' undeclared (first use in this function)
  235 |  long l2 = sysconf(_SC_LEVEL2_CACHE_SIZE);
      |                    ^~~~~~~~~~~~~~~~~~~~~
./jitterentropy-base-user.h:235:20: error: '_SC_LEVEL2_CACHE_SIZE' undeclared (first use in this function)
  235 |  long l2 = sysconf(_SC_LEVEL2_CACHE_SIZE);
      |                    ^~~~~~~~~~~~~~~~~~~~~
In file included from ./jitterentropy.h:98,
                 from src/jitterentropy-gcd.c:22:
./jitterentropy-base-user.h: In function 'jent_cache_size_roundup':
In file included from ./jitterentropy.h:98,
                 from src/jitterentropy-timer.h:23,
                 from src/jitterentropy-timer.c:22:
./jitterentropy-base-user.h: In function 'jent_cache_size_roundup':
./jitterentropy-base-user.h:234:20: error: '_SC_LEVEL1_DCACHE_SIZE' undeclared (first use in this function)
...
...
make: *** [<builtin>: src/jitterentropy-gcd.o] Error 1
make: *** [<builtin>: src/jitterentropy-timer.o] Error 1
make: *** [<builtin>: src/jitterentropy-health.o] Error 1
make: *** [<builtin>: src/jitterentropy-sha3.o] Error 1

I asked the musl author whether these definitions were left out purposely (as musl tends to avoid adding glibc-specific functionality and concentrates on POSIX compliance) and the feedback was:

i'm not sure how intentional this specific item is, but generally we avoid adding things that have little chance of being portable or meaningful on other systems
iirc it never came up before

So I'm wondering what my next step will be for jitterentropy-library on Alpine...

In `jent_read_entropy_safe` osr is not actually incremented

In jent_read_entropy_safe, the code jent_entropy_collector_alloc(osr++, flags) post-increments the osr value (that is, the value passed into the function is osr, not osr+1). As a consequence, this function never causes the osr value in the collector context to increase.

This can be seen by running the following program:

#include <stdio.h>
int main(void) {
        int x = 0;
        printf("x in call: %d\n", x++);
        printf("x after call: %d\n", x);
        return 0;
}

This results in the following output:

x in call: 0
x after call: 1

This should be fixed either by using a pre-increment (++osr) or (perhaps more clearly) by explicitly adding one, either when the variable is initialized (osr = (*ec)->osr + 1) or at the call site (jent_entropy_collector_alloc(osr + 1, flags)).

Building for Windows with mingw x86_64 issue

Compiling with the mingw64 compiler for x86_64 targets gives a warning:

src/jitterentropy-base-user.h:100:58: warning: left shift count >= width of type [-Wshift-count-overflow]
 # define EAX_EDX_VAL(val, low, high)     ((low) | (high) << 32)

This is easily fixed in jitterentropy-base-user.h, by using 'uint64_t' instead of 'unsigned long':

--- src/jitterentropy-base-user.h
+++ src/jitterentropy-base-user.h
@@ -93,12 +93,12 @@
 #include <mach/mach_time.h>
 #include <unistd.h>
 #endif
 
 #ifdef __x86_64__
-
-# define DECLARE_ARGS(val, low, high)    unsigned long low, high
+#include <stdint.h>
+# define DECLARE_ARGS(val, low, high)    uint64_t low, high
 # define EAX_EDX_VAL(val, low, high)     ((low) | (high) << 32)
 # define EAX_EDX_RET(val, low, high)     "=a" (low), "=d" (high)

The use of jent_loop_shuffle obscures the underlying distribution, and its contribution to entropy production isn't easy to assess.

In its normal mode of operation, this library uses jent_loop_shuffle to establish a count of additional loops of hash stages in jent_hash_time, or additional memory access loops in jent_memaccess. This value is established by the current timer value on invocation (Note: not a timer delta) XORed with part of the prior conditioned output.

The prior conditioned output cannot be credited as containing entropy in this use because it was already output (and its entropy was already accounted for). That said, it is the output of SHA-3, so it is pseudorandom. Similarly, the current time at the start of jent_loop_shuffle presumably has substantial mutual information with the delta value about to be generated (for the jent_memaccess use) and the one that was just generated (for the jent_hash_time use). As such, there is probably some information that is non-mutual in this new jent_get_nstime_internal result (and thus this parameter might have non-zero conditional entropy), but it probably isn't very much.

The eventual raw data (the delta values used within the library) is ultimately distributed according to a parameterized family of distributions (where the parameters are the number of iterations performed within jent_memaccess in the current collection round and the number of hash iterations performed in the prior round) rather than a single fixed distribution.

This is all somewhat complicated, and it's not clear how much actual additional entropy arises from the use of jent_loop_shuffle, but there's nothing really problematic about this from a theoretical perspective. That said, these constructions make accurate entropy assessment much more complicated and laborious. Anyone performing such an assessment is forced either to identify a worst-case distribution from the family (which I would generally expect to be the minimum number of loops in each case), or to test them all and take the minimum.

Notably, one cannot just take the delta values generated with the library and perform statistical assessment on them, because the impact of the pseudo-random value integrated from prior output completely obscures any entropy that you actually get from the current timer value, and this pseudo randomness artificially "spreads out" the output distribution without any actual entropy contribution.

As such, when generating H_submitter for a SP 800-90B assessment, you are forced to use the second column of output from your test code; the first column is not amenable to statistical assessment because the selection of which parameterized distribution is being sampled is dominated by an entropy-free but pseudorandom data value.

It would be dramatically easier to assess this source if both of these loops had a fixed number of iterations. In the event where more entropy is required, it seems more reasonable to increase these fixed loop counts.

1-core rng-tools jitterentropy always fails

Hi @smuellerDD , I have done some testing with our AstLinux project, cross-compiled to i586 and x86_64 targets. We are using your latest jitterentropy 9048af7 statically linked in rng-tools 6.4.

Given a few cross-compile fixes, shared with Neil, all is good.

One issue: on 1-core machines (bare metal and VMs), jitterentropy always exits. For example ...

  1. Testing on a Soekris 1-core net5501, 1-core vultr (hosted KVM/QEMU) and 1-core VMware Fusion VM, rngd always exits.
pbx ~ # rngd -f -d -x0 -x1 -x2
Disabling 0: Hardware RNG Device

Disabling 1: TPM RNG Device

Disabling 2: Intel RDRAND Instruction RNG


Initalizing available sources

Limiting thread count to 1 active cpus

JITTER starts 1 threads

Enabling JITTER rng support

JITTER skips thread on cpu 0

Reading entropy from JITTER Entropy generator

...previous 2 lines repeating 10's of times...

No entropy sources working, exiting rngd

JITTER thread on cpu 0 wakes up for refill
(rngd exits)
  2. Testing on a 2-core VMware Fusion VM, rngd keeps running, but /proc/sys/kernel/random/entropy_avail stays around 200.

  3. Testing on a 4-core VMware Fusion VM, rngd keeps running, and /proc/sys/kernel/random/entropy_avail quickly goes over 3000.

  4. Both my Lanner LEC-7220-N4 and PC Engines APU2 default to jitterentropy, each with 4 cores, and both work nicely with /proc/sys/kernel/random/entropy_avail over 3000.

Do the laws of physics say 1-core will not converge? Should 1-core devices try to use multiple jitterentropy threads?

Any insight is appreciated.

There is no check for `time_backwards` in `jent_time_entropy_init`

In the 3.0.2 to 3.1.0 restructuring, it looks like the "too many backwards running timer values" test was dropped (but the code still tracks this condition, so the removal seems accidental). I think that the following code (copied from v3.0.2) should be added somewhere near the end of jent_time_entropy_init:

        /*
         * we allow up to three times the time running backwards.
         * CLOCK_REALTIME is affected by adjtime and NTP operations. Thus,
         * if such an operation just happens to interfere with our test, it
         * should not fail. The value of 3 should cover the NTP case being
         * performed during our test run.
         */
        if (time_backwards > 3) {
                ret = ENOMONOTONIC;
                goto out;
        }

After updating to 41b25d4dd0f6f5d685f3beb9958a3be7b9e62df0, jent_entropy_init() causes issues

I still don't understand what exactly the problem is, but after updating to the latest, calling jent_entropy_init() in my code causes my hash-table function to not work correctly (some items are no longer found).

If I don't call jent_entropy_init() it's all fine, which makes no sense to me.

I use the jitter functions to add entropy to my RNG, so it's desirable but not critical. The changes causing my problems must have been introduced after commit 7db2e41.

The test failure cutoff in jent_time_entropy_init isn't the intended cutoff

Exchanging the order of integer multiplication and integer division can produce different results in C, so the cutoff in the code
JENT_POWERUP_TESTLOOPCOUNT/10 * 9 (which evaluates to 918 in this codebase) is not the same as (JENT_POWERUP_TESTLOOPCOUNT*9)/10 (which would evaluate to 921 in this codebase). The desired cutoff appears to be 90% of 1024, which is 921.6; this is closer to the second of these two.

On a related editorial note, the macro JENT_STUCK_INIT_THRES doesn't appear to be used anywhere.

Strange alignment of struct sha_ctx

I noticed the following piece of code.

#define ALIGNED_BUFFER(name, size, type) \
type name[(size + sizeof(type)-1) / sizeof(type)] aligned(sizeof(type));

I am currently running an older version of the library on an ARM with armv5te support.
The code is compiled with optimization enabled. This results in an ldrd (load double word) instruction, which triggers an alignment warning from the kernel; the unaligned access is then repaired. However, it seems strange to me that the structure is created by first allocating an array of elements and then aliasing a structure pointer on top of it, as performed by the macro shown below. I am aware that this construction may or may not fix the problem on the ARM CPU.

#define SHA_MAX_CTX_SIZE 368
#define HASH_CTX_ON_STACK(name) \
ALIGNED_BUFFER(name ## _ctx_buf, SHA_MAX_CTX_SIZE, uint64_t) \
struct sha_ctx *name = (struct sha_ctx *) name ## _ctx_buf

The code only inserts one extra assignment for the pointer aliasing. I checked this by using https://godbolt.org/, which allows selecting different compiler versions and target architectures.

struct sha_ctx {
uint64_t state[25];
size_t msg_len;
unsigned int r;
unsigned int rword;
unsigned int digestsize;
uint8_t partial[SHA3_MAX_SIZE_BLOCK];
};

It is not clear to me why the struct is not created directly on the stack:
struct sha_ctx my_ctx;

This change could be checked by just changing the macro a little bit.
#define HASH_CTX_ON_STACK(name) \
struct sha_ctx name

This should work on all modern compilers, including every gcc from version 8 and up (at least the compiler versions I had a look at). The compiler ensures that the struct is correctly aligned on the stack, and optimization will no longer be broken. (Edit: I checked, the code is compiled with -O2.)
I am probably missing some very important reason why this code is the way it is, but I am very puzzled by this construction.

Sorry for the inconvenience

Johan

High CPU usage on i.MX 6DL

When using kernel 5.10.72 I see high CPU usage from rngd (150 - 194 %) when jitter is enabled. Disabling jitter support removes the high CPU usage, which otherwise lasts for 5 minutes during boot.

Using latest version of the library.

Does not compile for windows

Previous versions did compile using mingw64 (TDM version).

Now, line 216 of the base-user.h file fails to compile because of the sysconf() call.

Allow using alternate 'malloc/free'

I am using an alternative memory allocator, and on Windows I can't reliably override malloc/free.

Can you please use #ifdef JITTER_MALLOC, etc to permit use of an alternate malloc/free?

[SP 800-90B] It is not easy to access raw data.

The raw data in this library would seem to be the current_delta value calculated within the jent_measure_jitter function.

A tester can get analogous data using the helpfully provided jitterentropy-hashtime program in the test code, but this isn't actually the raw data for the final module. Instead, the jitterentropy-hashtime provides a sort of simplification of the actual noise source. I view this as a helpful tool for extracting data that can ultimately be used to support some particular H_submitter claim, but it seems dubious that CMVP/NIST would view this as a reasonable way to extract raw data from the actual entropy source. (As an aside, I know of a CST laboratory that has explicitly decided that the output of the jitterentropy-hashtime program cannot be described as raw data).

As such, it would be useful to have some sort of system in place that provides access to this data in at least some situations. Importantly, using this access method can't change the behavior of the raw data.

There is significant correlation between the timing of `jent_memaccess` updates

The existing pattern used in the jent_memaccess memory update induces timing correlations that make things more complicated to assess. I implemented an alternate approach where the updated address is (statistically) randomly chosen. This approach has several nice theoretical properties which are described in my presentation. I also performed a bunch of comparative testing, the results of which are also in my presentation.

Here are the slides (this week's content starts on slide 46) and here is a recording of the presentation (my talk starts around 9:34).

I'm on vacation for the rest of the week, which, among other things, means that I don't have access to my normal development workstation with easy GitHub access, but I did put together a quick proof-of-concept set of changes that you could review; it is here. This version does not instrument the individual updates, but I have another version that does, if you'd like to play with that as well.

I'll put together a proper pull request when I get back home, but this may be enough for you to understand my proposal.

Let me know if you have any questions or comments!

Thanks!

[SP 800-90B] Consider increasing the number of iterations in jent_random_data to support a full entropy claim

The ambient assumption in the library seems to be that the lower bound for the per-delta min entropy is 1/osr, and it appears the intent is to have full-entropy output from the entropy source.

To make a full entropy claim for a m-bit output from a vetted conditioning function, the current draft SP 800-90C requires inputting m+64 bits of min entropy. In this code base, I think that would be accomplished by changing jent_random_data so that the second if statement reads:

#define ENTROPY_SAFETY_FACTOR 64
                if (++k >= ((DATA_SIZE_BITS+ENTROPY_SAFETY_FACTOR) * ec->osr))
                        break;

request to tag a new release

Hey-
I'd like to package jitterentropy as a library for fedora, but there are several fixes I need since the v2.1.1 release. Would it be possible to tag a v2.2.2 release with those fixes included so that I don't have to resort to snapshot package naming?
