/dev/random Extremely Slow?

On most Linux systems, /dev/random is fed by actual entropy gathered from the environment. If /dev/random isn't delivering much data, it likely means your system isn't generating enough environmental randomness to keep it supplied.

I'm not sure why you'd consider /dev/urandom slower or of higher quality. It reuses the internal entropy pool to generate pseudorandomness - making its output theoretically lower quality - but it never blocks. In general, applications that don't require high-grade or long-term cryptography can rely on /dev/urandom.

Try waiting a little while and then reading from /dev/urandom again. It's possible that heavy reads from /dev/random have exhausted the internal entropy pool, starving both generators - giving your system time to gather more entropy should replenish them.

See Wikipedia for more info about /dev/random and /dev/urandom.

Slow performance using /dev/random in docker desktop WSL2

Before applying any of these solutions, check whether a lack of entropy is really your problem. To do that, execute these commands (on your Docker host and in your container):

cat /proc/sys/kernel/random/entropy_avail

It should return a number greater than 1000 ...

dd if=/dev/random of=/dev/null bs=1024 count=1 iflag=fullblock

It should return fast! (Sources: haveged and rng-tools)

Solutions:

For Windows users (those of you running Docker Desktop for Windows):

  1. Keep using the WSL1 engine with Docker Desktop.
  2. If the previous solution is not possible, execute this:

docker pull harbur/haveged

docker run --privileged -d harbur/haveged

Explanation: This runs a Docker container that executes the haveged daemon as its CMD. That process, combined with the --privileged flag, feeds your host's /dev/random with entropy, avoiding blocking issues.

For Linux users (those running Linux as docker host):

  1. Map your host's /dev/urandom as a volume/mount point onto your container's /dev/random. This tricks your container: when it uses /dev/random, it is actually reading your host's /dev/urandom, which by design never blocks. People may argue that this is insecure, but that is outside the scope of this question.

  2. Install on your Docker host software that feeds the entropy pool, such as haveged or rng-tools (if you have a hardware TRNG).
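The mapping described in option 1 can be done with a plain bind mount; a minimal sketch, where myimage is a placeholder for your actual image:

```shell
# Bind-mount the host's non-blocking /dev/urandom over the container's
# /dev/random, so reads of /dev/random inside the container never block.
# "myimage" is a placeholder, not a real image name.
docker run -v /dev/urandom:/dev/random myimage
```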

Final thoughts and conclusions:

  1. /dev/random and /dev/urandom in a Docker container point to /dev/random and /dev/urandom of the Docker host. I don't have any documentation that backs this up, except these: Missing Entropy and How docker handles /dev/(u)random request ... plus the experimental fact that if I access the WSL2 docker-desktop distro (using wsl -d docker-desktop) and execute the dd command described above, I can see the entropy drop both on the host and in the container (and vice versa). This is why solutions like deploying the haveged container, or installing haveged on the Docker host, work.

  2. According to the haveged link, that software is deprecated because its logic was merged into the Linux kernel in v5.6. This means that if your Docker host runs a Linux kernel at version 5.6 or later, you shouldn't need any of this, because /dev/random will no longer block.

  3. I tried to install haveged in the WSL2 Docker distro (docker-desktop), but that distro does not let you run apt-get ...

Slow service response Times : Java SecureRandom & /dev/random

I could not check your exact OpenJDK version, but I could check jdk6-b33.

SecureRandom uses SeedGenerator to get the seed bytes

public byte[] engineGenerateSeed(int numBytes) {
    byte[] b = new byte[numBytes];
    SeedGenerator.generateSeed(b);
    return b;
}

SeedGenerator gets the seedSource (String) from SunEntries

String egdSource = SunEntries.getSeedSource();

SunEntries tries to get the source from the system property java.security.egd first; if it is not set, it tries the securerandom.source property from the java.security properties file; if that is not found either, it returns an empty string.

// name of the *System* property, takes precedence over PROP_RNDSOURCE
private final static String PROP_EGD = "java.security.egd";
// name of the *Security* property
private final static String PROP_RNDSOURCE = "securerandom.source";

final static String URL_DEV_RANDOM = "file:/dev/random";
final static String URL_DEV_URANDOM = "file:/dev/urandom";

private static final String seedSource;

static {
    seedSource = AccessController.doPrivileged(
            new PrivilegedAction<String>() {

        public String run() {
            String egdSource = System.getProperty(PROP_EGD, "");
            if (egdSource.length() != 0) {
                return egdSource;
            }
            egdSource = Security.getProperty(PROP_RNDSOURCE);
            if (egdSource == null) {
                return "";
            }
            return egdSource;
        }
    });
}

The SeedGenerator checks this value to initialize the instance:

// Static instance is created at link time
private static SeedGenerator instance;

private static final Debug debug = Debug.getInstance("provider");

final static String URL_DEV_RANDOM = SunEntries.URL_DEV_RANDOM;
final static String URL_DEV_URANDOM = SunEntries.URL_DEV_URANDOM;

// Static initializer to hook in selected or best performing generator
static {
    String egdSource = SunEntries.getSeedSource();

    // Try the URL specifying the source
    // e.g. file:/dev/random
    //
    // The URL file:/dev/random or file:/dev/urandom is used to indicate
    // the SeedGenerator using OS support, if available.
    // On Windows, the causes MS CryptoAPI to be used.
    // On Solaris and Linux, this is the identical to using
    // URLSeedGenerator to read from /dev/random

    if (egdSource.equals(URL_DEV_RANDOM) || egdSource.equals(URL_DEV_URANDOM)) {
        try {
            instance = new NativeSeedGenerator();
            if (debug != null) {
                debug.println("Using operating system seed generator");
            }
        } catch (IOException e) {
            if (debug != null) {
                debug.println("Failed to use operating system seed "
                    + "generator: " + e.toString());
            }
        }
    } else if (egdSource.length() != 0) {
        try {
            instance = new URLSeedGenerator(egdSource);
            if (debug != null) {
                debug.println("Using URL seed generator reading from "
                    + egdSource);
            }
        } catch (IOException e) {
            if (debug != null)
                debug.println("Failed to create seed generator with "
                    + egdSource + ": " + e.toString());
        }
    }

    // Fall back to ThreadedSeedGenerator
    if (instance == null) {
        if (debug != null) {
            debug.println("Using default threaded seed generator");
        }
        instance = new ThreadedSeedGenerator();
    }
}

If the source is

final static String URL_DEV_RANDOM = "file:/dev/random";

or

final static String URL_DEV_URANDOM = "file:/dev/urandom";

it uses the NativeSeedGenerator. On Windows this tries to use the native CryptoAPI; on Linux the class simply extends SeedGenerator.URLSeedGenerator:

package sun.security.provider;

import java.io.IOException;

/**
 * Native seed generator for Unix systems. Inherit everything from
 * URLSeedGenerator.
 *
 */
class NativeSeedGenerator extends SeedGenerator.URLSeedGenerator {

    NativeSeedGenerator() throws IOException {
        super();
    }

}

and calls the superclass constructor, which loads /dev/random by default:

URLSeedGenerator() throws IOException {
    this(SeedGenerator.URL_DEV_RANDOM);
}

So, OpenJDK uses /dev/random by default unless you set another value in the system property java.security.egd or in the securerandom.source property of the security properties file.
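If blocking reads from /dev/random are the cause of your delays, the usual workaround is to point that property at the non-blocking device; a sketch, where MyApp is a placeholder for your main class:

```shell
# Point the JDK's seed source at /dev/urandom instead of /dev/random.
# The extra "/./" matters on some JDK versions, where the literal value
# file:/dev/urandom is special-cased and still reads /dev/random.
# "MyApp" is a placeholder, not a real class.
java -Djava.security.egd=file:/dev/./urandom MyApp
```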

If you want to see the reads using strace, you can change the command line and add the trace=open,read expression:

sudo strace -o a.strace -f -e trace=open,read java class

Then you can see something like this (I did the test with Oracle JDK 6):

13225 open("/dev/random", O_RDONLY)     = 8
13225 read(8, "@", 1) = 1
13225 read(3, "PK\3\4\n\0\0\0\0\0RyzB\36\320\267\325u\4\0\0u\4\0\0 \0\0\0", 30) = 30
....
....

The Tomcat wiki section on faster startup suggests using a non-blocking entropy source like /dev/urandom if you are experiencing delays during startup.

More info: https://wiki.apache.org/tomcat/HowTo/FasterStartUp#Entropy_Source

Hope this helps.

Why is generating a higher amount of random data much slower?


Why is this?

Expanding {1..99999999} into nearly 100 million arguments and then parsing them requires a lot of memory allocation from bash. This significantly stalls the whole system.

Additionally, large chunks of data are read from /dev/urandom, and about 96% of that data is discarded by tr -dc '0-9'. This significantly depletes the entropy pool and further stalls the whole system.

Is the data buffered somewhere?

Each process has its own buffer, so:

  • cat /dev/urandom is buffering
  • tr -dc '0-9' is buffering
  • fold -w 5 is buffering
  • head -n 1 is buffering
  • the left side of the pipeline - the shell - has its own buffer
  • and the right side - | cat - has its own buffer

That's 6 buffering places. Even ignoring input buffering from head -n1 and from the right side of the pipeline | cat, that's 4 output buffers.

Also, save animals and stop cat abuse: use tr </dev/urandom instead of cat /dev/urandom | tr. Fun fact - tr can't take a filename as an argument.

Is there a way to optimize this, so that the random numbers are piped/streamed into cat immediately?

Remove the whole code.

Take only as few bytes from the random source as you need. To generate a 32-bit number you only need 32 bits - no more. To generate a 5-digit number you only need 17 bits - rounded up to 8-bit bytes, that's only 3 bytes. The tr -dc '0-9' trick is cute, but it definitely shouldn't be used in any real code.

I recently answered a similar question; copying the code from there, you could:

for ((i=0;i<100000000;++i)); do echo "$((0x$(dd if=/dev/urandom of=/dev/stdout bs=4 count=1 status=none | xxd -p)))"; done | cut -c-5
# cut to take first 5 digits

But that still would be unacceptably slow, as it runs 2 processes for each random number (and I think just taking the first 5 digits will have a bad distribution).

I suggest using $RANDOM, available in bash. Failing that, use $SRANDOM if you really want /dev/urandom (and really know why you want it). Otherwise, write the random number generation from /dev/urandom in a real programming language, like C, C++, Python, Perl, or Ruby. I believe one could even write it in awk.
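For instance, a 5-digit number can be built from two $RANDOM draws without touching /dev/urandom at all; a bash sketch (the modulo step introduces a tiny bias, fine for non-cryptographic use):

```shell
# $RANDOM yields 15 bits per draw; combine two draws into 30 bits,
# then reduce mod 100000 to get a value in 00000..99999.
# Slight modulo bias - do not use for cryptographic purposes.
n=$(( ( (RANDOM << 15) | RANDOM ) % 100000 ))
printf '%05d\n' "$n"
```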

The following looks nice, but converting binary data to hex just to convert it to decimal later is a workaround for the fact that the shell simply can't work with binary data:

count=10;
# take count*4 bytes from input
dd if=/dev/urandom of=/dev/stdout bs=4 count=$count status=none |
# Convert bytes to hex 4 bytes at a time
xxd -p -c 4 |
# Convert hex to decimal using GNU awk
awk --non-decimal-data '{printf "%d\n", "0x"$0}'

How to deal with a slow SecureRandom generator?

If you want true random data, then unfortunately you have to wait for it. This includes the seed for a SecureRandom PRNG. Uncommons Maths can't gather true random data any faster than SecureRandom, although it can connect to the internet to download seed data from a particular website. My guess is that this is unlikely to be faster than /dev/random where that's available.

If you want a PRNG, do something like this:

SecureRandom.getInstance("SHA1PRNG");

What strings are supported depends on the SecureRandom SPI provider, but you can enumerate them using Security.getProviders() and Provider.getService().

Sun is fond of SHA1PRNG, so it's widely available. It isn't especially fast as PRNGs go, but PRNGs will just be crunching numbers, not blocking for physical measurement of entropy.

The exception is that if you don't call setSeed() before getting data, the PRNG will seed itself the first time you call next() or nextBytes(). It will usually do this using a fairly small amount of true random data from the system. This call may block, but it makes your source of random numbers far more secure than any variant of "hash the current time together with the PID, add 27, and hope for the best". If all you need is random numbers for a game, though, or if you want the stream to be repeatable using the same seed for testing purposes, an insecure seed is still useful.

Change the speed of /dev/urandom

You can use pv to (among other things) rate limit data from a pipe:

grep -ao "[01]" /dev/urandom | tr -d \\n | pv -q -L 100

Here I use -L as in "limit" to get an output rate of at most 100 bytes per second. The -q is there to suppress debugging output from pv. For 1000 bytes per second you could use -L 1000 or -L 1k instead.

Why does reading /dev/random byte by byte block so often?

The answer lies in your question:

49 bytes (49 B) copied, 0.000134028 s, 366 kB/s

So it didn't copy the 1024 bytes it was told to, but only a few, and then stopped. I guess this is the same amount you would have gotten in the loop before it blocked.

/dev/random is slow because it needs to collect randomness from different sources, and as long as none is available, it doesn't output anything.

Use /dev/urandom if you need faster numbers.
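To see the difference yourself, compare bulk reads from the two devices; a quick sketch (on kernels 5.6 and later, both should be fast):

```shell
# /dev/urandom never blocks: this completes almost instantly.
dd if=/dev/urandom of=/dev/null bs=1024 count=1 iflag=fullblock
# /dev/random may stall on older kernels while entropy is gathered.
dd if=/dev/random of=/dev/null bs=1024 count=1 iflag=fullblock
```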


