Is /dev/random considered truly random?

Is /dev/random considered truly random?

The only thing in this universe that can be considered truly random is something based on quantum effects. A common example is radioactive decay: for a given isotope you can be sure only of the half-life, but you cannot know which nucleus will decay next.

As for /dev/random, it depends on the implementation. On Linux it draws on several entropy sources:

The Linux kernel generates entropy from keyboard timings, mouse movements, and IDE timings and makes the random character data available to other operating system processes through the special files /dev/random and /dev/urandom.

Wiki

This means it is better than purely algorithmic random generators, but it is not perfect either: the entropy may not be uniformly distributed and can be biased.

That was the philosophy. In practice, /dev/random on Linux is random enough for the vast majority of tasks.

There are random-generator implementations that use more entropy sources, including noise on audio inputs, CPU temperature sensors, etc. Even so, they are not truly random.

There is an interesting site where you can get genuine random numbers generated by radioactive decay.

differences between random and urandom

Using /dev/random may require waiting for the result, as it draws from a so-called entropy pool where random data may not be available at the moment.

/dev/urandom returns as many bytes as the user requests without waiting, and is therefore considered less random than /dev/random.

As can be read from the man page:

random

When read, the /dev/random device will only return random bytes within
the estimated number of bits of noise in the entropy pool. /dev/random
should be suitable for uses that need very high quality randomness
such as one-time pad or key generation. When the entropy pool is
empty, reads from /dev/random will block until additional
environmental noise is gathered.

urandom

A read from the /dev/urandom device will not block waiting for more
entropy. As a result, if there is not sufficient entropy in the
entropy pool, the returned values are theoretically vulnerable to a
cryptographic attack on the algorithms used by the driver. Knowledge
of how to do this is not available in the current unclassified
literature, but it is theoretically possible that such an attack may
exist. If this is a concern in your application, use /dev/random
instead.

For cryptographic purposes you should really use /dev/random because of the nature of the data it returns. The possible waiting should be considered an acceptable tradeoff for the sake of security, IMO.

When you need random data fast, you should of course use /dev/urandom.

Source: Wikipedia page, man page

How random is urandom?

Note 4.5 years later: this is bad advice. See one of these links for details.

If you're generating cryptographic keys on Linux, you want /dev/random, even if it blocks -- you don't need that many bits.

For just about anything else, like generating random test data or unpredictable session IDs, /dev/urandom is fine. There are enough sources of entropy in most systems (timing of keyboard and mouse events, network packets, etc) that the output will be unpredictable.

/dev/random returning always the same sequence

From the man page for read:

Upon successful completion, read(), readv(), and pread() return the number of bytes actually read and placed in the buffer. The system guarantees to read the number of bytes requested if the descriptor references a normal file that has that many bytes left before the end-of-file, but in no other case.

Bottom line: check the return value from read and see how many bytes you actually read - there may not have been enough entropy to generate the number of bytes you requested.

ssize_t len = read(fd, buf, LEN);      /* may return fewer than LEN bytes */
printf("read() returned %zd bytes: ", len);
if (len > 0)
{
    uc2hex(str, buf, len);             /* helper that hex-encodes buf into str */
    printf("%s\n", str);
}

Test:

$ ./a.out 
read() returned 16 bytes: c3d5f6a8ee11ddc16f00a0dea4ef237a
$ ./a.out
read() returned 8 bytes: 24e23c57852a36bb
$ ./a.out
read() returned 16 bytes: 4ead04d1eedb54ee99ab1b25a41e735b
$

Did I understand /dev/urandom?

From the urandom manpage:

The random number generator gathers environmental noise from device drivers and other sources into an entropy pool. The generator also keeps an estimate of the number of bits of noise in the entropy pool. From this entropy pool random numbers are created.

When read, the /dev/random device will only return random bytes within the estimated number of bits of noise in the entropy pool. /dev/random should be suitable for uses that need very high quality randomness such as one-time pad or key generation. When the entropy pool is empty, reads from /dev/random will block until additional environmental noise is gathered.

A read from the /dev/urandom device will not block waiting for more entropy. As a result, if there is not sufficient entropy in the entropy pool, the returned values are theoretically vulnerable to a cryptographic attack on the algorithms used by the driver. Knowledge of how to do this is not available in the current unclassified literature, but it is theoretically possible that such an attack may exist. If this is a concern in your application, use /dev/random instead.

Both use a PRNG, though the environmental data and the entropy pool make the PRNG astronomically more difficult to crack, and impossible to crack without also gathering the exact same environmental data.

As a rule of thumb, without specialized (and expensive) hardware that gathers data from, say, quantum events, there is no such thing as a true random number generator (i.e. an RNG that generates truly unpredictable numbers); for cryptographic purposes, though, /dev/random or /dev/urandom will suffice (the method used is that of a CSPRNG, a cryptographically secure pseudo-random number generator).

The entropy pool and the blocking reads of /dev/random are used as a safeguard to ensure that the random numbers cannot be predicted. If, for example, an attacker exhausted the entropy pool of a system, it is possible, though highly unlikely with today's technology, that he could predict the output of a /dev/urandom that had not been reseeded for a long time (though doing that would also require the attacker to exhaust the system's ability to collect more entropy, which is also astronomically improbable).

Is random real in programming?

Various platforms provide a pseudo random number generator, which is:

an algorithm for generating a sequence of numbers whose properties approximate the properties of sequences of random numbers.

A lot has already been written here and on other sites about why generating a truly random sequence of numbers is hard for machines; see for example Is /dev/random considered truly random?, How can I generate truly (not pseudo) random numbers with C#?, and so on.

From Can a computer generate a truly random number? | MIT School of Engineering:

“One thing that traditional computer systems aren’t good at is coin flipping” [...]

There are devices that generate numbers that claim to be truly random. They rely on unpredictable processes like thermal or atmospheric noise rather than human-defined patterns.

/dev/random Extremely Slow?

On most Linux systems, /dev/random is powered from actual entropy gathered by the environment. If your system isn't delivering a large amount of data from /dev/random, it likely means that you're not generating enough environmental randomness to power it.

I'm not sure why you think /dev/urandom is "slower" or higher quality. It reuses an internal entropy pool to generate pseudorandomness - making it slightly lower quality - but it doesn't block. Generally, applications that don't require high-level or long-term cryptography can use /dev/urandom reliably.

Try waiting a little while and then reading from /dev/urandom again. It's possible that you've exhausted the internal entropy pool by reading so much from /dev/random, starving both generators; letting your system gather more entropy should replenish them.

See Wikipedia for more info about /dev/random and /dev/urandom.

Is `/dev/urandom` suitable for simulation purpose?

The underlying implementation of /dev/urandom is a CSPRNG, whose output pool has a maximal period of less than 2^(26∗32) − 1 and is fed through SHA-1 to produce the output of /dev/urandom. As such, urandom can obviously produce as many random numbers as you want; however, it cannot offer you reproducible results, so you will have to cache the sequence you get yourself.

You do not have to worry about what happens when the entropy pool is estimated to be depleted; /dev/urandom will output whatever you request of it. The "theoretical attacks" the urandom(4) man page speaks of are nonexistent (the "issue" is a huge misunderstanding of what "entropy estimation" is).

Many other PRNGs with large periods exist that offer reproducible seeding: the Mersenne Twister in C++, xorshift PRNGs, etc. You should be able to adapt any PRNG to the distribution that is suitable for your purposes.


