Why Is the Use of Rand() Considered Bad

Why is the use of rand() considered bad?

There are two parts to this story.

First, rand is a pseudorandom number generator. This means it depends on a seed. For a given seed it will always produce the same sequence (assuming the same implementation). This makes it unsuitable for applications where security is a serious concern. But this is not specific to rand; it's an issue with any pseudorandom generator. And there are certainly many classes of problems where a pseudorandom generator is acceptable. A true random generator has its own issues (efficiency, implementation, entropy), so for problems that are not security related a pseudorandom generator is most often used.

So you analyzed your problem and you conclude a pseudorandom generator is the solution. And here we arrive at the real troubles with the C random library (which includes rand and srand) that are specific to it and make it obsolete (in other words: the reasons you should never use rand and the C random library).

  • One issue is that it has a global state (set by srand). This makes it impossible to use multiple random engines at the same time. It also greatly complicates multithreaded tasks.

  • The most visible problem is that it lacks a distribution engine: rand gives you a number in the interval [0, RAND_MAX]. It is uniform in this interval, which means that each number in this interval has the same probability of appearing. But most often you need a random number in a specific interval. Let's say [0, 1017]. A common (and naive) formula is rand() % 1018. But the issue with this is that unless RAND_MAX + 1 is an exact multiple of 1018 you won't get a uniform distribution.

  • Another issue is the Quality of Implementation of rand. There are other answers here detailing this better than I could, so please read them.

In modern C++ you should definitely use the C++ library from <random>, which comes with multiple well-defined random engines and various distributions for integer and floating-point types.
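
For illustration, here is a minimal sketch of what that looks like (the names are mine; any engine/distribution pair from <random> would do):

#include <iostream>
#include <random>

int main() {
    // each engine is an ordinary object with its own state, so several
    // independent generators can coexist (impossible with rand/srand)
    std::mt19937 eng{std::random_device{}()};  // nondeterministically seeded
    std::mt19937 repro{42};                    // fixed seed, reproducible

    // the distribution maps raw engine output to [0, 1017] without modulo bias
    std::uniform_int_distribution<int> dist{0, 1017};

    std::cout << dist(eng) << ' ' << dist(repro) << '\n';
}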

Random integers in C, how bad is rand()%N compared to integer arithmetic? What are its flaws?

Due to the way modulo arithmetic works, if N is significant compared to RAND_MAX, doing % N will make you considerably more likely to get some values than others. Imagine rand() returns one of 12 equally likely values, 0 through 11, and N is 9. Then 9, 10 and 11 map to 0, 1 and 2, so the chance of getting one of 0, 1 or 2 is 0.5, and the chance of getting one of 3, 4, 5, 6, 7, 8 is also 0.5. The result is that you're twice as likely to get a 0 as a 4.

If N is an exact divisor of the number of values rand() can return, this distribution problem doesn't happen, and if N is very small compared to RAND_MAX the issue becomes less noticeable. RAND_MAX may not be a particularly large value (possibly 2^15 - 1), making this problem worse than you might expect.

The alternative of doing (rand() * N) / (RAND_MAX + 1) also doesn't give an even distribution; however, it will be every mth value (for some m) that is more likely to occur, rather than the more likely values all sitting at the low end of the distribution.

If N is 75% of RAND_MAX, then the values in the bottom third of your distribution are twice as likely as the values in the top two thirds (as this is where the extra values map to).
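
A small sketch that makes the skew concrete; it exhaustively enumerates the 12-value example above and counts how often each output occurs:

#include <iostream>

int main() {
    const int values = 12;  // pretend the generator yields 0..11, each equally likely
    const int N = 9;
    int count[N] = {};

    for (int v = 0; v < values; ++v)
        ++count[v % N];     // 9, 10 and 11 wrap around to 0, 1 and 2

    for (int i = 0; i < N; ++i)
        std::cout << i << " occurs " << count[i] << " time(s)\n";  // 2 for 0..2, 1 for 3..8
}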

The quality of rand() will depend on the implementation of the system that you're on. I believe that some systems have had very poor implementations; OS X's man pages declare rand obsolete. The Debian man page says the following:

The versions of rand() and srand() in the Linux C Library use the same random number generator as random(3) and srandom(3), so the lower-order bits should be as random as the higher-order bits. However, on older rand() implementations, and on current implementations on different systems, the lower-order bits are much less random than the higher-order bits. Do not use this function in applications intended to be portable when good randomness is needed. (Use random(3) instead.)

Is it acceptable to use rand() for cryptographically insecure random numbers?

Yes, it is fine to use rand() to get pseudo-random numbers. In fact, that is the whole point of rand(). For simple tasks, where it is OK to be deterministic, you can even seed it with the system clock for simplicity.
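
A minimal sketch of that pattern (with the usual caveat, discussed elsewhere on this page, that time(NULL) is a weak seed):

#include <cstdio>
#include <cstdlib>
#include <ctime>

int main() {
    std::srand(static_cast<unsigned>(std::time(nullptr)));  // seed once with the clock
    for (int i = 0; i < 3; ++i)
        std::printf("%d\n", std::rand());
}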

Can rand() really be this bad?

Have you tried arc4random()? It provides much stronger and more uniform random values than rand().

You could also try SecRandomCopyBytes(), which is part of Security.framework. This basically just reads from /dev/random.
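
Both are BSD/macOS APIs, so the sketch below is not portable; arc4random_uniform is particularly handy because it returns a value in [0, n) and handles the modulo-bias problem internally:

#include <cstdint>
#include <cstdio>
#include <cstdlib>  // declares arc4random/arc4random_uniform on BSD and macOS

int main() {
    std::uint32_t raw = arc4random();               // full 32-bit random value
    std::uint32_t die = arc4random_uniform(6) + 1;  // uniform in 1..6, no % bias
    std::printf("%u %u\n", raw, die);
}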

Why is the new random library better than std::rand()?

Pretty much any implementation of the "old" rand() uses an LCG; while LCGs are generally not the best generators around, you are usually not going to see them fail on such a basic test: even the worst PRNGs generally get the mean and standard deviation right.

Common failings of "bad" (but common enough) rand() implementations are:

  • low randomness of low-order bits;
  • short period;
  • low RAND_MAX;
  • some correlation between successive extractions (in general, LCGs produce numbers that lie on a limited number of hyperplanes, although this can be somewhat mitigated).

Still, none of these are specific to the API of rand(). A particular implementation could place a xorshift-family generator behind srand/rand and, algorithmically speaking, obtain a state-of-the-art PRNG with no change of interface, so no test like the one you did would show any weakness in the output.

Edit: @R. correctly notes that the rand/srand interface is limited by the fact that srand takes an unsigned int, so any generator an implementation may put behind them is intrinsically limited to UINT_MAX possible starting seeds (and thus generated sequences). This is true indeed, although the API could be trivially extended by making srand take an unsigned long long, or by adding a separate srand(unsigned char *, size_t) overload.


Indeed, the actual problem with rand() is not so much the implementation in principle, but:

  • backwards compatibility; many current implementations use suboptimal generators, typically with badly chosen parameters; a notorious example is Visual C++, which sports a RAND_MAX of just 32767. However, this cannot be changed easily, as it would break compatibility with the past: people using srand with a fixed seed for reproducible simulations wouldn't be too happy (indeed, IIRC the aforementioned implementation goes back to early versions of Microsoft C, or even Lattice C, from the mid-eighties);
  • simplistic interface; rand() provides a single generator with global state for the whole program. While this is perfectly fine (and actually quite handy) for many simple use cases, it poses problems:

    • with multithreaded code: to fix it you either need a global mutex (which would slow everything down for no reason and kill any chance of repeatability, as the sequence of calls becomes random itself) or thread-local state; the latter has been adopted by several implementations (notably Visual C++);
    • if you want a "private", reproducible sequence in a specific module of your program that doesn't impact the global state (see the sketch after this list).
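
As a sketch of how <random>'s object-based design sidesteps both problems (the function names are mine): each thread or module simply owns its engine, so there is nothing global to lock or to clobber:

#include <random>

// one engine per thread: no mutex, no contention, no shared state
int thread_local_roll() {
    thread_local std::minstd_rand eng{std::random_device{}()};
    return std::uniform_int_distribution<int>{1, 6}(eng);
}

// a "private", reproducible sequence for one module: fixed seed, local object,
// completely unaffected by srand() calls elsewhere in the program
int module_sample() {
    static std::mt19937 eng{12345};
    return std::uniform_int_distribution<int>{0, 99}(eng);
}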

Finally, the rand state of affairs:

  • doesn't specify an actual implementation (the C standard provides just a sample implementation), so any program that is intended to produce reproducible output (or expects a PRNG of some known quality) across different compilers must roll its own generator;
  • doesn't provide any cross-platform method to obtain a decent seed (time(NULL) is not, as it isn't granular enough, and often not even random enough; think embedded devices with no RTC).

Hence the new <random> header, which tries to fix this mess by providing algorithms that are:

  • fully specified (so you can have cross-compiler reproducible output and guaranteed characteristics - say, range of the generator);
  • generally of state-of-the-art quality (from when the library was designed; see below);
  • encapsulated in classes (so no global state is forced upon you, which completely avoids threading and nonlocality problems);

... and a default random_device as well to seed them.

Now, if you ask me, I would also have liked a simple API built on top of this for the "easy", "guess a number" cases, similar to how Python provides the "complicated" API but also the trivial random.randint & Co., which use a global, pre-seeded PRNG, for us uncomplicated people who'd rather not drown in random devices/engines/adapters/whatever every time we want to extract a number for the bingo cards. But it's true that you can easily build it yourself over the current facilities (while building the "full" API over a simplistic one wouldn't be possible).
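
For what it's worth, here is a rough sketch of such a helper (randint is an assumed name, mirroring Python's random.randint, with inclusive bounds):

#include <random>

int randint(int lo, int hi) {
    // a pre-seeded engine hidden behind a trivial interface
    thread_local std::mt19937 eng{std::random_device{}()};
    return std::uniform_int_distribution<int>{lo, hi}(eng);
}

// usage: draw a bingo number
// int n = randint(1, 75);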


Finally, to get back to your performance comparison: as others have specified, you are comparing a fast LCG with a slower (but generally considered better quality) Mersenne Twister; if you are ok with the quality of an LCG, you can use std::minstd_rand instead of std::mt19937.

Indeed, after tweaking your function to use std::minstd_rand and avoid the useless static variables for initialization:

#include <random>

int getRandNum_New() {
    // engine and distribution are constructed once, on the first call
    static std::minstd_rand eng{std::random_device{}()};
    static std::uniform_int_distribution<int> dist{0, 5};
    return dist(eng);
}

I get 9 ms (old) vs 21 ms (new). Finally, if I get rid of dist (which, compared to the classic modulo operator, handles the distribution skew when the output range is not a multiple of the input range) and go back to what you are doing in getRandNum_Old():

#include <random>

int getRandNum_New() {
    static std::minstd_rand eng{std::random_device{}()};
    return eng() % 6;  // modulo instead of a distribution: biased in principle, tiny here
}

I get it down to 6 ms (so, 30% faster), probably because, unlike the call to rand(), std::minstd_rand is easier to inline.


Incidentally, I did the same test using a hand-rolled (but pretty much conforming to the standard library interface) XorShift64*, and it's 2.3 times faster than rand() (3.68 ms vs 8.61 ms); given that, unlike the Mersenne Twister and the various provided LCGs, it passes the current randomness test suites with flying colors and is blazingly fast, it makes you wonder why it isn't included in the standard library yet.
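
The answer doesn't show its code, but a xorshift64* generator with a standard-library-compatible interface looks roughly like this (the shift/multiply constants are the published xorshift64* parameters; the seed guard is my own detail, since the state must never be zero):

#include <cstdint>

struct xorshift64s {
    using result_type = std::uint64_t;

    explicit xorshift64s(std::uint64_t seed) : state(seed ? seed : 1) {}

    static constexpr result_type min() { return 0; }
    static constexpr result_type max() { return UINT64_MAX; }

    result_type operator()() {
        state ^= state >> 12;
        state ^= state << 25;
        state ^= state >> 27;
        return state * 0x2545F4914F6CDD1DULL;
    }

private:
    std::uint64_t state;
};

// satisfies UniformRandomBitGenerator, so it plugs into any <random> distribution:
// xorshift64s eng{std::random_device{}()};
// int roll = std::uniform_int_distribution<int>{0, 5}(eng);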

Random number generator generating low numbers more frequently than high numbers C++ [duplicate]

For a uniformly distributed random variable E on the closed interval [0, 32767], the probability of mod(E, 10000) < 2800 is around 34%. Intuitively, you can think of mod(E, 10000) < 2800 as favouring the bucket of numbers in the range [30000, 32767]: every number in that bucket is less than 2800 modulo 10000. That has the effect of pushing the result above 28%.
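
To check the 34%: E takes 32768 equally likely values. Each of the three complete blocks 0-9999, 10000-19999 and 20000-29999 contributes 2800 values whose remainder is below 2800, and the partial block 30000-32767 contributes all 2768 of its values, so the probability is (3 × 2800 + 2768) / 32768 = 11168 / 32768 ≈ 0.341.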

That's the behavior you are observing here.

It's not a function of the quality of the random generator, although you would get better results with a uniform generator that has a larger period. Using rand() from your C++ standard library is ill-advised, as the standard is too relaxed about the function's requirements for it to be portable. <random> from C++11 will cause you far less trouble: it also lets you avoid the explicit %.


