What Are Some Reasons a Release Build Would Run Differently Than a Debug Build

Surviving the Release Version gives a good overview.

Things I have encountered - most are already mentioned

Variable initialization
By far the most common. In Visual Studio, debug builds explicitly initialize allocated memory to known fill values, see e.g. Memory Values. These values are usually easy to spot: they cause an out-of-bounds error when used as an index, or an access violation when used as a pointer. An uninitialized boolean, however, reads as true, and can let uninitialized-memory bugs go undetected for years.

In Release builds, where memory isn't explicitly initialized, it simply keeps whatever contents it had before. This leads to "funny values" and "random" crashes, but just as often to deterministic crashes that require an apparently unrelated command to be executed before the command that actually crashes. The first command "sets up" the memory location with specific values, and when the memory locations are recycled, the second command sees those values as initializations. That is more common with uninitialized stack variables than with heap memory, but the latter has happened to me, too.

Raw memory initialization can also differ in a release build depending on whether you start from Visual Studio (debugger attached) or from Explorer. That makes for the "nicest" kind of release-build bugs: the ones that never appear under the debugger.

Valid Optimizations come second in my experience. The C++ standard allows lots of optimizations which may be surprising but are entirely valid, e.g. when two pointers alias the same memory location, when order of initialization is not considered, or when multiple threads modify the same memory locations and you expect a certain order in which thread B sees the changes made by thread A. Often, the compiler is blamed for these. Not so fast, young Jedi! - see below.

Timing Release builds don't just "run faster". For a variety of reasons (optimizations, logging functions providing a thread sync point, debug code like asserts not being executed, etc.) the relative timing between operations also changes dramatically. The most common problem uncovered by this is race conditions, but also deadlocks and simple "different order" execution of message/timer/event-based code. Even though they are timing problems, they can be surprisingly stable across builds and platforms, with reproductions that "work always, except on PC 23".

Guard Bytes. Debug builds often put (more) guard bytes around selected instances and allocations, to protect against index overflows and sometimes underflows. In the rare cases where code relies on offsets or sizes - e.g. when serializing raw structures - they differ between builds.

Other code differences Some instructions - e.g. asserts - evaluate to nothing in release builds. Sometimes they have different side effects. This is prevalent with macro trickery, as in the classic (warning: multiple errors):

#ifdef DEBUG
#define Log(x) cout << #x << x << "\n";
#else
#define Log(x)
#endif

if (foo)
    Log(x)
if (bar)
    Run();

Which, in a release build, evaluates to if (foo) if (bar) Run(); - i.e. if (foo && bar) Run();
This type of error is very rare with normal C/C++ code and correctly written macros.

Compiler Bugs This really never ever happens. Well - it does, but for most of your career you are better off assuming it does not. In a decade of working with VC6, I found one case where I am still convinced it is an unfixed compiler bug, compared to dozens of patterns (maybe even hundreds of instances) of insufficient understanding of the scripture (a.k.a. the standard).

Release build runs differently than Debug build

Your code appears to have a race condition in it. StartTimer first enables the timer, then sets isWaitOver to false. When OnTimedEvent runs, it sets isWaitOver to true. It is a bit unlikely, but on a busy system the timer may fire OnTimedEvent before the main thread gets around to setting isWaitOver to false. If this occurs, isWaitOver may end up appearing to your loop as permanently false. To prevent this, put your isWaitOver = false line before aTimer.Enabled = true.

The more likely issue is to do with the optimizer reordering things in your code. It is allowed to do this if a single thread would not notice the difference but it can cause issues in multi-threaded scenarios like this. To resolve this you can either make isWaitOver volatile or put memory barriers in your code. See Threading in C# by Joseph Albahari for a good write up.

Generally, though, when it gets to the point where volatile and memory barriers are making a difference, you have already made your code way too complex and fragile. Memory barriers are very advanced things that are extremely easy to get wrong and near impossible to test correctly (for instance, the behaviour varies depending on the CPU model you are using). My advice would be to switch isWaitOver for a ManualResetEvent and wait on it to get signalled by the timer thread. This has the added advantage of preventing your code going into a CPU-hogging spin loop.

Finally, your code has a handle leak. Each time around you are creating a new Timer object, but you are never disposing of it. You can either dispose of it before creating a new one, as I've shown, or simply use one timer and not keep recreating it.

ManualResetEvent isWaitOver = new ManualResetEvent(false);

private void Run()
{
    foreach (DataRow row in ds.Tables[0].Rows)
    {
        string Badge = Database.GetString(row, "Badge");
        if (Badge.Length > 0)
        {
            if (Count < Controller.MaximumBadges)
            {
                if (processed == 100) // Every 100 downloads, pause for a second
                {
                    processed = 0;
                    StartTimer();
                    isWaitOver.WaitOne();
                    Controller.PostRecordsDownloadedOf("Badges", Count);
                }

                if (Download(Badge, false))
                {
                    Count++;
                    processed++;
                }
            }
            else
                Discarded++;
        }
        TotalCount++;
    }
}

private void StartTimer()
{
    // Create a timer with a one second interval.
    if (aTimer != null) aTimer.Dispose();
    aTimer = new System.Timers.Timer(1000);
    // Hook up the Elapsed event for the timer.
    isWaitOver.Reset();
    aTimer.Elapsed += OnTimedEvent;
    aTimer.AutoReset = true;
    aTimer.Enabled = true;
}

private void OnTimedEvent(Object source, System.Timers.ElapsedEventArgs e)
{
    aTimer.Enabled = false;
    isWaitOver.Set();
}

What is meant by debug build and release build, their difference and uses

Debug build and release build are just names. They don't mean anything.

Depending on your application, you may build it in one, two or more
different ways, using different combinations of compiler and linker
options. Most applications should only be built in a single version:
you test and debug exactly the same program that the clients use. In
some cases, it may be more practical to use two different builds:
overall, client code needs optimization, for performance reasons, but
you don't want optimization when debugging. And then there are cases
where full debugging (i.e. iterator validation, etc.) may result in code
that is too slow even for algorithm debugging, so you'll have a build
with full debugging checks, one with no optimization, but no iterator
debugging, and one with optimization.

Anytime you start on an application, you have to decide what options you
need, and create the corresponding builds. You can call them whatever
you want.

With regards to external libraries (like wxwidgets): all compilers have
some incompatibilities when different options are used. So people who
deliver libraries (other than in source form) have to provide several
different versions, depending on a number of issues:

  • release vs. debug: the release version will have been compiled with a
    set of more or less standard optimization options (and no iterator
    debugging); the debug version without optimization, and with iterator
    debugging. Whether iterator debugging is present or not is one thing
    which typically breaks binary compatibility. The library vendor should
    document which options are compatible with each version.

  • ANSI vs. Unicode: this probably means narrow char vs wide wchar_t
    for character data. Use whichever one corresponds to what you use in
    your application. (Note that the difference between these two is much
    more than just some compiler switches. You often need radically
    different code, and handling Unicode correctly in all cases is far from
    trivial; an application which truly supports Unicode must be aware of
    things like composing characters or bidirectional writing.)

  • static vs. dynamic: this determines how the library is linked and
    loaded. Usually, you'll want static, at least if you count on deploying
    your application on other machines than the one you develop it on. But
    this also depends on licensing issues: if you need a license for each
    machine where the library is deployed, it might make more sense to use
    dynamic.

Performance differences between debug and release builds

The C# compiler itself doesn't alter the emitted IL a great deal in the Release build. Notable is that it no longer emits the NOP opcodes that allow you to set a breakpoint on a curly brace. The big one is the optimizer that's built into the JIT compiler. I know it makes the following optimizations:

  • Method inlining. A method call is replaced by injecting the code of the method. This is a big one; it makes property accessors essentially free.

  • CPU register allocation. Local variables and method arguments can stay stored in a CPU register without ever (or less frequently) being stored back to the stack frame. This is a big one, notable for making debugging optimized code so difficult. And giving the volatile keyword a meaning.

  • Array index checking elimination. An important optimization when working with arrays (all .NET collection classes use an array internally). When the JIT compiler can verify that a loop never indexes an array out of bounds then it will eliminate the index check. Big one.

  • Loop unrolling. Loops with small bodies are improved by repeating the code up to 4 times in the body and looping less. Reduces the branch cost and improves the processor's super-scalar execution options.

  • Dead code elimination. A statement like if (false) { /*...*/ } gets completely eliminated. This can occur due to constant folding and inlining. Other cases are where the JIT compiler can determine that the code has no possible side effect. This optimization is what makes profiling code so tricky.

  • Code hoisting. Code inside a loop that is not affected by the loop can be moved out of it. The optimizer of a C compiler will spend a lot more time finding opportunities to hoist. It is, however, an expensive optimization due to the required data flow analysis, and the jitter can't afford the time, so it only hoists obvious cases. This forces .NET programmers to write better source code and hoist themselves.

  • Common sub-expression elimination. x = y + 4; z = y + 4; becomes z = x; Pretty common in statements like dest[ix+1] = src[ix+1]; written for readability without introducing a helper variable. No need to compromise readability.

  • Constant folding. x = 1 + 2; becomes x = 3; This simple example is caught early by the compiler, but happens at JIT time when other optimizations make this possible.

  • Copy propagation. x = a; y = x; becomes y = a; This helps the register allocator make better decisions. It is a big deal in the x86 jitter because it has few registers to work with. Having it select the right ones is critical to perf.

These are very important optimizations that can make a great deal of difference when, for example, you profile the Debug build of your app and compare it to the Release build. That only really matters though when the code is on your critical path, the 5 to 10% of the code you write that actually affects the perf of your program. The JIT optimizer isn't smart enough to know up front what is critical, it can only apply the "turn it to eleven" dial for all the code.

The effective result of these optimizations on your program's execution time is often dominated by code that runs elsewhere - reading a file, executing a database query, etc. - making the work the JIT optimizer does completely invisible. It doesn't mind though :)

The JIT optimizer is pretty reliable code, mostly because it has been put to the test millions of times. It is extremely rare to have problems in the Release build version of your program. It does happen however. Both the x64 and the x86 jitters have had problems with structs. The x86 jitter has trouble with floating point consistency, producing subtly different results when the intermediates of a floating point calculation are kept in a FPU register at 80-bit precision instead of getting truncated when flushed to memory.

Possible causes for different behavior between MS Visual C++ release and debug builds

In general you will find that a debug executable differs from a release executable in a couple of ways. This is obvious just from looking at the file sizes of the executables. Possible causes include:

  • Debug may have additional code compiled in to do error checking
  • Debug generally has optimization disabled, while Release is likely to have a high level.
  • Debug may initialize automatic variables to known fill patterns, while Release does not initialize them at all.
  • Debug may carry additional information to associate the executable code with the source code, while Release does not.

My best advice is to initialize all of your variables, even if it seems stupid. Verify that all your memory allocations work, and that you free properly in scope.

The static analyzer comment above is excellent advice. You can get similar benefits by compiling with a newer compiler.

Take a look at Cppcheck, or one of the many commercial checkers.

I recommend you upgrade to the latest compiler in any case.

Best of Luck.

Evil.

Code is behaving differently in Release vs Debug Mode

Since it seems to be floating point related there are so many things that can go wrong. See:
C# - Inconsistent math operation result on 32-bit and 64-bit
and
Double precision problems on .NET

There are so many things that can go wrong with floating point. And comparing floats for equality is a general no-no; you should check that the difference is smaller than a reasonable epsilon.

What is the difference between Release and Debug modes in Visual Studio?

Well, it depends on what language you are using, but in general they are 2 separate configurations, each with its own settings. By default, Debug includes debug information in the compiled files (allowing easy debugging) while Release usually has optimizations enabled.

As far as conditional compilation goes, they each define different symbols that can be checked in your program, but they are language-specific macros.

Release build changes behavior when run outside of the debugger

Here's a couple of things to try:

  • Run outside the debugger but then attach to the process afterwards. When the process is started from the debugger it will have a slightly different environment, so if that is the cause of the different behaviour then this will allow you to debug it
  • Create a release build with optimisation turned off and see if you get the same behaviour running inside and outside of the debugger. If you can still reproduce the issue then this will make debugging it (by using the above Attach Process method) a lot easier

