Are exceptions in C++ really slow?
The main model used today for exceptions (the Itanium ABI, and VC++ for 64-bit targets) is the zero-cost model.
The idea is that instead of losing time setting up a guard and explicitly checking for exceptions everywhere, the compiler generates a side table that maps any point that may throw an exception (by program counter) to a list of handlers. When an exception is thrown, this list is consulted to pick the right handler (if any) and the stack is unwound.
Compared to the typical if (error) strategy:
- the zero-cost model, as the name implies, is free when no exceptions occur
- it costs around 10x-20x the cost of an if when an exception does occur
The cost, however, is not trivial to measure:
- The side-table is generally cold, and thus fetching it from memory takes a long time
- Determining the right handler involves RTTI: many RTTI descriptors to fetch, scattered around memory, and complex operations to run (basically a dynamic_cast test for each handler)
So, mostly cache misses, and thus not trivial compared to pure CPU code.
Note: for more details, read TR18015 (the Technical Report on C++ Performance), chapter 5.4 Exception Handling (pdf)
So, yes, exceptions are slow on the exceptional path, but they are otherwise generally quicker than explicit checks (the if strategy).
Note: Andrei Alexandrescu seems to question this "quicker". I personally have seen things swing both ways, some programs being faster with exceptions and others being faster with branches, so there indeed seems to be a loss of optimizability in certain conditions.
Does it matter?
I would claim it does not. A program should be written with readability in mind, not performance (at least, not as a first criterion). Exceptions are to be used when one expects that the caller cannot or will not wish to handle the failure on the spot, and pass it up the stack. Bonus: in C++11 exceptions can be marshalled between threads using the Standard Library.
This is subtle though: I claim that map::find should not throw, but I am fine with map::find returning a checked_ptr which throws if an attempt to dereference it fails because it's null. In the latter case, as with the class that Alexandrescu introduced, the caller chooses between an explicit check and relying on exceptions. Empowering the caller without giving them more responsibility is usually a sign of good design.
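checked_ptr is not a standard type; a minimal sketch of the idea (the names checked_ptr and find_checked are hypothetical) might look like:

```cpp
#include <map>
#include <stdexcept>
#include <string>

// Hypothetical wrapper: behaves like a pointer, but throws on null dereference.
// The caller can still test it explicitly and avoid the exception entirely.
template <typename T>
class checked_ptr {
    T* p_;
public:
    explicit checked_ptr(T* p) : p_(p) {}
    explicit operator bool() const { return p_ != nullptr; }
    T& operator*() const {
        if (!p_) throw std::runtime_error("null dereference");
        return *p_;
    }
    T* operator->() const {
        if (!p_) throw std::runtime_error("null dereference");
        return p_;
    }
};

// A find() in this style: it never throws itself, it just wraps the result.
template <typename K, typename V>
checked_ptr<V> find_checked(std::map<K, V>& m, const K& key) {
    auto it = m.find(key);
    return checked_ptr<V>(it == m.end() ? nullptr : &it->second);
}
```

The caller then picks a style: either if (auto p = find_checked(m, k)) { use(*p); } with an explicit check, or simply *find_checked(m, k) and let the exception propagate.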
run-time penalty of C++ try blocks
The answer, as usual, is "it depends".
It depends on how exception handling is implemented by your compiler.
If you're using MSVC and targeting 32-bit Windows, it uses a stack-based mechanism, which requires some setup code every time you enter a try block, so yes, that means you incur a penalty any time you enter such a block, even if no exception is thrown.
Practically every other platform (other compilers, as well as MSVC targeting 64-bit Windows) use a table-based approach where some static tables are generated at compile-time, and when an exception is thrown, a simple table lookup is performed, and no setup code has to be injected into the try blocks.
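This can be sketched as a rough C++ micro-benchmark (bench is a hypothetical helper; exact numbers vary by compiler and platform). On table-based implementations the two timings are typically close, because entering the try block injects no setup code:

```cpp
#include <chrono>
#include <cmath>

// Time the same loop with and without a try block; the catch is never taken.
double bench(bool with_try) {
    volatile double d = 0;  // volatile keeps the optimizer from deleting the loop
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < 10'000'000; ++i) {
        if (with_try) {
            try { d = std::sin(d + 1); } catch (...) { /* never reached */ }
        } else {
            d = std::sin(d + 1);
        }
    }
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}
```

Comparing bench(true) against bench(false) on a table-based implementation should show little to no difference; on 32-bit MSVC's frame-based mechanism the try version pays for the per-entry setup.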
Cheaper to use IF statements rather than exception handling?
A typical implementation of exception handling will add no more overhead (in terms of speed) to the main stream of execution than if statements will (and may even add a bit less). With reasonably careful use, it also reduces code clutter, enhancing readability.
IOW, for code where it makes sense at all, it's usually a fairly substantial win with very little downside. The main potential downsides are larger executables and a requirement for run-time support, so it's not really suitable for things like device drivers (at least normally).
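As a sketch of the clutter argument, compare a hypothetical error-code version of a two-step computation with the exception version (the names parse, sum, and Err are made up for illustration):

```cpp
#include <stdexcept>
#include <string>

// Error-code style: every call site must check and manually propagate.
enum class Err { ok, parse_failed };

Err parse_checked(const std::string& s, int& out) {
    if (s.empty()) return Err::parse_failed;
    out = std::stoi(s);
    return Err::ok;
}

Err sum_checked(const std::string& a, const std::string& b, int& out) {
    int x = 0, y = 0;
    if (auto e = parse_checked(a, x); e != Err::ok) return e;  // boilerplate
    if (auto e = parse_checked(b, y); e != Err::ok) return e;  // boilerplate
    out = x + y;
    return Err::ok;
}

// Exception style: the happy path reads straight through, and failures
// propagate automatically to whichever caller can actually handle them.
int parse(const std::string& s) {
    if (s.empty()) throw std::invalid_argument("empty input");
    return std::stoi(s);
}

int sum(const std::string& a, const std::string& b) {
    return parse(a) + parse(b);
}
```

On the non-exceptional path, the exception version has no per-call checks at all, which is exactly the "may even add a bit less" overhead mentioned above.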
In what ways do C++ exceptions slow down code when there are no exceptions thrown?
There is a cost associated with exception handling on some platforms and with some compilers.
Namely, Visual Studio, when building a 32-bit target, will register a handler in every function that has local variables with a non-trivial destructor. Basically, it sets up a try/finally handler.
The other technique, employed by gcc and by Visual Studio targeting 64-bit, only incurs overhead when an exception is thrown (the technique involves traversing the call stack and a table lookup). In cases where exceptions are rarely thrown, this can actually lead to more efficient code, as error codes don't have to be processed.
Do try/catch blocks hurt performance when exceptions are not thrown?
Check it.
static public void Main(string[] args)
{
    Stopwatch w = new Stopwatch();
    double d = 0;
    w.Start();
    for (int i = 0; i < 10000000; i++)
    {
        try
        {
            d = Math.Sin(1);
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.ToString());
        }
    }
    w.Stop();
    Console.WriteLine(w.Elapsed);
    w.Reset();
    w.Start();
    for (int i = 0; i < 10000000; i++)
    {
        d = Math.Sin(1);
    }
    w.Stop();
    Console.WriteLine(w.Elapsed);
}
Output:
00:00:00.4269033 // with try/catch
00:00:00.4260383 // without.
In milliseconds:
449 // with try/catch
416 // without.
New code:
for (int j = 0; j < 10; j++)
{
    Stopwatch w = new Stopwatch();
    double d = 0;
    w.Start();
    for (int i = 0; i < 10000000; i++)
    {
        try
        {
            d = Math.Sin(d);
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.ToString());
        }
        finally
        {
            d = Math.Sin(d);
        }
    }
    w.Stop();
    Console.Write(" try/catch/finally: ");
    Console.WriteLine(w.ElapsedMilliseconds);
    w.Reset();
    d = 0;
    w.Start();
    for (int i = 0; i < 10000000; i++)
    {
        d = Math.Sin(d);
        d = Math.Sin(d);
    }
    w.Stop();
    Console.Write("No try/catch/finally: ");
    Console.WriteLine(w.ElapsedMilliseconds);
    Console.WriteLine();
}
New results:
try/catch/finally: 382
No try/catch/finally: 332
try/catch/finally: 375
No try/catch/finally: 332
try/catch/finally: 376
No try/catch/finally: 333
try/catch/finally: 375
No try/catch/finally: 330
try/catch/finally: 373
No try/catch/finally: 329
try/catch/finally: 373
No try/catch/finally: 330
try/catch/finally: 373
No try/catch/finally: 352
try/catch/finally: 374
No try/catch/finally: 331
try/catch/finally: 380
No try/catch/finally: 329
try/catch/finally: 374
No try/catch/finally: 334
On a disadvantage of exceptions in C++
"writes to persistent state" mean roughly "writes to a file" or "writes to a database".
"into a 'commit' phase." means roughly "Doing all the writing at once"
"perhaps where you're forced to obfuscate code to isolate the commit" means roughly "This may make the code hard to read" (Slight misuse of the word "obfuscate", which means to deliberately make something hard to read, while here they mean inadvertently making it hard to read; but that misuse may have been intentional, for dramatic effect)
Elaborating more: "writes to persistent state" more closely means "Write out, to some permanent media, all the details about this object that would be needed to recreate it". If writing was interrupted by an exception, then those "written out details" (i.e. "persistent state") could contain half the new state and half the old state, leading to an invalid object when it was recreated. Hence writing the state must be done as one uninterruptable act.
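One common way to get such a "commit" phase (a sketch, with the hypothetical name save_state) is to do all the fallible writing to a temporary file, and make a final rename the single commit step. An exception thrown during the writes then leaves the original file untouched:

```cpp
#include <cstdio>
#include <fstream>
#include <stdexcept>
#include <string>

// Write the whole state to path.tmp first (the phase that may throw),
// then rename it over the real file (the commit phase). If any write
// fails, the previously persisted state is still intact and consistent.
void save_state(const std::string& path, const std::string& state) {
    const std::string tmp = path + ".tmp";
    {
        std::ofstream out(tmp, std::ios::trunc);
        if (!out) throw std::runtime_error("cannot open " + tmp);
        out << state;                       // fallible phase
        if (!out) throw std::runtime_error("write failed");
    }                                       // file flushed and closed here
    if (std::rename(tmp.c_str(), path.c_str()) != 0)
        throw std::runtime_error("commit failed");  // commit phase
}
```

The isolation the quote talks about is visible here: all the interruptible work is fenced off from the one operation that actually publishes the new state.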