Clang VS Gcc - Optimization Including Operator New

clang vs gcc - optimization including operator new

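The snippet under discussion isn't reproduced here; a representative reconstruction (an assumption on my part, not the exact code from the question) is a loop that calls new a million times and counts the allocations:

// Hypothetical reconstruction of the kind of code in question:
// operator new is called in a loop, the pointers are leaked on purpose,
// and only a count of the allocations is returned.
int main() {
    int result = 0;
    for (int i = 0; i < 1000000; ++i) {
        result += (new int(i) != nullptr);   // leaked on purpose; never null on this path
    }
    return result;   // clang folds the whole program down to returning 1000000
}
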
If we plug this code into godbolt we can see that clang optimizes away the code to this:

main:                                   # @main
        movl    $1000000, %eax          # imm = 0xF4240
        ret

while gcc does not perform this optimization. So the question is: is this a valid optimization? Does it follow the as-if rule, noted in the draft C++ standard section 1.9 Program execution, which says (emphasis mine):

The semantic descriptions in this International Standard define a parameterized nondeterministic abstract machine. This International Standard places no requirement on the structure of conforming implementations. In particular, they need not copy or emulate the structure of the abstract machine. Rather, conforming implementations are required to emulate (only) the observable behavior of the abstract machine as explained below.⁵

where note 5 says:

This provision is sometimes called the “as-if” rule, because an implementation is free to disregard any requirement of this International Standard as long as the result is as if the requirement had been obeyed, as far as can be determined from the observable behavior of the program. For instance, an actual implementation need not evaluate part of an expression if it can deduce that its value is not used and that no side effects affecting the observable behavior of the program are produced.

The concern is that new could throw an exception, which would be observable behavior since it would alter the return value of the program.

R. Martinho Fernandes argues that it is an implementation detail when to throw an exception, and therefore clang can decide this scenario would not cause an exception, so eliding the new call would not violate the as-if rule. This seems like a reasonable argument to me.

But as T.C. points out:

A replacement global operator new could have been defined in a different translation unit

Casey provided an example showing that even when clang can see there is a replacement, it still performs this optimization, even though side effects are lost. So this seems like an overly aggressive optimization.
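
A rough sketch of the idea (my reconstruction, not Casey's exact code) is a replacement global operator new with an observable side effect that clang nonetheless drops:

// Rough reconstruction (not Casey's exact example): a replacement global
// operator new whose side effect (the puts call) is lost if clang elides
// the allocations.
#include <cstdio>
#include <cstdlib>
#include <new>

void* operator new(std::size_t n) {
    std::puts("allocating");              // observable side effect
    if (void* p = std::malloc(n)) return p;
    throw std::bad_alloc();
}

int main() {
    int result = 0;
    for (int i = 0; i < 1000000; ++i)
        result += (new int(i) != nullptr);
    // clang can still fold this whole program down to a constant return,
    // dropping a million "allocating" lines of output.
    return result > 0;
}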

Note, memory leaks are not undefined behavior.

Which GCC optimization flags affect binary size the most?

Most of the extra code-size in an un-optimized build comes from the fact that the default -O0 also means a debug build: nothing is kept in registers across statements, so debugging stays consistent even if you use a GDB j command to jump to a different source line in the same function. -O0 means a huge amount of store/reload vs. even the lightest level of optimization, which is especially disastrous for code-size on a non-CISC ISA that can't use memory source operands. Why does clang produce inefficient asm with -O0 (for this simple floating point sum)? applies to GCC equally.

Especially for modern C++, a debug build is disastrous because simple template wrapper functions that normally inline and optimize away to nothing (or maybe one instruction) in simple cases instead compile to actual function calls that have to set up args and run a call instruction. e.g. for a std::vector, the operator[] member function can normally inline to a single ldr instruction, assuming the compiler has the .data() pointer in a register. But without inlining, every call-site takes multiple instructions (see footnote 1).


Options that affect code-size in the actual .text section (see footnote 2) the most: alignment of branch-targets in general, or just loops, costs some code-size. Other than that:

  • -ftree-vectorize - make SIMD versions of loops, also necessitating scalar cleanup if the compiler can't prove that the iteration count will be a multiple of the vector width. (Or that pointed-to arrays are non-overlapping if you don't use restrict; that may also need a scalar fallback.) Enabled at -O3 in GCC11 and earlier; enabled at -O2 in GCC12 and later, like clang. (A loop of the shape sketched just after this list is the typical target.)

  • -funroll-loops / -funroll-all-loops - not enabled by default even at -O3 in modern GCC. Enabled with profile-guided optimization (-fprofile-use), when it has profiling data from a -fprofile-generate build to know which loops are actually hot and worth spending code-size on. (And which are cold and thus should be optimized for size so you get fewer I-cache misses when they do run, and less eviction of other code.) PGO also influences vectorization decisions.

    Related to loop unrolling are heuristics (tuning knobs) that control loop peeling (fully unrolling) and how much to unroll. The normal way to set these is with -march=native, implying -mtune= whatever as well. -mtune=znver3 may favour big unroll factors (at least clang does), compared to -mtune=sandybridge or -mtune=haswell. But there are GCC options to manually adjust individual things, as discussed in comments on gcc: strange asm generated for simple loop and in How to ask GCC to completely unroll this loop (i.e., peel this loop)?

    There are options to override the weights and thresholds for other decision heuristics like inlining, too, but it's very rare you'd want to fine-tune that much unless you're working on refining the defaults, or finding good defaults for a new CPU.

  • -Os - optimize for size and speed, trying not to sacrifice too much speed. A good tradeoff if your code has a lot of I-cache misses, otherwise -O3 is normally faster, or at least that's the design goal for GCC. Can be worth trying different options to see if -O2 or -Os make your code faster than -O3 across some CPUs you care about; sometimes missed-optimizations or quirks of certain microarchitectures make a difference, as in Why does GCC generate 15-20% faster code if I optimize for size instead of speed? which has actual benchmarks from GCC4.6 to 4.8 (current at the time) for a specific small loop in a test program, on quite a few different x86 and ARM CPUs, with and without -march=native to actually tune for them. There's zero reason to expect that to be representative of other code, though, so you need to test yourself for your own codebase. (And for any given loop, small code changes could make a different compile option better on any given CPU.)

    And obviously -Os is very useful if you need your static code-size smaller to fit in some size limit.

  • -Oz - optimize for size only, even at a large cost in speed. GCC only very recently added this to current trunk, so expect it in GCC12 or 13. Presumably what I wrote below about clang's implementation of -Oz being quite aggressive also applies to GCC, but I haven't yet tested it.
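
To make the -ftree-vectorize point above concrete, here is the shape of loop it targets (a generic illustration, not code from the question):

// Without restrict the compiler may emit a runtime overlap check plus a
// scalar fallback, and a scalar cleanup loop when n isn't known to be a
// multiple of the vector width.
void add_arrays(float* dst, const float* src, int n) {
    for (int i = 0; i < n; ++i)
        dst[i] += src[i];
}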

Clang has similar options, including -Os. It also has a clang -Oz option to optimize only for size, without caring about speed. It's very aggressive, e.g. on x86 using code-golf tricks like push 1; pop rax (3 bytes total) instead of mov eax, 1 (5 bytes).

GCC's -Os unfortunately chooses to use div instead of a multiplicative inverse for division by a constant, costing lots of speed but not saving much if any size. (https://godbolt.org/z/x9h4vx1YG for x86-64). For ARM, GCC -Os still uses an inverse if you don't use a -mcpu= that implies udiv is even available, otherwise it uses udiv: https://godbolt.org/z/f4sa9Wqcj .

Clang's -Os still uses a multiplicative inverse with umull, only using udiv with -Oz. (or a call to __aeabi_uidiv helper function without any -mcpu option). So in that respect, clang -Os makes a better tradeoff than GCC, still spending a little bit of code-size to avoid slow integer division.
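
For reference, the kind of function where that shows up is just unsigned division by a compile-time constant (a generic example, not taken from the links above):

// With full optimization (and with clang -Os) this compiles to a multiply by
// a fixed-point "magic" inverse plus shifts; GCC -Os instead emits an actual
// div/udiv when the target has one, which is much slower.
unsigned div_by_10(unsigned x) {
    return x / 10;
}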



Footnote 1: inlining or not for std::vector

#include <vector>
int foo(std::vector<int> &v) {
    return v[0] + v[1];
}

Godbolt with gcc with the default -O0 vs. -Os for -mcpu=cortex-m7 just to randomly pick something. IDK if it's normal to use dynamic containers like std::vector on an actual microcontroller; probably not.

# -Os (same as -Og for this case, actually, omitting the frame pointer for this leaf function)
foo(std::vector<int, std::allocator<int> >&):
        ldr     r3, [r0]        @ load the _M_start member of the reference arg
        ldrd    r0, r3, [r3]    @ load a pair of words (v[0..1]) from there into r0 and r3
        add     r0, r0, r3      @ add them into the return-value register
        bx      lr

vs. a debug build (with name-demangling enabled for the asm)

# GCC -O0 -mcpu=cortex-m7 -mthumb
foo(std::vector<int, std::allocator<int> >&):
        push    {r4, r7, lr}    @ non-leaf function requires saving LR (the return address) as well as some call-preserved registers
        sub     sp, sp, #12
        add     r7, sp, #0      @ Use r7 as a frame pointer. -O0 defaults to -fno-omit-frame-pointer
        str     r0, [r7, #4]    @ spill the incoming register arg to the stack

        movs    r1, #0          @ 2nd arg for operator[]
        ldr     r0, [r7, #4]    @ reload the pointer to the control block as the first arg
        bl      std::vector<int, std::allocator<int> >::operator[](unsigned int)
        mov     r3, r0          @ useless copy, but hey we told GCC not to spend any time optimizing.
        ldr     r4, [r3]        @ deref the reference (pointer) it returned, into a call-preserved register that will survive across the next call

        movs    r1, #1          @ arg for the v[1] operator[]
        ldr     r0, [r7, #4]
        bl      std::vector<int, std::allocator<int> >::operator[](unsigned int)
        mov     r3, r0
        ldr     r3, [r3]        @ deref the returned reference

        add     r3, r3, r4      @ v[1] + v[0]
        mov     r0, r3          @ and copy into the return value reg because GCC didn't bother to add into it directly

        adds    r7, r7, #12     @ tear down the stack frame
        mov     sp, r7
        pop     {r4, r7, pc}    @ and return by popping saved-LR into PC

@ and there's an actual implementation of the operator[] function
@ it's 15 instructions long.
@ But only one instance of this is needed for each type your program uses (vector<int>, vector<char*>, vector<my_foo>, etc.)
@ so it doesn't add up as much as each call-site
std::vector<int, std::allocator<int> >::operator[](unsigned int):
        push    {r7}
        sub     sp, sp, #12
        ...

As you can see, un-optimized GCC cares more about fast compile times than about even the simplest things, like avoiding useless mov reg,reg instructions even within the code for evaluating one expression.



Footnote 2: metadata

If you count a whole ELF executable with metadata, not just the .text + .rodata + .data you'd need to burn to flash, then of course -g debug info is very significant for the size of the file, but it's basically irrelevant because it's not mixed in with the parts that are needed while running; it just sits there on disk.

Symbol names and debug info can be stripped with gcc -s or strip.

Stack-unwind info is an interesting tradeoff between code-size and metadata. -fno-omit-frame-pointer wastes extra instructions and a register as a frame pointer, leading to larger machine-code size, but smaller .eh_frame stack-unwind metadata. (strip does not treat that as "debug" info by default, even for C programs; in C++, exception handling might actually need it in non-debugging contexts.)

How to remove "noise" from GCC/clang assembly output? mentions how to get the compiler to omit some of that: -fno-asynchronous-unwind-tables omits .cfi directives in the asm output, and thus the metadata that goes into the .eh_frame section. Also -fno-exceptions -fno-rtti with C++ can reduce metadata. (Run-Time Type Information for dynamic_cast and typeid takes space.)

Linker options that control alignment of sections / ELF segments can also take extra space; that's relevant for tiny executables, but it's basically a constant amount of space, not scaling with the size of the program. See also Minimal executable size now 10x larger after linking than 2 years ago, for tiny programs?

How is clang able to steer C/C++ code optimization?

(TL;DontWannaRead - skip to the end of this answer)

To answer your question properly you first need to understand the difference between a compiler's front-end and back-end (especially the first one).

Clang is a compiler front-end (http://en.wikipedia.org/wiki/Clang) for C, C++, Objective C and Objective C++ languages.

Clang's duty is the following:

[diagram: C / C++ / Objective-C source code → Clang front-end → LLVM IR]

i.e. translating from C++ source code (or C, or Objective-C, etc.) to LLVM IR, a textual lower-level representation of what that code should do. In order to do this, Clang employs a number of sub-modules whose descriptions you can find in any decent compiler-construction book: lexer, parser, a semantic analyzer (Sema), etc.

LLVM is a set of libraries whose primary task is the following: suppose we have the LLVM IR representation of the following C++ function

int double_this_number(int num) {
    int result = 0;
    result = num;
    result = result * 2;
    return result;
}

the core LLVM optimization passes should optimize that LLVM IR code:

[diagram: unoptimized LLVM IR → LLVM optimization passes → optimized LLVM IR]

What to do with the optimized LLVM IR code is entirely up to you: you can translate it to x86_64 executable code or modify it and then spit it out as ARM executable code or GPU executable code. It depends on the goal of your project.

The term "back-end" is often confusing, since many papers would call the LLVM libraries a "middle end" in a compiler chain and define the "back end" as the final module which does the code generation (LLVM IR to executable code or something else which no longer needs processing by the compiler). Other sources refer to LLVM as a back end to Clang. Either way, the roles are clear and they offer a powerful mechanism: whatever source language you're working with (C++, C, Objective-C, Python, etc.), if you have a front-end which translates it to LLVM IR, you can use the same set of LLVM libraries to optimize it and, as long as you have a back-end for your target architecture, you can generate optimized executable code.

Recalling that LLVM is a set of libraries (not just optimization passes but also data structures, utility modules, diagnostic modules, etc..), Clang also leverages many LLVM libraries during its front-ending process. You can't really tear every LLVM module away from Clang since the latter is built on the former set.

As for the reason why Clang is said to be a "compilation driver": Clang manages interpreting the command-line parameters (descriptions and many declarations are TableGen'd and might require a bit more than a simple grep to swim through the sources), decides which Jobs and phases are to be executed, sets up the CodeGenOptions according to the desired/possible optimization and transformation levels, and invokes the appropriate modules (clangCodeGen in BackendUtil.cpp is the one that populates a module pass manager with the optimizations to apply) and tools (e.g. the Windows ld linker). It steers the compilation process from the very beginning to the end.

Finally, I would suggest reading the Clang and LLVM documentation; it's quite thorough, and most of your questions should find an answer there in the first place.

Clang link-time optimization with replaced operator new causes mismatched free()/delete in valgrind

Looking at the object-dump, it is obvious operator delete(void*) is not exported by main.

$ objdump -T main | c++filt | grep operator
0000000000400990 g DF .text 0000000000000033 Base operator new(unsigned long)
0000000000000000 DF *UND* 0000000000000000 Base operator delete(void*)

See that the section where operator delete(void*) is stored is *UND*: It is not there!

Now, that's an obvious failure on clang's part, might make a good bug-report, as we already have a minimal test-case.

Now, how to force clang to keep and export operator delete(void*) as a band-aid?

The answer comes from looking at the possible attributes; there's a good one:

used
This attribute, attached to a function, means that code must be emitted for the function even if it appears that the function is not referenced. This is useful, for example, when the function is referenced only in inline assembly.
When applied to a member function of a C++ class template, the attribute also means that the function is instantiated if the class itself is instantiated.

Putting that in the code:

void operator delete(void* ptr) noexcept  __attribute__((used)) {

And voilà, clang no longer improperly prunes it.
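
For context, a fuller sketch of the band-aid (an assumed layout; the question's actual replacement functions may differ) looks like this, with the attribute placed before the definition so both GCC and clang accept it:

#include <cstdlib>
#include <new>

// Replacement global allocation functions; operator delete is force-emitted
// via __attribute__((used)) so LTO can't drop it from the final executable.
void* operator new(std::size_t n) {
    if (void* p = std::malloc(n)) return p;
    throw std::bad_alloc();
}

__attribute__((used))
void operator delete(void* ptr) noexcept {
    std::free(ptr);
}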

Strange uses of movzx by Clang and GCC

Both compilers are doing a poor job here, but clang's code is especially bad and has no real upside anywhere, plus an easily avoidable downside on everything except decade-old Intel CPUs (which rename low-8 partial registers).

The optimal asm is what you suggest, movzx load, then byte add, leaving a uint8_t result in the low byte, correctly zero-extended to int as required by the C semantics. (Thanks for reporting it upstream: https://github.com/llvm/llvm-project/issues/56498 - I commented there about movzx being a good idea for byte loads in general, even when LLVM doesn't need the result zero-extended.)
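
The shape of source that produces this pattern (an illustration; not necessarily the exact code from the question) is a byte-sized add whose result is returned as int:

#include <stdint.h>

// The uint8_t addition wraps modulo 256, and the result must be
// zero-extended to int for the return value.
int add_bytes(const uint8_t* a, const uint8_t* b) {
    uint8_t sum = *a + *b;
    return sum;
}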


A movzx is necessary somewhere, but it can be in the initial load. (A movzx is generally a good idea for a byte load anyway, to avoid a false dependency on the old RAX; clang's choice to save 1 byte is probably not a good one even when it doesn't end up needing a separate movzx right after.)

There are basically three relevant behaviours here, among x86-64 CPUs.

  • Core 2 / Nehalem (the 64-bit capable members of the P6 family): AL renamed separately from RAX if you write AL. A later read of EAX will stall the front-end for about 3 cycles while inserting a merge uop. Less bad than earlier P6-family, but still a significant penalty to avoid. But these CPUs are pretty obsolete, and not something GCC's -mtune=generic should put much weight on for the latest GCC. (Especially given that current nightly GCC's behaviour probably won't get baked into widely used binary packages by most stable-release distros for another year or more.)

    Returning an int when the last instruction wrote al will likely lead to a penalty when the caller reads EAX. But mov al, [rdi] can run without any false dependency or merging cost.

  • Sandybridge and maybe Ivy Bridge: AL still renamed separately, but a merging uop can be inserted without any stalling, in a cycle with other uops.

    mov al, [rdi] still has no false dep or merging uop. But a later read of EAX that triggers a merging uop (to merge the add al result with the high bytes of RAX from movzx eax, [rdi]) will get inserted just as cheaply as if we'd put a movzx eax, al in the machine code. (If the upper bytes of RAX are all zero, merge or extend are equivalent.)

  • Haswell and later (and maybe IvB), and all other x86 vendors, and low-power CPUs from Intel like Silvermont-family: no partial register renaming at all. (Except for AH/BH/CH/DH on Intel SnB-family). The last CPU not in this category is nearly a decade old, and the last CPU with major penalties (P6-family) is over a decade old.

    mov al, [rdi] sucks: false dependency and costs an ALU uop in the back-end to merge. So it's extra load latency in the critical path through whatever stored the memory operand.

    Reading EAX after writing AL has zero penalty; that's not a special case at all; the merging happened when you wrote AL.


GCC's code is a sensible tradeoff between Core2 / Nehalem vs. modern CPUs: load with movzx to avoid a false dep writing a partial reg. And a final movzx to avoid a partial-register stall in the caller.

But if it's going to do that, it could hurt modern Intel less by picking EDX or ECX as the temporary, since Intel can do zero-latency mov-elimination on movzx r32, r8, but not within the same register. It still costs a front-end uop so it's not free for throughput, only latency and back-end ports. This is a persistent missed-optimization; I don't think GCC or clang know to look for that; they commonly zero-extend 32->64 with mov esi,esi on a function arg, for example.

    movzx   edx, byte ptr [rdi]
    add     dl, [rsi]
    movzx   eax, dl        # mov-elimination possible on IvB and later (except Ice Lake with updated microcode which breaks mov-elim).

If optimizing specifically for Core2 / Nehalem, you'd do this:

    xor     eax, eax       # off the critical path, avoids partial-reg stalls for later reads of EAX after writing AL
    mov     al, [rdi]
    add     al, [rsi]

That's not bad on later CPUs, although the mov al, [rdi] would still be a micro-fused load+ALU uop, so it has extra load latency, and takes an extra slot in the scheduler and a cycle on a back-end execution port. So 3 back-end uops, up from 2 on IvB and later with eliminated movzx if you pick different registers.

GCC's choice to use movzx because of Core2/Nehalem is highly conservative at this point; probably -mtune=generic in GCC12 shouldn't care about P6-family partial-register stalls since those CPUs are well over a decade old. Especially in 64-bit code where the worst case is Core2/Nehalem, not the even longer stalls with no merging uop on earlier P6-family. (And 64-bit code is more likely to be run on newer CPUs; one of the use-cases for -m32 is to make code for old 32-bit-only CPUs.)

It might well be an intentional tuning choice that needs updating. It's definitely a missed optimization with -march / -mtune= k8 through znver3, or silvermont-family, or sandybridge or newer.

(Also note that some choices which should differ based on -mtune setting actually don't. GCC just has one way it always does some things, and adding hooks to make it differ based on a tuning flag hasn't been done. Clang is the same way. e.g. -mtune=core2 still doesn't know to avoid partial-register stalls!)


Clang normally lives dangerously writing partial registers and otherwise ignoring false dependencies when they're not visibly loop-carried within a single function (which can bite it in the ass). This can save a whole instruction when it skips xor-zeroing, but saving just 1 byte doesn't seem worth it in general. It's a false dependency and means the mov load decodes to load + ALU merge uops (to merge a new low byte into the existing 64-bit register).

Looks like clang just did its usual thing of loading 8-bit values into 8-bit registers ignoring movzx, then realized at the end it needed to zero-extend the result.

An optimization pass looking for a chance to fold zero-extension (after narrow math) into an earlier load would be useful. And/or otherwise look for ways to prove that values are already zero-extended, if it doesn't do that.

Probably in general better to start doing narrow loads with movzx so that's more normally the case.


You might want to report a missed-optimization bug, especially for clang. Their code-gen is already a huge middle finger toward P6-family most of the time with partial-register usage, so they'd probably be interested in trying to generate the 2-instruction version. https://github.com/llvm/llvm-project/issues

Also https://gcc.gnu.org/bugzilla/enter_bug.cgi?product=gcc (use the keyword missed-optimization for GCC bugs. Feel free to link this stack overflow post, and/or quote any of my comments if you want, as well as a Godbolt link. GCC devs prefer AT&T syntax for x86 discussion / bugs.)

See also:

  • Why doesn't GCC use partial registers?
  • How exactly do partial registers on Haswell/Skylake perform? Writing AL seems to have a false dependency on RAX, and AH is inconsistent
  • https://agner.org/optimize/ (especially his microarch guide re: partial-register details for P6-family CPUs. Last I looked, the guide incorrectly said Haswell doesn't have zero-latency movzx eax, dl, and that AH-merging was free; see my Q&A about HSW/SKL. But Agner's guide is accurate AFAIK for earlier CPUs.)
  • https://uops.info/ (front-end vs. back-end vs. latency costs for different instructions)
  • What is the best way to set a register to zero in x86 assembly: xor, mov or and? - including the part about avoiding partial-register stalls on P6, how xor eax,eax sets some kind of internal EAX=AL flag.


I have 2 more examples here: https://godbolt.org/z/z45xr4hq1 GCC generates both sensible and strange movzx, and Clang's use of mov and movzx just makes no sense to me.

clang's mov ecx, edx zero-extension from 32 to 64 instead of from 8 to 64 is because it depends on an unofficial extension to the x86-64 SysV calling convention, that narrow args are extended to 32-bit. AMD Zen CPUs can do mov-elimination on mov ecx, edx but not for movzx-byte, so this is actually more efficient, as well as saving code-size.

(GCC and clang both make callers that respect this unofficial calling-convention feature, but only clang makes callees that depend on it. ICC doesn't do either so is not ABI-compatible with clang.)

Extension to intptr_t is of course necessary for all narrower args if you're going to index an array with one. (In abstract C terms, this is just part of using the value for pointer math). High garbage is allowed in at least the high 32 bits of the 64-bit register.
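
As an illustration of that last point (my example, not from the post): a callee that indexes with a narrow argument needs the value extended to pointer width, and clang's generated code may additionally rely on the caller having already extended it to 32 bits:

#include <stdint.h>

// Indexing needs idx extended to pointer width; a clang-compiled callee may
// rely on the caller having zero-extended the uint8_t arg to 32 bits
// (the unofficial calling-convention extension described above).
int load_elem(const int* table, uint8_t idx) {
    return table[idx];
}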

Clang vs GCC for my Linux Development project

EDIT:

The gcc folks have really improved the diagnostics experience in gcc (ah, competition). They created a wiki page to showcase it here. gcc 4.8 now has quite good diagnostics as well (gcc 4.9.x added color support). Clang is still in the lead, but the gap is closing.


Original:

For students, I would unconditionally recommend Clang.

The performance of the code generated by gcc and Clang is now unclear (though I think gcc 4.7 still has the lead; I haven't seen conclusive benchmarks yet), but for students learning the language it does not really matter anyway.

On the other hand, Clang's extremely clear diagnostics are definitely easier for beginners to interpret.

Consider this simple snippet:

#include <string>
#include <iostream>

struct Student {
    std::string surname;
    std::string givenname;
}

std::ostream& operator<<(std::ostream& out, Student const& s) {
    return out << "{" << s.surname << ", " << s.givenname << "}";
}

int main() {
    Student me = { "Doe", "John" };
    std::cout << me << "\n";
}

You'll notice right away that the semi-colon is missing after the definition of the Student class, right :) ?
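
For reference, the fix is just the trailing semicolon on the class definition:

struct Student {
    std::string surname;
    std::string givenname;
};   // <- this semicolon was missing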

Well, gcc notices it too, after a fashion:

prog.cpp:9: error: expected initializer before ‘&’ token
prog.cpp: In function ‘int main()’:
prog.cpp:15: error: no match for ‘operator<<’ in ‘std::cout << me’
/usr/lib/gcc/i686-pc-linux-gnu/4.3.4/include/g++-v4/ostream:112: note: candidates are: std::basic_ostream<_CharT, _Traits>& std::basic_ostream<_CharT, _Traits>::operator<<(std::basic_ostream<_CharT, _Traits>& (*)(std::basic_ostream<_CharT, _Traits>&)) [with _CharT = char, _Traits = std::char_traits<char>]
/usr/lib/gcc/i686-pc-linux-gnu/4.3.4/include/g++-v4/ostream:121: note: std::basic_ostream<_CharT, _Traits>& std::basic_ostream<_CharT, _Traits>::operator<<(std::basic_ios<_CharT, _Traits>& (*)(std::basic_ios<_CharT, _Traits>&)) [with _CharT = char, _Traits = std::char_traits<char>]
/usr/lib/gcc/i686-pc-linux-gnu/4.3.4/include/g++-v4/ostream:131: note: std::basic_ostream<_CharT, _Traits>& std::basic_ostream<_CharT, _Traits>::operator<<(std::ios_base& (*)(std::ios_base&)) [with _CharT = char, _Traits = std::char_traits<char>]
/usr/lib/gcc/i686-pc-linux-gnu/4.3.4/include/g++-v4/ostream:169: note: std::basic_ostream<_CharT, _Traits>& std::basic_ostream<_CharT, _Traits>::operator<<(long int) [with _CharT = char, _Traits = std::char_traits<char>]
/usr/lib/gcc/i686-pc-linux-gnu/4.3.4/include/g++-v4/ostream:173: note: std::basic_ostream<_CharT, _Traits>& std::basic_ostream<_CharT, _Traits>::operator<<(long unsigned int) [with _CharT = char, _Traits = std::char_traits<char>]
/usr/lib/gcc/i686-pc-linux-gnu/4.3.4/include/g++-v4/ostream:177: note: std::basic_ostream<_CharT, _Traits>& std::basic_ostream<_CharT, _Traits>::operator<<(bool) [with _CharT = char, _Traits = std::char_traits<char>]
/usr/lib/gcc/i686-pc-linux-gnu/4.3.4/include/g++-v4/bits/ostream.tcc:97: note: std::basic_ostream<_CharT, _Traits>& std::basic_ostream<_CharT, _Traits>::operator<<(short int) [with _CharT = char, _Traits = std::char_traits<char>]

