Which One Will Execute Faster, If (Flag==0) or If (0==Flag)

Which one will execute faster, if (flag==0) or if (0==flag)?

I haven't seen any correct answer yet (and there are already some). Caveat: Nawaz did point out the user-defined trap. And I regret my hastily cast upvote for "stupidest question", because it seems that many did not get it right, and it leaves room for a nice discussion on compiler optimization :)

The answer is:

What is flag's type?

If flag is actually a user-defined type, then it depends on which overload of operator== is selected. Of course it may seem stupid for the overloads not to be symmetric, but it's certainly allowed, and I have seen other abuses already.

If flag is a built-in type, then both comparisons should take the same time.

From the Wikipedia article on x86, I'd bet on a Jxx instruction for the if statement: perhaps a JNZ (Jump if Not Zero) or some equivalent.

I doubt the compiler would miss such an obvious optimization, even with optimizations turned off. This is the kind of thing peephole optimization is designed for.

EDIT: Sprang up again, so let's add some assembly (LLVM 2.7 IR)

int regular(int c) {
    if (c == 0) { return 0; }
    return 1;
}

int yoda(int c) {
    if (0 == c) { return 0; }
    return 1;
}

define i32 @regular(i32 %c) nounwind readnone {
entry:
%not. = icmp ne i32 %c, 0 ; <i1> [#uses=1]
%.0 = zext i1 %not. to i32 ; <i32> [#uses=1]
ret i32 %.0
}

define i32 @yoda(i32 %c) nounwind readnone {
entry:
%not. = icmp ne i32 %c, 0 ; <i1> [#uses=1]
%.0 = zext i1 %not. to i32 ; <i32> [#uses=1]
ret i32 %.0
}

Even if one does not know how to read the IR, I think it is self-explanatory: both functions compile to exactly the same instructions.

If flag = 0, which is the preferable statement to select the true or false block: if(!flag) or if(flag == 0)?

There are almost no technical differences between the two versions:

  • ! integer-promotes its operand and returns an int, 1 or 0.
  • == balances both operands according to the usual arithmetic conversions (which include integer promotion), then returns an int, 1 or 0.

In most cases you'll get identical machine code. So this is mostly a matter of coding style and therefore subjective.

As a rule of thumb, whatever you pass to if should be regarded as if it had a boolean type. C doesn't have boolean types integrated into the language, hence the above operators return int and not bool, which would make more sense and is how C++ works. C uses int for backwards-compatibility reasons.

We can, however, write C code as if the language had proper boolean integration: pretend that it does. That's the stance recommended by various coding standards such as MISRA-C. But in this specific case it wouldn't affect the code either: you could swap your flag for a standard C bool with values true/false, and it would still be fine to use either !flag or flag == false.

For the sake of readability alone, I'd rewrite your example to this:

#include <stdio.h>
#include <stdbool.h>

#define MAX 100         /* added: the original snippet left these undefined */
static int stack[MAX];

void search (int key)
{
    bool found = false;
    int i;

    for(i=0; i<MAX && !found; i++)
    {
        if(key == stack[i])
        {
            found = true;
        }
    }

    if(found)
        printf("%d found at location %d.\n", key, i);  /* i is one past the match */
    else
        printf("%d not found in the stack.\n", key);
}

The resulting machine code should be identical. The main readability issue was the lack of meaningful variable names.

What's the advantage of using if(0 == foo()) rather than (foo() == 0)?

No, I think the best reason to write it like this:

0 == foo

is to make sure you don't forget an =, which would turn it into

if (0 = foo)

which always raises a compiler error, rather than

if (foo = 0)

which compiles (at best with a warning) and creates a hard-to-find bug.

Which fragment will execute faster / generate fewer lines of code? (C++ / JavaScript)

Local variables in JavaScript are faster to resolve because of scope-chain lookup: the further up the scope chain the engine has to search for an identifier, the slower the resolution.

Is if(A | B) always faster than if(A || B)?

Is if(A | B) always faster than if(A || B)?

No, if(A | B) is not always faster than if(A || B).

Consider a case where A is true and the B expression is a very expensive operation. Not doing the operation can save that expense.

So the question becomes: why don't we always use | instead of || in branches?

Besides the cases where the logical or is more efficient, efficiency is not the only concern. Operations often have pre-conditions, and sometimes the result of the left-hand operand signals whether the pre-condition is satisfied for the right-hand operand. In such cases, we must use the logical operator.

if (b1[i])          // maybe this exists somewhere in the program
    b2 = nullptr;

if (b1[i] || b2[i]) // OK
if (b1[i] | b2[i])  // NOT OK; indirection through null pointer

It is this possibility that is typically the problem when the optimiser is unable to replace logical with bitwise. In the example of if(b1[i] || b2[i]), the optimiser can make such a replacement only if it can prove that b2 is valid at least when b1[i] != 0. That condition might not exist in your example, but that doesn't mean it would necessarily be easy, or sometimes even possible, for the optimiser to prove that it doesn't exist.


Furthermore, there can be a dependency on the order of the operations, for example if one operand modifies a value read by the other operation:

if (a || (a = b)) // OK: a is read before the assignment is evaluated
if (a | (a = b))  // NOT OK; unsequenced read and write of a: undefined behaviour

Also, there are types that are convertible to bool and thus are valid operands for ||, but are not valid operands for |:

if(ptr1 || ptr2) // OK
if(ptr1 | ptr2) // NOT OK; no bitwise or for pointers

TL;DR If we could always use bitwise or instead of the logical operators, then there would be no need for logical operators and they probably wouldn't be in the language. But such replacement is not always possible, which is the reason why we use logical operators, and also the reason why the optimiser sometimes cannot use the faster option.

Which is faster: != or <=?

For primitive types, both operations take the exact same amount of time, since the flags for all comparisons are computed regardless of which one you ask for.

In short, whenever you make a basic comparison, < <= > >= == or !=, one side of the operator is subtracted from the other. The result of the subtraction is then used to set a number of flags, the most important of which are Z (zero), N (negative), and O (overflow). Based on the names, you should be able to figure out what each flag represents. For example, if the result of the subtraction is zero, then the Z flag is set. Thus, whether you ask for <= or !=, all the processor does is check the flags, which have all been set appropriately as a result of the initial subtraction.

Theoretically, <= should take slightly longer, since two flags (Z and N) must be checked instead of one (!= just cares about Z). But this happens at such a low level that the difference is most likely negligible even on a microsecond scale.

If you're really interested, read up on processor status registers.

For non-primitive types, i.e. classes, it depends on the implementation of the relational operators.


