Is There a Difference Between i==0 and 0==i?

Is there a difference between i==0 and 0==i?

Functionally, there is no difference.

Some developers prefer the second format to guard against assignment typos: if you drop one = and accidentally write 0 = i, the compiler reports an error, whereas i = 0 compiles silently.

The second format is famously known as a Yoda condition.


I say there is no difference because you cannot guard yourself against every minuscule detail and rely on the compiler to cry out loud for you. If you intend to write a ==, you should expect yourself to write a == and not a =.

Using the second format just leads to obscure, less readable code.

Also, most mainstream compilers catch the assignment-instead-of-equality typo by emitting a warning once you enable all warnings (which you should anyway).
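As a minimal sketch (the variable name is arbitrary, and the exact warning text varies by compiler), here is the typo each style catches; with GCC or Clang, -Wall is enough to get the warning mentioned above:

#include <cstdio>

int main() {
    int i = 0;

    if (i == 0) std::puts("usual style");  // reads naturally
    if (0 == i) std::puts("Yoda style");   // same meaning

    // Typo in the usual style: compiles, assigns 0 to i, condition is false.
    // GCC/Clang warn here under -Wall ("suggest parentheses around assignment").
    if (i = 0) std::puts("never printed");

    // Typo in the Yoda style: refuses to compile at all.
    // if (0 = i) {}  // error: lvalue required as left operand of assignment

    return 0;
}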

Difference in the two ways of checking equality i==0 vs 0==i in C++

The latter has the advantage that it prevents the programmer from accidentally leaving out an equals sign.

if (i = 0)

The above is legal: it assigns zero to i, and the condition is false (since zero is considered false).

if (0 = i)

The above is illegal in all contexts, because you cannot assign to a literal (it is not an lvalue).

Today, most compilers will warn about if (i = 0), but it is not a requirement, and they didn't always do so.

Difference between while(i=0) and while(i==0)

while (i = 0) will assign the value 0 to i and then check whether the value of the expression (which is the value assigned, i.e. 0) is non-zero. In other words, it won't execute the body of the loop even once... it'll just set i to 0. It'll also raise a warning on any decent compiler, as it's a common typo.

Following the same logic, while (i = 1) would assign the value 1 to i and always execute the loop body... only a break (or exception) within the loop would terminate it.

(Many other languages don't have this issue as widely, as they require an expression of a boolean type for conditions such as while and if. Such languages still often have a problem with while (b = false) though.)
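A small sketch of the behaviours described above (the values are illustrative):

#include <cstdio>

int main() {
    int i = 0;

    while (i == 0) {   // comparison: loops while i is 0
        std::puts("runs once here");
        i = 1;         // make the loop terminate
    }

    while (i = 0) {    // assignment: i becomes 0, condition is false
        std::puts("never printed");
    }

    // while (i = 1) { ... }  // would assign 1 each time: infinite loop
    //                        // unless a break or exception leaves it

    return 0;
}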

How to check for equals? (0 == i) or (i == 0)

I prefer the second one, (i == 0), because it feels much more natural when reading it. You ask people, "Are you 21 or older?", not, "Is 21 less than or equal to your age?"

What is the difference between ++i and i++?

  • ++i will increment the value of i, and then return the incremented value.

    i = 1;
    j = ++i;
    // i is 2, j is 2

  • i++ will increment the value of i, but return the original value that i held before being incremented.

    i = 1;
    j = i++;
    // i is 2, j is 1

For a for loop, either works. ++i seems more common, perhaps because that is what is used in K&R.

In any case, follow the guideline "prefer ++i over i++" and you won't go wrong.

There are a couple of comments regarding the efficiency of ++i and i++. In any non-student-project compiler there will be no performance difference for built-in types. You can verify this by looking at the generated code, which will be identical.

The efficiency question is interesting... here's my attempt at an answer:
Is there a performance difference between i++ and ++i in C?

As @OnFreund notes, it's different for a C++ object, since operator++() is a function and the compiler can't know to optimize away the creation of a temporary object to hold the intermediate value.
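A minimal sketch of the canonical C++ operator pair (the Counter class is illustrative, not from the original discussion); the postfix form is where the temporary comes from:

#include <cstdio>

struct Counter {
    int value = 0;

    // Prefix: increment in place, return a reference. No temporary.
    Counter& operator++() {
        ++value;
        return *this;
    }

    // Postfix: copy the old state, increment, return the copy.
    // This temporary is the cost the "prefer ++i" guideline avoids.
    Counter operator++(int) {
        Counter old = *this;
        ++value;
        return old;
    }
};

int main() {
    Counter c;
    ++c;                   // no copy made
    Counter before = c++;  // copies the pre-increment state
    std::printf("%d %d\n", before.value, c.value);  // prints: 1 2
    return 0;
}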

The difference between 0 and '0' in an array

'0' is the ASCII character for the number 0. Its value is 48.

The constant 0 is a zero byte or null byte, also written '\0'.

These four are equivalent:

char a[6] = {0};
char a[6] = {0, 0, 0, 0, 0, 0};
char a[6] = {'\0', '\0', '\0', '\0', '\0', '\0'};
char a[6] = "\0\0\0\0\0"; // sixth null byte added automatically by the compiler

What is the difference between ++ and += 1 operators?

num += 1 is essentially equivalent to ++num.

All those expressions (num += 1, num++ and ++num) increment the value of num by one, but the value of num++ is the value num had before it got incremented.

Illustration:

int a = 0;
int b = a++; // now b == 0 and a == 1
int c = ++a; // now c == 2 and a == 2
int d = (a += 1); // now d == 3 and a == 3

Use whatever pleases you. I prefer ++num to num += 1 because it is shorter.

Is there a difference between 0 and -0 in JavaScript?

Interesting! Their values are equal (neither is larger than the other), but they are distinct values with several observable differences (including division by 0 or -0, as per Roisin's answer).

Other interesting quirks observed:

const a = 0;
const b = -0;

a == b; // true
a === b; // true

a < b; // false
b < a; // false

Object.is(a, b); // false
Object.is(a, -b); // true

b.toString(); // "0" <-- loses the negative sign

a + b; // 0
b - a; // -0
a * b; // -0

Difference between "\0" and '\0'

Double quotes create string literals. So "\0" is a string literal holding the single character '\0', plus a second one as the terminator. It's a silly way to write an empty string ("" is the idiomatic way).

Single quotes are for character literals, so '\0' is an int-sized value representing the character with the encoding value of 0.

Nits in the code:

  • Don't cast the return value of malloc() in C.
  • Don't scale allocations by sizeof (char); that's always 1, so it adds no value.
  • Pointers are not integers; you should typically compare them against NULL.
  • The overall structure of the code makes no sense: there's an allocation in a loop, but the pointer is thrown away, leaking lots of memory.

Is there a performance difference between int i = 0 and int i = default(int)?

No, they are resolved at compile time and produce the same IL. Value types will be 0 (or false if you have a bool, but that's still 0) and reference types are null.


