(-2147483648 > 0) Returns True in C++

(-2147483648 > 0) returns true in C++?

-2147483648 is not a "number". The C++ language does not support negative literal values.

-2147483648 is actually an expression: the positive literal value 2147483648 with the unary - operator in front of it. The value 2147483648 is apparently too large for the positive side of the int range on your platform. If the type long int had a greater range on your platform, the compiler would have to assume automatically that 2147483648 has type long int. (In C++11 the compiler would also have to consider the long long int type.) This would make the compiler evaluate -2147483648 in the domain of the larger type, and the result would be negative, as one would expect.
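To illustrate that well-defined case, here is a minimal sketch, assuming a platform where long (or long long) is 64 bits rather than the 32-bit-only setup described below:

#include <iostream>

int main()
{
    // 2147483648 gets a 64-bit signed type, so the negation is evaluated in that
    // type and produces a genuinely negative value.
    std::cout << std::boolalpha << (-2147483648 > 0) << '\n';   // prints false
    return 0;
}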

However, apparently in your case the range of long int is the same as the range of int, and in general there's no integer type with a greater range than int on your platform. This formally means that the positive constant 2147483648 overflows all available signed integer types, which in turn means that the behavior of your program is undefined. (It is a bit strange that the language specification opts for undefined behavior in such cases, instead of requiring a diagnostic message, but that's the way it is.)

In practice, taking into account that the behavior is undefined, 2147483648 might get interpreted as some implementation-dependent negative value which happens to turn positive after having unary - applied to it. Alternatively, some implementations might decide to attempt using unsigned types to represent the value (for example, in C89/90 compilers were required to use unsigned long int, but not in C99 or C++). Implementations are allowed to do anything, since the behavior is undefined anyway.

As a side note, this is the reason why constants like INT_MIN are typically defined as

#define INT_MIN (-2147483647 - 1)

instead of the seemingly more straightforward

#define INT_MIN -2147483648

The latter would not work as intended.
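The difference is easy to observe with decltype. Here is a minimal sketch, assuming a typical platform where int is 32 bits and long is 64 bits (the macro names are made up for illustration):

#include <type_traits>

#define NAIVE_INT_MIN -2147483648           // 2147483648 has type long here, so the whole expression is long
#define PORTABLE_INT_MIN (-2147483647 - 1)  // every operand and the result fit in int

static_assert(std::is_same_v<decltype(NAIVE_INT_MIN), long>,
              "the naive macro does not have type int");
static_assert(std::is_same_v<decltype(PORTABLE_INT_MIN), int>,
              "the subtraction keeps the type int");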

Why does the smallest int, -2147483648, have type 'long'?

In C, -2147483648 is not an integer constant. 2147483648 is an integer constant, and - is just a unary operator applied to it, yielding a constant expression. The value 2147483648 does not fit in an int (it is one too large; 2147483647 is typically the largest value an int can hold), so the integer constant has type long, which causes the problem you observe. If you want to mention the lower limit for an int, either use the macro INT_MIN from <limits.h> (the portable approach) or carefully avoid mentioning 2147483648:

printf("PRINTF(d) \t: %d\n", -1 - 2147483647);

C++ program giving -2147483648 output

FACTime[2] is never initialized and neither is most of priority[].

In your first loop, you set priority[TPri] only if i > 5, and you set TPri = i - 6, which can therefore only be 0. So priority[0] is set, but nothing else. In your second loop, FACTime[2] is set only if priority[2] matches one of your if/else branches, but it doesn't, because priority[2] is uninitialized.

If each element in the priority array is intended to correspond to a single activity as entered by the user, you probably wanted something like:

for (int f = 0; f < A; f++)
{
    cout << NumActivities[6][0];   // prompt for the activity's name
    cin >> Name[f];

    cout << NumActivities[6][1];   // prompt for the activity's priority
    cin >> priority[f];
}

Why does MSVC pick a long long as the type for -2147483648?

Contrary to popular belief, -2147483648 is not a literal: C++ does not support negative literal values.

It is, in fact, a compile-time evaluable constant expression consisting of a unary negation of the literal 2147483648.

On MSVC x64, where both int and long are 32 bits, 2147483648 is too big for either of those, so it falls through to the long long type that you observe.
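This can be checked at compile time. A small sketch, assuming an MSVC x64 target (32-bit int and long):

#include <type_traits>

// The decimal literal skips int and long (both too small) and lands on long long;
// negating it does not change the type.
static_assert(std::is_same_v<decltype(2147483648), long long>,
              "2147483648 has type long long on this target");
static_assert(std::is_same_v<decltype(-2147483648), long long>,
              "-2147483648 is a negated long long, still long long");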

Why is 0 < -0x80000000?

This is quite subtle.

Every integer literal in your program has a type. Which type it has is determined by a table in 6.4.4.1 of the C standard:

Suffix    Decimal Constant         Octal or Hexadecimal Constant

none      int                      int
          long int                 unsigned int
          long long int            long int
                                   unsigned long int
                                   long long int
                                   unsigned long long int

If a literal number can't fit inside the default int type, it will attempt the next larger type as indicated in the above table. So for regular decimal integer literals it goes like:

  • Try int
  • If it can't fit, try long
  • If it can't fit, try long long.

Hex literals behave differently though! If the literal can't fit inside a signed type like int, it will first try unsigned int before moving on to trying larger types. See the difference in the above table.

So on a 32 bit system, your literal 0x80000000 is of type unsigned int.
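The two notations can be compared directly with decltype (the table above is from the C standard, but C++ literals follow the same pattern for these cases). A sketch assuming 32-bit int and 64-bit long; on a 32-bit long target the decimal literal would become long long instead:

#include <type_traits>

static_assert(std::is_same_v<decltype(0x80000000), unsigned int>,
              "hex: unsigned int is tried before any larger type");
static_assert(std::is_same_v<decltype(2147483648), long>,
              "decimal: unsigned int is skipped, long is next");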

This means that you can apply the unary - operator to the literal without invoking the undefined behavior you would otherwise get when overflowing a signed integer: negating an unsigned int is well-defined, modular arithmetic. The result is the value 0x80000000, a positive value.

bal < INT32_MIN invokes the usual arithmetic conversions, and the result of the expression -0x80000000 (the unsigned int value 0x80000000) is converted from unsigned int to long long. The value 0x80000000 is preserved, and 0 is less than 0x80000000, hence the result.

When you replace the literal with 2147483648L, you use decimal notation, so the compiler doesn't pick unsigned int but instead tries to fit the value inside a long. The L suffix also says that you want a long if possible, and it follows similar rules if you continue reading the table in 6.4.4.1: if the number doesn't fit inside the requested long, which it doesn't in the 32-bit case, the compiler gives you a long long, where it fits just fine.
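Putting it together, here is a minimal sketch reproducing the comparison described above (the macro and variable definitions are reconstructed assumptions rather than the original code; it assumes 32-bit int):

#include <cstdio>

#define MY_INT32_MIN (-0x80000000)   // 0x80000000 is unsigned int; negating it wraps back to 0x80000000u

int main()
{
    long long bal = 0;

    // 0x80000000u converts to long long as 2147483648, so the test is 0 < 2147483648.
    if (bal < MY_INT32_MIN)
        std::printf("0 is less than MY_INT32_MIN?!\n");   // this branch runs
    else
        std::printf("0 is not less than MY_INT32_MIN\n");

    return 0;
}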

Why does 0 | "" | {} extends 0 | "" ? true : false return false?

The keyword was chosen to match ES6 classes: class SubClass extends BaseClass... which means SubClass is a more specialized type.

Knowing this, one problem becomes clear:
0 | "" | {} is not a more specialized type than 0 | "". Instead of narrowing it down, you added more possibilities for it to handle. So it should indeed return false.

You can test the idea of "MoreSpecific extends LessSpecific", for example: 0 | "" extends 0 | "" | {} ? true : false (result: true).

Another approach is to think about assignments.

let variable: BaseClass = some_instance_of_SubClass should be a valid assignment.

But if you have a value known to be of type 0 | "" | {} and try let myVar1: 0 | "" = value; you'll find that it's not assignable, as none of the members of the union type 0 | "" allows you to assign {}.

As for 0 | "" | {} extends 0 | {} leading to true, it’s because of some peculiarities of the empty object type.
You can do such assignments:

let test1: {} = ""
let test2: {} = 34
let test3: {} = {"anything you like": "other than null or undefined"}

(this is apparently done for historical reasons; see https://github.com/microsoft/TypeScript/issues/44520: “It's a concession to back-compat because { } was the top type when generics were introduced”)

Since you can assign anything other than null or undefined to {}, anything other than null or undefined extends {}.

0 | "" | {} is assignable to type {} as none of the values are null/undefined. Therefore, 0 | "" | {} is assignable to type 0 | {}. So "... extends ..." should return true.

The 'extra possibility' of "" is already covered by the {} part of 0 | {}.


