Why Is −1 > sizeof(int)?

Why is −1 > sizeof(int) true?

The following is how the C++ standard (ISO 14882) explains why -1 > sizeof(int) is true.

The relational operator `>` is defined in 5.9 (expr.rel/2):

The usual arithmetic conversions are performed on operands of arithmetic or enumeration type. ...

The usual arithmetic conversions are defined in 5 (expr/9):

... This pattern is called the usual arithmetic conversions, which are defined as follows:

  • If either operand is of type long
    double, ...
  • Otherwise, if either operand is double, ...
  • Otherwise, if either operand is float, ...
  • Otherwise, the integral promotions shall be performed on both operands.
  • ...

The integral promotions are defined in 4.5 (conv.prom/1):

An rvalue of type char, signed char, unsigned char, short int, or unsigned short int can be converted to an rvalue of type int if int can represent all the values of the source type; otherwise, the source rvalue can be converted to an rvalue of type unsigned int.

The result of sizeof is defined in 5.3.3 (expr.sizeof/6):

The result is a constant of type size_t

size_t is defined in the C standard (ISO 9899) as an unsigned integer type.

So in -1 > sizeof(int), the > operator triggers the usual arithmetic conversions. Since sizeof(int) has the unsigned type size_t, and no signed type in the comparison can represent all the values of size_t, the -1 is converted to size_t. It wraps around to a very large number (SIZE_MAX, whose exact value depends on the platform). So -1 > sizeof(int) is true.

Why is sizeof(int) less than -1?

sizeof yields a size_t, which is always non-negative. You are comparing it with -1, which is converted to size_t, giving you a HUGE number, almost certainly greater than the size of an int.

To see this for yourself, try this:

printf("%zu\n", sizeof(int));
printf("%zu\n", (size_t)(-1));

[EDIT]: Following the comments (some of which have been removed), I should indeed clarify that sizeof is an operator, not a function.

What is the reason behind the False output of this code?

Your sizeof(int) > -1 test compares two unsigned integers. This is because the sizeof operator yields a size_t value, which is an unsigned type, so the -1 value is converted to its 'equivalent' representation as an unsigned value, which is actually the largest possible value of that unsigned type.

To fix this, you need to explicitly cast the sizeof value to a (signed) int:

if ((int)sizeof(int) > -1) {
    printf("True");
}

According to the C standard, what is the size of an int?

My question is what, if anything, does the C standard have to say about the size of an int?

A ‘‘plain’’ int object has the natural size suggested by the architecture of the execution environment (large enough to contain any value in the range INT_MIN to INT_MAX as defined in the header <limits.h>).

and

Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign.

— minimum value for an object of type int
INT_MIN -32767 // −(2^15 − 1)

— maximum value for an object of type int
INT_MAX +32767 // 2^15 − 1

so you can't have an 8-bit int on a compliant implementation.


