Large Negative Integer Literals

-2147483648, for example, is not an integer literal; it's an expression consisting of a unary - operator applied to the literal 2147483648.

Prior to the C++ 2011 standard, C++ didn't require the existence of any integer type bigger than 32 bits (C++ 2011 adds long long), so the literal 2147483648 is non-portable.

A decimal integer literal is of the first of the following types in which its value fits:

int
long int
long long int (new in C++ 2011)

Note that it's never of an unsigned type in standard C++. In the 1998 and 2003 versions of the C++ standard (which don't have long long int), a decimal integer literal that's too big to fit in long int results in undefined behavior. In C++ 2011, if a decimal integer literal doesn't fit in long long int, then the program is "ill-formed".

But gcc (at least as of release 4.6.1, the latest one I have) doesn't implement the C++2011 semantics. The literal 2147483648, which doesn't fit in a 32-bit long, is treated as unsigned long, at least on my 32-bit system. (That's fine for C++98 or C++2003; the behavior is undefined, so the compiler can do anything it likes.)

So given a typical 32-bit 2's-complement int type, this:

cout << -2147483647 << '\n';

takes the int value 2147483647, negates it, and prints the result, which matches the mathematical result you'd expect. But this:

cout << -2147483648 << '\n';

(when compiled with gcc 4.6.1) takes the unsigned long value 2147483648, negates it using modular (unsigned) arithmetic, which yields 2147483648 again, and prints that.

As others have mentioned, you can use suffixes to force a particular type.

Here's a small program that you can use to show how your compiler treats literals:

#include <iostream>
#include <climits>

const char *type_of(int) { return "int"; }
const char *type_of(unsigned int) { return "unsigned int"; }
const char *type_of(long) { return "long"; }
const char *type_of(unsigned long) { return "unsigned long"; }
const char *type_of(long long) { return "long long"; }
const char *type_of(unsigned long long) { return "unsigned long long"; }

int main()
{
    std::cout << "int: " << INT_MIN << " .. " << INT_MAX << "\n";
    std::cout << "long: " << LONG_MIN << " .. " << LONG_MAX << "\n";
    std::cout << "long long: " << LLONG_MIN << " .. " << LLONG_MAX << "\n";

    std::cout << "2147483647 is of type " << type_of(2147483647) << "\n";
    std::cout << "2147483648 is of type " << type_of(2147483648) << "\n";
    std::cout << "-2147483647 is of type " << type_of(-2147483647) << "\n";
    std::cout << "-2147483648 is of type " << type_of(-2147483648) << "\n";
}

When I compile it, I get some warnings:

lits.cpp:18:5: warning: this decimal constant is unsigned only in ISO C90
lits.cpp:20:5: warning: this decimal constant is unsigned only in ISO C90

and the following output, even with gcc -std=c++0x:

int: -2147483648 .. 2147483647
long: -2147483648 .. 2147483647
long long: -9223372036854775808 .. 9223372036854775807
2147483647 is of type int
2147483648 is of type unsigned long
-2147483647 is of type int
-2147483648 is of type unsigned long

I get the same output with VS2010, at least with default settings.

Why does the most negative int value cause an error about ambiguous function overloads?

This is a very subtle error. What you are seeing is a consequence of there being no negative integer literals in C++. If we look at [lex.icon] we see that an integer-literal,

integer-literal
        decimal-literal integer-suffixopt
        [...]

can be a decimal-literal,

decimal-literal:
        nonzero-digit
        decimal-literal ’opt digit

where digit is [0-9] and nonzero-digit is [1-9], and the suffix part can be one of u, U, l, L, ll, or LL. Nowhere in here does it include - as being part of the decimal literal.

In §2.13.2, we also have:

An integer literal is a sequence of digits that has no period or exponent part, with optional separating single quotes that are ignored when determining its value. An integer literal may have a prefix that specifies its base and a suffix that specifies its type. The lexically first digit of the sequence of digits is the most significant. A decimal integer literal (base ten) begins with a digit other than 0 and consists of a sequence of decimal digits.

(emphasis mine)

Which means the - in -2147483648 is the unary - operator. That means -2147483648 is actually treated as -(2147483648). Since 2147483648 is one more than your INT_MAX, the literal takes the next larger type (long int), and the ambiguity comes from that type not matching any overload exactly.

If you want to get the minimum or maximum value for a type in a portable manner you can use:

std::numeric_limits<type>::min();  // or max()

C warning(clang compiler) integer literal is too large to be represented in a signed integer

The problem is that -9223372036854775808LL is not actually an integer literal. It's the literal 9223372036854775808LL with the unary - operator applied to it. The value 9223372036854775808 is too large to fit in a signed long long which is why you're getting the warning.

You can fix this by using the expression -9223372036854775807LL - 1LL instead. The value 9223372036854775807 fits in a signed long long as does -9223372036854775807LL, then subtracting 1 gives you the value you want.

Alternately, you can use the macro LLONG_MIN.

How do I mute this error : integer literal is too large to be represented in a signed integer type

By default an integer literal is signed, and of course these values are too big for a signed long long, so you need to let the compiler know that they have an unsigned type, like this

mask_left = 9259542123273814143U;
mask_top = 18374686479671623680U;

Comparing unsigned integer with negative literals

This is covered in C classes and is specified in the standard. Here is how you can use the documents to figure this out.

In the 2018 C standard, you can look up > or “relational expressions” in the index to see they are discussed on pages 68-69. On page 68, you will find clause 6.5.8, which covers relational operators, including >. Reading it, paragraph 3 says:

If both of the operands have arithmetic type, the usual arithmetic conversions are performed.

“Usual arithmetic conversions” is listed in the index as defined on page 39. Page 39 has clause 6.3.1.8, “Usual arithmetic conversions.” This clause explains that operands of arithmetic types are converted to a common type, and it gives rules determining the common type. For two integer types of different signedness, such as the unsigned long and the long int in bar (a and -2L), it says that, if the unsigned type has rank greater than or equal to the rank of the other type, the signed type is converted to the unsigned type.

“Rank” is not in the index, but you can search the document to find it is discussed in clause 6.3.1.1, which tells you the rank of long int is greater than the rank of int, and that any unsigned type has the same rank as the corresponding signed type.

Now you can consider a > -2L in bar, where a is unsigned long. Here we have an unsigned long compared with a long. They have the same rank, so -2L is converted to unsigned long. Conversion of a signed integer to unsigned is discussed in clause 6.3.1.3. It says the value is converted by wrapping it modulo ULONG_MAX+1, so the signed long −2 produces a large integer. Then comparing a, which has the value 99, to a large integer with > yields false, so zero is returned.

For foo, we continue with the rules for the usual arithmetic conversions. When the unsigned type does not have rank greater than or equal to the rank of the signed type, but the signed type can represent all the values of the type of the operand with unsigned type, the operand with the unsigned type is converted to the type of the operand with the signed type. In foo, a is unsigned int and -2L is long int. Presumably in your C implementation, long int is 64 bits, so it can represent all the values of a 32-bit unsigned int. So this rule applies, and a is converted to long int. This does not change the value. So the original value of a, 99, is compared to −2 with >, and this yields true, so one is returned.

On type of a literal, unsigned negative number

You are assigning the unsigned int back to a signed int, so it gets converted again.

It's like you did this:

int i = (int)(unsigned int)(-12);

C++ Primer paragraph on Integer literals, need someone to clarify some points

If you have an integer literal, for example a decimal integer literal, the compiler has to determine its type. A decimal literal can be used in expressions, and the compiler needs to determine the type of an expression based on the types of its operands.

So for decimal integer literals the compiler selects between the following types

int
long int
long long int

and chooses the first type that can accommodate the decimal literal.

It does not consider unsigned integer types such as unsigned int or unsigned long int, even though they could accommodate a given literal.

The situation is different when the compiler deals with octal or hexadecimal integer literals. In this case it considers the following types in the given order

int
unsigned int
long int
unsigned long int
long long int
unsigned long long int

To make the idea clearer, consider an artificial example. Let's assume that you have a value equal to 127. This value can be stored in type signed char. Now what about the value 128? It cannot be stored in an object of type signed char because the maximum positive value that can be stored in an object of type signed char is 127.

What to do? We could store 128 in an object of type unsigned char, because its maximum value is 255. However, the compiler prefers to store it in an object of the larger signed type, signed short.

But if this value were written as 0x80, then the compiler would select an object of type unsigned char.

It is of course an imaginary process.

However, in reality a similar algorithm is used for decimal literals; the compiler simply considers integer types starting from int when determining the type of a decimal literal.

Why does a negative NSInteger (long) value become garbage when sent through variadic arguments?

Because you're passing integer literals as the arguments; in the case of 2 and -2, these will be passed as ints. So you're invoking undefined behaviour by trying to read an NSInteger (a long) from the argument list.

To solve this, use an explicit cast:

test(2, (NSInteger)-2);

Negative result due to overflow despite using long long int

You either need to declare one of the variables you're multiplying as a long long (a plain long is only 32 bits on some platforms, such as Windows), or you need to cast it to long long during the multiplication.

Declaring one variable as a long long:
    long long A = 200;
    int B = 400, C=150, D=210, ...

Declaring all variables as long long:
    long long A = 200, B = 400, C=150, D=210, ...

Casting variable A to long long during the multiplication:
    non_Zero_value_number=(percent*((long long) A*B*C*D))/100;

The problem is that C++ is doing all of the calculations in int before turning the end-result into a long. Which means that you still have to deal with the int overflow once the int value exceeds 2147483647 (see https://learn.microsoft.com/en-us/cpp/c-language/cpp-integer-limits)

Since 10 x 200 x 400 x 150 x 210 = 25200000000, you are exceeding the limits of int by more than a factor of ten

Note: This question is very similar to the one in the link (although for C++ instead of C#):
Multiplying int with long result c#


