Why does the most negative int value cause an error about ambiguous function overloads?

This is a very subtle error. What you are seeing is a consequence of there being no negative integer literals in C++. If we look at [lex.icon], we see that an integer-literal,

integer-literal
        decimal-literal integer-suffix(opt)
        [...]

can be a decimal-literal,

decimal-literal:
        nonzero-digit
        decimal-literal '(opt) digit

where digit is [0-9] and nonzero-digit is [1-9], and the suffix part can be one of u, U, l, L, ll, or LL. Nowhere in here does it include - as part of the decimal literal.

In §2.13.2, we also have:

An integer literal is a *sequence of digits* that has no period or exponent part, with optional separating single quotes that are ignored when determining its value. An integer literal may have a prefix that specifies its base and a suffix that specifies its type. The lexically first digit of the sequence of digits is the most significant. A decimal integer literal (base ten) begins with a digit other than 0 and consists of a sequence of decimal digits.

(emphasis mine)

Which means the - in -2147483648 is the unary operator -, so -2147483648 is actually parsed as -(2147483648). Since 2147483648 is one too large for your int, the literal is not an int at all: it takes the first larger type that can represent it (long int, or long long int on platforms where long is also 32 bits), and the ambiguity comes from that type not matching any of your overloads exactly.
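
As a minimal sketch (assuming a platform where int is 32 bits and long is 64 bits; the show overloads are illustrative, not from the original question):

#include <iostream>

void show(int)  { std::cout << "int\n"; }
void show(long) { std::cout << "long\n"; }

int main() {
    show(2147483647);   // fits in int: prints "int"
    show(-2147483648);  // 2147483648 has type long (it does not fit in int),
                        // and unary minus applied to a long stays long: prints "long"
}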

If you want to get the minimum or maximum value for a type in a portable manner you can use:

std::numeric_limits<type>::min();  // or max(); requires the <limits> header
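
For example, a minimal complete program (the printed values assume a 32-bit int):

#include <iostream>
#include <limits>

int main() {
    std::cout << std::numeric_limits<int>::min() << '\n';  // -2147483648 on a 32-bit int
    std::cout << std::numeric_limits<int>::max() << '\n';  // 2147483647 on a 32-bit int
}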

Why does the literal `0` being a valid candidate for int and const std::string& overloads cause an ambiguous call?

0 is special in C++. A null pointer has the value 0, so C++ allows the conversion of the literal 0 to a pointer type. That means when you call

a.f(0);

you could be calling void f(int i = 0) const with an int of value 0, or you could be calling void f(const std::string&) with a char* initialized to null.

Normally the int version would be better since it is an exact match, but in this case the int version is const, so calling it requires "converting" a to a const CppSyntaxA. The std::string version requires no such conversion on the object, but it does require converting 0 to a char* and then to a std::string. Each overload is better on one measure and worse on the other, so neither is a strictly better match, and the call is ambiguous. Making both functions const or both non-const fixes the issue, and the int overload is then chosen since it is the better match.
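
A minimal sketch of the situation; the class name A is mine, standing in for the CppSyntaxA of the original question:

#include <iostream>
#include <string>

struct A {
    void f(int = 0) const       { std::cout << "f(int) const\n"; }
    void f(const std::string&)  { std::cout << "f(const std::string&)\n"; }
};

int main() {
    A a;
    // a.f(0);       // error: ambiguous, exactly as described above
    const A& ca = a;
    ca.f(0);         // OK: only the const overload is viable now
}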

Why is it ambiguous to call overloaded ambig(long) and ambig(unsigned long) with an integer literal?

You're passing an int to this overloaded function.

Although human intuition says that ambig(signed long) ought to be preferred because your input is a negative integer (which cannot be represented as such by an unsigned long), the two conversions are in fact equivalent in "precedence" in C++.

That is, the conversion int → unsigned long is considered to be just as valid as int → signed long, and neither is preferred to the other.

On the other hand, if your parameter were already a long rather than an int, then there is an exact match to signed long, with no conversion necessary. This avoids the ambiguity.

void ambig(  signed long) { }
void ambig(unsigned long) { }

int main(void) { ambig(static_cast<long>(-1)); return 0; }

"Just one of those things".


[C++11: 4.13/1]: ("Integer conversion rank")

Every integer type has an integer conversion rank defined as follows:

  • [..]
  • The rank of a signed integer type shall be greater than the rank of any signed integer type with a smaller size.
  • The rank of long long int shall be greater than the rank of long int, which shall be greater than the rank of int, which shall be greater than the rank of short int, which shall be greater than the rank of signed char.
  • The rank of any unsigned integer type shall equal the rank of the corresponding signed integer type.
  • [..]

[ Note: The integer conversion rank is used in the definition of the integral promotions (4.5) and the usual arithmetic conversions (Clause 5). —end note ]

Overload resolution is complex, and is defined in [C++11: 13.3]; I shan't bore you by quoting a majority of it here.

Here's a highlight, though:

[C++11: 13.3.3.1/8]: If no conversions are required to match an argument to a parameter type, the implicit conversion sequence is the standard conversion sequence consisting of the identity conversion (13.3.3.1.1).

[C++11: 13.3.3.1/9]: If no sequence of conversions can be found to convert an argument to a parameter type or the conversion is otherwise ill-formed, an implicit conversion sequence cannot be formed.

[C++11: 13.3.3.1/10]: If several different sequences of conversions exist that each convert the argument to the parameter type, the implicit conversion sequence associated with the parameter is defined to be the unique conversion sequence designated the ambiguous conversion sequence. For the purpose of ranking implicit conversion sequences as described in 13.3.3.2, the ambiguous conversion sequence is treated as a user-defined sequence that is indistinguishable from any other user-defined conversion sequence [footnote 134]. If a function that uses the ambiguous conversion sequence is selected as the best viable function, the call will be ill-formed because the conversion of one of the arguments in the call is ambiguous.

  • /10 is the case you're experiencing; /8 is the case you use with a long argument.

C++: Weird negative integer output, most likely due to an error in the loop, but I can't seem to notice it

You have to initialize rang1do2k, rang2do3k, etc. to 0 before your loop; otherwise they hold indeterminate values, which is where the weird negative output comes from.
Then, if none of your input falls into any branch other than the 4000-4999 range, all the other counters will simply stay at 0.

int rang1do2k = 0;
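
A sketch of the whole pattern, with illustrative ranges and input data (the variable names follow the question's style):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> values = {1500, 4200, 4700, 2300};
    int rang1do2k = 0, rang2do3k = 0, rang4do5k = 0;  // all counters start at 0
    for (int v : values) {
        if      (v >= 1000 && v < 2000) ++rang1do2k;
        else if (v >= 2000 && v < 3000) ++rang2do3k;
        else if (v >= 4000 && v < 5000) ++rang4do5k;
    }
    std::cout << rang1do2k << ' ' << rang2do3k << ' ' << rang4do5k << '\n';  // prints: 1 1 2
}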

Why calling the operator as a function causes ambiguous call compiler error?

Because there is no global (non-member) operator<< overload for integers; that overload is a member of the output stream:

std::cout.operator<<(x);

A good compiler should have shown you the possible alternatives for an ambiguous call, which should have told you this.
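
A minimal sketch of the difference:

#include <iostream>

int main() {
    int x = 42;
    // operator<<(std::cout, x);    // error: no non-member overload takes an int
    std::cout.operator<<(x);        // OK: the int overload is a member of basic_ostream
    operator<<(std::cout, '\n');    // OK: the char overload IS a non-member function
}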

Is there an isdigit function (overload) which includes the negative symbol?

No, there isn't. A digit is one of the things that make up a representation of a number, along with the radix, sign, decimal separator, thousands separator, and perhaps other things such as exponentiation.

If - were admitted into isdigit then you could expect some folk to want + to be included too, and perhaps even ( and ) to keep the accountants happy.

So, in summary, testing whether something is a representation of a number is an entirely different beast from testing whether a particular character is a digit.
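
If what you actually want is to test whether a string looks like a signed decimal integer, that is easy enough to write yourself. A minimal sketch (the function name is mine):

#include <cctype>
#include <string>

bool looks_like_signed_integer(const std::string& s) {
    // Skip a leading sign only if at least one character follows it.
    std::size_t i = (s.size() > 1 && (s[0] == '-' || s[0] == '+')) ? 1 : 0;
    if (i == s.size()) return false;  // rejects the empty string
    for (; i < s.size(); ++i)
        if (!std::isdigit(static_cast<unsigned char>(s[i])))  // cast avoids UB for negative chars
            return false;
    return true;
}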

Ambiguous call to overloaded static function

The reason is that C++ specifies this call to be ambiguous. For a call written as A::a where this is not in scope, overload resolution augments the argument list with a contrived A object argument in place of *this. Non-static member functions are not excluded from consideration; instead:

If the argument list is augmented by a contrived object and overload resolution selects one of the non-static member functions of T, the call is ill-formed.

This has recently been the subject of extensive discussion in the committee, in the context of core issue 1005. See also core issue 364, which considered changing this rule but didn't do so.
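
A hedged reconstruction of the kind of code being described (the class and member names are illustrative):

struct A {
    static void a(int)    {}
    void        a(double) {}
};

int main() {
    A::a(1);      // OK: the static overload is the better match for an int
    // A::a(1.0); // ill-formed: overload resolution (with a contrived A object
                  // argument) selects the non-static a(double)
}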

Why can integer type int64_t not hold this legal value?

You may write

int64_t a = -1 - 9223372036854775807LL;

The problem is that the - is not part of the literal; it is unary minus. So the compiler first sees 9223372036854775808LL (out of range for a signed int64_t) and only then negates it.

By applying binary minus, we can use two literals which are each in range.
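
Equivalently, and portably, via <limits> (a minimal complete example):

#include <cstdint>
#include <limits>

int main() {
    std::int64_t a = -1 - 9223372036854775807LL;               // both operands are in range
    std::int64_t b = std::numeric_limits<std::int64_t>::min(); // same value, no arithmetic
    return (a == b) ? 0 : 1;                                   // returns 0
}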


