Why Does MSVC Pick a long long as the Type for -2147483648

long long value in Visual Studio

This is a really nasty little problem which has three(!) causes.

Firstly, there is the problem that floating-point arithmetic is approximate. If the compiler picks a pow function returning float or double, then pow(4, 31) is so large that 5 is less than 1 ULP (unit of least precision), so adding it does nothing (in other words, pow(4.0, 31) + 5 == pow(4.0, 31)). Multiplying by -2 can be done without loss, and the result can be stored in a long long without loss as the wrong answer: -9,223,372,036,854,775,808.
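To see the rounding concretely, here is a minimal sketch using std::pow directly (the OP's exact expression isn't reproduced here, but the effect is the same):

#include <cmath>
#include <iostream>

int main() {
    double p = std::pow(4.0, 31);        // exactly 2^62; 1 ULP at this magnitude is 2^10 = 1024
    std::cout << (p + 5 == p) << '\n';   // prints 1: adding 5 is absorbed by rounding
    long long v = static_cast<long long>(-2 * (p + 5));
    std::cout << v << '\n';              // prints -9223372036854775808, the "wrong answer"
}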

Secondly, a standard header may include other standard headers, but is not required to. Evidently, Visual Studio's version of <iostream> includes <math.h> (which declares pow in the global namespace), but Code::Blocks' version doesn't.

Thirdly, the OP's pow function is not selected because he passes the arguments 4 and 31, which are both of type int, while his declared function has parameters of type unsigned. Since C++11, there are many overloads (or a function template) of std::pow, and these all return float or double (unless one of the arguments is of type long double, which doesn't apply here).

Thus an overload of std::pow will be a better match ... with a double return value, and we get floating-point rounding.
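As a rough illustration of that overload resolution (the integer pow below is a hypothetical reconstruction, not the OP's actual code; libstdc++, libc++ and MSVC's library all implement the extra std::pow overloads as a promoting template, which is what this relies on):

#include <cmath>         // brings the std::pow overload set into scope
#include <iostream>
#include <type_traits>
using namespace std;

// Hypothetical user-defined pow with unsigned parameters, exact integer arithmetic.
long long pow(unsigned base, unsigned n) {
    long long r = 1;
    while (n--) r *= base;
    return r;
}

int main() {
    // 4 and 31 are ints, so the library's promoting pow template is an exact match,
    // beating the user function (which would need int -> unsigned conversions).
    auto r = pow(4, 31);
    static_assert(is_same<decltype(r), double>::value, "the library pow was selected");
    cout << r << '\n';   // prints 4.61169e+18: a double, not an exact integer
}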

Moral of the story: Don't write functions with the same name as standard library functions, unless you really know what you are doing!

Why did Microsoft abandon long double data type?

I'm not sure why you think that long double was "abandoned", as it is part of the C++ Standard and therefore a compliant implementation must, well, implement it.

What they did "abandon" is long double overloads of mathematical functions, and they did this because:

In Win32 programming, however, the long double data type maps to the double, 64-bit precision data type.

which, in turn (long double was an 80-bit type in older VS versions), is because:

FP code generation has been switching to the use of SSE/SSE2/SSE3 instruction sets instead of the x87 FP stack since that is what both the AMD and Intel recent and future chip generations are focusing their performance efforts on. These instruction sets only support 32 and 64 bit FP formats.

Still, that they chose not to support these overloads, even with same-sized double and long double types (both could have been made 64-bit), is a shame because they are also part of the C++ Standard. But, well, that's Microsoft for you. Intently stubborn.

[n3290: 26.8]: In addition to the double versions of the math functions in <cmath>, C++ adds float and long double overloaded versions of these functions, with the same semantics.

However, although these overloads are essentially deprecated in Visual Studio, they are still available, so you should still be able to use them:

The Microsoft run-time library provides long double versions of the math functions only for backward compatibility.
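For instance, a minimal sketch of calling the long double overload; the sizes in the comments follow from the mapping quoted above and are typical, not guaranteed:

#include <cmath>
#include <iostream>

int main() {
    long double x = 2.0L;
    long double y = std::pow(x, 10.0L);   // the long double overload is still available
    // On MSVC this simply computes at 64-bit precision, since long double and double
    // share a representation there (sizeof(long double) is 8 on MSVC, typically 16
    // on x86-64 gcc/clang, which keep the 80-bit extended format).
    std::cout << y << "  sizeof(long double) = " << sizeof(long double) << '\n';
}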


Is there any alternative to use? (I would also be happy with an alternative from the Boost library.)

It sounds to me like you have been relying on long double to support a specific range of numeric values, and have consequently run into regression issues when that has changed in a different toolchain.

If you have a specific numeric range requirement, use fixed-range integral types. Here you have a few options:

  • stdint.h - a C99 feature that some C++ toolchains support as an extension;
  • stdint.h - a C99 feature that Boost re-implements as a library;
  • cstdint - a C++0x feature that may be of use if you are writing C++0x code.
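Whichever of these headers you pick, the usage looks much the same; a minimal sketch with <cstdint>:

#include <cstdint>    // or <boost/cstdint.hpp> / <stdint.h>, depending on your toolchain
#include <iostream>

int main() {
    std::int64_t big   = 9223372036854775807LL;  // exactly 64 bits wherever the type is provided
    std::uint32_t mask = 0xFFFFFFFFu;            // exactly 32 bits wherever the type is provided
    std::cout << big << ' ' << mask << '\n';
}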

Why does long int have the same size as int? Does this modifier work at all?

The reason that MS chose to make long 32 bits even on a 64-bit system is that the existing Windows API, for historical reasons, uses a mixture of int and long for similar things, with the expectation that these are 32-bit values (some of this goes back to the days when Windows was a 16-bit system). So, to ease the conversion of old code to the new 64-bit architecture, they chose to keep long at 32 bits, so that applications mixing int and long in various places would still compile.

There is nothing in the C++ standard that dictates that a long should be bigger than int (it certainly isn't on most 32-bit systems). All the standard says is that the size of short <= int <= long - and that short is at least 16 bits, if memory serves [not necessarily expressed as "should be at least 16 bits", I think it mentions the range of values].
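A quick way to see which choice your own toolchain made (the data-model names in the comments - LLP64, LP64, ILP32 - are the usual industry terms, not something defined by the standard):

#include <iostream>

int main() {
    // Typical results: 64-bit Windows (LLP64):    4 4 8 8
    //                  64-bit Linux/macOS (LP64): 4 8 8 8
    //                  32-bit systems (ILP32):    4 4 8 4
    std::cout << sizeof(int) << ' ' << sizeof(long) << ' '
              << sizeof(long long) << ' ' << sizeof(void*) << '\n';
}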

Long Vs. Int C/C++ - What's The Point?

When writing in C or C++, every datatype is architecture and compiler specific. On one system int is 32 bits, but you can find systems where it is 16 or 64 bits; the size isn't fixed by the language, so it's up to the compiler.

As for long and int, this goes back to the days when the standard integer was 16 bits and long was a 32-bit integer - and it indeed was longer than int.

What is the historical context for long and int often being the same size?

From the C99 rationale (PDF) on section 6.2.5:

[...] In the 1970s, 16-bit C (for the PDP-11) first represented file information with 16-bit integers, which were rapidly obsoleted by disk progress. People switched to a 32-bit file system, first using int[2] constructs which were not only awkward, but also not efficiently portable to 32-bit hardware.

To solve the problem, the long type was added to the language, even though this required C on the PDP-11 to generate multiple operations to simulate 32-bit arithmetic. Even as 32-bit minicomputers became available alongside 16-bit systems, people still used int for efficiency, reserving long for cases where larger integers were truly needed, since long was noticeably less efficient on 16-bit systems. Both short and long were added to C, making short available for 16 bits, long for 32 bits, and int as convenient for performance. There was no desire to lock the numbers 16 or 32 into the language, as there existed C compilers for at least 24- and 36-bit CPUs, but rather to provide names that could be used for 32 bits as needed.

PDP-11 C might have been re-implemented with int as 32-bits, thus avoiding the need for long; but that would have made people change most uses of int to short or suffer serious performance degradation on PDP-11s. In addition to the potential impact on source code, the impact on existing object code and data files would have been worse, even in 1976. By the 1990s, with an immense installed base of software, and with widespread use of dynamic linked libraries, the impact of changing the size of a common data object in an existing environment is so high that few people would tolerate it, although it might be acceptable when creating a new environment. Hence, many vendors, to avoid namespace conflicts, have added a 64-bit integer to their 32-bit C environments using a new name, of which long long has been the most widely used. [...]

Long long in C99

If long is already 8 bytes, why is it necessary to add another long long type? What does this do to the compiler/architecture?

"If long is already 8" is not always true as much code exists that relies on 32-bit long and int as 32 or 16 bits.

Requiring long to be 64-bit would break those code bases. This is a major concern.


Yet requiring long to remain 32-bit (with no long long) would provide no standard access to 64-bit integers, hence the rationale for long long.

Allowing long to be either 32-bit or 64-bit (or something else) allows for a transition.

Various functions take or return long, such as fseek() and ftell(). They benefit from long being wider than 32 bits for large-file support.

Recommended practice encourages a wider long: "The types used for size_t and ptrdiff_t should not have an integer conversion rank greater than that of signed long int unless the implementation supports objects large enough to make this necessary." This relates to memory sizes exceeding what 32 bits can address.


Perhaps in the future an implementation may use int/long/long long/intmax_t as 32/64/128/256 bits.

In any case, I see the fixed-width types intN_t increasing in popularity over long and long long. I tend to use fixed-width types or bool, (unsigned) char, int/unsigned, size_t, (u)intmax_t, and leave signed char, (unsigned) short, (unsigned) long, (unsigned) long long for special cases.

Why do fixed width types delegate back to primitives?

I understand from both the original and restated questions that there is a misconception about guaranteed-width integers (and I say guaranteed because not all types in stdint.h are of fixed width) and the actual problems they solve.

C/C++ define primitives such as int, long int, long long int, etc. For simplicity let's focus on the most common of all, i.e. int. What the C standard defines is that int should be at least 16 bits wide. Compilers on all widely used x86 platforms, though, will actually give you a 32-bit-wide integer when you define an int. This happens because x86 processors can directly fetch a 32-bit-wide field (the word size of a 32-bit x86 CPU) from memory, hand it as-is to the ALU for 32-bit arithmetic, and store it back to memory without any shifts, padding, etc., and that's pretty fast. But that's not the case for every compiler/architecture combination. If you work on an embedded device with, for example, a very small MIPS processor, you will probably get a 16-bit-wide integer from the compiler when you define an int. So the width of the primitives is chosen by the compiler based solely on the hardware capabilities of the target platform, with respect to the minimum widths defined by the standard. And yes, on a strange architecture with, e.g., a 25-bit ALU, you will probably be given a 25-bit int.

In order for a piece of C/C++ code to be portable across many different compiler/hardware combinations, stdint.h provides typedefs that guarantee you a certain width (or a minimum width). So when, for example, you want a 16-bit signed integer (e.g. for saving memory, or for mod-counters), you don't have to worry about whether you should use an int or a short; you simply use int16_t. The developers of the compiler provide a properly constructed stdint.h that typedefs each requested fixed-size integer to the actual primitive that implements it. That means that on x86 an int16_t will probably be defined as short, while on a small embedded device you may get an int, with all these mappings maintained by the compiler's developers.
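A small sketch of that mapping in action; the answers shown in the comments are what typical x86/x86-64 toolchains give, not a guarantee:

#include <cstdint>
#include <iostream>
#include <type_traits>

int main() {
    // int16_t is guaranteed to be exactly 16 bits, but which primitive backs it
    // is the toolchain's choice; on typical x86 compilers it is short.
    std::cout << "int16_t is short? " << std::is_same<std::int16_t, short>::value << '\n';
    std::cout << "int32_t is int?   " << std::is_same<std::int32_t, int>::value   << '\n';
    std::cout << "sizeof(int16_t) = " << sizeof(std::int16_t) << '\n';  // always 2
}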

Is MSVC right to find this method call ambiguous, whilst Clang/GCC don't?

In C++11 and before, any integral constant expression which evaluates to 0 is considered a null pointer constant. This has been restricted in C++14: only integer literals with value 0 are considered. In addition, prvalues of type std::nullptr_t are null pointer constants since C++11. See [conv.ptr] and CWG 903.

Regarding overload resolution, both the integral conversion unsigned -> int64_t and the pointer conversion null pointer constant -> const char* have the same rank: Conversion. See [over.ics.scs] / Table 12.

So if ProtocolMinorVersion is considered a null pointer constant, then the calls are ambiguous. If you just compile the following program:

static const unsigned ProtocolMinorVersion = 0;

int main() {
    const char* p = ProtocolMinorVersion;
}

You will see that clang and gcc reject this conversion, whereas MSVC accepts it.
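The same disagreement shows up in overload resolution. Here is a minimal sketch of the kind of overload set involved (the function name and signatures are assumptions for illustration, not the OP's code); expect affected MSVC versions to reject the call as ambiguous while gcc and clang accept it:

#include <cstdint>
#include <iostream>

void write(std::int64_t v) { std::cout << "int64_t overload: " << v << '\n'; }
void write(const char* s)  { std::cout << "const char* overload: " << (s ? s : "(null)") << '\n'; }

static const unsigned ProtocolMinorVersion = 0;

int main() {
    // Under the pre-CWG903 rules, ProtocolMinorVersion is a null pointer constant, so
    // unsigned -> int64_t and null-pointer-constant -> const char* both have rank
    // "Conversion" and the call is ambiguous (MSVC's view). Under the C++14 / CWG903
    // rules only the int64_t overload is viable, so gcc and clang accept the call.
    write(ProtocolMinorVersion);
}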

Since CWG 903 is considered a defect, I'd argue that clang and gcc are right.


