Why was std::pow(double, int) removed from C++11?

Why was std::pow(double, int) removed from C++11?

double pow(double, int);

hasn't been removed from the spec. It has simply been reworded. It now lives in [c.math]/p11. How it is computed is an implementation detail. The only C++03 signature that has changed is:

float pow(float, int);

This now returns double:

double pow(float, int);

And this change was done for C compatibility.
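You can verify the reworded rules at compile time. This little check (which should pass on any conforming C++11 implementation) confirms that the (double, int) call is still valid and that the (float, int) call now yields double:

#include <cmath>
#include <type_traits>

static_assert(std::is_same<decltype(std::pow(2.0, 3)), double>::value,
              "pow(double, int) still returns double");
static_assert(std::is_same<decltype(std::pow(2.0f, 3)), double>::value,
              "pow(float, int) now returns double, not float");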

Clarification:

26.8 [c.math]/p11 says:

Moreover, there shall be additional overloads sufficient to ensure:

  1. If any argument corresponding to a double parameter has type long double, then all arguments corresponding to double parameters are effectively cast to long double.

  2. Otherwise, if any argument corresponding to a double parameter has type double or an integer type, then all arguments corresponding to double parameters are effectively cast to double.

  3. Otherwise, all arguments corresponding to double parameters are effectively cast to float.

This paragraph implies a whole host of overloads, including:

double pow(double, int);
double pow(double, unsigned);
double pow(double, unsigned long long);

etc.

These may be actual overloads, or may be implemented with restricted templates. I've personally implemented it both ways and strongly favor the restricted template implementation.
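For illustration only, here is a minimal sketch of the restricted template approach. This is not the actual code of any shipping library; demo::pow, promote, and promote2 are just names for this sketch:

#include <cmath>
#include <type_traits>

namespace demo {

// Map each arithmetic type to its effective floating-point type per
// [c.math]/p11: float and long double stay as they are; double and
// every integer type become double.
template <class T> struct promote       { typedef double type; };
template <> struct promote<float>       { typedef float type; };
template <> struct promote<long double> { typedef long double type; };

// Combine the two promoted types via the usual arithmetic conversions:
// long double wins, then double, then float.
template <class A, class B>
struct promote2 {
    typedef decltype(typename promote<A>::type() +
                     typename promote<B>::type()) type;
};

// One constrained template standing in for the whole overload family.
template <class A1, class A2>
typename std::enable_if<std::is_arithmetic<A1>::value &&
                        std::is_arithmetic<A2>::value,
                        typename promote2<A1, A2>::type>::type
pow(A1 x, A2 y)
{
    typedef typename promote2<A1, A2>::type R;
    return std::pow(static_cast<R>(x), static_cast<R>(y));
}

}  // namespace demo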

Second update to address optimization issues:

The implementation is allowed to optimize any overload. But recall that an optimization should be only that. The optimized version ought to return the same answer. The experience from implementors of functions like pow is that by the time you go to the trouble to ensure that your implementation taking an integral exponent gives the same answer as the implementation taking a floating point exponent, the "optimization" is often slower.

As a demonstration the following program prints out pow(.1, 20) twice, once using std::pow, and the second time using an "optimized" algorithm taking advantage of the integral exponent:

#include <cmath>
#include <iostream>
#include <iomanip>

int main()
{
    // The library's pow, taking a floating-point exponent
    std::cout << std::setprecision(17) << std::pow(.1, 20) << '\n';

    // "Optimized" repeated squaring exploiting the integral exponent
    double x = .1;
    double x2 = x * x;
    double x4 = x2 * x2;
    double x8 = x4 * x4;
    double x16 = x8 * x8;
    double x20 = x16 * x4;
    std::cout << x20 << '\n';
}

On my system this prints out:

1.0000000000000011e-20
1.0000000000000022e-20

Or in hex notation:

0x1.79ca10c92422bp-67
0x1.79ca10c924232p-67
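For reference, the hex forms can be produced with the C99/C++11 "%a" printf conversion:

#include <cmath>
#include <cstdio>

int main()
{
    // "%a" prints the double exactly, in hexadecimal floating point.
    std::printf("%a\n", std::pow(.1, 20));
}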

And yes, implementors of pow really do worry about all of those bits down at the low end.

So while the freedom is there to shuffle pow(double, int) off to a separate algorithm, most implementors I'm aware of have given up on that strategy, with the possible exception of checking for very small integral exponents. And in that event, it is usually advantageous to put that check in the implementation with the floating point exponent so as to get the biggest bang for your optimization buck.
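To make that last point concrete, here is a purely hypothetical sketch (my_pow is a made-up name, and real implementations are far more careful): the small-exponent check lives in the floating-point-exponent entry point, so every caller benefits.

#include <cmath>

double my_pow(double x, double y)
{
    if (y == 2.0) return x * x;   // exact small case: one multiply
    if (y == 1.0) return x;
    return std::pow(x, y);        // full-precision general algorithm
}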

Does C++11 enforce pow(double, int) to use the slower pow(double, double)?

First of all, yes, this behaviour is consistent with the C++11 standard (though not with C++03), which in section 26.8, paragraph 11 says:

Moreover, there shall be additional overloads sufficient to ensure:

  1. If any argument corresponding to a double parameter has type long double, then all arguments corresponding to double parameters are effectively cast to long double.

  2. Otherwise, if any argument corresponding to a double parameter has type double or an integer type, then all arguments corresponding to double parameters are effectively cast to double.

  3. Otherwise, all arguments corresponding to double parameters are effectively cast to float.

(In addition to the overloads for float-only, double-only and long double-only.)

So the implementation actually has to cast that integer argument to a double, and I don't think there is a possibility for a conforming library to provide a faster std::pow for integer powers, apart from maybe checking the double argument for integrality (is that a word?) and using a special path in that case.

In order to provide a platform-independent faster way, the only thing that comes to my mind would be to write a custom wrapper that delegates to such a non-standard power function if it is present. Other than that I don't know how you could infuse that behaviour into std::pow again without writing your own implementation.
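If you do write such a wrapper, the usual fallback for integer exponents is exponentiation by squaring. A sketch (pow_int is a made-up name, and note that it will generally not return bit-identical results to std::pow(x, double(n)), which is exactly the trade-off discussed above):

double pow_int(double x, unsigned n)
{
    double result = 1.0;
    while (n != 0) {
        if (n & 1u) result *= x;  // fold in the current bit's factor
        x *= x;                   // square the base for the next bit
        n >>= 1;
    }
    return result;
}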

EDIT: Yet when looking at this answer it is indeed possible for an implementation to still provide an optimized overload for integer powers as long as it behaves exactly like std::pow(double(x), double(y)). So there is a possibility for an implementation to provide that faster version, yet I wouldn't count on it as much as you could in C++03 (where it was IMHO even part of the standard, but I might be wrong).

Why isn't `int pow(int base, int exponent)` in the standard C++ libraries?

As of C++11, special cases were added to the suite of power functions (and others). C++11 [c.math]/11 states, after listing all the float/double/long double overloads (my emphasis, and paraphrased):

Moreover, there shall be additional overloads sufficient to ensure that, if any argument corresponding to a double parameter has type double or an integer type, then all arguments corresponding to double parameters are effectively cast to double.

So, basically, integer parameters will be upgraded to doubles to perform the operation.


Prior to C++11 (which was when your question was asked), no integer overloads existed.

Since I was neither closely associated with the creators of C nor C++ in the days of their creation (though I am rather old), nor part of the ANSI/ISO committees that created the standards, this is necessarily opinion on my part. I'd like to think it's informed opinion but, as my wife will tell you (frequently and without much encouragement needed), I've been wrong before :-)

Supposition, for what it's worth, follows.

I suspect that the reason the original pre-ANSI C didn't have this feature is that it was totally unnecessary. First, there was already a perfectly good way of doing integer powers (with doubles, then simply converting back to an integer, checking for integer overflow and underflow before converting).

Second, another thing you have to remember is that the original intent of C was as a systems programming language, and it's questionable whether floating point is desirable in that arena at all.

Since one of its initial use cases was to code up UNIX, floating point would have been next to useless. BCPL, on which C was based, also had no use for powers (it didn't have floating point at all, from memory).

As an aside, an integral power operator would probably have been a binary operator rather than a library call. You don't add two integers with x = add (y, z) but with x = y + z - part of the language proper rather than the library.

Third, since the implementation of integral power is relatively trivial, it's almost certain that the developers of the language would better use their time providing more useful stuff (see below comments on opportunity cost).

That's also relevant for the original C++. Since the original implementation was effectively just a translator which produced C code, it carried over many of the attributes of C. Its original intent was C-with-classes, not C-with-classes-plus-a-little-bit-of-extra-math-stuff.

As to why it was never added to the standards before C++11, you have to remember that the standards-setting bodies have specific guidelines to follow. For example, ANSI C was specifically tasked to codify existing practice, not to create a new language. Otherwise, they could have gone crazy and given us Ada :-)

Later iterations of that standard also have specific guidelines and can be found in the rationale documents (rationale as to why the committee made certain decisions, not rationale for the language itself).

For example the C99 rationale document specifically carries forward two of the C89 guiding principles which limit what can be added:

  • Keep the language small and simple.
  • Provide only one way to do an operation.

Guidelines (not necessarily those specific ones) are laid down for the individual working groups and hence limit the C++ committees (and all other ISO groups) as well.

In addition, the standards-setting bodies realise that there is an opportunity cost (an economic term meaning what you have to forego for a decision made) to every decision they make. For example, the opportunity cost of buying that $10,000 uber-gaming machine is cordial relations (or probably all relations) with your other half for about six months.

Eric Gunnerson explains this well with his -100 points explanation as to why things aren't always added to Microsoft products: basically, a feature starts 100 points in the hole, so it has to add quite a bit of value to be even considered.

In other words, would you rather have an integral power operator (which, honestly, any half-decent coder could whip up in ten minutes) or multi-threading added to the standard? For myself, I'd prefer to have the latter and not have to muck about with the differing implementations under UNIX and Windows.

I would also like to see thousands and thousands of collections in the standard library (hashes, btrees, red-black trees, dictionaries, arbitrary maps and so forth) but, as the rationale states:

A standard is a treaty between implementer and programmer.

And the number of implementers on the standards bodies far outweighs the number of programmers (or at least those programmers that don't understand opportunity cost). If all that stuff were added, the next C++ standard would be C++215x and would probably be fully implemented by compiler developers three hundred years after that.

Anyway, that's my (rather voluminous) thoughts on the matter. If only votes were handed out based on quantity rather than quality, I'd soon blow everyone else out of the water. Thanks for listening :-)

C++ function std::pow() returns inf instead of a value

Your problem comes from the -(m-period+1) part of your call to pow. period is declared as

unsigned int period = 1;

so when

-(m-period+1)

gets evaluated you have

   -(int - unsigned int + int)
== -(unsigned int)

so m - period + 1 evaluates to 360 as an unsigned int, and when you negate it, it wraps around and becomes a very large number (4294966936 with 32-bit unsigned ints). That means you are doing

1.0033333333333334 ^ 4294966936

not

1.0033333333333334 ^ -360

You need to make period an int to get the correct results.
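Here is a minimal demonstration, with m chosen as 360 so that m - period + 1 matches the value above:

#include <iostream>

int main()
{
    int m = 360;
    unsigned int period = 1;
    // m - period + 1 is computed in unsigned arithmetic; negating an
    // unsigned value wraps modulo 2^32 instead of going negative.
    std::cout << -(m - period + 1) << '\n';                    // 4294966936
    std::cout << -(m - static_cast<int>(period) + 1) << '\n';  // -360
}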


If you have a number that must not be negative, don't use an unsigned type. Nothing about unsigned stops negative numbers; it just turns them into large positive numbers. If you want to make sure a number isn't negative, use a signed type and an if statement.

C++11 in conjunction with OpenMP gives slower executable

The reason is that the version without -std=c++11 uses std::pow(double, int), which is no longer available in C++11 and is faster than std::pow(double, double). If you replace your integers (3, 5, etc.) with doubles (3.0, 5.0, etc.), you will get the same speed.
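Assuming x is a double, the fix looks like this:

#include <cmath>

double f(double x)
{
    // With -std=c++11, pow(x, 3) must behave like pow(x, 3.0) anyway;
    // spelling the exponent as a double literal makes the pre-C++11
    // build take the same (slower but more accurate) path.
    return std::pow(x, 3.0);
}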

EDIT:
Here are my timings with g++ version 4.8.4:

Original version:
  -O3 -fopenmp            : 10.678 s
  -O3 -fopenmp -std=c++11 : 36.994 s

Adding ".0" after the integers:
  -O3 -fopenmp            : 36.679 s
  -O3 -fopenmp -std=c++11 : 36.938 s

Why does std::pow of a float and int invoke this overload?

Since C++11, a mixed-argument pow has any integral argument cast to double. The return type of the mixed-argument functions is always double, except when one argument is long double, in which case the result is long double.

[c.math]

In addition to the double versions of the math functions in <cmath>, C++ adds float and long double overloaded versions of these functions, with the same semantics.

Moreover, there shall be additional overloads sufficient to ensure:

  • If any argument corresponding to a double parameter has type long double, then all arguments corresponding to double parameters are effectively cast to long double.

  • Otherwise, if any argument corresponding to a double parameter has type double or an integer type, then all arguments corresponding to double parameters are effectively cast to double.

  • Otherwise, all arguments corresponding to double parameters are effectively cast to float.

So to sum up:

  • One argument long double → long double
  • Both arguments float → float
  • Otherwise → double
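These rules can be spot-checked at compile time on a conforming implementation:

#include <cmath>
#include <type_traits>

static_assert(std::is_same<decltype(std::pow(2.0f, 2.0l)), long double>::value,
              "any long double argument yields long double");
static_assert(std::is_same<decltype(std::pow(2.0f, 2.0f)), float>::value,
              "both arguments float yields float");
static_assert(std::is_same<decltype(std::pow(2.0f, 2)), double>::value,
              "otherwise the result is double");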

error: ‘int pow(double, int)’ conflicts with a previous declaration

I have no idea what the original reason for introducing the pow definitions in this code was (especially since they are guarded by implementation macros), but in conformant standard C++ the definition

int pow(double a, int n) {
    return pow(a, (double)n);
}

in the global namespace is going to cause trouble. Depending on the standard version, on whether <math.h> and/or <cmath> are included directly or indirectly, and on unspecified standard library implementation details, this may or may not be a valid definition.

The C++ standard library already offers an overload with signature pow(double, int), or potentially a template pow accepting these arguments; in the former case, the definition in user code will be an invalid redefinition if this overload/template is placed into the global namespace (including <math.h> always does that; including <cmath> may do that).

The second overload

int pow(int a, int n) {
    // versions for float/double are defined in stdlib.
    int r = a;
    for (int i = 1; i < n; i++) r *= a;
    return r;
}

will cause similar issues, but only since C++11 (the standard overload set did not cover two integer arguments before C++11, and the C++11 version has different semantics than the one here: it returns double, not int).

Therefore this is simply a bug in the code. As for a workaround: I noticed that (for some reason that is not clear to me) GCC versions 5.x and below do not seem to error on the definition, even though I tried to make sure that they would. Since 6.1, the overload always causes an error together with #include <math.h> and -std=c++98/-std=gnu++98.

You may also want to try compiling with -std=c++11 or -std=gnu++11, because since C++11 implementations are allowed to implement the pow overloads as template functions, in which case no redefinition error would occur. (I think that was not allowed before C++11.) This seems to be the case with GCC in my testing.
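A more robust fix than depending on a particular compiler version or language mode is to keep such helpers out of the global namespace, since the collision can only happen there. A sketch (mylib and ipow are made-up names; this version also returns 1 for n == 0):

namespace mylib {

inline int ipow(int a, int n)  // assumes n >= 0
{
    int r = 1;
    for (int i = 0; i < n; ++i) r *= a;
    return r;
}

}  // namespace mylib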

C++11 round off error using pow() and std::complex

In C++11 we got a few new overloads of pow for std::complex. GCC has two non-standard overloads on top of that, one for raising to an int and one for raising to an unsigned int.

One of the new standard overloads (namely std::complex</*Promoted*/> pow(const std::complex<T> &, const U &)) causes an ambiguity with the non-standard ones when calling pow(i, 2). GCC's solution is to #ifdef the non-standard overloads out in C++11 mode, so you go from calling the specialized function (which uses successive squaring) to the generic method (which uses pow(double, double) and std::polar).
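For illustration, a call like this is what triggered the ambiguity when GCC's extra overloads were visible alongside the new promoted-argument template:

#include <complex>

int main()
{
    std::complex<double> i(0.0, 1.0);
    // With only the standard C++11 overloads this resolves to the
    // generic promoted template; with GCC's pow(complex<T>, int)
    // extension also in scope, the call was ambiguous.
    std::complex<double> r = std::pow(i, 2);
    (void)r;
}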

Why does floor(pow(64,1.0/3)) return 3 but print 4 when the floor() is removed in C++?

You are dealing with good old floating-point imprecision. See What is the numerical stability of std::pow() compared to iterated multiplication?, and especially Pascal Cuoq's answer, for an in-depth explanation why the result of std::pow in particular will be imprecise. Because of rounding errors, you will occasionally get a result that is ever so slightly less than 4, and so std::floor will round down to 3.

The answer I linked above says:

A quality implementation of pow will give you 1 ULP of accuracy for its result, and the best implementations will “guarantee” 0.5 ULP.

ULP here refers to the Unit of Least Precision or Unit in the Last Place. Knowing about this error, you can increase the std::pow() result before calling std::floor. A good way to do this is using std::nextafter, which gives you the next-larger representable floating-point value (i.e., 1 ULP up). I think that if Pascal's statement on the precision of std::pow() holds, calling nextafter once should put you back above 4, in your particular example. Here's the code I recommend:

#include <cmath>

template <typename T>
T floorroot2(T x, T e)
{
    const auto r = std::pow(x, T(1.0) / e);
    // Nudge the result up one ULP before flooring, in case pow()
    // landed just below the exact integer answer.
    return std::floor(std::nextafter(r, r + 1));
}

That works for me (live example), but if you want to be extra sure, or if you don't trust your library's pow implementation, you can add 2 ULPs, i.e. call nextafter(nextafter(r, r+1), r+1).
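For example, building on floorroot2 above and using the cube root of 64 from the question:

#include <iostream>

int main()
{
    // Without the nextafter nudge, pow(64, 1.0/3) can land just below
    // 4, and floor() would then return 3.
    std::cout << floorroot2(64.0, 3.0) << '\n';  // prints 4
}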


