Set Back Default Floating Point Print Precision in C++

You can get the current precision before you change it with std::ios_base::precision, and then use that value to restore it later.

You can see this in action with:

#include <ios>
#include <iostream>
#include <iomanip>

int main() {
    double d = 3.141592653589;

    // Save the current precision so it can be restored later.
    std::streamsize ss = std::cout.precision();
    std::cout << "Initial precision = " << ss << '\n';

    std::cout << "Value = " << d << '\n';

    std::cout.precision(10);
    std::cout << "Longer value = " << d << '\n';

    // Restore the saved precision.
    std::cout.precision(ss);
    std::cout << "Original value = " << d << '\n';

    // The same two steps via the std::setprecision manipulator.
    std::cout << "Longer and original value = "
              << std::setprecision(10) << d << ' '
              << std::setprecision(ss) << d << '\n';

    std::cout << "Original value = " << d << '\n';

    return 0;
}

which outputs:

Initial precision = 6
Value = 3.14159
Longer value = 3.141592654
Original value = 3.14159
Longer and original value = 3.141592654 3.14159
Original value = 3.14159

The code above shows two ways of setting the precision: first by calling std::cout.precision(N), and second by using the stream manipulator std::setprecision(N) from <iomanip>.


But keep in mind that the precision only applies when outputting values via streams; it does not affect the values themselves, so it has no bearing on comparisons like:

if (val1 == val2) ...

In other words, even though the output may be 3.14159, the value itself is still the full 3.141592653589 (subject to normal floating point limitations, of course).

If you want comparisons to work at the lower precision, you'll need to check whether the values are close enough rather than equal, with code such as:

if (fabs(val1 - val2) < 0.0001) ...
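
Here's a minimal sketch of that tolerance check (the 0.0001 tolerance is arbitrary; choose one suited to your data):

#include <cmath>
#include <iostream>

int main() {
    double val1 = 3.141592653589;
    double val2 = 3.14159;  // e.g. a value recovered from 6-digit output

    std::cout << std::boolalpha;
    std::cout << (val1 == val2) << '\n';                     // false: the values differ
    std::cout << (std::fabs(val1 - val2) < 0.0001) << '\n';  // true: close enough
}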

How to set the precision of a float

You can't do that, since precision is determined by the data type (i.e. float, double, or long double). If you want to round it for printing purposes, you can use the appropriate format specifier with printf(), e.g. printf("%0.3f\n", 0.666666666).
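
A short sketch of rounding for display only (the stored value keeps its full precision; only the printed text is rounded):

#include <cstdio>
#include <iomanip>
#include <iostream>

int main() {
    double v = 0.666666666;
    std::printf("%0.3f\n", v);                                     // 0.667
    std::cout << std::fixed << std::setprecision(3) << v << '\n';  // 0.667
}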

Formatting floats: returning to default

std::defaultfloat doesn't reset the precision (don't ask me why). You can reset it yourself to the default, which is defined as 6:

std::cout << std::defaultfloat << std::setprecision(6) << f1 << " - " << f2;

Alternatively you could save the entire stream state before the operation and restore it afterwards, as in the sketch below.
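
Here's a minimal sketch of that save-and-restore, using std::ios::copyfmt, which copies flags, precision, fill, and the rest in one go:

#include <iomanip>
#include <iostream>

int main() {
    std::ios saved(nullptr);   // state holder only; needs no stream buffer
    saved.copyfmt(std::cout);  // save std::cout's entire formatting state

    std::cout << std::fixed << std::setprecision(12) << 1.0 / 3.0 << '\n';

    std::cout.copyfmt(saved);        // restore flags and precision together
    std::cout << 1.0 / 3.0 << '\n';  // back to the default formatting
}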

Printf width specifier to maintain precision of floating-point value

I recommend @Jens Gustedt's hexadecimal solution: use %a.

OP wants “print with maximum precision (or at least to the most significant decimal)”.

A simple example would be to print one seventh as in:

#include <float.h>
#include <stdio.h>

int main(void) {
    int Digs = DECIMAL_DIG;
    double OneSeventh = 1.0 / 7.0;
    printf("%.*e\n", Digs, OneSeventh);
    // 1.428571428571428492127e-01
    return 0;
}

But let's dig deeper ...

Mathematically, the answer is "0.142857 142857 142857 ...", but we are using finite precision floating point numbers.
Let's assume IEEE 754 double-precision binary.
So OneSeventh = 1.0/7.0 results in the value below. Also shown are the preceding and following representable double floating point numbers.

OneSeventh before = 0.1428571428571428 214571170656199683435261249542236328125
OneSeventh        = 0.1428571428571428 49212692681248881854116916656494140625
OneSeventh after  = 0.1428571428571428 769682682968777953647077083587646484375

Printing the exact decimal representation of a double has limited uses.

C has two families of macros in <float.h> to help us.

The first set gives the number of significant decimal digits to print in a string so that, when the string is scanned back, we get the original floating point value. Each is shown with the C spec's minimum value and the value from a sample C11 compiler.

FLT_DECIMAL_DIG    6,  9 (float)                          (C11)
DBL_DECIMAL_DIG   10, 17 (double)                         (C11)
LDBL_DECIMAL_DIG  10, 21 (long double)                    (C11)
DECIMAL_DIG       10, 21 (widest supported floating type) (C99)

The second set gives the number of significant decimal digits a string may hold such that it can be scanned into a floating point value and the value printed back out, still retaining the same string presentation. Each is shown with the C spec's minimum value and the value from a sample C11 compiler. I believe these were available pre-C99.

FLT_DIG    6,  6 (float)
DBL_DIG   10, 15 (double)
LDBL_DIG  10, 18 (long double)
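
Since these are compile-time macros, you can check what your own implementation provides; a quick sketch (the *_DECIMAL_DIG macros require C11, or C++17 in <cfloat>, hence the guard):

#include <cfloat>
#include <iostream>

int main() {
    std::cout << "FLT_DIG  = " << FLT_DIG
              << ", DBL_DIG  = " << DBL_DIG
              << ", LDBL_DIG = " << LDBL_DIG << '\n';
#ifdef DBL_DECIMAL_DIG  // C11 / C++17 and later
    std::cout << "FLT_DECIMAL_DIG  = " << FLT_DECIMAL_DIG
              << ", DBL_DECIMAL_DIG  = " << DBL_DECIMAL_DIG
              << ", LDBL_DECIMAL_DIG = " << LDBL_DECIMAL_DIG << '\n';
#endif
}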

The first set of macros seems to meet OP's goal of significant digits. But those macros are not always available.

#ifdef DBL_DECIMAL_DIG
  #define OP_DBL_Digs (DBL_DECIMAL_DIG)
#else
  #ifdef DECIMAL_DIG
    #define OP_DBL_Digs (DECIMAL_DIG)
  #else
    #define OP_DBL_Digs (DBL_DIG + 3)
  #endif
#endif

The "+ 3" was the crux of my previous answer.
Its centered on if knowing the round-trip conversion string-FP-string (set #2 macros available C89), how would one determine the digits for FP-string-FP (set #1 macros available post C89)? In general, add 3 was the result.
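
To see the two round trips concretely, here is a sketch, assuming IEEE 754 doubles (the OP_DBL_Digs fallback is repeated in condensed form so the snippet stands alone; the DBL_DIG result is typical rather than guaranteed):

#include <cfloat>
#include <cstdio>
#include <cstdlib>
#include <iostream>

#ifdef DBL_DECIMAL_DIG
#define OP_DBL_Digs (DBL_DECIMAL_DIG)
#else
#define OP_DBL_Digs (DBL_DIG + 3)
#endif

int main() {
    double original = 1.0 / 7.0;
    char buf[64];

    // FP -> string -> FP with only DBL_DIG significant digits:
    // not guaranteed to recover the original double.
    std::snprintf(buf, sizeof buf, "%.*e", DBL_DIG - 1, original);
    bool exact_dbl_dig = std::strtod(buf, nullptr) == original;

    // FP -> string -> FP with OP_DBL_Digs significant digits:
    // guaranteed to round-trip to the same double.
    std::snprintf(buf, sizeof buf, "%.*e", OP_DBL_Digs - 1, original);
    bool exact_op_digs = std::strtod(buf, nullptr) == original;

    std::cout << std::boolalpha
              << "DBL_DIG round trip exact:     " << exact_dbl_dig << '\n'  // typically false
              << "OP_DBL_Digs round trip exact: " << exact_op_digs << '\n'; // true
}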

Now the number of significant digits to print is known, driven via <float.h>.

To print N significant decimal digits one may use various formats.

With "%e", the precision field is the number of digits after the lead digit and decimal point.
So - 1 is in order. Note: This -1 is not in the initial int Digs = DECIMAL_DIG;

printf("%.*e\n", OP_DBL_Digs - 1, OneSeventh);
// 1.4285714285714285e-01

With "%f", the precision field is the number of digits after the decimal point.
For a number like OneSeventh/1000000.0, one would need OP_DBL_Digs + 6 to see all the significant digits.

printf("%.*f\n", OP_DBL_Digs    , OneSeventh);
// 0.14285714285714285
printf("%.*f\n", OP_DBL_Digs + 6, OneSeventh/1000000.0);
// 0.00000014285714285714285

Note: Many are used to "%f". That displays 6 digits after the decimal point; 6 is the display default, not the precision of the number.

What is C printf %f default precision?

The ANSI C standard, in section 7.19.6.1, says this about the f format specifier:

If the precision is missing, 6 digits are given
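
A quick illustration of that default in action:

#include <cstdio>

int main() {
    std::printf("%f\n", 1.0 / 3.0);    // 0.333333 (six digits, the default)
    std::printf("%.6f\n", 1.0 / 3.0);  // 0.333333 (the explicit equivalent)
}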

How do I print a double value with full precision using cout?

You can set the precision directly on std::cout and use the std::fixed format specifier.

#include <iostream>
int main() {
    double d = 3.14159265358979;
    std::cout.precision(17);
    std::cout << "Pi: " << std::fixed << d << std::endl;
}

You can #include <limits> to get the maximum precision of a float or double.

#include <iostream>
#include <limits>

typedef std::numeric_limits<double> dbl;

int main() {
    double d = 3.14159265358979;
    std::cout.precision(dbl::max_digits10);
    std::cout << "Pi: " << d << std::endl;
}

Why doesn't cout's default precision affect the evaluated result?

Why doesn't cout print all the digits according to the type's default precision?

If you use std::fixed as well as setprecision, it will display however many digits the precision asks for, without rounding or truncating.
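
For instance, a minimal sketch (the exact trailing digits depend on your platform's double representation):

#include <iomanip>
#include <iostream>

int main() {
    double x = 10 - 9.99;
    std::cout << std::setprecision(16) << x << '\n';
    // 0.009999999999999787  (16 significant digits)
    std::cout << std::fixed << std::setprecision(16) << x << '\n';
    // 0.0099999999999998    (16 digits after the decimal point)
}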

As for why the rounding accounts for the output...

Let's get your code to print a couple other things too:

#include <iostream>
#include <iomanip>

int main()
{
    double x = 10 - 9.99;
    std::cout << x << '\n';
    std::cout << std::setprecision(16);
    std::cout << x << '\n';
    std::cout << 0.01 << '\n';
    std::cout << std::setprecision(18);
    std::cout << x << '\n';
    std::cout << 0.01 << '\n';
    std::cout << x - 0.01 << '\n';
}

And the output (on one specific compiler/system):

0.01                      // x default
0.009999999999999787      // x after setprecision(16)
0.01                      // 0.01 after setprecision(16)
0.00999999999999978684    // x after setprecision(18)
0.0100000000000000002     // 0.01 after setprecision(18)
-2.13370987545147273e-16  // x - 0.01

If we look at how 0.01 is directly encoded at 18-digit precision...

0.0100000000000000002
   123456789012345678  // counting significant digits

...we can see clearly why it gets rounded to "0.01" during output at any precision up to 17.

You can also see clearly that the value in x differs from the one created by directly coding 0.01. That's allowed because x is the result of a calculation, dependent on a double or CPU-register approximation of 9.99, either or both of which caused the discrepancy. That error is enough to prevent rounding to "0.01" at precision 16.

Unfortunately, this kind of thing is normal when handling doubles and floats.


