How Does JavaScript Determine the Number of Digits to Produce When Formatting Floating-Point Values

How to format a float in JavaScript?

var result = Math.round(original * 100) / 100; // round to two decimal places

The specifics, in case the code isn't self-explanatory: multiplying by 100 moves the two decimal places you want to keep to the left of the decimal point, Math.round rounds to the nearest integer, and dividing by 100 moves the digits back.

edit: ...or just use toFixed, as proposed by Tim Büthe. Forgot that one, thanks (and an upvote) for the reminder :)
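For example (a small sketch; original is just a placeholder value, and note that toFixed returns a string while the Math.round approach returns a Number):

var original = 3.14159;
var rounded = Math.round(original * 100) / 100; // Number: 3.14
var fixed = original.toFixed(2);                // String: "3.14"
console.log(rounded, fixed);                    // 3.14 "3.14"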

JavaScript seems to be doing floating point wrong (compared to C)

JavaScript’s default conversion of a Number to a string produces just enough decimal digits to uniquely distinguish the Number. (This arises out of step 5 in clause 7.1.12.1 of the ECMAScript 2018 Language Specification, which I explain a little here.) Formatting via console.log is not covered by the ECMAScript specification, but the Number is likely converted to a string using the same rules as NumberToString.

Since stopping at the tens digit, producing 131621703842267140, is enough to distinguish the floating-point number from its two neighboring representable values, 131621703842267120 and 131621703842267152, JavaScript stops there.

You can request more digits with toPrecision; the following produces “131621703842267136.000”:

var x = 131621703842267136;
console.log(x.toPrecision(21));
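A quick way to confirm the shortest-form claim is to round-trip the default output:

console.log(String(x));                          // "131621703842267140"
console.log(Number("131621703842267140") === x); // true: the short form maps back to x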

How to deal with floating point number precision in JavaScript?

From the Floating-Point Guide:

What can I do to avoid this problem?

That depends on what kind of calculations you’re doing.

  • If you really need your results to add up exactly, especially when you work with money: use a special decimal datatype.
  • If you just don’t want to see all those extra decimal places: simply format your result rounded to a fixed number of decimal places when displaying it.
  • If you have no decimal datatype available, an alternative is to work with integers, e.g. do money calculations entirely in cents. But this is more work and has some drawbacks.
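As a concrete illustration of the last point, here is a minimal sketch of doing money arithmetic in integer cents (the helper names are illustrative, not from any particular library):

// All amounts are held as integer cents, so addition is exact.
function toCents(dollars) {
  return Math.round(dollars * 100); // convert once, at the boundary
}
function formatCents(cents) {
  return (cents / 100).toFixed(2);  // convert back only for display
}
var totalCents = toCents(0.10) + toCents(0.20); // 10 + 20 = 30, exactly
console.log(formatCents(totalCents));           // "0.30"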

Note that the first point only applies if you really need specific precise decimal behaviour. Most people don't need that; they're just irritated that their programs don't work correctly with numbers like 1/10, without realizing that they wouldn't even blink at the same error if it occurred with 1/3.

If the first point really applies to you, use BigDecimal for JavaScript or DecimalJS, which actually solve the problem rather than providing an imperfect workaround.
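For instance, with decimal.js (a sketch assuming its Decimal constructor; values are built from strings so no binary rounding happens before the library sees them):

var Decimal = require("decimal.js");
console.log(0.1 + 0.2);                                 // 0.30000000000000004
console.log(new Decimal("0.1").plus("0.2").toString()); // "0.3"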

How does the Node console display floating point numbers

node.js is not showing you the exact values. By default, JavaScript converts a Number to a string using just enough decimal digits to distinguish the Number value from neighboring representable values. I do not know what method node.js uses, but simply using JavaScript’s default would explain it.
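You can see this in Node by forcing extra digits with toPrecision (a quick illustration; the default display is the shortest distinguishing form):

var x = 0.1 + 0.2;
console.log(x);                 // 0.30000000000000004 (shortest distinguishing form)
console.log(x.toPrecision(20)); // "0.30000000000000004441" (closer to the exact value)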

Why is 5726718050568503296 truncated in JS

The default rule for JavaScript when converting a Number value to a decimal numeral is to use just enough digits to distinguish the Number value. Specifically, this arises from step 5 in clause 7.1.12.1 of the ECMAScript 2017 Language Specification, per the linked answer. (It is 6.1.6.1.20 in the 2020 version.)

So while 5,726,718,050,568,503,296 is representable, printing it yields “5726718050568503000” because that suffices to distinguish it from the neighboring representable values, 5,726,718,050,568,502,272 and 5,726,718,050,568,504,320.

You can request more precision in the conversion to string with .toPrecision, as in x.toPrecision(21).
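For example, with the value discussed above:

var x = 5726718050568503296;
console.log(x.toString());      // "5726718050568503000"
console.log(x.toPrecision(21)); // "5726718050568503296.00"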

How does JavaScript runtime convert BINARY (Double-precision floating-point format) back to DECIMAL

JavaScript’s default conversion of a Number to a string produces just enough decimal digits to uniquely distinguish the Number. (This arises out of step 5 in clause 7.1.12.1 of the ECMAScript 2018 Language Specification, which I explain a little here.)

Let’s consider the conversion of a decimal numeral to a Number first. When a numeral is converted to a Number, its exact mathematical value is rounded to the nearest value representable in a Number. So, when 0.2 in source code is converted to a Number, the result is 0.200000000000000011102230246251565404236316680908203125.

When converting a Number to decimal, how many digits do we need to produce to uniquely distinguish the Number? In the case of 0.200000000000000011102230246251565404236316680908203125, if we produce “0.2”, we have a decimal numeral that, when converted back to a Number, again yields 0.200000000000000011102230246251565404236316680908203125. Thus “0.2” uniquely distinguishes that value from other Number values, so it is all we need.
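A short demonstration of that round trip (toPrecision is used here only to reveal digits beyond the default shortest form):

console.log((0.2).toString());      // "0.2"
console.log((0.2).toPrecision(21)); // "0.200000000000000011102"
console.log(Number("0.2") === 0.2); // true: "0.2" identifies exactly one Number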

In other words, JavaScript’s rule of producing just enough digits to distinguish the Number means that any short decimal numeral, when converted to a Number and back to a string, will produce the same decimal numeral (except with insignificant zeros removed, so “0.2000” becomes “0.2” and “045” becomes “45”). (Once the decimal numeral becomes long enough to conflict with the Number value, it may no longer survive a round-trip conversion. For example, “0.20000000000000003” becomes the Number 0.2000000000000000388578058618804789148271083831787109375 and then the string “0.20000000000000004”.)

If, as a result of arithmetic, we had a number close to 0.200000000000000011102230246251565404236316680908203125 but different, such as 0.2000000000000000388578058618804789148271083831787109375, then JavaScript prints more digits, “0.20000000000000004” in this case, because it needs them to distinguish the value from the one that prints as “0.2”.
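And a quick check of that neighboring-value case:

var y = Number("0.20000000000000003"); // parses to the neighbor of 0.2's Number
console.log(y === 0.2);                // false
console.log(y.toString());             // "0.20000000000000004"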


