Does JavaScript Support 64-Bit Integers

Does JavaScript support 64-bit integers?

JavaScript represents numbers using the IEEE-754 double-precision (64-bit) format. As I understand it, this gives you 53 bits of precision, or fifteen to sixteen decimal digits. Your number has more digits than JavaScript can cope with, so you end up with an approximation.
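You can see the cut-off directly in any JS console (the specific literal below is just an illustration; anything past Number.MAX_SAFE_INTEGER behaves the same way):

console.log(Number.MAX_SAFE_INTEGER);                // 9007199254740991 (2^53 - 1)
console.log(9007199254740993);                       // 9007199254740992, the literal gets rounded
console.log(9007199254740993 === 9007199254740992);  // true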

This isn't really "mishandling" as such, but obviously it isn't very helpful if you need full precision on large numbers. There are a few JS libraries around that can handle larger numbers, e.g., BigNumber and Int64.

Can a JavaScript implementation use 64-bit integers?

MDN information about UInt64:
As JavaScript doesn't currently include standard support for 64-bit integer values, js-ctypes offers the Int64 and UInt64 objects to let you work with C functions and data that need (or may need) to use data represented using a 64-bit data type.

You use the UInt64 object to create and manipulate 64-bit unsigned integers.
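For illustration only: js-ctypes runs solely in privileged (chrome) Firefox code, and the calls below are a rough sketch recalled from the MDN docs rather than something verified here.

// Privileged Firefox code only; import the ctypes module first.
Components.utils.import("resource://gre/modules/ctypes.jsm");

var a = ctypes.UInt64("0x1234567890ABCDEF");           // construct from a string
var b = ctypes.UInt64.join(0x12345678, 0x90ABCDEF);    // or from high/low 32-bit halves

console.log(ctypes.UInt64.compare(a, b));              // 0, the two values are equal
console.log(ctypes.UInt64.hi(a), ctypes.UInt64.lo(a)); // the high and low 32-bit halves
console.log(a.toString(16));                           // "1234567890abcdef"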

Native 64-bit integers in JavaScript

Yes, it is technically possible. There's a stage 3 proposal for arbitrary-precision integers (BigInt), which includes enough (such as BigInt.asIntN and BigInt.asUintN) to let implementations specialize the fixed 64-bit case, so at this point it is essentially just waiting on implementations and tests.
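A quick sketch of what that looks like with the proposal's syntax (this only runs on an engine that has shipped BigInt):

// Arbitrary-precision integer literals use the n suffix
const big = 2n ** 64n;                      // 18446744073709551616n, exact

// BigInt.asUintN / BigInt.asIntN wrap a value to a fixed bit width,
// which is the hook that lets engines specialize the 64-bit case
console.log(BigInt.asUintN(64, big));       // 0n (2^64 mod 2^64)
console.log(BigInt.asUintN(64, big - 1n));  // 18446744073709551615n (max uint64)
console.log(BigInt.asIntN(64, big - 1n));   // -1n (same bits, read as signed)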

How to do 64-bit integer arithmetic in Node.js?

JavaScript does not support 64-bit integers, because the native number type is a 64-bit double, giving only 53 bits of integer range.

You can create arrays of 32-bit numbers (e.g. Uint32Array), but even if there were a 64-bit version of those, there would be no way to copy its values losslessly into standalone number variables.

There are some modules around to provide 64bit integer support:

  • node-bigint
  • bignum (based on OpenSSL)
  • int64

Maybe your problem can be solved using one of those libraries.
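If pulling in a dependency is overkill, the two-32-bit-halves idea mentioned above can also be done by hand. A minimal sketch (add64 is just an illustrative helper, not part of any of the modules listed):

// 64-bit unsigned addition emulated with two 32-bit halves.
// Each value is passed as (high 32 bits, low 32 bits).
function add64(aHi, aLo, bHi, bLo) {
  var lo = (aLo >>> 0) + (bLo >>> 0);                  // may temporarily exceed 32 bits
  var carry = lo > 0xFFFFFFFF ? 1 : 0;
  var hi = ((aHi >>> 0) + (bHi >>> 0) + carry) >>> 0;  // wrap the high word
  return [hi, lo >>> 0];
}

// 0x00000001FFFFFFFF + 1 = 0x0000000200000000
console.log(add64(0x00000001, 0xFFFFFFFF, 0x00000000, 0x00000001)); // [ 2, 0 ]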

JavaScript does not support 64-bit integers, so why does new Date().getTime() return a 41-bit number?

You've answered your own question with the links you have provided.

This link, Does JavaScript support 64-bit integers?, explains that JavaScript is limited to 53 bits of integer precision because it stores numbers in IEEE-754 double-precision (64-bit) format.

And this link, GraphQL BigInt, explains that the package exists because GraphQL's built-in Int type only supports 32-bit integers.

The GraphQL spec limits its Int type to 32 bits. Maybe you've seen this error before:

GraphQLError: Argument "num" has invalid value 9007199254740990.
Expected type "Int", found 9007199254740990.

Why? A 64-bit Int would exceed JavaScript's 53-bit limit, and according to Lee Byron, a 52-bit integer spec would have been "too weird" (see this issue). The spec therefore settled on 32-bit integers to ensure portability to languages that can't represent 64-bit integers.
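To put numbers on that: GraphQL's Int is a signed 32-bit integer, so the value from the error above is far outside its range (a quick check):

const GRAPHQL_INT_MAX = 2 ** 31 - 1;   //  2147483647
const GRAPHQL_INT_MIN = -(2 ** 31);    // -2147483648

console.log(9007199254740990 > GRAPHQL_INT_MAX); // true, hence the GraphQLError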

None of this has anything to do with Date.prototype.getTime() returning a 41-bit number, which (by the way) is all it takes to hold a current millisecond timestamp. So my confusion is: what is it you are confused about?
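A quick sanity check of that bit count (the exact value depends on when you run it, but any current timestamp lands around 41 bits):

// Milliseconds since the Unix epoch are currently around 1.7e12,
// and 2^40 is about 1.1e12 while 2^41 is about 2.2e12, so roughly 41 bits are needed.
const ms = Date.now();
console.log(ms, ms.toString(2).length); // e.g. 1700000000000 41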

Bitwise AND in JavaScript with a 64-bit integer

JavaScript represents all numbers as 64-bit double-precision IEEE 754 floating point numbers (see the ECMAScript spec, section 8.5). All positive integers up to 2^53 can be encoded precisely; larger integers get their least significant bits clipped. This leaves the question of how you can even represent a 64-bit integer in JavaScript, since the native number data type clearly can't precisely represent a 64-bit int.

The following illustrates this. Although JavaScript appears to be able to parse hexadecimal literals representing 64-bit numbers, the underlying numeric representation does not hold 64 bits. Try the following in your browser:

<html>
<head>
<script>
  function showPrecisionLimits() {
    document.getElementById("r50").innerHTML = 0x0004000000000001 - 0x0004000000000000;
    document.getElementById("r51").innerHTML = 0x0008000000000001 - 0x0008000000000000;
    document.getElementById("r52").innerHTML = 0x0010000000000001 - 0x0010000000000000;
    document.getElementById("r53").innerHTML = 0x0020000000000001 - 0x0020000000000000;
    document.getElementById("r54").innerHTML = 0x0040000000000001 - 0x0040000000000000;
  }
</script>
</head>
<body onload="showPrecisionLimits()">
  <p>(2^50+1) - (2^50) = <span id="r50"></span></p>
  <p>(2^51+1) - (2^51) = <span id="r51"></span></p>
  <p>(2^52+1) - (2^52) = <span id="r52"></span></p>
  <p>(2^53+1) - (2^53) = <span id="r53"></span></p>
  <p>(2^54+1) - (2^54) = <span id="r54"></span></p>
</body>
</html>

In Firefox, Chrome and IE I get the following. If the numbers were stored in their full 64-bit glory, the result would have been 1 for all the subtractions. Instead, you can see how the difference between 2^53+1 and 2^53 is lost.

(2^50+1) - (2^50) = 1
(2^51+1) - (2^51) = 1
(2^52+1) - (2^52) = 1
(2^53+1) - (2^53) = 0
(2^54+1) - (2^54) = 0

So what can you do?

If you choose to represent a 64-bit integer as two 32-bit numbers, then applying a bitwise AND is as simple as applying two bitwise ANDs, one to the low and one to the high 32-bit 'word'.

For example:

var a = [ 0x0000ffff, 0xffff0000 ];
var b = [ 0x00ffff00, 0x00ffff00 ];
var c = [ a[0] & b[0], a[1] & b[1] ];

document.body.innerHTML = c[0].toString(16) + ":" + c[1].toString(16);

gets you:

ff00:ff0000
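For completeness: on engines that have since shipped the BigInt proposal discussed elsewhere on this page, the same AND can be done directly on a single 64-bit value (a sketch, assuming BigInt support):

// The same two values as above, packed into single 64-bit BigInts
// (index 0 treated as the high word, index 1 as the low word)
const a = 0x0000ffffffff0000n;
const b = 0x00ffff0000ffff00n;

console.log((a & b).toString(16)); // "ff0000ff0000", i.e. ff00 (high) : 00ff0000 (low)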

Native support for 64-bit floating point in JavaScript

(Q1) What is the technical reason behind being unable to represent big numbers in the general way?

Breaking the web. The fundamentals of JavaScript numbers cannot be changed now, 20-odd years after they were originally defined. There's also the issue of performance: JavaScript's current numbers (IEEE-754 binary double [64-bit] precision) are very fast floating point thanks to being built into CPUs and math coprocessors. The cost of that speed is precision; the cost of arbitrary precision (or a dramatically larger precise range) is performance.

Someday in the future, perhaps JavaScript will get IEEE-754 64-bit or even 128-bit decimal floating point numbers (see here and here), if those formats (introduced in 2008) diffuse into the ecosystem and get hardware support. But that's speculation on my part. :-)

(Q2) Why can't JavaScript represent all numbers using some other standard that doesn't lose precision?

See Q1. :-)

(Q3) What complications would arise if Javascript actually did that?

See Q1. :-)

Even according to the proposal, we have to use a special constructor for declaring big numbers.

Only if you want 64-bit values specifically. If you just want BigInts, the proposal includes a new notation, the n suffix, for that: 2n is the BigInt 2. So with BigInts, your example would be:

let bigNum = 2n ** 64n;
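
For comparison (a quick check in any BigInt-capable engine), the BigInt form stays exact where the plain Number form silently rounds:

console.log(2n ** 64n + 1n); // 18446744073709551617n, exact
console.log(2 ** 64 + 1);    // 18446744073709552000, the +1 is lost to rounding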

