JavaScript Runtime Complexity of Array Functions

JavaScript runtime complexity of Array functions

The ECMAScript specification does not specify a bounding complexity; however, you can derive one from the specification's algorithms.

push is O(1); however, in practice it will encounter an O(N) copy cost at engine-defined boundaries, as the backing store needs to be reallocated. These boundaries are typically spaced geometrically, so over N pushes the copies happen only O(log N) times and the cost stays amortized O(1).

pop is O(1) with a similar caveat to push, but the O(N) copy is rarely encountered, as it is often folded into garbage collection (e.g. a copying collector can copy only the used part of an array).
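As an illustrative sketch (engine internals differ, and the growth factor and initial capacity below are made up), the amortized O(1) behaviour of push comes from growing the backing store geometrically:

class GrowableArray {
  constructor() {
    this.store = new Array(4); // initial capacity: an arbitrary illustrative value
    this.length = 0;
  }
  push(value) {
    if (this.length === this.store.length) {
      // Boundary hit: reallocate and copy everything, O(N)...
      const bigger = new Array(this.store.length * 2);
      for (let i = 0; i < this.length; i++) bigger[i] = this.store[i];
      this.store = bigger;
    }
    this.store[this.length++] = value; // ...but the common case is O(1).
  }
}

With doubling, a copy happens only at lengths 4, 8, 16, and so on, so the total copy work over N pushes is O(N), i.e. amortized O(1) per push.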

shift is at worst O(N); however, in special cases it can be implemented as O(1) at the cost of slowing down indexing, so your mileage may vary.
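A user-land sketch of that trade-off (this is not how any particular engine implements it): keep a head offset instead of moving elements, so removing the front is O(1), while every index access pays for the extra offset arithmetic, and memory before the head is never reclaimed in this toy version.

class OffsetQueue {
  constructor(items) {
    this.items = items.slice();
    this.head = 0; // index of the current first element
  }
  shift() {
    return this.items[this.head++]; // O(1): nothing is moved
  }
  get(i) {
    return this.items[this.head + i]; // indexing now includes an offset
  }
  get length() {
    return this.items.length - this.head;
  }
}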

slice is O(N) where N is end - start. Not a tremendous amount of optimization opportunity here without significantly slowing down writes to both arrays.

splice is, worst case, O(N). There are array storage techniques that divide N by a constant but they significantly slow down indexing. If an engine uses such techniques you might notice unusually slow operations as it switches between storage techniques triggered by access pattern changes.

One you didn't mention is sort. It is, in the average case, O(N log N). However, depending on the algorithm chosen by the engine, you could get O(N^2) in some cases. For example, if the engine uses QuickSort (even with a cutoff to InsertionSort for small partitions), it has well-known O(N^2) cases. This could be a source of DoS for your application. If this is a concern, either limit the size of the arrays you sort (and merge the sorted sub-arrays) or bail out to HeapSort.
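As a hedged sketch of the "limit the size and merge" mitigation (the chunk size, function names, and use of the built-in numeric sort below are illustrative, not a prescription):

const CHUNK = 1024; // illustrative bound on how much data a single sort call sees

function mergeSorted(a, b) {
  // Standard merge of two already-sorted arrays, O(a.length + b.length).
  const out = [];
  let i = 0, j = 0;
  while (i < a.length && j < b.length) out.push(a[i] <= b[j] ? a[i++] : b[j++]);
  while (i < a.length) out.push(a[i++]);
  while (j < b.length) out.push(b[j++]);
  return out;
}

function boundedSort(arr) {
  // Sort fixed-size chunks, then merge the sorted runs pairwise.
  let runs = [];
  for (let start = 0; start < arr.length; start += CHUNK) {
    runs.push(arr.slice(start, start + CHUNK).sort((x, y) => x - y));
  }
  while (runs.length > 1) {
    const next = [];
    for (let k = 0; k < runs.length; k += 2) {
      next.push(k + 1 < runs.length ? mergeSorted(runs[k], runs[k + 1]) : runs[k]);
    }
    runs = next;
  }
  return runs[0] ?? [];
}

Each engine-level sort call now sees at most CHUNK elements, so even a quadratic worst case inside the engine is bounded per chunk, and the pairwise merging adds O(N log(N / CHUNK)).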

Time complexity of slice in the JavaScript V8 runtime

(V8 developer here.)

Array.prototype.slice is O(n), where n is the number of elements in the slice.

String.prototype.slice is O(1), thanks to our implementation of SlicedStrings, which just store a pointer, offset, and length referring to the original string and avoid copying the characters (except when they're tiny, in which case copying a handful of characters is actually cheaper and smaller than storing a reference; that's still O(1)).

The key difference is that strings are immutable, and arrays are not. When you do str1 = "Hello World"; str2 = str1.slice(2, 5);, since there is no way to modify str1's contents afterwards, str2 doesn't need to ensure that it's unaffected by any such modification.

When you do a = [1, 2, 3, 4]; b = a.slice(1, 3); a[1] = "changed"; console.log(b[0]);, then you expect to see 2, not "changed". That's why b has to be an actual copy. (In theory, a copy-on-write approach would be possible, but V8 doesn't do that for array slices.)

"Shallow copy" means that nested objects will not be copied. Example:

let nested = {property: "value"};
var a = [nested];
var b = a.slice(0, 1);
a[0].property = "new value";
console.log(a === b); // false, `b` is a copy
console.log(a[0] === b[0]); // true, `nested` was not copied
console.log(b[0] === nested); // true
console.log(b[0].property); // "new value"

Time complexity of JavaScript Array.find() in modern browsers

V8 developer here. The time complexity of Array.prototype.find is O(n) (with n being the array's length), and it's fair to assume that it will remain that way.

Generally speaking, it's often impossible for engines to improve the complexity class of an operation. In case of Array.prototype.find, the predicate function you pass might well care how often it gets called:

[1, 2, 3].find((value, index, object) => {
  console.log(`Checking ${value}...`); // Or any other side effect.
  return value === 42;
});

In such a case, the engine has no choice but to iterate over the entire array in exactly the right order, because anything else would observably break your program's behavior.

In theory, since JS engines can do dynamic optimizations, they could inspect the predicate function, and if it has no side effects, they could use it to build up some sort of index/cache. Aside from the difficulty of building such a system that works for arbitrary predicates, even when it did work this technique would only speed up repeated searches of the same array with the same function, at the cost of wasting time and memory if that exact scenario never occurs again. It seems unlikely that an engine could ever make this prediction with sufficient confidence to justify investing that time and memory.

As a rule of thumb: when operating on large data sets, choosing efficient algorithms and data structures is worth it. Typically far more worth it than the micro-optimizations we're seeing so much in SO questions :-)

A highly optimized/optimizing engine may be able to make your O(n) code somewhere between 10% and 10x as fast as it would otherwise be. By switching to an O(log n) or O(1) solution on your end, you can speed it up by orders of magnitude. That's often accomplished by doing something that engines can't possibly do. For example, you can keep your array sorted and then use binary search over it -- that's something an engine can't do for you automatically because obviously it's not allowed to reorder your array's contents without your approval. And as @myf already points out in a comment: if you want to access things by a unique key, then using a Map will probably work better than using an Array.
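For example, here is a minimal binary-search sketch over an array that you keep sorted yourself (ascending numbers assumed), followed by the Map alternative; users in the last two lines is a hypothetical array of objects with a unique id property:

function binarySearch(sorted, needle) {
  let lo = 0, hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;            // halves the range: O(log n) iterations
    if (sorted[mid] === needle) return mid;
    if (sorted[mid] < needle) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1;
}

// Lookup by unique key with a Map (`users` is hypothetical):
const byId = new Map(users.map(u => [u.id, u])); // built once, O(n)
byId.get(42); // expected O(1), versus users.find(u => u.id === 42) at O(n)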

That said, simple solutions tend to scale better than we intuitively assume; the standard warning against premature optimizations applies here just as everywhere else. Linearly searching through arrays is often just fine, you don't need a (hash) map just because you have more than three items in it. When in doubt, profile your app to find out where the performance bottlenecks are.

What is the time complexity of JavaScript Array.reduce() and Array.find()?

The minimum time complexity of .reduce is O(n), because it must iterate through all elements once (assuming an error isn't thrown), but it can be unbounded (since you can write any code you want inside the callback).

For your

// Loop, O(n), n = length of arr:
for (let [idx, val] of arr.entries()) {
  // .reduce, O(n), n = length of complements:
  if (complements.reduce((acc, v) => (acc || v.value === val), false)) {
    // If test succeeds, .find, O(n), n = length of complements:
    return [complements.find(v => v.value === target - val).index, idx];
  }

  complements.push({index: idx, value: target - val});
}

the time complexity is, worst case, O(n^2). The reduce runs in O(n) time, and you run a reduce for every entry in arr, making it O(n^2).

(The .find is also an O(n) operation, but O(n) + O(n) = O(n))

Your code that sorts the array beforehand has the right idea for decreasing complexity, but it has a couple of flaws.

  • First, you should sort numerically (.sort((a, b) => a - b)); .sort() with no comparator sorts lexicographically (e.g. [1, 11, 2], which is not what you want here).

  • Second, just decrementing idxFromBack isn't enough: for example, sumOfTwo([1, 3, 8, 9, 9], 9) will not see that 1 and 8 are a match. Perhaps the best strategy here would be to oscillate with a while loop instead: from a given idxFromBack, iterate backwards until a match is found or the sum is too small, and also iterate forwards until a match is found or the sum is too large.

You can also improve the performance of this code by sorting not with .sort((a, b) => a - b), which has a complexity of O(n log n), but with radix sort or counting sort instead (both of which run in O(n + k), where k depends on the range of the key values rather than on n). The optimal algorithm will depend on the general shape and variance of the input.
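For instance, a counting-sort sketch (my illustration, not part of the answer's code; it assumes small non-negative integers):

function countingSort(arr) {
  // O(n + k), where k is the range of values (here 0..max).
  if (arr.length === 0) return [];
  const max = Math.max(...arr);
  const counts = new Array(max + 1).fill(0);
  for (const value of arr) counts[value]++;
  const out = [];
  for (let value = 0; value <= max; value++) {
    for (let c = counts[value]; c > 0; c--) out.push(value);
  }
  return out;
}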

An even better, altogether different O(n) strategy would be to use a Map or object. When iterating over the array, put the value which would result in a match for the current item into a key of the object (with the current index as the value), and first check whether the value currently being iterated over already exists as a key in the object:

const sumOfTwo = (arr, target) => {
  const obj = {};
  for (const [i, num] of arr.entries()) {
    if (obj.hasOwnProperty(String(num))) {
      return [obj[num], i];
    }
    const matchForThis = target - num;
    obj[matchForThis] = i;
  }
  return [];
};

console.log(
  sumOfTwo([1, 2, 4, 4], 8), // => [2, 3]
  sumOfTwo([1, 2, 8, 9], 9), // 1 + 8 = 9; => [0, 2]
  sumOfTwo([1, 2, 3, 9], 8)  // => []
);
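The same idea with a Map, which the answer also mentions, is equally O(n); this variant is a sketch that mirrors the object version above:

const sumOfTwoMap = (arr, target) => {
  const seen = new Map(); // maps the value we still need to the index of its partner
  for (const [i, num] of arr.entries()) {
    if (seen.has(num)) return [seen.get(num), i];
    seen.set(target - num, i);
  }
  return [];
};

console.log(sumOfTwoMap([1, 2, 4, 4], 8)); // => [2, 3]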

Complexity of Array methods

An indisputable method is to run a comparative benchmark (provided you do it correctly and the other party is not arguing in bad faith). Don't worry about theoretical complexities.

Otherwise you will have a hard time convincing someone who doesn't see the obvious: a shift moves every element once, while two reversals move it twice.

By the way, a shift is optimal, as every element has to move at least once. And if properly implemented as a memmove, it is very fast (while a single reversal cannot be as fast).
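A rough benchmark sketch along those lines (timings vary by engine and array size; performance.now() is available in browsers and modern Node): empty an array by removing its head element, once with shift() and once with the reverse/pop/reverse alternative.

function time(label, fn) {
  const t0 = performance.now();
  fn();
  console.log(label, (performance.now() - t0).toFixed(1), "ms");
}

const N = 50000;
const a = Array.from({ length: N }, (_, i) => i);
const b = Array.from({ length: N }, (_, i) => i);

time("shift", () => {
  while (a.length) a.shift();
});

time("reverse/pop/reverse", () => {
  while (b.length) {
    b.reverse();
    b.pop();
    b.reverse();
  }
});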

Time complexity of Array.from

It's O(n). When used on an iterable (like a Set), Array.from iterates over the iterable and puts every item returned into the new array, so there's an operation for every item returned by the iterable.
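A minimal example, assuming nothing beyond what the answer states:

const unique = new Set(["a", "b", "b", "c"]);
const arr = Array.from(unique); // ["a", "b", "c"]: one O(1) step per item, O(n) overall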

What is Array.prototype.sort() time complexity?

Firefox uses merge sort. Chrome, as of version 70, uses a hybrid of merge sort and insertion sort called Timsort.

The time complexity of merge sort is O(n log n). While the specification does not specify the sorting algorithm to use, in any serious environment, you can probably expect that sorting larger arrays does not take longer than O(n log n) (because if it did, it would be easy to change to a much faster algorithm like merge sort, or some other log-linear method).

While comparison sorts like merge sort have a worst-case lower bound of Ω(n log n) (i.e. in general they cannot do better than this), Timsort takes advantage of "runs" of already-ordered data, so its best case is O(n).

Big O of JavaScript arrays

NOTE: While this answer was correct in 2012, engines use very different internal representations for both objects and arrays today. It may or may not still hold.

In contrast to most languages, which implement arrays with, well, arrays, in JavaScript Arrays are objects, and values are stored in a hashtable, just like regular object values. As such:

  • Access - O(1)
  • Appending - Amortized O(1) (sometimes resizing the hashtable is required; usually only insertion is required)
  • Prepending - O(n) via unshift, since it requires reassigning all the indexes
  • Insertion - Amortized O(1) if the value does not exist. O(n) if you want to shift existing values (e.g. using splice).
  • Deletion - Amortized O(1) to remove a value, O(n) if you want to reassign indices via splice.
  • Swapping - O(1)

In general, setting or unsetting any key in a dict is amortized O(1), and the same goes for arrays, regardless of what the index is. Any operation that requires renumbering existing values is O(n) simply because you have to update all the affected values.
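To make the list above concrete (the costs in the comments are the conceptual ones from this answer, not measurements):

const arr = [10, 20, 30, 40];
arr[2];                              // access: O(1)
arr.push(50);                        // append: amortized O(1)
arr.unshift(0);                      // prepend: O(n), every index is renumbered
[arr[0], arr[3]] = [arr[3], arr[0]]; // swap: O(1)
arr.splice(2, 1);                    // delete and renumber: O(n)
delete arr[2];                       // unset a key without renumbering: amortized O(1), leaves a hole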

Time complexity of the includes method in JavaScript

The spec describes this function as a linear search; see Array.prototype.includes:

  1. Let O be ? ToObject(this value).
  2. Let len be ? ToLength(? Get(O, "length")).
  3. If len is 0, return false.
  4. Let n be ? ToInteger(fromIndex). (If fromIndex is undefined, this step produces the value 0.)
  5. If n ≥ 0, then let k be n.
  6. Else n < 0, let k be len + n. If k < 0, let k be 0.
  7. Repeat, while k < len ... Increase k by 1.

Which is quite a reasonable choice in the general case (the list is not sorted, the list is not uniform, and you do not maintain additional data structures alongside the list itself).
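If you do test membership repeatedly and can afford an extra data structure, a Set is the usual alternative (a sketch, not part of the answer above):

const haystack = [3, 1, 4, 1, 5, 9, 2, 6];
const lookup = new Set(haystack); // built once, O(n)
lookup.has(5);                    // expected O(1) per query
haystack.includes(5);             // O(n) per query, per the spec steps above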


