What Makes Sets Faster Than Lists

What makes sets faster than lists?

Sets are implemented using hash tables. Whenever you add an object to a set, its position within the set's internal storage is determined from the hash of the object. When testing for membership, all that needs to be done is to check whether the object is at the position determined by its hash, so the speed of this operation does not depend on the size of the set. For lists, in contrast, the whole list needs to be searched, which becomes slower as the list grows.

This is also the reason that sets do not preserve the order of the objects you add.

Note that sets aren't faster than lists in general -- membership testing is faster for sets, and so is removing an element. As long as you don't need these operations, lists are often faster.
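A minimal timing sketch of that trade-off (the container size and the probed value are arbitrary choices for illustration):

import timeit

setup = "s = set(range(100000)); l = list(range(100000))"

# Membership: roughly constant time for the set...
print(timeit.timeit("99999 in s", setup=setup, number=1000))
# ...but a linear scan for the list.
print(timeit.timeit("99999 in l", setup=setup, number=1000))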

Python Sets vs Lists

It depends on what you are intending to do with it.

Sets are significantly faster when it comes to determining whether an object is present in the set (as in x in s), but their elements are not ordered, so you cannot access items by index as you would in a list. Sets are also somewhat slower than lists to iterate over in practice.

You can use the timeit module to see which is faster for your situation.
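For example, a sketch along these lines (sizes chosen arbitrarily) compares iteration speed, where the list usually wins:

import timeit

setup = "s = set(range(100000)); l = list(range(100000))"

# Iterating the list is typically somewhat faster than iterating the set.
print(timeit.timeit("for x in l: pass", setup=setup, number=100))
print(timeit.timeit("for x in s: pass", setup=setup, number=100))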

Is set() faster than list(), python?

Python’s set class represents the mathematical notion of a set, namely a collection of elements, without duplicates, and without an inherent order to those elements. The major advantage of using a set, as opposed to a list, is that it has a highly optimized method for checking whether a specific element is contained in the set. This is based on a data structure known as a hash table.

However, there are two important restrictions due to the algorithmic underpinnings. The first is that the set does not maintain the elements in any particular order. The second is that only hashable objects (in practice, instances of immutable types) can be added to a Python set. Therefore, objects such as integers, floating-point numbers, and character strings are eligible to be elements of a set. It is possible to maintain a set of tuples, but not a set of lists or a set of sets, as lists and sets are mutable.
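A quick interactive illustration of that restriction (Python 3; the traceback is abbreviated):

>>> s = {(1, 2), (3, 4)}       # tuples are immutable, so this works
>>> s.add([5, 6])              # lists are mutable, so this fails
Traceback (most recent call last):
  ...
TypeError: unhashable type: 'list'
>>> s.add(frozenset([5, 6]))   # frozenset is the immutable set variant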

Which is faster and why? Set or List?

Membership testing in a set is vastly faster, especially for large sets. That is because the set uses a hash function to map to a bucket. Since Python implementations automatically resize that hash table, the speed can be constant (O(1)) no matter the size of the set (assuming the hash function is sufficiently good).

In contrast, to evaluate whether an object is a member of a list, Python has to compare every single member for equality, i.e. the test is O(n).
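Conceptually, the two lookups work like this (a simplified chaining model for illustration, not CPython's actual open-addressing scheme):

# Set lookup: hash to a bucket, then check the few entries stored there.
def set_contains(buckets, x):
    bucket = buckets[hash(x) % len(buckets)]   # O(1) jump to one bucket
    return any(item == x for item in bucket)   # short chain on average

# List lookup: compare every element until a match (or the end) is found.
def list_contains(items, x):
    for item in items:                         # O(n) worst case
        if item == x:
            return True
    return False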

Are sets really faster than lists?

What you've been told is correct, searching in a set is O(1) since members are stored using a hash table. Searching in an (unsorted) array is O(n).

The problem with your tests is that you're creating the set/array and searching it on the same line, so you're measuring both the speed of inserting all the items and the speed of searching for a single entry.
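In other words, the measured statement presumably looked something like this (a reconstruction, not the original code):

import timeit

# Construction and lookup are timed together, so the insertion cost dominates:
timeit.timeit('10000 in set(range(10000000))', number=10)
timeit.timeit('10000 in list(range(10000000))', number=10)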

Try something like this instead:

import timeit

test_range = range(10000000)
test_set = set(test_range)
test_array = list(test_range)

# Pass globals=... so the timed expressions can see test_set and test_array.
print(timeit.timeit('10000 in test_set', number=10, globals=globals()))
print(timeit.timeit('10000 in test_array', number=10, globals=globals()))
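With the construction moved out of the timed statement, the set lookup should come out orders of magnitude faster, and, unlike the list, its time should stay essentially flat as the container grows.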

Why is converting a list to a set faster than using just list to compute a list difference?

There is overhead in converting a list to a set, but a set is substantially faster than a list for those membership (in) tests.

You can see almost instantly whether item x is in set y because a hash table is used underneath. No matter how large your set is, the average lookup time is the same - this is known in Big-O notation as O(1). For a list, you have to individually check every element to see if item x is in list z. As your list grows, the check takes longer - this is O(n), meaning the time taken is directly proportional to the length of the list.

That increased speed can offset the set creation overhead, which is how your set check ends up being faster.

EDIT: to answer that other question, Python has no way of determining that your list is sorted - not if you're using a standard list object, anyway. So it can't achieve O(log n) performance with a list comprehension. If you wanted to write your own binary search method which assumes the list is sorted, you can certainly do so, but O(1) beats O(log n) any day.
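If you did have a sorted list, the standard-library bisect module gives you that O(log n) binary search (a sketch; the function name is mine):

import bisect

def sorted_contains(sorted_list, x):
    # bisect_left locates the insertion point in O(log n) comparisons.
    i = bisect.bisect_left(sorted_list, x)
    return i < len(sorted_list) and sorted_list[i] == x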

EDIT 2:

I'm aware that a set's average O(1) lookup time beats that of a list's O(n), but if the original list A contains about a million or so integers, wouldn't the set creation actually take longer?

No, not at all. Creating a set out of a list is an O(n) operation, since inserting an item into a set is O(1) and you do that n times. If you have a list with a million integers in it, converting it into a set is a couple of O(n) passes, while repeatedly scanning the list for each lookup adds up to n O(n) scans, i.e. O(n²) overall. In practice, creating the set is going to be about 250,000 times faster for a list with a million integers, and the speed difference will grow larger and larger the more items you have in your list.
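A sketch of the list-difference pattern under discussion (the names and sizes are illustrative assumptions, not taken from the question):

a = list(range(1000000))
b = list(range(500000))

# Convert once (an O(n) step), then each membership check is O(1) on average:
b_set = set(b)
diff = [x for x in a if x not in b_set]

# The all-list version rescans b for every element of a - O(len(a) * len(b)):
# diff_slow = [x for x in a if x not in b]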

Why is [] faster than list()?

Because [] and {} are literal syntax. Python can create bytecode just to create the list or dictionary objects:

>>> import dis
>>> dis.dis(compile('[]', '', 'eval'))
  1           0 BUILD_LIST               0
              3 RETURN_VALUE
>>> dis.dis(compile('{}', '', 'eval'))
  1           0 BUILD_MAP                0
              3 RETURN_VALUE

list() and dict() are separate objects. Their names need to be resolved, the stack has to be involved to push the arguments, the frame has to be stored so it can be retrieved later, and a call has to be made. That all takes more time.

For the empty case, that means you have at the very least a LOAD_NAME (which has to search through the global namespace as well as the builtins module) followed by a CALL_FUNCTION, which has to preserve the current frame:

>>> dis.dis(compile('list()', '', 'eval'))
  1           0 LOAD_NAME                0 (list)
              3 CALL_FUNCTION            0
              6 RETURN_VALUE
>>> dis.dis(compile('dict()', '', 'eval'))
  1           0 LOAD_NAME                0 (dict)
              3 CALL_FUNCTION            0
              6 RETURN_VALUE

You can time the name lookup separately with timeit:

>>> import timeit
>>> timeit.timeit('list', number=10**7)
0.30749011039733887
>>> timeit.timeit('dict', number=10**7)
0.4215109348297119

The time discrepancy there is probably due to a dictionary hash collision. Subtract those times from the times for calling those objects, and compare the result against the times for using literals:

>>> timeit.timeit('[]', number=10**7)
0.30478692054748535
>>> timeit.timeit('{}', number=10**7)
0.31482696533203125
>>> timeit.timeit('list()', number=10**7)
0.9991960525512695
>>> timeit.timeit('dict()', number=10**7)
1.0200958251953125

So having to call the object takes an additional 1.00 - 0.31 - 0.30 == 0.39 seconds per 10 million calls.

You can avoid the global lookup cost by aliasing the global names as locals (in a timeit setup, anything you bind to a name is a local):

>>> timeit.timeit('_list', '_list = list', number=10**7)
0.1866450309753418
>>> timeit.timeit('_dict', '_dict = dict', number=10**7)
0.19016098976135254
>>> timeit.timeit('_list()', '_list = list', number=10**7)
0.841480016708374
>>> timeit.timeit('_dict()', '_dict = dict', number=10**7)
0.7233691215515137

but you can never overcome that CALL_FUNCTION cost.

Why is tuple faster than list in Python?

The reported "speed of construction" ratio only holds for constant tuples (ones whose items are expressed by literals). Observe carefully (and repeat on your machine -- you just need to type the commands at a shell/command window!)...:

$ python3.1 -mtimeit -s'x,y,z=1,2,3' '[x,y,z]'
1000000 loops, best of 3: 0.379 usec per loop
$ python3.1 -mtimeit '[1,2,3]'
1000000 loops, best of 3: 0.413 usec per loop

$ python3.1 -mtimeit -s'x,y,z=1,2,3' '(x,y,z)'
10000000 loops, best of 3: 0.174 usec per loop
$ python3.1 -mtimeit '(1,2,3)'
10000000 loops, best of 3: 0.0602 usec per loop

$ python2.6 -mtimeit -s'x,y,z=1,2,3' '[x,y,z]'
1000000 loops, best of 3: 0.352 usec per loop
$ python2.6 -mtimeit '[1,2,3]'
1000000 loops, best of 3: 0.358 usec per loop

$ python2.6 -mtimeit -s'x,y,z=1,2,3' '(x,y,z)'
10000000 loops, best of 3: 0.157 usec per loop
$ python2.6 -mtimeit '(1,2,3)'
10000000 loops, best of 3: 0.0527 usec per loop

I didn't do the measurements on 3.0 because, of course, I don't have it around -- it's totally obsolete and there is absolutely no reason to keep it, since 3.1 is superior to it in every way. (Python 2.7, if you can upgrade to it, measures almost 20% faster than 2.6 on each task -- and 2.6, as you see, is faster than 3.1 -- so, if you care seriously about performance, Python 2.7 is really the only release you should be going for!)

Anyway, the key point here is that, in each Python release, building a list out of constant literals is about the same speed as, or slightly slower than, building it out of values referenced by variables; but tuples behave very differently -- building a tuple out of constant literals is typically three times as fast as building it out of values referenced by variables! You may wonder how this can be, right?-)

Answer: a tuple made out of constant literals can easily be identified by the Python compiler as being one, immutable constant literal itself: so it's essentially built just once, when the compiler turns the source into bytecodes, and stashed away in the "constants table" of the relevant function or module. When those bytecodes execute, they just need to recover the pre-built constant tuple -- hey presto!-)

This easy optimization cannot be applied to lists, because a list is a mutable object, so it's crucial that, if the same expression such as [1, 2, 3] executes twice (in a loop -- the timeit module makes the loop on your behalf;-), a fresh new list object is constructed anew each time -- and that construction (like the construction of a tuple when the compiler cannot trivially identify it as a compile-time constant and immutable object) does take a little while.
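You can watch that optimization happen with dis (output from a CPython 3.8-style build; exact opcodes and offsets vary across versions):

>>> import dis
>>> dis.dis(compile('(1, 2, 3)', '', 'eval'))
  1           0 LOAD_CONST               0 ((1, 2, 3))
              2 RETURN_VALUE
>>> dis.dis(compile('[1, 2, 3]', '', 'eval'))
  1           0 LOAD_CONST               0 (1)
              2 LOAD_CONST               1 (2)
              4 LOAD_CONST               2 (3)
              6 BUILD_LIST               3
              8 RETURN_VALUE

The tuple is fetched whole from the constants table; the list is rebuilt element by element on every execution.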

That being said, tuple construction (when both constructions actually have to occur) is still about twice as fast as list construction -- and that discrepancy can be explained by the tuple's sheer simplicity, which other answers have mentioned repeatedly. But that simplicity does not account for a speedup of six times or more, which is what you observe if you only compare the construction of lists and tuples with simple constant literals as their items!-)


