List Comprehension VS Generator Expression's Weird Timeit Results

List comprehension vs generator expression's weird timeit results?

Expanding on Paulo's answer, generator expressions are often slower than list comprehensions because of the overhead of function calls. In this case, the short-circuiting behavior of in offsets that slowness if the item is found fairly early, but otherwise, the pattern holds.

I ran a simple script through the profiler for a more detailed analysis. Here's the script:

lis = [['a','b','c'], ['d','e','f'], [1,2,3], [4,5,6],
       [7,8,9], [10,11,12], [13,14,15], [16,17,18]]

def ge_d():
    return 'd' in (y for x in lis for y in x)
def lc_d():
    return 'd' in [y for x in lis for y in x]

def ge_11():
    return 11 in (y for x in lis for y in x)
def lc_11():
    return 11 in [y for x in lis for y in x]

def ge_18():
    return 18 in (y for x in lis for y in x)
def lc_18():
    return 18 in [y for x in lis for y in x]

for i in xrange(100000):
    ge_d()
    lc_d()
    ge_11()
    lc_11()
    ge_18()
    lc_18()

Here are the relevant results, reordered to make the patterns clearer.

         5400002 function calls in 2.830 seconds

Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
   100000    0.158    0.000    0.251    0.000 fop.py:3(ge_d)
   500000    0.092    0.000    0.092    0.000 fop.py:4(<genexpr>)
   100000    0.285    0.000    0.285    0.000 fop.py:5(lc_d)

   100000    0.356    0.000    0.634    0.000 fop.py:8(ge_11)
  1800000    0.278    0.000    0.278    0.000 fop.py:9(<genexpr>)
   100000    0.333    0.000    0.333    0.000 fop.py:10(lc_11)

   100000    0.435    0.000    0.806    0.000 fop.py:13(ge_18)
  2500000    0.371    0.000    0.371    0.000 fop.py:14(<genexpr>)
   100000    0.344    0.000    0.344    0.000 fop.py:15(lc_18)

Creating a generator expression is equivalent to creating a generator function and calling it; that accounts for one call to <genexpr>. Then, in the first case, next is called 4 times, until 'd' is reached, for a total of 5 calls (times 100000 iterations = ncalls = 500000). In the second case, it is called 17 times, for a total of 18 calls; and in the third, 24 times, for a total of 25 calls.

The genex outperforms the list comprehension in the first case, but the extra calls to next account for most of the difference between the speed of the list comprehension and the speed of the generator expression in the second and third cases.

>>> .634 - .278 - .333
0.023
>>> .806 - .371 - .344
0.091

I'm not sure what accounts for the remaining time; it seems that generator expressions would be a hair slower even without the additional function calls. I suppose this confirms inspectorG4dget's assertion that "creating a generator comprehension has more native overhead than does a list comprehension." But in any case, this shows pretty clearly that generator expressions are slower mostly because of calls to next.
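If you want to see those next calls without the profiler, here is a rough sketch (Python 3 syntax; count_yields is a throwaway helper of mine, not part of the profiled script). The counts line up with the per-iteration ncalls above, less the one extra call discussed in the text:

def count_yields(gen):
    """Wrap a generator and count how many values are pulled out of it."""
    counter = {'n': 0}
    def wrapper():
        for item in gen:
            counter['n'] += 1
            yield item
    return wrapper(), counter

lis = [['a', 'b', 'c'], ['d', 'e', 'f'], [1, 2, 3], [4, 5, 6],
       [7, 8, 9], [10, 11, 12], [13, 14, 15], [16, 17, 18]]

wrapped, counter = count_yields(y for x in lis for y in x)
print('d' in wrapped, counter['n'])   # True 4  -- stops as soon as 'd' appears
wrapped, counter = count_yields(y for x in lis for y in x)
print(18 in wrapped, counter['n'])    # True 24 -- walks nearly the whole structure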

I'll add that when short-circuiting doesn't help, list comprehensions are still faster, even for very large lists. For example:

>>> import itertools
>>> counter = itertools.count()
>>> lol = [[counter.next(), counter.next(), counter.next()]
...        for _ in range(1000000)]
>>> 2999999 in (i for sublist in lol for i in sublist)
True
>>> 3000000 in (i for sublist in lol for i in sublist)
False
>>> %timeit 2999999 in [i for sublist in lol for i in sublist]
1 loops, best of 3: 312 ms per loop
>>> %timeit 2999999 in (i for sublist in lol for i in sublist)
1 loops, best of 3: 351 ms per loop
>>> %timeit any([2999999 in sublist for sublist in lol])
10 loops, best of 3: 161 ms per loop
>>> %timeit any(2999999 in sublist for sublist in lol)
10 loops, best of 3: 163 ms per loop
>>> %timeit for i in [2999999 in sublist for sublist in lol]: pass
1 loops, best of 3: 171 ms per loop
>>> %timeit for i in (2999999 in sublist for sublist in lol): pass
1 loops, best of 3: 183 ms per loop

As you can see, when short-circuiting is irrelevant, list comprehensions are consistently faster, even for a million-item list of lists. Obviously, for actual uses of in at these scales, generators will be faster because of short-circuiting. But for other kinds of iterative tasks that are truly linear in the number of items, list comprehensions are pretty much always faster. This is especially true if you need to perform multiple tests on a list; you can iterate over an already-built list very quickly:

>>> incache = [2999999 in sublist for sublist in lol]
>>> get_list = lambda: incache
>>> get_gen = lambda: (2999999 in sublist for sublist in lol)
>>> %timeit for i in get_list(): pass
100 loops, best of 3: 18.6 ms per loop
>>> %timeit for i in get_gen(): pass
1 loops, best of 3: 187 ms per loop

In this case, the list comprehension is an order of magnitude faster!

Of course, this only remains true until you run out of memory, which brings me to my final point. There are two main reasons to use a generator: to take advantage of short-circuiting, and to save memory. For very large sequences/iterables, generators are the obvious way to go, because they save memory. But if short-circuiting is not an option, you pretty much never choose generators over lists for speed. You choose them to save memory, and it's always a trade-off.
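To make that memory trade-off concrete, here is a minimal sketch (exact sizes are CPython- and version-specific):

import sys

squares_list = [n * n for n in range(10**6)]   # materialises every element up front
squares_gen = (n * n for n in range(10**6))    # stores only the iteration state

print(sys.getsizeof(squares_list))  # several megabytes, just for the list's pointer array
print(sys.getsizeof(squares_gen))   # a couple of hundred bytes, regardless of the range size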

Why is summing list comprehension faster than generator expression?

I took a look at the disassembly of each construct (using dis). I did this by declaring these two functions:

def list_comprehension():
    return sum([ch in A for ch in B])

def generation_expression():
    return sum(ch in A for ch in B)

and then calling dis.dis with each function.
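If you want to reproduce the listings yourself, here is a self-contained sketch; the A and B globals are placeholders I've picked, since any values will do:

import dis

A = set('aeiou')
B = 'the quick brown fox'

def list_comprehension():
    return sum([ch in A for ch in B])

def generation_expression():
    return sum(ch in A for ch in B)

dis.dis(list_comprehension)     # on Python 3.7+ this also recurses into <listcomp>
dis.dis(generation_expression)  # ...and into <genexpr>, giving the listings below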

For the list comprehension:

  0 BUILD_LIST               0
  2 LOAD_FAST                0 (.0)
  4 FOR_ITER                12 (to 18)
  6 STORE_FAST               1 (ch)
  8 LOAD_FAST                1 (ch)
 10 LOAD_GLOBAL              0 (A)
 12 COMPARE_OP               6 (in)
 14 LIST_APPEND              2
 16 JUMP_ABSOLUTE            4
 18 RETURN_VALUE

and for the generator expression:

  0 LOAD_FAST                0 (.0)
  2 FOR_ITER                14 (to 18)
  4 STORE_FAST               1 (ch)
  6 LOAD_FAST                1 (ch)
  8 LOAD_GLOBAL              0 (A)
 10 COMPARE_OP               6 (in)
 12 YIELD_VALUE
 14 POP_TOP
 16 JUMP_ABSOLUTE            2
 18 LOAD_CONST               0 (None)
 20 RETURN_VALUE

The disassembly for the actual summation is:

  0 LOAD_GLOBAL              0 (sum)
  2 LOAD_CONST               1 (<code object <genexpr> at 0x7f49dc395240, file "/home/mishac/dev/python/kintsugi/KintsugiModels/automated_tests/a.py", line 12>)
  4 LOAD_CONST               2 ('generation_expression.<locals>.<genexpr>')
  6 MAKE_FUNCTION            0
  8 LOAD_GLOBAL              1 (B)
 10 GET_ITER
 12 CALL_FUNCTION            1
 14 CALL_FUNCTION            1
 16 RETURN_VALUE

This sum disassembly was the same for both of your examples; the only difference was loading generation_expression.<locals>.<genexpr> vs list_comprehension.<locals>.<listcomp> (so it's just loading a different code object and its name).

The differing bytecode instructions between the first two disassemblies are LIST_APPEND for the list comprehension vs. the conjunction of YIELD_VALUE and POP_TOP for the generator expression.

I won't pretend I know the internals of Python bytecode, but what I gather from this is that the generator expression behaves like a queue, where each value is generated and then popped. This popping doesn't have to happen in a list comprehension, which leads me to believe there's a slight amount of overhead in using generators.

Now this doesn't mean that generators are always going to be slower. Generators excel at being memory-efficient, so there will be a threshold N below which list comprehensions perform slightly better (because memory use isn't a problem), and beyond which generators perform significantly better.
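A rough way to check the small-N side of that claim (timings are illustrative only and will vary by machine and CPython version; A and B are made-up inputs):

from timeit import timeit

A = set('aeiou')
B = 'the quick brown fox jumps over the lazy dog' * 100

lc = timeit(lambda: sum([ch in A for ch in B]), number=2000)
ge = timeit(lambda: sum(ch in A for ch in B), number=2000)

print(lc, ge)  # at this size the list comprehension is typically a little faster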

List vs generator comprehension speed with join function

The str.join method converts its iterable parameter to a list if it's not a list or tuple already. This lets the joining logic iterate over the items multiple times (it makes one pass to calculate the size of the result string, then a second pass to actually copy the data).

You can see this in the CPython source code:

PyObject *
PyUnicode_Join(PyObject *separator, PyObject *seq)
{
    /* lots of variable declarations at the start of the function omitted */

    fseq = PySequence_Fast(seq, "can only join an iterable");

    /* ... */
}

The PySequence_Fast function in the C API does just what I described. It converts an arbitrary iterable into a list (essentially by calling list on it), unless it already is a list or tuple.

The conversion of the generator expression to a list means that the usual benefits of generators (a smaller memory footprint and the potential for short-circuiting) don't apply to str.join, and so the (small) additional overhead that the generator has makes its performance worse.
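A rough way to observe this yourself (the numbers are illustrative only and vary across machines and CPython versions):

from timeit import timeit

words = ['spam', 'eggs', 'ham'] * 10000

# str.join receives a ready-made list, which PySequence_Fast passes through untouched
lc = timeit(lambda: ', '.join([w.upper() for w in words]), number=500)

# str.join receives a generator and has to convert it to a list internally first
ge = timeit(lambda: ', '.join(w.upper() for w in words), number=500)

print(lc, ge)  # the list-comprehension variant is usually slightly faster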

generator vs list comprehension

Using in against a generator expression makes use of the iterator protocol and advances the expression only until a match is found. That makes it more efficient in the general case than the list comprehension, which builds the whole list first and only then scans the result for a match.

The alternative for your specific example would be to use any(), to make the test more explicit. I find this to be a tad more readable:

any(x[0] == 3 for x in l)

You do have to take into account that in advances (partially consumes) the generator; you cannot use this approach if you also need the generator elsewhere.
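A small sketch of that pitfall, using a made-up list of tuples shaped like the one in the question:

l = [(1, 'a'), (3, 'b'), (5, 'c')]

gen = (x[0] for x in l)
print(3 in gen)    # True  -- iteration stops at the match
print(list(gen))   # [5]   -- everything up to and including the match is gone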

As for your specific timing tests: your 'short' tests are fatally flawed. On the very first iteration the izip() iterator is entirely exhausted, so the remaining 9999 iterations test against an empty iterator. You end up measuring the difference between creating an empty list and an empty generator, which amplifies the creation-cost difference.

Moreover, you should use the timeit module to run tests, making sure that the test is repeatable. This means you have to create a new izip() object each iteration too; now the contrast is much larger:

>>> # Python 2, 'short'
...
>>> timeit.timeit("l = izip(xrange(10**2), xrange(10**2)); 3 not in (x[0] for x in l)", 'from itertools import izip', number=100000)
0.27606701850891113
>>> timeit.timeit("l = izip(xrange(10**2), xrange(10**2)); 3 not in [x[0] for x in l]", 'from itertools import izip', number=100000)
1.7422130107879639
>>> # Python 2, 'long'
...
>>> timeit.timeit("l = izip(xrange(10**3), xrange(10**3)); 3 not in (x[0] for x in l)", 'from itertools import izip', number=100000)
0.3002200126647949
>>> timeit.timeit("l = izip(xrange(10**3), xrange(10**3)); 3 not in [x[0] for x in l]", 'from itertools import izip', number=100000)
15.624258995056152

and on Python 3:

>>> # Python 3, 'short'
...
>>> timeit.timeit("l = zip(range(10**2), range(10**2)); 3 not in (x[0] for x in l)", number=100000)
0.2624585109297186
>>> timeit.timeit("l = zip(range(10**2), range(10**2)); 3 not in [x[0] for x in l]", number=100000)
1.5555254180217162
>>> # Python 3, 'long'
...
>>> timeit.timeit("l = zip(range(10**3), range(10**3)); 3 not in (x[0] for x in l)", number=100000)
0.27222433499991894
>>> timeit.timeit("l = zip(range(10**3), range(10**3)); 3 not in [x[0] for x in l]", number=100000)
15.76974998600781

In all cases, the generator variant is far faster; you have to shorten the 'short' version to just 8 tuples for the list comprehension to start to win:

>>> timeit.timeit("n = 8; l = izip(xrange(n), xrange(n)); 3 not in (x[0] for x in l)", 'from itertools import izip', number=100000)
0.2870941162109375
>>> timeit.timeit("n = 8; l = izip(xrange(n), xrange(n)); 3 not in [x[0] for x in l]", 'from itertools import izip', number=100000)
0.28503894805908203

On Python 3, where the implementations of generator expressions and list comprehensions were brought closer, you have to go down to 4 items before the list comprehension wins:

>>> timeit.timeit("n = 4; l = zip(range(n), range(8)); 3 not in (x[0] for x in l)", number=100000)
0.284480107948184
>>> timeit.timeit("n = 4; l = zip(range(n), range(8)); 3 not in [x[0] for x in l]", number=100000)
0.23570425796788186

Why is updating a list faster when using a list comprehension as opposed to a generator expression?

This answer concerns the CPython implementation only. Using a list comprehension is faster here, since the generator is first converted into a list anyway. This is done because the length of the sequence needs to be determined before the data is replaced, and a generator can't tell you its length up front.

For list slice assignment, the operation is handled by the amusingly named list_ass_slice. There is special-case handling for assigning a list or tuple: those can go through PySequence_Fast ops directly.

This is the v3.7.4 implementation of PySequence_Fast, where you can clearly see the type check for lists and tuples:

PyObject *
PySequence_Fast(PyObject *v, const char *m)
{
    PyObject *it;

    if (v == NULL) {
        return null_error();
    }

    if (PyList_CheckExact(v) || PyTuple_CheckExact(v)) {
        Py_INCREF(v);
        return v;
    }

    it = PyObject_GetIter(v);
    if (it == NULL) {
        if (PyErr_ExceptionMatches(PyExc_TypeError))
            PyErr_SetString(PyExc_TypeError, m);
        return NULL;
    }

    v = PySequence_List(it);
    Py_DECREF(it);

    return v;
}

A generator expression will fail this type check and continue to the fallback code, where it is converted into a list object, so that the length can be predetermined.
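A quick way to see the consequence (a sketch only; absolute numbers depend on your build):

from timeit import timeit

setup = "data = list(range(10**5)); L = [0] * 10**5"

lc = timeit("L[:] = [x * 2 for x in data]", setup=setup, number=200)
ge = timeit("L[:] = (x * 2 for x in data)", setup=setup, number=200)

print(lc, ge)  # the genexpr pays for an extra list() conversion inside list_ass_slice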

In the general case, a predetermined length is desirable in order to allow efficient allocation of list storage, and also to provide useful error messages with extended slice assignment:

>>> vals = (x for x in 'abc')
>>> L = [1,2,3]
>>> L[::2] = vals # attempt assigning 3 values into 2 positions
Traceback (most recent call last):
  ...
ValueError: attempt to assign sequence of size 3 to extended slice of size 2
>>> L # data unchanged
[1, 2, 3]
>>> list(vals) # generator was fully consumed
[]

Python list comprehension vs generator

In the simple case, it will be fastest to do this without a comprehension/generator:

sum(xrange(9999999))

Normally, if I need to do some sort of operation where I need to choose between a comprehension and generator expression, I do:

sum(a*b for a, b in zip(c, d))

Personally, I think that the generator expression (without the extra parentheses¹) looks nicer, and since readability counts, that outweighs any micro-performance difference between the two expressions.

Generators will frequently be faster for things like this because they avoid creating an intermediate list (and the memory allocation that goes with it). The timing difference is probably more pronounced as the list gets bigger, since memory allocation and list resizing take more time for bigger lists. This isn't always the case, however (it is well documented on Stack Overflow that str.join works faster with lists than with generators in CPython, because when str.join gets a generator it constructs the list anyway...).

¹ You can omit the parentheses any time you pass a generator expression to a function as its only argument, which happens more frequently than you might expect...
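For instance:

data = [3, 1, 2]

# sole argument: the generator expression's own parentheses can be dropped
total = sum(x * x for x in data)

# not the sole argument: the extra parentheses are required
ordered = sorted((x * x for x in data), reverse=True)

print(total, ordered)  # 14 [9, 4, 1]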

Why is this genexp performing worse than a list comprehension?

When essentially unlimited memory is available (which will invariably be the case in tiny benchmarks, although often not in real-world problems!), lists tend to outperform generators because they can be allocated just once, in one "big bunch" (no memory fragmentation, etc.), while generators require (internally) extra effort to avoid that "big bunch" approach: they preserve the stack-frame state between calls so that execution can resume.

Whether a list approach or a generator approach will be faster in a real program depends on the exact memory situation, including fragmentation, which is just about impossible to reproduce accurately in a micro-benchmark. In other words, if you truly care about performance, you must carefully benchmark (and, separately, profile) your actual program(s), not just toy micro-benchmarks.
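If it helps, here is a minimal standard-library template for doing both; work() is just a stand-in for whatever your real code does:

import cProfile
import timeit

def work():
    # stand-in for the code you actually care about
    return sum(x * x for x in range(10**5))

print(timeit.timeit(work, number=100))  # benchmark: total seconds for 100 calls
cProfile.run('work()')                  # profile: where the time goes inside one call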


