Python Dictionary Keys. "In" Complexity

What is the time complexity of dict.keys() in Python?

In Python 2, it's O(n), and it builds a new list. In Python 3, it's O(1), but it doesn't return a list. To draw a random element from a dict's keys, you'd need to convert it to a list, and that conversion is O(n).

It sounds like you were probably using random.choice(d.keys()) for part 3 of that problem. If so, that was O(n), so it didn't satisfy the requirement. You need to either implement your own hash table or maintain a separate list of elements alongside the dict, without sacrificing average-case O(1) insertions and deletions; a sketch of the second approach follows.
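
For reference, a common way to get all three operations in average O(1) is to keep a plain list of the elements next to the dict and have the dict map each element to its index in that list. A rough sketch (the class and method names here are mine, not from the question):

import random

class RandomizedSet:
    """Average O(1) add/remove plus O(1) random choice."""

    def __init__(self):
        self._items = []    # dense list of elements, for random.choice
        self._index = {}    # element -> its position in self._items

    def add(self, x):
        if x in self._index:            # average O(1) membership test
            return
        self._index[x] = len(self._items)
        self._items.append(x)

    def remove(self, x):
        pos = self._index.pop(x)        # KeyError if x is absent
        last = self._items.pop()        # O(1): pop from the end
        if pos < len(self._items):      # fill the hole with the last element
            self._items[pos] = last
            self._index[last] = pos

    def choice(self):
        return random.choice(self._items)   # O(1): indexing a list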

Python dictionary keys. In complexity

First, key in d.keys() is guaranteed to give you the same value as key in d for any dict d.

And the in operation on a dict, or the dict_keys object you get back from calling keys() on it (in 3.x), is not O(N), it's O(1).

There's no real "optimization" going on; it's just that using the hash is the obvious way to implement __contains__ on a hash table, just as it's the obvious way to implement __getitem__.


You may ask where this is guaranteed.

Well, it's not. Mapping Types defines dict as, basically, a hash table implementation of collections.abc.Mapping. There's nothing stopping someone from creating an implementation of dict that is technically a hash table but still gives O(N) searches. But it would take extra work to make such a bad implementation, so why would anyone?

If you really need to prove it to yourself, you can test every implementation you care about (with a profiler, or by using some type with a custom __hash__ and __eq__ that logs calls, or…), or read the source.
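
For example, a quick sketch of the "type that logs calls" approach (the class is mine, purely for measurement):

class LoggedKey:
    """Counts how often __hash__ and __eq__ are called."""
    hash_calls = 0
    eq_calls = 0

    def __init__(self, value):
        self.value = value

    def __hash__(self):
        LoggedKey.hash_calls += 1
        return hash(self.value)

    def __eq__(self, other):
        LoggedKey.eq_calls += 1
        return isinstance(other, LoggedKey) and self.value == other.value

d = {LoggedKey(i): i for i in range(10000)}
LoggedKey.hash_calls = LoggedKey.eq_calls = 0

LoggedKey(5000) in d
print(LoggedKey.hash_calls, LoggedKey.eq_calls)   # typically 1 and 1, not thousands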


In 2.x, you do not want to call keys, because that builds a list of the keys instead of a KeysView. You could use iterkeys, but that gives you an iterator, and in on an iterator is O(N). So, just use the dict itself as a sequence.

Even in 3.x, you don't want to call keys, because there's no need to. Iterating a dict, checking its __contains__, and in general treating it like a sequence is always equivalent to doing the same thing to its keys, so why bother? (And of course building the trivial KeysView and accessing the dict through it will add a few nanoseconds to your running time and a few keystrokes to your program.)

(It's not quite clear that using sequence operations is equivalent for d.keys()/d.iterkeys() and d in 2.x. Other than performance issues, they are equivalent in every CPython, Jython, IronPython, and PyPy implementation, but it doesn't seem to be stated anywhere the way it is in 3.x. And it doesn't matter; just use key in d.)


While we're at it, note that this:

if(dict[key] != None):

… is not going to work. If the key is not in the dict, this will raise KeyError, not return None.

Also, you should never check for None with == or !=; always use is (or is not).

You can handle the KeyError with a try/except, or, more simply, write if dict.get(key, None) is not None. But again, there's no reason to do so here. Also, that won't handle the case where None is a perfectly valid value. If it is, you need something like sentinel = object() and then if dict.get(key, sentinel) is not sentinel:.


So, the right thing to write is:

if key in d:
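
For completeness, here are the patterns side by side (d and key are just placeholders; the sentinel variant only matters when None is a legitimate stored value):

d = {"a": 1, "b": None}     # example data: "b" maps to a legitimate None
key = "b"

# Preferred: plain membership test, average O(1).
if key in d:
    print("present:", d[key])

# Works, but treats a stored None the same as a missing key.
if d.get(key) is not None:
    print("present and not None")

# Sentinel pattern: distinguishes a stored None from a missing key.
sentinel = object()
value = d.get(key, sentinel)
if value is not sentinel:
    print("present, value is", value)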

More generally, this is not true:

I know the "in" keyword is generally O(n) (as this just translates to Python iterating over an entire list and comparing each element).

The in operator, like most other operators, is just a call to a __contains__ method (or the equivalent for a C/Java/.NET/RPython builtin). list implements it by iterating the list and comparing each element; dict implements it by hashing the value and looking up the hash; blist.blist implements it by walking a B+Tree; etc. So, it could be O(n), O(1), O(log n), or something completely different.
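
To make that concrete, here is a minimal sketch of two containers whose in behaviour differs only in how they implement __contains__ (both classes are invented for illustration):

class LinearBag:
    """Containment by scanning, like list: O(n)."""
    def __init__(self, items):
        self._items = list(items)

    def __contains__(self, x):
        return any(x == item for item in self._items)

class HashedBag:
    """Containment by hashing, like dict/set: average O(1)."""
    def __init__(self, items):
        self._items = set(items)

    def __contains__(self, x):
        return x in self._items

print(3 in LinearBag([1, 2, 3]))    # True, found by a linear scan
print(3 in HashedBag([1, 2, 3]))    # True, found by a hash lookup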

What is the complexity of calling dict.keys() in Python 3?

In Python 3, dict.keys() returns a view object. Essentially, this is just a window directly onto the dictionary's keys. There is no looping over the hashtable to build a new object, for example. This makes calling it a constant-time, that is O(1), operation.

View objects for dictionaries are implemented in CPython's Objects/dictobject.c; new view objects are created by dictview_new. All that this function does is create a new object that points back at the dictionary and increment reference counts (for garbage tracking).

In Python 2, dict.keys() returns a list object. To create this new list, Python must loop over the hash table, putting the dictionary's keys into the list. This is implemented in the function dict_keys. The time complexity here is linear in the size of the dictionary, that is O(n), since every slot in the table must be visited.

N.B. dict.viewkeys() in Python 2 does the same as dict.keys() in Python 3.
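
A small demonstration of the "window" behaviour in Python 3 (the same applies to viewkeys() in 2.7):

d = {"a": 1, "b": 2}
keys = d.keys()         # a dict_keys view, created in O(1)

print(keys)             # dict_keys(['a', 'b'])
d["c"] = 3              # mutate the dict after creating the view
print(keys)             # dict_keys(['a', 'b', 'c']) -- the view sees the change

print("c" in keys)      # True, average O(1), same as "c" in d
print(list(keys))       # ['a', 'b', 'c'] -- only this conversion is O(n)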

What is the time complexity of Python's dict has_key() method?

Short answer: in the worst case it is O(n), but the average-case time complexity is O(1), and the worst case is very rare.

When you do a lookup, the key is first hashed by its type's __hash__, and the dictionary then maps that hash value onto a slot. Based on that, we know in which bucket we have to search, and we start searching.

It is however possible that hash collisions occur: in that case multiple keys end up in the same bucket, so we have to search among several keys. In the worst case all the keys are in the same bucket, and then we fall back to a linear search.

Hash collisions involving a large number of keys are, however, very rare. Usually it is safe to assume that, regardless of the size of the dictionary, the number of keys sharing a bucket is bounded by a small constant.

The fact that it is O(n) in the worst case has some interesting consequences for security. Say you have a server that stores and retrieves data in a dictionary; the response time will then scale with the cost of that lookup. An attacker can craft input so that all keys land in the same bucket(s). As a result lookups slow down, and eventually the server no longer responds in a reasonable time. That's why Python has a -R flag for hash randomization (it is enabled by default since Python 3.3): the hash function changes between runs, which makes it much harder to craft such input.
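
To see the worst case for yourself, here is a rough sketch using an artificial key type whose __hash__ always returns the same value (the class is invented purely for this demonstration; exact timings will vary by machine):

import time

class BadKey:
    """Every instance hashes to the same value, so all keys collide."""
    def __init__(self, value):
        self.value = value

    def __hash__(self):
        return 42                       # constant hash: maximal collisions

    def __eq__(self, other):
        return isinstance(other, BadKey) and self.value == other.value

def time_lookups(make_key, n=1000, repeats=200):
    d = {make_key(i): i for i in range(n)}
    probe = make_key(n - 1)
    start = time.perf_counter()
    for _ in range(repeats):
        probe in d                      # result discarded; we only care about the time
    return time.perf_counter() - start

print("int keys:   ", time_lookups(int))        # average O(1) per lookup
print("BadKey keys:", time_lookups(BadKey))     # degrades towards O(n) per lookup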

What is the time complexity of adding new key-value pair to a dictionary in Python?

The update function for dictionaries in Python can be called in a few different ways. The first is with a mapping of keys to values, followed by optional keyword arguments. The second is with an iterable of key-value pairs, followed by optional keyword arguments. The third is with keyword arguments alone. We can see these function stubs in builtins.pyi:

@overload
def update(self, __m: Mapping[_KT, _VT], **kwargs: _VT) -> None: ...
@overload
def update(self, __m: Iterable[Tuple[_KT, _VT]], **kwargs: _VT) -> None: ...
@overload
def update(self, **kwargs: _VT) -> None: ...

and this more practical example:

d_obj = dict()
d_obj.update({'a':1},b=2)
d_obj.update([('c',3)],d=4)
d_obj.update(e=5)
print(d_obj)

For each of these forms, Python iterates through the arguments passed in and checks, in constant time, whether each key is already in the dictionary. If the key is present, the function sets it to its new value; if not, it adds the key and sets its value. Since each of these add/set operations takes amortized constant time, each individual item passed to update() is handled in O(1), giving O(1 * n) = O(n), where n is the number of key-value pairs passed to update(). Adding a single key-value pair with d[key] = value is, by the same reasoning, amortized O(1).
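
As a rough, hedged check (absolute timings will vary by machine), the cost of update() grows with the number of pairs passed in:

import timeit

for n in (10000, 100000, 1000000):
    pairs = [(i, i) for i in range(n)]
    t = timeit.timeit(lambda: dict().update(pairs), number=10)
    print("update() with", n, "pairs:", round(t, 4), "seconds for 10 runs")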

What is the time complexity of searching through dictionary keys using the keyword *in*?

Dictionaries are implemented as hash tables/maps, which have average O(1) performance for lookups. Internally, they use essentially the same hash-table technique as sets.

To further improve performance, replace

if nums[i] in cache.keys():

with

if nums[i] in cache:

Also, you can make an improvement with enumerate (which is more Pythonic):

def twoSum(self, nums, target):
    cache = {}                      # maps the complement we still need -> index of the element that needs it
    for i, x in enumerate(nums):
        b = target - x

        if x in cache:              # average O(1) membership test
            return [i, cache[x]]

        cache[b] = i

What is the time complexity of Python Dictionary.values()?

Q(1): I think it is O(1).

Edit: I was wrong, it is O(n), since value in dic.values() has to scan the values. Thanks to @Roy Cohen and @kaya3.

test code:

import timeit

def timeis(func):
    def wrap(*args, **kwargs):
        start = timeit.default_timer()
        result = func(*args, **kwargs)
        end = timeit.default_timer()
        print(func.__name__, end - start)
        return result
    return wrap

import random

@timeis
def dict_values_test(dic, value):
    return value in dic.values()

tiny_dic = {i: 10*i for i in range(1000)}
value = random.randint(1, 1000)
dict_values_test(tiny_dic, value)

small_dic = {i: 10*i for i in range(1000000)}
value = random.randint(1, 1000000)
dict_values_test(small_dic, value)

big_dic = {i: 10*i for i in range(100000000)}
value = random.randint(1, 100000000)
dict_values_test(big_dic, value)

result:

dict_values_test 2.580000000002025e-05
dict_values_test 0.015847600000000073
dict_values_test 1.4836825999999999

Q(2):

code:

def find_key_by_value(dic, find_value):
    return [k for k, v in dic.items() if v == find_value]

dic = {1: 10, 2: 20, 3: 30, 4: 40, 5: 40}
print(find_key_by_value(dic, 40))

result:

[4, 5]
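
Each such reverse lookup is O(n), since every item must be inspected. If you need to do this repeatedly, one option (not part of the original answer) is to build an inverse mapping once and then query it in average O(1); this assumes the values are hashable:

from collections import defaultdict

def invert(dic):
    """Build value -> [keys] once in O(n); later lookups are average O(1)."""
    inverse = defaultdict(list)
    for k, v in dic.items():
        inverse[v].append(k)
    return inverse

dic = {1: 10, 2: 20, 3: 30, 4: 40, 5: 40}
inverse = invert(dic)
print(inverse[40])      # [4, 5]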

Time complexity of accessing a Python dict

See the Time Complexity page on the Python wiki. The Python dict is a hash map, so its worst case is O(n) if the hash function is bad and results in a lot of collisions. That is, however, a very rare case, where every item added has the same hash and so lands in the same chain, which for a major Python implementation would be extremely unlikely. The average time complexity is of course O(1).

The best method would be to take a look at the hashes of the objects you are using. The CPython dict uses int PyObject_Hash(PyObject *o), which is the equivalent of hash(o).

After a quick check, I have not yet managed to find two tuples that hash to the same value, which suggests that the lookup is O(1):

l = []
for x in range(0, 50):
    for y in range(0, 50):
        if hash((x, y)) in l:
            print "Fail: ", (x, y)
        l.append(hash((x, y)))
print "Test Finished"



