Swift Dictionary: Remove Time Complexity

How can I minimize the time complexity of the following algorithm?

A name with at most 18 characters can have at most 171 (=19*18/2) non-empty substrings, including the string itself. You could compute all these substrings and put them into a hash table. Then do a second pass through the list and check if any name is in the table.

This gives a total effort of O(n) where n is the length of the list. The constant factor is pretty high, but this is irrelevant for the asymptotic complexity.

Alternatively, you can do it the other way round: put all the names into the hash table, then in a second pass check whether any substring of a name is in the table. This uses much less memory and has the same time complexity.
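A minimal Swift sketch of this second variant (the function name and sample data are made up for illustration):

```swift
// Check whether any name in the list occurs as a proper substring of
// another name: hash the names once, then probe each name's substrings.
func nameContainedInAnother(_ names: [String]) -> String? {
    let table = Set(names)  // O(n) to build
    for name in names {
        let chars = Array(name)
        // Enumerate all substrings (at most 171 for an 18-character name).
        for start in 0..<chars.count {
            for end in (start + 1)...chars.count {
                let sub = String(chars[start..<end])
                // Skip the full string itself; report a proper-substring hit.
                if sub.count < name.count && table.contains(sub) {
                    return sub
                }
            }
        }
    }
    return nil
}
```

For example, `nameContainedInAnother(["anna", "ann", "bob"])` returns `"ann"`, because "ann" occurs inside "anna".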

What is the complexity of these Dictionary methods?

Taken as a whole, this routine is effectively O(m), where m is the number of strings in your search.

This is because Dictionary.Contains and Dictionary.Add are both (normally) O(1) operations.

(It's slightly more complicated than that, as Dictionary.Add can be O(n) for n items in the Dictionary, but only when the dictionary capacity is small. As such, if you construct your dictionary with a large enough capacity up front, it would be O(m) for m string entries.)

That being said, if you're only using the Dictionary for existence checking, you could use a HashSet<string>. This would allow you to write:

public void DistinctWords(String s)
{
    HashSet<string> hash = new HashSet<string>(s.Split(' '));

    // Use hash here...
}

As your dictionary is a local variable, and not stored (at least in your code), you could also use LINQ:

 var distinctWords = s.Split(' ').Distinct();

Swift Dictionary with Array of Strings remove Value

for key in dict.keys {
    dict[key] = dict[key]!.filter({ $0 != "dev4" })
}
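Since Swift 4 the same removal can be written without force-unwrapping, using mapValues; the sample data below is made up for illustration:

```swift
var dict: [String: [String]] = [
    "team1": ["dev1", "dev4"],
    "team2": ["dev2", "dev3", "dev4"],
]

// mapValues rewrites every value while leaving the keys untouched,
// so there is no need for dict[key]! inside a loop.
dict = dict.mapValues { $0.filter { $0 != "dev4" } }
// dict is now ["team1": ["dev1"], "team2": ["dev2", "dev3"]]
```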

Swift's map and filter functions time complexity

The higher-order functions in the Swift standard library, such as map, flatMap/compactMap, filter and reduce, generally have O(n) time complexity: they operate on the full collection they're called on and visit each element exactly once, so they are linear.

Taking this into consideration, your first implementation has O(J*S) time complexity, since for every element in J you iterate through all elements of S using filter.

Your second implementation, on the other hand, has linear time complexity: since there are no nested loops and you only iterate through J and S sequentially, it runs in O(J + S), which is O(J) or O(S) depending on which String has more Characters in it.

Whether Implementation 1 or 2 is more efficient depends entirely on the sizes of J and S.
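The question's two implementations aren't shown here, but the difference can be illustrated with a hypothetical jewels-and-stones style task (all names and data are made up): count the characters of S that also occur in J.

```swift
let j = "aA"       // hypothetical "jewels"
let s = "aAAbbbb"  // hypothetical "stones"

// Implementation 1 style: for every character of s, scan j — O(J*S).
let count1 = s.filter { j.contains($0) }.count

// Implementation 2 style: hash j once (O(J)), then each membership
// probe is O(1), giving O(J + S) overall.
let jewels = Set(j)
let count2 = s.filter { jewels.contains($0) }.count
// count1 == count2 == 3
```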

Is there something like a reversible dictionary?

No, there is no built-in data structure that does this.

But you could easily create your own data structure that consists of two dictionaries. When you add to this "ReversableDictionary" you just make sure to also add the inverse to the second dictionary.

Then, internally, you can perform the lookup on the appropriate one of the two dictionaries by implementing your own dict.valueFromKey() and dict.keyFromValue() methods.

Lookup should be O(1), at the expense of using twice as much storage.
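A minimal Swift sketch of such a two-dictionary wrapper (the type and method names follow the answer above; values must be Hashable and the key-value mapping one-to-one):

```swift
// Two dictionaries kept in sync: one per lookup direction.
struct ReversableDictionary<Key: Hashable, Value: Hashable> {
    private var forward: [Key: Value] = [:]
    private var backward: [Value: Key] = [:]

    // Adding an entry also records the inverse mapping.
    mutating func add(key: Key, value: Value) {
        forward[key] = value
        backward[value] = key
    }

    func valueFromKey(_ key: Key) -> Value? { forward[key] }      // O(1)
    func keyFromValue(_ value: Value) -> Key? { backward[value] } // O(1)
}

var d = ReversableDictionary<String, Int>()
d.add(key: "one", value: 1)
// d.valueFromKey("one") == 1, d.keyFromValue(1) == "one"
```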

What is time and space complexity of Dictionary?

The time complexity of adding a new entry is documented under Dictionary&lt;TKey,TValue&gt;.Add():

If Count is less than the capacity, this method approaches an O(1) operation. If the capacity must be increased to accommodate the new element, this method becomes an O(n) operation, where n is Count.
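Swift's Dictionary behaves analogously: inserts are amortized O(1), but growing the table forces a rehash of every entry. Reserving capacity up front avoids those O(n) growth steps; a small sketch:

```swift
// Pre-sizing the dictionary means no intermediate reallocation or
// rehashing while the 1,000 entries are inserted.
var counts = [String: Int](minimumCapacity: 1_000)
for i in 0..<1_000 {
    counts["key\(i)"] = i
}
// counts.count == 1_000
```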


