How to Remove Duplicates from a List

Removing duplicates in lists

The common approach to get a unique collection of items is to use a set. Sets are unordered collections of distinct objects. To create a set from any iterable, you can simply pass it to the built-in set() function. If you later need a real list again, you can similarly pass the set to the list() function.

The following example should cover whatever you are trying to do:

>>> t = [1, 2, 3, 1, 2, 3, 5, 6, 7, 8]
>>> list(set(t))
[1, 2, 3, 5, 6, 7, 8]
>>> s = [1, 2, 3]
>>> list(set(t) - set(s))
[8, 5, 6, 7]

As you can see from the example result, the original order is not maintained. As mentioned above, sets themselves are unordered collections, so the order is lost. When converting a set back to a list, an arbitrary order is created.

Maintaining order

If order is important to you, then you will have to use a different mechanism. A very common solution for this is to rely on OrderedDict to keep the order of keys during insertion:

>>> from collections import OrderedDict
>>> list(OrderedDict.fromkeys(t))
[1, 2, 3, 5, 6, 7, 8]

Starting with Python 3.7, the built-in dictionary is guaranteed to maintain the insertion order as well, so you can also use that directly if you are on Python 3.7 or later (or CPython 3.6):

>>> list(dict.fromkeys(t))
[1, 2, 3, 5, 6, 7, 8]

Note that this approach has the overhead of creating a dictionary first and then creating a list from it. If you don't actually need to preserve the order, you're often better off using a set, especially because it gives you a lot more operations to work with, as illustrated below. There are also further alternatives for preserving order while removing duplicates beyond the two shown here.
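
For example, once both lists have been converted to sets, set algebra such as intersection and symmetric difference comes for free. A small illustration, reusing t and s from above (the element order in the printed sets is arbitrary):

>>> set(t) & set(s)  # elements present in both lists
{1, 2, 3}
>>> set(t) ^ set(s)  # elements present in exactly one of the lists
{8, 5, 6, 7}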


Finally, note that both the set and the OrderedDict/dict solutions require your items to be hashable. This usually means that they have to be immutable. If you have to deal with items that are not hashable (e.g. list objects), then you will have to fall back on a slow approach in which you basically compare every item with every other item in a nested loop, as sketched below.
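
A minimal sketch of that quadratic, order-preserving fallback (the helper name unique_unhashable is just for illustration):

# Quadratic fallback for unhashable items (e.g. lists):
# `item not in result` compares with == against every item kept so far.
def unique_unhashable(items):
    result = []
    for item in items:
        if item not in result:
            result.append(item)
    return result

print(unique_unhashable([[1, 2], [3], [1, 2], [3, 4]]))
# [[1, 2], [3], [3, 4]]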

Remove duplicates from a list and remove the elements at the same index in another list

Just loop over the two lists in parallel using the zip() function.

Script

# Input Vars
a = ['fish', 'Robert', 'dog', 'ball', 'cat', 'dog', 'Robert']
b = ['animal', 'person', 'animal', 'object', 'animal', 'animal', 'person']

# Output Vars
c = []
d = []

# Process
for e, f in zip(a, b):
    if e not in c:
        c.append(e)
        d.append(f)

# Print Output
print(c)
print(d)

Output

['fish', 'Robert', 'dog', 'ball', 'cat']
['animal', 'person', 'animal', 'object', 'animal']

Python: Remove duplicate lists in list of lists

Try this:

list_ = [["A"], ["B"], ["A","B"], ["B","A"], ["A","B","C"], ["B", "A", "C"]]
l = list(map(list, set(map(tuple, map(set, list_)))))

Output:

[['A', 'B'], ['B'], ['A', 'B', 'C'], ['A']]

The process works like this:

  1. First convert each sub-list into a set. Thus ['A', 'B'] and ['B', 'A'] both become {'A', 'B'}.
  2. Then convert each set into a tuple, because sets are not hashable, so duplicates cannot be removed while the items are still sets.
  3. Apply set() to the tuples to keep only the unique ones.
  4. Finally, convert each tuple in the result back into a list.

This is equivalent to:

list_ = [['A'], ['B'], ['A', 'B'], ['B', 'A'], ['A', 'B', 'C'], ['B', 'A', 'C']]
l0 = [set(i) for i in list_]
# l0 = [{'A'}, {'B'}, {'A', 'B'}, {'A', 'B'}, {'A', 'B', 'C'}, {'A', 'B', 'C'}]
l1 = [tuple(i) for i in l0]
# l1 = [('A',), ('B',), ('A', 'B'), ('A', 'B'), ('A', 'B', 'C'), ('A', 'B', 'C')]
l2 = set(l1)
# l2 = {('A', 'B'), ('A',), ('B',), ('A', 'B', 'C')}
l = [list(i) for i in l2]
# l = [['A', 'B'], ['A'], ['B'], ['A', 'B', 'C']]

Remove duplicates from a large list while keeping the named number in R

Try this:

df <- readRDS('MEPList.rds')
df1 <- as.data.frame(do.call(rbind,df))
df2 <- df1[!duplicated(df1$V1),,drop=F]

Output:

head(df2)
                    V1
GUE.NGL.mepid   197701
GUE.NGL.mepid.1 197533
GUE.NGL.mepid.2 197521
GUE.NGL.mepid.3 187917
GUE.NGL.mepid.4 124986
GUE.NGL.mepid.5 197529

Then you could format the rownames() to get the names.

Removing duplicates from a list of sets

The best way is to convert your sets to frozensets (which are hashable) and then use set to keep only the unique ones, like this (where L is your original list of sets):

>>> list(set(frozenset(item) for item in L))
[frozenset({2, 4}),
frozenset({3, 6}),
frozenset({1, 2}),
frozenset({5, 6}),
frozenset({1, 4}),
frozenset({3, 5})]

If you want them back as sets, you can convert them like this:

>>> [set(item) for item in set(frozenset(item) for item in L)]
[{2, 4}, {3, 6}, {1, 2}, {5, 6}, {1, 4}, {3, 5}]

If you also want the order to be maintained while removing the duplicates, you can use collections.OrderedDict, like this:

>>> from collections import OrderedDict
>>> [set(i) for i in OrderedDict.fromkeys(frozenset(item) for item in L)]
[{1, 4}, {1, 2}, {2, 4}, {5, 6}, {3, 6}, {3, 5}]
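
As noted in the first section, on Python 3.7+ a plain dict preserves insertion order as well, so the same idea should work without the import (a sketch, which should give the same result as the OrderedDict version):

>>> [set(i) for i in dict.fromkeys(frozenset(item) for item in L)]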

Remove duplicates from a List<T> in C#

Perhaps you should consider using a HashSet.

From the MSDN documentation:

using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        HashSet<int> evenNumbers = new HashSet<int>();
        HashSet<int> oddNumbers = new HashSet<int>();

        for (int i = 0; i < 5; i++)
        {
            // Populate numbers with just even numbers.
            evenNumbers.Add(i * 2);

            // Populate oddNumbers with just odd numbers.
            oddNumbers.Add((i * 2) + 1);
        }

        Console.Write("evenNumbers contains {0} elements: ", evenNumbers.Count);
        DisplaySet(evenNumbers);

        Console.Write("oddNumbers contains {0} elements: ", oddNumbers.Count);
        DisplaySet(oddNumbers);

        // Create a new HashSet populated with even numbers.
        HashSet<int> numbers = new HashSet<int>(evenNumbers);
        Console.WriteLine("numbers UnionWith oddNumbers...");
        numbers.UnionWith(oddNumbers);

        Console.Write("numbers contains {0} elements: ", numbers.Count);
        DisplaySet(numbers);
    }

    private static void DisplaySet(HashSet<int> set)
    {
        Console.Write("{");
        foreach (int i in set)
        {
            Console.Write(" {0}", i);
        }
        Console.WriteLine(" }");
    }
}

/* This example produces output similar to the following:
 * evenNumbers contains 5 elements: { 0 2 4 6 8 }
 * oddNumbers contains 5 elements: { 1 3 5 7 9 }
 * numbers UnionWith oddNumbers...
 * numbers contains 10 elements: { 0 2 4 6 8 1 3 5 7 9 }
 */

Remove duplicate dict in list in Python

Try this:

[dict(t) for t in {tuple(d.items()) for d in l}]

The strategy is to convert the list of dictionaries to a list of tuples, where each tuple contains the items of a dictionary. Since tuples can be hashed, you can remove duplicates using a set (a set comprehension here; an older Python alternative would be set(tuple(d.items()) for d in l)) and, after that, re-create the dictionaries from the tuples with dict.

where:

  • l is the original list
  • d is one of the dictionaries in the list
  • t is one of the tuples created from a dictionary

Edit: If you want to preserve ordering, the one-liner above won't work, since a set does not keep order. However, it only takes a few extra lines of code:

l = [{'a': 123, 'b': 1234},
     {'a': 3222, 'b': 1234},
     {'a': 123, 'b': 1234}]

seen = set()
new_l = []
for d in l:
    t = tuple(d.items())
    if t not in seen:
        seen.add(t)
        new_l.append(d)

print(new_l)

Example output:

[{'a': 123, 'b': 1234}, {'a': 3222, 'b': 1234}]

Note: As pointed out by @alexis, it can happen that two dictionaries with the same keys and values don't result in the same tuple. That can happen if they went through a different history of adding and removing keys. If that's the case for your problem, then consider sorting d.items(), as he suggests (a sketch follows).
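
A minimal sketch of that variant, assuming every dictionary's keys are mutually comparable (e.g. all strings), is to sort the items before building the tuple:

[dict(t) for t in {tuple(sorted(d.items())) for d in l}]

As with the original one-liner, the order of the resulting list is arbitrary.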


