Unique on a dataframe with only selected columns
Ok, if it doesn't matter which value in the non-duplicated column you select, this should be pretty easy:
dat <- data.frame(id=c(1,1,3),id2=c(1,1,4),somevalue=c("x","y","z"))
> dat[!duplicated(dat[,c('id','id2')]),]
  id id2 somevalue
1  1   1         x
3  3   4         z
Inside the duplicated call, I'm simply passing only those columns of dat that I don't want duplicated. This code automatically keeps the first of any ambiguous rows. (In this case, the row with x.)
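For readers working in pandas, the same keep-the-first behaviour can be sketched with DataFrame.duplicated; this is a pandas equivalent of the R answer above, not part of it:

```python
import pandas as pd

# Frame mirroring the R example above
dat = pd.DataFrame({'id': [1, 1, 3],
                    'id2': [1, 1, 4],
                    'somevalue': ['x', 'y', 'z']})

# Keep the first row for each (id, id2) combination,
# analogous to dat[!duplicated(dat[, c('id', 'id2')]), ] in R
out = dat[~dat.duplicated(subset=['id', 'id2'])]
print(out)
```

As in R, the first occurrence of each duplicated combination wins, so the row with 'y' is dropped.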
df.unique() on whole DataFrame based on a column
It seems you need DataFrame.drop_duplicates with the parameter subset, which specifies which columns to check for duplicates:
#keep first duplicate value
df = df.drop_duplicates(subset=['Id'])
print (df)
Id Type
Index
0 a1 A
1 a2 A
2 b1 B
3 b3 B
#keep last duplicate value
df = df.drop_duplicates(subset=['Id'], keep='last')
print (df)
Id Type
Index
1 a2 A
2 b1 B
3 b3 B
4 a1 A
#remove all duplicate values
df = df.drop_duplicates(subset=['Id'], keep=False)
print (df)
Id Type
Index
1 a2 A
2 b1 B
3 b3 B
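The three outputs above assume an input frame that the answer never shows; the following is a minimal reconstruction that reproduces them:

```python
import pandas as pd

# Reconstructed input; the original answer does not show it,
# but this frame reproduces the three printed results above.
df = pd.DataFrame({'Id': ['a1', 'a2', 'b1', 'b3', 'a1'],
                   'Type': ['A', 'A', 'B', 'B', 'A']})

first = df.drop_duplicates(subset=['Id'])              # keeps row 0's a1
last = df.drop_duplicates(subset=['Id'], keep='last')  # keeps row 4's a1
none = df.drop_duplicates(subset=['Id'], keep=False)   # drops a1 entirely
print(none)
```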
Select only columns that have at most N unique values
nunique can be called on the entire DataFrame (note it is a method, so it must be called). You can then filter out columns using loc:
df.loc[:, df.nunique() < 32]
Minimal Verifiable Example
df = pd.DataFrame({'A': list('abbcde'), 'B': list('ababab')})
df
A B
0 a a
1 b b
2 b a
3 c b
4 d a
5 e b
df.nunique()
A 5
B 2
dtype: int64
df.loc[:, df.nunique() < 3]
B
0 a
1 b
2 a
3 b
4 a
5 b
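Since the question asks for "at most N" unique values, the comparison should arguably be <= rather than <; a small sketch using the same frame:

```python
import pandas as pd

df = pd.DataFrame({'A': list('abbcde'), 'B': list('ababab')})

# "at most N unique values" means nunique() <= N; with N = 2,
# only column B (2 unique values) survives
kept = df.loc[:, df.nunique() <= 2]
print(kept.columns.tolist())  # ['B']
```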
unique combinations of values in selected columns in pandas data frame and count
You can groupby on columns 'A' and 'B', call size, and then reset_index and rename the generated column:
In [26]:
df1.groupby(['A','B']).size().reset_index().rename(columns={0:'count'})
Out[26]:
A B count
0 no no 1
1 no yes 2
2 yes no 4
3 yes yes 3
update
A little explanation: grouping on the two columns collects rows whose A and B values are the same; calling size then returns the number of rows in each group:
In[202]:
df1.groupby(['A','B']).size()
Out[202]:
A B
no no 1
yes 2
yes no 4
yes 3
dtype: int64
So now, to restore the grouped columns, we call reset_index:
In[203]:
df1.groupby(['A','B']).size().reset_index()
Out[203]:
A B 0
0 no no 1
1 no yes 2
2 yes no 4
3 yes yes 3
This restores the indices, but the size aggregation is turned into a generated column named 0, so we have to rename it:
In[204]:
df1.groupby(['A','B']).size().reset_index().rename(columns={0:'count'})
Out[204]:
A B count
0 no no 1
1 no yes 2
2 yes no 4
3 yes yes 3
groupby does accept the argument as_index, which we could set to False so that the grouped columns don't become the index; but with size this still produces a Series, and you'd still have to restore the index and rename the column:
In[205]:
df1.groupby(['A','B'], as_index=False).size()
Out[205]:
A B
no no 1
yes 2
yes no 4
yes 3
dtype: int64
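A shorter variant that avoids the separate rename step: Series.reset_index accepts a name for the values column. A minimal sketch, with input data reconstructed to match the counts shown above:

```python
import pandas as pd

# Reconstructed data giving the same counts as in the answer
df1 = pd.DataFrame({
    'A': ['no', 'no', 'no', 'yes', 'yes', 'yes', 'yes', 'yes', 'yes', 'yes'],
    'B': ['no', 'yes', 'yes', 'no', 'no', 'no', 'no', 'yes', 'yes', 'yes'],
})

# name= labels the size column directly, so no rename is needed
counts = df1.groupby(['A', 'B']).size().reset_index(name='count')
print(counts)
```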
Create a dataframe with all observations unique for one specific column of a dataframe in R
base R
dat[!duplicated(dat$a),]
# a b c d e
# 1 1 2 3 4 5
# 3 3 4 5 6 8
# 4 4 5 2 3 6
dplyr
dplyr::distinct(dat, a, .keep_all = TRUE)
# a b c d e
# 1 1 2 3 4 5
# 2 3 4 5 6 8
# 3 4 5 2 3 6
Another option: per-group, pick a particular value from the duplicated rows.
library(dplyr)
dat %>%
  group_by(a) %>%
  slice(which.max(e)) %>%
  ungroup()
# # A tibble: 3 x 5
# a b c d e
# <int> <int> <int> <int> <int>
# 1 1 2 3 4 6
# 2 3 4 5 6 8
# 3 4 5 2 3 6
library(data.table)
as.data.table(dat)[, .SD[which.max(e),], by = .(a) ]
# a b c d e
# <int> <int> <int> <int> <int>
# 1: 1 2 3 4 6
# 2: 3 4 5 6 8
# 3: 4 5 2 3 6
As for unique, it does have an incomparables argument, but that argument is not yet implemented for data frames:
unique(dat, incomparables = c("b", "c", "d", "e"))
# Error: argument 'incomparables != FALSE' is not used (yet)
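For comparison, the per-group "pick the row with the largest e" idea translates to pandas via groupby plus idxmax; this is a pandas aside, not part of the R answer, and the frame below is reconstructed from the R outputs above:

```python
import pandas as pd

# Reconstructed from the R outputs; the original dat is not shown
dat = pd.DataFrame({'a': [1, 1, 3, 4],
                    'b': [2, 2, 4, 5],
                    'c': [3, 3, 5, 2],
                    'd': [4, 4, 6, 3],
                    'e': [5, 6, 8, 6]})

# Row with the largest e within each group of a,
# analogous to dplyr's slice(which.max(e))
out = dat.loc[dat.groupby('a')['e'].idxmax()]
print(out)
```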
How to select distinct across multiple data frame columns in pandas?
You can use the drop_duplicates method to get the unique rows in a DataFrame:
In [29]: df = pd.DataFrame({'a':[1,2,1,2], 'b':[3,4,3,5]})
In [30]: df
Out[30]:
a b
0 1 3
1 2 4
2 1 3
3 2 5
In [32]: df.drop_duplicates()
Out[32]:
a b
0 1 3
1 2 4
3 2 5
You can also provide the subset keyword argument if you only want to use certain columns to determine uniqueness. See the docstring.
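For instance, restricting uniqueness to column 'a' in the frame above:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 1, 2], 'b': [3, 4, 3, 5]})

# Uniqueness judged on column 'a' only; the first occurrence of
# each value of 'a' is kept, so row 3 (a=2, b=5) is dropped
out = df.drop_duplicates(subset=['a'])
print(out)
```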
Selecting multiple columns in a Pandas dataframe
The column names (which are strings) cannot be sliced in the manner you tried.
Here you have a couple of options. If you know from context which variables you want to slice out, you can return a new DataFrame of only those columns by passing a list into the __getitem__ syntax (the []'s).
df1 = df[['a', 'b']]
Alternatively, if it matters to index them numerically and not by their name (say your code should automatically do this without knowing the names of the first two columns) then you can do this instead:
df1 = df.iloc[:, 0:2] # Remember that Python does not slice inclusive of the ending index.
Additionally, you should familiarize yourself with the idea of a view into a Pandas object vs. a copy of that object. The first of the above methods will return a new copy in memory of the desired sub-object (the desired slices).
Sometimes, however, there are indexing conventions in Pandas that don't do this and instead give you a new variable that just refers to the same chunk of memory as the sub-object or slice in the original object. This will happen with the second way of indexing, so you can modify it with the .copy()
method to get a regular copy. When this happens, changing what you think is the sliced object can sometimes alter the original object. Always good to be on the look out for this.
df1 = df.iloc[:, 0:2].copy() # To avoid the case where changing df1 also changes df
To use iloc, you need to know the column positions (indices). Since column positions may change, instead of hard-coding indices you can use iloc together with the get_loc function of the dataframe's columns attribute to obtain them:
{df.columns.get_loc(c): c for c in df.columns}
Now you can use this dictionary to map between column names and positions when indexing with iloc.
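A small illustration with a hypothetical frame; note this sketch builds a name-to-position map, the inverse of the dictionary above, since positions are what iloc needs:

```python
import pandas as pd

# Hypothetical frame for illustration
df = pd.DataFrame({'a': [1, 2], 'b': [3, 4], 'c': [5, 6]})

# Map column names to their integer positions
positions = {c: df.columns.get_loc(c) for c in df.columns}
print(positions)  # {'a': 0, 'b': 1, 'c': 2}

# Use the positions with iloc instead of hard-coding 0 and 2
sub = df.iloc[:, [positions['a'], positions['c']]]
print(sub.columns.tolist())  # ['a', 'c']
```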
Subset with unique cases, based on multiple columns
You can use the duplicated() function to find the unique combinations:
> df[!duplicated(df[1:3]),]
v1 v2 v3 v4 v5
1 7 1 A 100 98
2 7 2 A 98 97
3 8 1 C NA 80
6 9 3 C 75 75
To get only the duplicates, you can check it in both directions:
> df[duplicated(df[1:3]) | duplicated(df[1:3], fromLast=TRUE),]
v1 v2 v3 v4 v5
3 8 1 C NA 80
4 8 1 C 78 75
5 8 1 C 50 62
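The same "check both directions" trick in pandas is a single duplicated call with keep=False; this is a pandas aside, not part of the R answer, with the frame reconstructed from the output above:

```python
import pandas as pd

# Reconstructed from the R output above
df = pd.DataFrame({'v1': [7, 7, 8, 8, 8, 9],
                   'v2': [1, 2, 1, 1, 1, 3],
                   'v3': ['A', 'A', 'C', 'C', 'C', 'C'],
                   'v4': [100, 98, None, 78, 50, 75],
                   'v5': [98, 97, 80, 75, 62, 75]})

# keep=False marks every member of a duplicated (v1, v2, v3) group,
# so no single fromLast-style second pass is needed
dupes = df[df.duplicated(subset=['v1', 'v2', 'v3'], keep=False)]
print(dupes)
```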
Extracting specific selected columns to new DataFrame as a copy
There is a way of doing this, and it actually looks similar to R:
new = old[['A', 'C', 'D']].copy()
Here you are just selecting the columns you want from the original data frame and creating a variable for those. If you want to modify the new dataframe at all, you'll probably want to use .copy() to avoid a SettingWithCopyWarning.
An alternative method is to use filter, which will create a copy by default:
new = old.filter(['A','B','D'], axis=1)
Finally, depending on the number of columns in your original dataframe, it might be more succinct to express this using drop (this will also create a copy by default):
new = old.drop('B', axis=1)
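A quick check that all three routes produce independent copies, using a hypothetical old frame:

```python
import pandas as pd

# Hypothetical source frame
old = pd.DataFrame({'A': [1], 'B': [2], 'C': [3], 'D': [4]})

new1 = old[['A', 'C', 'D']].copy()
new2 = old.filter(['A', 'C', 'D'], axis=1)   # returns a copy by default
new3 = old.drop('B', axis=1)                 # also returns a copy by default

# Mutating a copy leaves the original untouched
new1.loc[0, 'A'] = 99
print(old.loc[0, 'A'])  # 1
```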