Filtering the Dataframe Based on the Column Value of Another Dataframe

pandas - filter dataframe by another dataframe by row elements

You can do this efficiently using isin on a MultiIndex constructed from the desired columns:

import pandas as pd

df1 = pd.DataFrame({'c': ['A', 'A', 'B', 'C', 'C'],
                    'k': [1, 2, 2, 2, 2],
                    'l': ['a', 'b', 'a', 'a', 'd']})
df2 = pd.DataFrame({'c': ['A', 'C'],
                    'l': ['b', 'a']})
keys = list(df2.columns.values)
i1 = df1.set_index(keys).index
i2 = df2.set_index(keys).index
df1[~i1.isin(i2)]

   c  k  l
0  A  1  a
2  B  2  a
4  C  2  d

I think this improves on @IanS's similar solution because it doesn't assume any column type (i.e. it will work with numbers as well as strings).
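For instance, here is a minimal sketch of the same index-based filter applied to purely numeric key columns (the data is hypothetical):

```python
import pandas as pd

# hypothetical data with numeric key columns
df1 = pd.DataFrame({'c': [1, 1, 2, 3, 3],
                    'l': [10, 20, 10, 10, 40],
                    'k': ['a', 'b', 'c', 'd', 'e']})
df2 = pd.DataFrame({'c': [1, 3],
                    'l': [20, 10]})

keys = list(df2.columns.values)
i1 = df1.set_index(keys).index
i2 = df2.set_index(keys).index

# drop the rows of df1 whose (c, l) pair appears in df2
result = df1[~i1.isin(i2)]
print(result)
```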


(The answer above is an edit; what follows was my initial answer.)

Interesting! This is something I haven't come across before... I would probably solve it by merging the two dataframes, then dropping the rows where df2 is defined. Here is an example, which makes use of a temporary column:

df1 = pd.DataFrame({'c': ['A', 'A', 'B', 'C', 'C'],
                    'k': [1, 2, 2, 2, 2],
                    'l': ['a', 'b', 'a', 'a', 'd']})
df2 = pd.DataFrame({'c': ['A', 'C'],
                    'l': ['b', 'a']})

# create a column marking df2 values
df2['marker'] = 1

# join the two, keeping all of df1's indices
joined = pd.merge(df1, df2, on=['c', 'l'], how='left')
joined

   c  k  l  marker
0  A  1  a     NaN
1  A  2  b     1.0
2  B  2  a     NaN
3  C  2  a     1.0
4  C  2  d     NaN

# extract desired columns where marker is NaN
joined[pd.isnull(joined['marker'])][df1.columns]

   c  k  l
0  A  1  a
2  B  2  a
4  C  2  d

There may be a way to do this without the temporary column, but I can't think of one. As long as your data isn't huge, the above method should be fast enough.
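If you'd rather not mutate df2, pandas' built-in merge indicator can play the role of the marker column (indicator=True adds a _merge column whose value is 'left_only' for rows absent from df2):

```python
import pandas as pd

df1 = pd.DataFrame({'c': ['A', 'A', 'B', 'C', 'C'],
                    'k': [1, 2, 2, 2, 2],
                    'l': ['a', 'b', 'a', 'a', 'd']})
df2 = pd.DataFrame({'c': ['A', 'C'],
                    'l': ['b', 'a']})

# indicator=True adds a '_merge' column; 'left_only' flags rows absent from df2
joined = df1.merge(df2, on=['c', 'l'], how='left', indicator=True)
result = joined.loc[joined['_merge'] == 'left_only', df1.columns]
print(result)
```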

Filtering the dataframe based on the column value of another dataframe

Update

You could use a simple mask:

m = df2.SKU.isin(df1.SKU)
df2 = df2[m]

You are looking for an inner join. Try this:

df3 = df1.merge(df2, on=['SKU','Sales'], how='inner')

#   SKU  Sales
# 0   A    100
# 1   B    200
# 2   C    300

Or this:

df3 = df1.merge(df2, on='SKU', how='inner')

#   SKU  Sales_x  Sales_y
# 0   A      100      100
# 1   B      200      200
# 2   C      300      300

Filtering dataframe based on another dataframe

You can use .isin() to filter to the list of tickers available in df2.

df1_filtered = df1[df1['ticker'].isin(df2['ticker'].tolist())]
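A self-contained sketch with hypothetical ticker data (note that isin also accepts a Series directly, so the tolist() call is optional):

```python
import pandas as pd

# hypothetical ticker data
df1 = pd.DataFrame({'ticker': ['AAPL', 'MSFT', 'GOOG', 'IBM'],
                    'price': [150.0, 300.0, 120.0, 140.0]})
df2 = pd.DataFrame({'ticker': ['AAPL', 'GOOG']})

# .isin works with a Series as well as a list
df1_filtered = df1[df1['ticker'].isin(df2['ticker'])]
print(df1_filtered)
```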

How do I filter out rows based on another data frame in Python?

A plug-and-play script for you. If this doesn't work on your actual data, check that the columns being compared share the same dtype.

import pandas as pd

df1 = pd.DataFrame(
    {"system": ["AIII", "CIII", "LV"], "Code": [423, 123, 142]}
)

df2 = pd.DataFrame(
    {"StatusMessage": [123], "Event": ["Gearbox warm up"]}
)

### This is what you need
df1 = df1[df1.Code.isin(df2.StatusMessage.unique())]

print(df1)

How to filter a dataframe based on values in another dataframe in R

You need to group by A, and then we can use a double inner_join.

Data:

df1 <- data.frame(A=c(1,5,1,5,1))
df2 <- data.frame(A=c(1,5,1,5,1), B=c(0.92,0.02,0.18,0.87,0.46))

Solution:

library(dplyr)

df1 %>%
  inner_join(df2 %>%
               filter(A == df1$A & B > 0.5) %>%
               group_by(A) %>%
               summarize(count = n())) %>%
  inner_join(df2 %>%
               filter(A == df1$A) %>%
               group_by(A) %>%
               summarize(A_count = n())) %>%
  mutate(C = count / A_count) %>%
  select(A, C) -> df1

Output:

 A         C
1 1 0.3333333
2 5 0.5000000
3 1 0.3333333
4 5 0.5000000
5 1 0.3333333

R: Filter a dataframe based on another dataframe

If you only want to keep the rows of e whose rownames occur in pf (or, to keep the rows that don't occur, negate the condition with !(rownames(e) %in% rownames(pf))), then you can just filter on the rownames:

library(tidyverse)

e %>%
  filter(rownames(e) %in% rownames(pf))

Another possibility is to create a rownames column for both dataframes. Then, we can join on that column (i.e., rn) and finally convert the rn column back to the rownames.

library(tidyverse)

list(e, pf) %>%
  map(~ .x %>%
        as.data.frame %>%
        rownames_to_column('rn')) %>%
  reduce(full_join, by = 'rn') %>%
  column_to_rownames('rn')

Output

        JHU_113_2.CEL JHU_144.CEL JHU_173.CEL JHU_176R.CEL JHU_182.CEL JHU_186.CEL JHU_187.CEL JHU_188.CEL JHU_203.CEL
2315374 6.28274 6.79161 6.11265 6.13997 6.68056 6.48156 6.45415 6.04542 5.99176
2315376 5.81678 5.71165 6.02794 5.37082 5.95527 5.75999 5.87863 5.54830 6.35571
2315587 8.88557 8.95699 8.36898 8.28993 8.41361 8.64980 8.74305 8.31915 8.43548
2315588 6.28650 6.66750 6.07503 6.76625 6.19819 6.84260 6.13916 6.40219 6.45059
2315591 6.97515 6.61705 6.51994 6.74982 6.60917 6.55182 6.62240 6.44394 5.76592
2315595 5.94179 5.39178 5.09497 4.96199 2.96431 4.95204 5.00979 4.06493 5.38048
2315598 4.99420 5.56888 5.57912 5.43960 5.19249 5.87991 5.60540 5.09513 5.43618
2315603 7.67845 7.90005 7.47594 6.75087 7.62805 8.00069 7.34296 6.81338 7.52014
2315604 6.20952 6.59687 6.14608 5.70518 6.49572 6.12622 6.23690 6.39569 6.70869
2315640 5.85307 6.07303 6.41875 6.07282 6.28283 6.13699 6.16377 6.48616 6.34162

How do you filter dataframe based off value of column in another dataframe and whether the string of a column in that dataframe is a substring?

Try:

# convert the columns to datetime (skip if they are converted already):
df1.minyear = pd.to_datetime(df1.minyear)
df2.date = pd.to_datetime(df2.date)

x = df1.merge(df2, how="cross")
x["tmp"] = x.apply(lambda r: r["id"] in r["id2"], axis=1)
x = x[x.tmp & x.date.ge(x.minyear)][["id2", "date"]]
print(x)

Prints:

   id2       date
1   xx 2019-01-02
2   xx 2020-01-01
12  yy 2019-01-03
13  yy 2020-01-04
23  zz 2020-01-05
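A self-contained sketch of the same cross-merge plus substring filter, with hypothetical data (the column names id, id2, minyear, and date follow the answer above):

```python
import pandas as pd

# hypothetical input data using the column names from the answer
df1 = pd.DataFrame({'id': ['x', 'y'],
                    'minyear': ['2019-01-01', '2019-01-03']})
df2 = pd.DataFrame({'id2': ['xx', 'yy', 'zy'],
                    'date': ['2019-06-01', '2020-01-04', '2018-01-01']})

df1['minyear'] = pd.to_datetime(df1['minyear'])
df2['date'] = pd.to_datetime(df2['date'])

# how="cross" pairs every row of df1 with every row of df2 (pandas >= 1.2)
x = df1.merge(df2, how='cross')
# keep pairs where id is a substring of id2 and the date is not before minyear
x['tmp'] = x.apply(lambda r: r['id'] in r['id2'], axis=1)
x = x[x.tmp & x.date.ge(x.minyear)][['id2', 'date']]
print(x)
```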

