Removing Columns That Are All 0

Remove columns with zero values from a dataframe

You almost have it. Put those two together:

 SelectVar[, colSums(SelectVar != 0) > 0]

This works because the factor columns are evaluated as numerics that are >= 1.

How do I delete a column that contains only zeros in Pandas?

df.loc[:, (df != 0).any(axis=0)]

Here is a break-down of how it works:

In [74]: import pandas as pd

In [75]: df = pd.DataFrame([[1,0,0,0], [0,0,1,0]])

In [76]: df
Out[76]:
   0  1  2  3
0  1  0  0  0
1  0  0  1  0

[2 rows x 4 columns]

df != 0 creates a boolean DataFrame which is True where df is nonzero:

In [77]: df != 0
Out[77]:
       0      1      2      3
0   True  False  False  False
1  False  False   True  False

[2 rows x 4 columns]

(df != 0).any(axis=0) returns a boolean Series indicating which columns have nonzero entries. (The any operation aggregates values along the 0-axis -- i.e. along the rows -- into a single boolean value. Hence the result is one boolean value for each column.)

In [78]: (df != 0).any(axis=0)
Out[78]:
0     True
1    False
2     True
3    False
dtype: bool

And df.loc can be used to select those columns:

In [79]: df.loc[:, (df != 0).any(axis=0)]
Out[79]:
   0  2
0  1  0
1  0  1

[2 rows x 2 columns]

To "delete" the zero-columns, reassign df:

df = df.loc[:, (df != 0).any(axis=0)]

How to delete R data.frame columns with only zero values?

One option using dplyr could be:

df %>%
  select(where(~ any(. != 0)))

For the example data, this keeps the three columns that contain at least one nonzero value (row names shown first):

1 0 2 2
2 2 3 5
3 5 0 1
4 7 0 2
5 2 1 3
6 3 0 4
7 0 4 5
8 3 0 6

Remove all columns or rows with only zeros out of a data frame

Using colSums():

df[, colSums(abs(df)) > 0]

i.e. a column has only zeros if and only if the sum of the absolute values is zero.
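The same identity carries over to pandas; a minimal sketch for comparison (the example frame is illustrative):

import pandas as pd

df = pd.DataFrame({"a": [0, 0, 0], "b": [1, -1, 0], "c": [-2, 3, 1]})
# abs() matters: column "b" sums to zero even though it is not all zeros
df = df.loc[:, df.abs().sum() > 0]  # keeps "b" and "c", drops "a"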

Remove Column if all values are either NA or 0 in r

There are multiple ways to do this.

df[colSums(is.na(df) | df == 0) != nrow(df)]

#  rv X1  X2 X4
#1  M  0 110  1
#2  J 70 200  3
#3  J NA 115  4
#4  M 65 110  9
#5  J 70 200  3
#6  J 64 115  8

Using apply:

df[!apply(is.na(df) | df == 0, 2, all)]

Or using dplyr:

library(dplyr)
df %>% select_if(~!all(is.na(.) | . == 0))
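For comparison, the same NA-or-zero test can be sketched in pandas (the frame and column names here are illustrative):

import numpy as np
import pandas as pd

df = pd.DataFrame({"rv": ["M", "J"], "X1": [0, 70], "X3": [np.nan, 0]})
# Drop columns where every entry is either NaN or 0
df = df.loc[:, ~(df.isna() | (df == 0)).all()]  # drops "X3", keeps "rv" and "X1"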

Fast removal of only zero columns in pandas dataframe

There is a much faster way to implement this using Numba.

Indeed, most Numpy-based implementations create huge temporary arrays that are slow to fill and read. Moreover, Numpy iterates over the full dataframe while this is often not needed (at least in your example). The point is that you can know very quickly whether a column must be kept by iteratively checking its values and stopping the scan of the current column as soon as a nonzero value is found (typically near the beginning). Moreover, when all the columns are kept, there is no need to copy the entire dataframe (about 1.9 GiB of memory in your example). Finally, you can perform the computation in parallel.

However, there are performance-critical low-level catches. First, Numba cannot deal with Pandas dataframes directly, but the conversion to a Numpy array is almost free using df.values (the same applies to the creation of the new dataframe at the end). Moreover, depending on the memory layout of the array, it can be better to iterate over either the rows or the columns in the innermost loop.

Memory layout

This layout can be fetched by checking the strides of the Numpy array underlying the input dataframe.

Note that the example uses a row-major dataframe due to the (unusual) Numpy-based random initialization, but most dataframes tend to be column-major.
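For instance, a minimal sketch of such a check (the frame here is illustrative):

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(0, 100, size=(1000, 100)))
arr = df.values               # almost-free conversion to a Numpy array
s0, s1 = arr.strides          # byte steps between consecutive rows / columns
print(arr.strides)            # e.g. (800, 8) for a row-major int64 array
print("column-major" if s0 < s1 else "row-major")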

Here is an optimized implementation:

import numba as nb
import numpy as np
import pandas as pd

@nb.njit('int_[:,:](int_[:,:])', parallel=True)
def filterNullColumns(dfValues):
    n, m = dfValues.shape
    s0, s1 = dfValues.strides
    columnMajor = s0 < s1
    toKeep = np.full(m, False, dtype=np.bool_)

    # Find the columns to keep: stop scanning a column at the first nonzero value
    # (only optimized for column-major dataframes; quite complex otherwise)
    for colId in nb.prange(m):
        for rowId in range(n):
            if dfValues[rowId, colId] != 0:
                toKeep[colId] = True
                break

    # Optimization: no columns are discarded, so no copy is needed
    if np.all(toKeep):
        return dfValues

    # Destination index of each kept column (exclusive prefix sum of toKeep)
    destCol = np.empty(m, dtype=np.int64)
    newColCount = 0
    for colId in range(m):
        destCol[colId] = newColCount
        if toKeep[colId]:
            newColCount += 1

    # Create the filtered array
    res = np.empty((n, newColCount), dtype=dfValues.dtype)
    if columnMajor:
        # Parallelize over columns: each kept column is copied independently
        for colId in nb.prange(m):
            if toKeep[colId]:
                for rowId in range(n):
                    res[rowId, destCol[colId]] = dfValues[rowId, colId]
    else:
        # Parallelize over rows: better locality for row-major input
        for rowId in nb.prange(n):
            for colId in range(m):
                if toKeep[colId]:
                    res[rowId, destCol[colId]] = dfValues[rowId, colId]
    return res

result = pd.DataFrame(filterNullColumns(df.values))

Here are the results on my 6-core machine:

Reference:           1094 ms
Valdi_Bo's answer:   1262 ms
This implementation: 0.056 ms (300 ms with discarded columns)

Thus, the implementation is about 20,000 times faster than the reference implementation on the provided example (no discarded columns) and 4.2 times faster in more pathological cases (only one column discarded).

If you want to reach even faster performance, you can perform the computation in-place (dangerous, especially due to Pandas) or use smaller datatypes (like np.uint8 or np.int16), since the computation is mainly memory-bound.
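For instance, a minimal sketch of the smaller-datatype idea (the helper name and test data are illustrative, and the explicit signature is dropped so Numba specializes lazily for each input dtype):

import numba as nb
import numpy as np
import pandas as pd

@nb.njit(parallel=True)  # no explicit signature: Numba specializes per dtype
def anyNonZeroPerColumn(values):
    n, m = values.shape
    keep = np.zeros(m, dtype=np.bool_)
    for colId in nb.prange(m):
        for rowId in range(n):
            if values[rowId, colId] != 0:
                keep[colId] = True
                break
    return keep

# int16 data roughly halves the memory traffic of int32
df = pd.DataFrame(np.random.randint(0, 2, size=(1000, 50), dtype=np.int16))
mask = anyNonZeroPerColumn(df.values)
result = df.loc[:, mask]  # boolean selection also preserves column labels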

Best way to remove all columns and rows with zero sum from a pandas dataframe

Here we go:

import numpy as np
import pandas as pd

ad = np.array([[1, 0, 1, 0, 1],
               [0, 0, 0, 0, 0],
               [1, 1, 1, 0, 1],
               [0, 1, 1, 0, 1],
               [1, 1, 0, 0, 0],
               [0, 0, 0, 0, 0],
               [0, 0, 1, 0, 1]])

df = pd.DataFrame(ad)

# Drop rows whose sum is zero, then columns whose sum is zero
df.drop(df.loc[df.sum(axis=1) == 0].index, inplace=True)
df.drop(columns=df.columns[df.sum() == 0], inplace=True)

The code above drops a row or column when its sum is zero. This is achieved by calculating the sum along axis 1 for rows and axis 0 for columns, and then dropping the rows/columns whose sum is 0 with df.drop(...).
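As an aside, the same selection can be written without inplace drops; a sketch (for non-negative data like this, the one-liner matches the two-step version, because an all-zero row contributes nothing to any column sum):

import numpy as np
import pandas as pd

df = pd.DataFrame(np.array([[1, 0, 1, 0, 1],
                            [0, 0, 0, 0, 0],
                            [1, 1, 1, 0, 1],
                            [0, 1, 1, 0, 1],
                            [1, 1, 0, 0, 0],
                            [0, 0, 0, 0, 0],
                            [0, 0, 1, 0, 1]]))

# Select rows and columns with a nonzero sum in a single .loc call
df = df.loc[df.sum(axis=1) != 0, df.sum(axis=0) != 0]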


