Efficient row-wise operations on a data.table

Or, just use pmin, which is vectorized and computes the row-wise minimum across its arguments:

# my.df is the question's data.frame with numeric columns A, B, C
my.dt <- as.data.table(my.df)
system.time(my.dt[, row.min := pmin(A, B, C)])
#   user  system elapsed
#   0.02    0.00    0.01
head(my.dt)
#      A B C row.min
# [1,] 2 5 2       2
# [2,] 4 5 5       4
# [3,] 1 5 3       1
# [4,] 4 4 3       3
# [5,] 3 5 5       3
# [6,] 1 5 3       1
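
If there are many columns, or the column set is only known at run time, the same idea extends with do.call over .SD. A minimal sketch, assuming numeric columns A, B, C as above:

library(data.table)
my.dt <- data.table(A = c(2, 4, 1), B = c(5, 5, 5), C = c(2, 5, 3))

# pmin accepts any number of vectors, so passing the columns of .SD through
# do.call gives the row-wise minimum without any by= looping
my.dt[, row.min := do.call(pmin, .SD), .SDcols = c("A", "B", "C")]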

data.table: Perform efficient row-wise operation on large data.table with columns as input

This is difficult. strsplit will not be very memory efficient on this 100-million-row dataset: each row requires two lists to be made by strsplit. My suggestion is to wrap the work in a function and skip the by = 1:.N step.

exposed = function(before, after) {
  out = vector(length = length(before))
  for (i in seq_along(before)) {
    bef = before[i]
    aft = after[i]
    if (bef == "NONE" || aft == "NONE")
      out[i] = FALSE
    else
      # with fixed = TRUE the split pattern is the literal "+", not a regex
      out[i] = any(!unlist(strsplit(aft, "+", fixed = TRUE), use.names = FALSE) %chin%
                    unlist(strsplit(bef, "+", fixed = TRUE), use.names = FALSE))
  }
  return(out)
}

DT[, TI3 := exposed(exposure.before.index, exposure)]

> DT[, .(exposure.before.index, exposure, TI, TI3)]
   exposure.before.index          exposure    TI   TI3
1:                 drugA             drugA FALSE FALSE
2:                 drugA       drugA+drugB  TRUE  TRUE
3:                 drugA drugA+drugB+drugC  TRUE  TRUE
4:                 drugB             drugB FALSE FALSE
5:                 drugB             drugC  TRUE  TRUE
6:                  NONE              NONE FALSE FALSE
7:                  NONE              NONE FALSE FALSE

Note there are a few optimizations here:

  1. Using %chin% instead of %in%: it is a data.table utility function that is faster than %in% on character vectors (see the sketch after this list).
  2. Using strsplit(..., fixed = TRUE): the split pattern is then the literal string "+" rather than a regular expression, so the regex engine is skipped entirely. This is likely the biggest performance boost.
  3. Using unlist(..., use.names = FALSE), which skips constructing names that would be discarded anyway.

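As a quick illustration of the first point, a minimal benchmark sketch (timings will vary by machine):

library(data.table)
library(microbenchmark)

x <- sample(LETTERS, 1e6, replace = TRUE)
tbl <- c("A", "B", "C")

# %chin% is data.table's character-optimized equivalent of %in%
microbenchmark(
  base_in = x %in% tbl,
  chin    = x %chin% tbl
)
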
The next step would be to turn the function into Rcpp, which is not done here. Strings are more complicated than numbers in Rcpp (at least for me).

Here's the performance of this function. For the 7-row example, it is about 5x faster on the median. As the number of rows grows the gap narrows, but the function stays roughly 3x faster:

## 7 rows
Unit: microseconds
   expr      min       lq     mean   median       uq       max
 use_fx  375.801  395.251  662.582  409.751  431.351 21345.701
     OP 1889.901 2021.601 2211.858 2096.101 2285.201  4042.801

## 700,000 rows
Unit: seconds
   expr       min        lq      mean    median        uq       max
 use_fx  4.409595  4.409595  4.409595  4.409595  4.409595  4.409595
     OP 12.592520 12.592520 12.592520 12.592520 12.592520 12.592520

## 7,000,000 rows
Unit: seconds
   expr       min        lq      mean    median        uq        max
 use_fx  43.90979  43.90979  43.90979  43.90979  43.90979   43.90979
     OP 130.16418 130.16418 130.16418 130.16418 130.16418 130.16418

## code used:
DT_big = DT[rep(seq_len(.N), 1e5)]
microbenchmark(
  use_fx = DT_big[, TI3 := exposed(exposure.before.index, exposure)],
  OP = {
    DT_big[, CNT := 1:.N]
    DT_big[!(exposure.before.index != "NONE" & exposure == "NONE"),
           TI := any(!unlist(strsplit(exposure, "[+]")) %in%
                      unlist(strsplit(exposure.before.index, "[+]"))),
           by = "CNT"]
    DT_big[is.na(TI), TI := FALSE]
  },
  times = 1L
)

If you are interested in Rcpp, this may be helpful:

https://wckdouglas.github.io/2015/05/string-manipulation

Preferred performant procedure for R data.table row-wise operations?

I think you can use matrix multiplication and other vectorization techniques to simplify your code, which lets you avoid running the function logpost row by row.

Below is a vectorized version of logpost, i.e. logpost2:

logpost2 <- function(d, dd, mub = 1, taub = 10, a = 0.5, z = 0.7) {
  bmat <- as.matrix(dd[, .(b1, b2, b3)])
  xmat <- cbind(1, as.matrix(d[, .(x1, x2)]))
  phi <- dd$phi
  phi_log <- log(phi)
  lp <- -(a + nrow(d) + 1) * phi_log -
    (1 / (2 * phi^2)) * colSums((d$y - tcrossprod(xmat, bmat))^2) -
    (1 / (2 * taub^2)) * rowSums((bmat - mub)^2) - (z / phi)
  lp
}

and you will see

> start <- Sys.time()

> grid[, lp := logpost2(d, .SD)]

> difftime(Sys.time(), start)
Time difference of 0.1966231 secs

and

> head(grid)
     b1 b2   b3 phi id        lp
1: 0.00  1 -1.5 0.4  1 -398.7618
2: 0.05  1 -1.5 0.4  2 -380.3674
3: 0.10  1 -1.5 0.4  3 -363.5356
4: 0.15  1 -1.5 0.4  4 -348.2663
5: 0.20  1 -1.5 0.4  5 -334.5595
6: 0.25  1 -1.5 0.4  6 -322.4152
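
The workhorse is tcrossprod(xmat, bmat), which computes xmat %*% t(bmat) in a single call, producing one column of linear predictors per candidate parameter vector; colSums() then collapses the squared residuals into one log-posterior value per grid row. A toy sketch of the shape logic (dimensions chosen arbitrarily):

xmat <- cbind(1, matrix(rnorm(10 * 2), nrow = 10))  # 10 observations, 3 coefficients
bmat <- matrix(rnorm(4 * 3), nrow = 4)              # 4 candidate coefficient vectors
dim(tcrossprod(xmat, bmat))                         # 10 x 4: one column per candidate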

Row operations in data.table using `by = .I`

UPDATE:

Since data.table version 1.14.3, by = .I has been implemented to work as the OP expected, performing row-wise grouping. Note that using by = .I will create a new column in the data.table called I that holds the row numbers; this column can then be kept or deleted according to preference.

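A minimal sketch of the new behaviour (assuming data.table >= 1.14.3; the data and column choices are illustrative only):

library(data.table)
dt <- data.table(V0 = LETTERS[c(1, 1, 2, 2, 3)], V1 = 1:5, V2 = 3:7, V3 = 5:1)

# each row is its own group; the result carries an I column of row numbers
dt[, .(sdd = sum(unlist(.SD))), by = .I, .SDcols = 2:4]
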
The following parts of this answer record an earlier version that pertains to older releases of data.table. I keep it here for reference in case someone is still using legacy versions.


Note: section (3) of this answer was updated in April 2019, because many changes to data.table over time had rendered the original version obsolete. Also, uses of the argument with= were removed from all instances of data.table, as it has since been deprecated.

1) Well, one reason not to use it, at least for the rowSums example, is performance and the creation of an unnecessary column. Compare option f2 below, which is almost 4x faster and does not need the rowpos column. (Note that the original question used rowSums as the example function, to which this part of the answer responds; the OP later edited the question to use a different function, for which part 3 of this answer is more relevant.)

dt <- data.table(V0 = LETTERS[c(1, 1, 2, 2, 3)], V1 = 1:5, V2 = 3:7, V3 = 5:1)
f1 <- function(dt) {
  dt[, rowpos := .I]
  dt[, sdd := rowSums(.SD[, 2:4]), by = rowpos]
}
f2 <- function(dt) dt[, sdd := rowSums(.SD), .SDcols = 2:4]

library(microbenchmark)
microbenchmark(f1(dt),f2(dt))
# Unit: milliseconds
#    expr      min       lq     mean   median       uq      max neval cld
#  f1(dt) 3.669049 3.732434 4.013946 3.793352 3.972714 5.834608   100   b
#  f2(dt) 1.052702 1.085857 1.154132 1.105301 1.138658 2.825464   100  a

2) On your second question, although dt[, sdd := sum(.SD[, 2:4]), by = .I] does not work, dt[, sdd := sum(.SD[, 2:4]), by = 1:NROW(dt)] works perfectly. Given that according to ?data.table ".I is an integer vector equal to seq_len(nrow(x))", one might expect these to be equivalent. The difference, however, is that .I is for use in j, not in by. NB the value of .I is calculated internally in data.table, so is not available beforehand to be passed in as a parameter value as in by=.I.

It might also be expected that by = .I should just throw an error. But this does not occur, because loading the data.table package creates an object .I in the data.table namespace that is accessible from the global environment, and whose value is NULL. You can test this by typing .I at the command prompt. (Note, the same applies to .SD, .EACHI, .N, .GRP, and .BY)

.I
# Error: object '.I' not found
library(data.table)
.I
# NULL
data.table::.I
# NULL

The upshot of this is that the behaviour of by = .I is equivalent to by = NULL.
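
A minimal sketch of that equivalence on a legacy version (on data.table >= 1.14.3 the by = .I line instead groups row-wise, per the update above):

library(data.table)
dt <- data.table(V0 = LETTERS[c(1, 1, 2, 2, 3)], V1 = 1:5, V2 = 3:7, V3 = 5:1)

# by = NULL treats the whole table as a single group, so on legacy versions
# (where .I evaluates to NULL) both calls return a single row
dt[, .(total = sum(V1)), by = .I]
dt[, .(total = sum(V1)), by = NULL]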

3) We have already seen in part 1 that, for rowSums, which already loops row-wise efficiently, there are much faster options than creating the rowpos column. But what about looping when we don't have a fast row-wise function?

Benchmarking the by = rowpos and by = seq_len(NROW(dt)) versions against a for loop with set() is informative here. We find that looping over set() is slower than either of the methods that use data.table's by argument for looping. However, there is negligible difference in timing between the by loop that creates an additional column and the one that uses seq_len(NROW(dt)). Absent any performance difference, f.nrow is probably preferable, but only on the basis of being more concise and not creating an unnecessary column:

dt <- data.table(V0 = rep(LETTERS[c(1, 1, 2, 2, 3)], 1e3), V1 = 1:5, V2 = 3:7, V3 = 5:1)

f.rowpos <- function() {
  dt[, rowpos := .I]
  dt[, sdd := sum(.SD[, 2:4]), by = rowpos]
}

f.nrow <- function() {
  dt[, sdd := sum(.SD[, 2:4]), by = seq_len(NROW(dt))]
}

f.forset <- function() {
  for (i in seq_len(NROW(dt))) set(dt, i, 'sdd', sum(dt[i, 2:4]))
}

microbenchmark(f.rowpos(), f.nrow(), f.forset(), times = 5)
# Unit: milliseconds
#        expr       min        lq      mean    median        uq       max neval
#  f.rowpos()  559.1115  575.3162  580.2853  578.6865  588.5532  599.7591     5
#    f.nrow()  558.4327  582.4434  584.6893  587.1732  588.6689  606.7282     5
#  f.forset() 1172.6560 1178.8399 1298.4842 1255.4375 1292.7393 1592.7486     5

So, in conclusion, even in situations where there is not an optimised function such as rowSums that already operates by row, there are alternatives to using a rowpos column that, although not faster, don't require creation of a redundant column.

Perform row-wise operation in data.table with multiple elements

You can do this:

DT[, .(.BY$x+DT[group==.BY$group,y]), by=.(x,group)]

This returns N rows per x, where N is the size of x's group. We leverage the special symbol .BY, which is available in j when using by. Basically, .BY is a named list containing the current values of the grouping variables. Here, I'm adding the value of x (.BY$x) to the vector of y values from the subset of DT where group equals the current group value (.BY$group).

Output:

          x  group    V1
      <int> <char> <int>
   1:    68      A   144
   2:    68      A   107
   3:    68      A    92
   4:    68      A   121
   5:    68      A   160
  ---
4996:     4      B    25
4997:     4      B    66
4998:     4      B    83
4999:     4      B    27
5000:     4      B    68
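
For reference, input data of the shape implied by this output could be generated along these lines (the seed, sizes, and value ranges are assumptions, not the OP's actual data):

library(data.table)
set.seed(1)
DT <- data.table(ind   = 1:100,
                 x     = sample(1:100, 100, replace = TRUE),
                 group = rep(c("A", "B"), each = 50),
                 y     = sample(1:100, 100, replace = TRUE))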

You can also accomplish this via a join:

DT[, !c("y")][                      # drop y so the join can re-attach all group y values
  DT[, .(y, group)],                # right table: every (y, group) pair
  on = .(group),                    # within-group cartesian join
  allow.cartesian = TRUE
][, total := x + y][order(ind)]

Output:

        ind     x  group     y total
      <int> <int> <char> <int> <int>
   1:     1    68      A    76   144
   2:     1    68      A    39   107
   3:     1    68      A    24    92
   4:     1    68      A    53   121
   5:     1    68      A    92   160
  ---
4996:   100     4      B    21    25
4997:   100     4      B    62    66
4998:   100     4      B    79    83
4999:   100     4      B    23    27
5000:   100     4      B    64    68

Row-wise manipulation on large dataset

Update: A (much) faster solution is to play with the indices, as follows (takes about 4 seconds on a 1e6 x 7 table):

ll <- vector("list", 3)
ll[[1]] <- copy(dt[, -1])              # shift 0: the original value columns
d_idx <- seq(2, ncol(dt), by = 3)      # position of the first column in each group
for (j in 1:2) {                       # j = size of the shift
  tmp <- vector("list", 2)
  for (i in seq_along(colGroups)) {
    idx <- ((i - 1) * 3 + 2):((i * 3) + 1)               # columns of group i
    cols <- setdiff(idx, d_idx[i]:(d_idx[i] + j - 1))    # drop the first j of them
    # ..cols means "look up one level", i.e. take cols from the calling scope
    tmp[[i]] <- cbind(dt[, ..cols], data.table(matrix(0, ncol = j)))
  }
  ll[[j + 1]] <- do.call(cbind, tmp)   # shifted-by-j version of every group
}
ans <- cbind(data.table(name = dt$name), rbindlist(ll))
setkey(ans, name)

First attempt (old):
Very interesting problem. I'd approach it using melt.data.table and dcast.data.table (from 1.8.11) as follows:

require(data.table)
require(reshape2)
# melt is S3 generic; calls melt.data.table and returns a data.table (very fast)
ans <- melt(dt, id = 1, measure = 2:7, variable.factor = FALSE)[,
         grp := rep(colGroups, each = nrow(dt) * 3)]
setkey(ans, name, grp)
ans <- ans[, list(variable = c(variable, variable[1:(.N - 1)], variable[1:(.N - 2)]),
                  value = c(value, value[-1], value[-(1:2)]),
                  id2 = rep.int(1:3, 3:1)), list(name, grp)]
# dcast in reshape2 is not yet an S3 generic; have to call it by full name
ans <- dcast.data.table(ans, name + id2 ~ variable, fill = 0L)[, id2 := NULL]

Benchmarking on 1e6 rows with the same number of columns:

require(data.table)
require(reshape2)
set.seed(45)
N <- 1e6
dt <- cbind(data.table(name = paste("x", 1:N, sep = "")),
            matrix(sample(10, 6 * N, TRUE), nrow = N))
setnames(dt, c("name", "a1", "a2", "a3", "b1", "b2", "b3"))
colGroups = c("a", "b")

system.time({
  ans <- melt(dt, id = 1, measure = 2:7, variable.factor = FALSE)[,
           grp := rep(colGroups, each = nrow(dt) * 3)]
  setkey(ans, name, grp)
  ans <- ans[, list(variable = c(variable, variable[1:(.N - 1)], variable[1:(.N - 2)]),
                    value = c(value, value[-1], value[-(1:2)]),
                    id2 = rep.int(1:3, 3:1)), list(name, grp)]
  ans <- dcast.data.table(ans, name + id2 ~ variable, fill = 0L)[, id2 := NULL]
})

#    user  system elapsed
#  45.627   2.197  52.051

How to perform multiple row-wise operations with dependencies on previous rows using data.table (if possible)

Here is another possible approach:

dt[!is.na(EO_1), EO_3 := EO_1 * EO_2, by = .(GROUP)]
# eo1/eo2/eo3 persist across the inner by-ID groups, so each ID's values
# build on the values computed for the previous ID within the same GROUP
dt[ID != "ID_001", c("EO_1", "EO_2", "EO_3") :=
  dt[,
     {
       eo1 <- EO_1[1L]; eo2 <- EO_2[1L]; eo3 <- EO_3[1L]
       .SD[ID != "ID_001",
           {
             eo1 <- eo1 * eo2
             eo2 <- eo1 * eo2 * eo3
             eo3 <- eo1 * eo2
             .(eo1, eo2, eo3)
           },
           by = .(ID)]
     },
     by = .(GROUP)][, -1L:-2L]
]

output:

       ID        EO_1         EO_2         EO_3 GROUP
1: ID_001  0.50000000 1.200000e+00 6.000000e-01     A
2: ID_002  0.60000000 4.320000e-01 2.592000e-01     A
3: ID_003  0.25920000 2.902376e-02 7.522960e-03     A
4: ID_004  0.00752296 1.642598e-06 1.235720e-08     A
5: ID_001  0.40000000 2.500000e+00 1.000000e+00     B
6: ID_002  1.00000000 2.500000e+00 2.500000e+00     B
7: ID_003  2.50000000 1.562500e+01 3.906250e+01     B
8: ID_004 39.06250000 2.384186e+04 9.313226e+05     B
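
For reference, a hypothetical input consistent with this output (an assumption, not the OP's data; only the ID_001 starting values per group matter, since every other row is overwritten by the recursive update):

library(data.table)
dt <- data.table(ID    = rep(sprintf("ID_%03d", 1:4), 2),
                 EO_1  = c(0.5, NA, NA, NA, 0.4, NA, NA, NA),
                 EO_2  = c(1.2, NA, NA, NA, 2.5, NA, NA, NA),
                 EO_3  = NA_real_,
                 GROUP = rep(c("A", "B"), each = 4))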

