Can Rbind Be Parallelized in R

Can rbind be parallelized in R?

I doubt you can make this faster by parallelizing it. Apart from the fact that you would probably have to write it yourself (thread one rbinds items 1 and 2 while thread two rbinds items 3 and 4, and when they're done the results are 'rebound', something like that; I don't see a non-C way of improving this), it would involve copying large amounts of data between your threads, which is typically the thing that makes this slow in the first place.

In C, you can share objects between threads, so then you could have all your threads write in the same memory. I wish you the best of luck with that :-)
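To make the pairwise scheme described above concrete, here is a minimal sketch using the parallel package (`mclapply` forks the session, so it works on Unix-alikes only, and the function name `parallel_rbind` is mine, not a real API). In practice the copying between workers usually cancels out any speed-up:

```r
library(parallel)

# Hypothetical pairwise parallel rbind: bind neighbouring pairs of list
# elements in parallel, then repeat until one result remains. Forking
# copies the data into each worker, which is exactly the slow part.
parallel_rbind <- function(ll, cores = 2L) {
  while (length(ll) > 1L) {
    idx <- split(seq_along(ll), ceiling(seq_along(ll) / 2))
    ll <- mclapply(idx, function(i) do.call(rbind, ll[i]),
                   mc.cores = cores)
  }
  ll[[1L]]
}
```

On Windows, `mc.cores` must be 1, at which point this degenerates into an ordinary serial `rbind` with extra overhead.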

Finally, as an aside: rbinding data.frames is just slow. If you know up front that the structure of all your data.frames is exactly the same, and they don't contain pure character columns, you can probably use the trick from this answer to one of my questions. If your data.frames contain character columns, I suspect you're best off handling those separately (`do.call(c, lapply(LIST, "[[", "myCharColName"))`) and then performing the trick with the rest, after which you can reunite them.
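A sketch of that split-off approach, assuming a list of structurally identical data.frames (the names `LIST` and `myCharColName` are placeholders carried over from the answer): pull the character column out with `do.call(c, ...)`, bind the rest, then reattach it.

```r
# Assumed setup: structurally identical data.frames, one character
# column ("myCharColName") plus numeric columns.
LIST <- list(data.frame(x = 1:2, myCharColName = c("a", "b"),
                        stringsAsFactors = FALSE),
             data.frame(x = 3:4, myCharColName = c("c", "d"),
                        stringsAsFactors = FALSE))

# Concatenate the character column separately (cheap).
chars <- do.call(c, lapply(LIST, `[[`, "myCharColName"))

# Bind the remaining columns, then reunite.
rest <- do.call(rbind,
                lapply(LIST, function(d) d[setdiff(names(d), "myCharColName")]))
rest$myCharColName <- chars
```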

Efficient way to rbind data.frames with different columns

UPDATE: See this updated answer instead.

UPDATE (eddi): This has now been implemented in version 1.8.11 as a fill argument to rbind. For example:

DT1 = data.table(a = 1:2, b = 1:2)
DT2 = data.table(a = 3:4, c = 1:2)

rbind(DT1, DT2, fill = TRUE)
#    a  b  c
# 1: 1  1 NA
# 2: 2  2 NA
# 3: 3 NA  1
# 4: 4 NA  2


FR #4790 has now been added: rbind.fill (from plyr)-like functionality to merge a list of data.frames/data.tables.
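The same fill behaviour is also available directly in rbindlist in later data.table versions (a sketch; check your version, as `fill`/`use.names` arguments were added to rbindlist after the release discussed here):

```r
library(data.table)

DT1 <- data.table(a = 1:2, b = 1:2)
DT2 <- data.table(a = 3:4, c = 1:2)

# Missing columns are filled with NA, matching columns by name.
res <- rbindlist(list(DT1, DT2), use.names = TRUE, fill = TRUE)
```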

Note 1:

This solution uses data.table's rbindlist function to "rbind" a list of data.tables; for this, be sure to use version 1.8.9 or later, because of this bug in versions < 1.8.9.

Note 2:

As of now, when binding lists of data.frames/data.tables, rbindlist will retain the data type of the column in the first element. That is, if a column in the first data.frame is character and the same column in the second data.frame is factor, rbindlist will produce a character column. So, if your data.frames consist entirely of character columns, the result of this method will be identical to the plyr method's. If not, the values will still be the same, but some columns will be character instead of factor; you'll have to convert them back to factor yourself afterwards. Hopefully this behaviour will change in the future.
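A small illustration of that caveat (the exact coercion rule may differ in your data.table version, so the conversion back to factor is done unconditionally here):

```r
library(data.table)

d1 <- data.frame(x = c("a", "b"), stringsAsFactors = FALSE)  # character column
d2 <- data.frame(x = c("c", "d"), stringsAsFactors = TRUE)   # factor column

res <- rbindlist(list(d1, d2))
# The resulting type follows rbindlist's coercion rules (the note above
# describes the first element's type winning); convert back yourself:
res[, x := factor(x)]
```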

And now here's using data.table (and benchmarking comparison with rbind.fill from plyr):

require(data.table)
rbind.fill.DT <- function(ll) {
  # changed sapply to lapply to always return a list
  all.names <- lapply(ll, names)
  unq.names <- unique(unlist(all.names))
  ll.m <- rbindlist(lapply(seq_along(ll), function(x) {
    tt <- ll[[x]]
    setattr(tt, 'class', c('data.table', 'data.frame'))
    data.table:::settruelength(tt, 0L)
    invisible(alloc.col(tt))
    tt[, c(unq.names[!unq.names %chin% all.names[[x]]]) := NA_character_]
    setcolorder(tt, unq.names)
  }))
}

require(plyr)
rbind.fill.PLYR <- function(ll) {
  rbind.fill(ll)
}

require(microbenchmark)
microbenchmark(t1 <- rbind.fill.DT(ll), t2 <- rbind.fill.PLYR(ll), times = 10)
# Unit: seconds
#                      expr      min        lq    median        uq       max neval
#   t1 <- rbind.fill.DT(ll)  10.8943  11.02312  11.26374  11.34757  11.51488    10
# t2 <- rbind.fill.PLYR(ll) 121.9868 134.52107 136.41375 184.18071 347.74724    10


# for comparison change t2 to data.table
setattr(t2, 'class', c('data.table', 'data.frame'))
data.table:::settruelength(t2, 0L)
invisible(alloc.col(t2))
setcolorder(t2, unique(unlist(sapply(ll, names))))

identical(t1, t2) # [1] TRUE

It should be noted that plyr's rbind.fill edges past this particular data.table solution for list sizes up to about 500.

Benchmarking plot:

Here's the plot on runs with list length of data.frames with seq(1000, 10000, by=1000). I've used microbenchmark with 10 reps on each of these different list lengths.

[benchmark plot: rbind.fill.DT vs rbind.fill.PLYR timings across list lengths]

Benchmarking gist:

Here's the gist for benchmarking, in case anyone wants to replicate the results.

Performance of rbind.data.frame

Can you build your matrices with numeric variables only and convert to a factor at the end? rbind is a lot faster on numeric matrices.
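A small, self-contained illustration of the gap (the list contents and sizes here are arbitrary stand-ins, not the question's data; absolute timings depend on your machine):

```r
set.seed(1)
n <- 500L

# Same values, stored two ways: as data.frames and as numeric matrices.
dfs  <- replicate(n, data.frame(a = 1:5, b = runif(5)), simplify = FALSE)
mats <- replicate(n, cbind(a = 1:5, b = runif(5)),      simplify = FALSE)

system.time(r1 <- do.call(rbind, dfs))   # data.frame method dispatch: slow
system.time(r2 <- do.call(rbind, mats))  # numeric matrices: much faster
```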

On my system, using data frames:

> system.time(result <- do.call(rbind, someParts))
   user  system elapsed
  2.628   0.000   2.636

Building the list with all numeric matrices instead:

onerowdfr2 <- matrix(as.numeric(onerowdfr), nrow = 1)
someParts2 <- lapply(rbinom(200, 1, 14/200) * 6 + 1,
                     function(reps) { onerowdfr2[rep(1, reps), ] })

results in a lot faster rbind.

> system.time(result2 <- do.call(rbind, someParts2))
   user  system elapsed
  0.001   0.000   0.001

EDIT: Here's another possibility; it just combines each column in turn.

> system.time({
+   n <- 1:ncol(someParts[[1]])
+   names(n) <- names(someParts[[1]])
+   result <- as.data.frame(lapply(n, function(i)
+     unlist(lapply(someParts, `[[`, i))))
+ })
   user  system elapsed
  0.810   0.000   0.813

Still not nearly as fast as using matrices though.

EDIT 2:

If you only have numerics and factors, it's not that hard to convert everything to numeric, rbind them, and convert the necessary columns back to factors. This assumes all factors have exactly the same levels. Converting to a factor from an integer is also faster than from a numeric so I force to integer first.

someParts2 <- lapply(someParts, function(x)
  matrix(unlist(x), ncol = ncol(x)))
result <- as.data.frame(do.call(rbind, someParts2))
a <- someParts[[1]]
f <- which(sapply(a, class) == "factor")
for (i in f) {
  lev <- levels(a[[i]])
  result[[i]] <- factor(as.integer(result[[i]]),
                        levels = seq_along(lev), labels = lev)
}

The timing on my system is:

   user  system elapsed
  0.090   0.000   0.091

How to parallelize while loops?

Actually, you don't need to parallelize the while loop. You can vectorize your operations over x, as below:

iter <- 1000
myvec <- c()
while (is.null(myvec) || nrow(myvec) <= iter) {
  x <- matrix(rnorm(iter * 10, mean = 0, sd = 1), ncol = 10)
  myvec <- rbind(myvec, subset(x, rowSums(x) > 2.5))
}
myvec <- head(myvec, iter)

or

iter <- 1000
myvec <- list()
nl <- 0
while (nl < iter) {
  x <- matrix(rnorm(iter * 10, mean = 0, sd = 1), ncol = 10)
  v <- subset(x, rowSums(x) > 2.5)
  nl <- nl + nrow(v)
  myvec[[length(myvec) + 1]] <- v
}
myvec <- head(do.call(rbind, myvec), iter)

which would be much faster even if you have large iter, I believe.
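If you still want actual parallelism on top of that, a hedged sketch using the parallel package (fork-based, so Unix-alikes only; the batching structure here is my own, not from the answer): generate candidate batches on several cores, collect them, and bind once at the end.

```r
library(parallel)

iter  <- 1000
cores <- 2L  # assumption; something like detectCores() - 1L in practice

out <- list()
# Keep generating batches in parallel until we have enough rows.
while (sum(vapply(out, nrow, 0L)) < iter) {
  batch <- mclapply(seq_len(cores), function(i) {
    x <- matrix(rnorm(iter * 10, mean = 0, sd = 1), ncol = 10)
    x[rowSums(x) > 2.5, , drop = FALSE]
  }, mc.cores = cores)
  out <- c(out, batch)
}
myvec <- head(do.call(rbind, out), iter)
```

Note that each worker draws its own random numbers, so results are not reproducible across runs without extra care (e.g. `mc.set.seed` / L'Ecuyer streams).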


