Understanding Exactly When a Data.Table Is a Reference to (Vs a Copy Of) Another Data.Table

Yes, it's subassignment in R using <- (or = or ->) that makes a copy of the whole object. You can trace that using tracemem(DT) and .Internal(inspect(DT)), as below. The data.table features := and set() assign by reference to whatever object they are passed. So if that object was previously copied (by a subassigning <- or an explicit copy(DT)), then it's the copy that gets modified by reference.

DT <- data.table(a = c(1, 2), b = c(11, 12)) 
newDT <- DT

.Internal(inspect(DT))
# @0000000003B7E2A0 19 VECSXP g0c7 [OBJ,NAM(2),ATT] (len=2, tl=100)
# @00000000040C2288 14 REALSXP g0c2 [NAM(2)] (len=2, tl=0) 1,2
# @00000000040C2250 14 REALSXP g0c2 [NAM(2)] (len=2, tl=0) 11,12
# ATTRIB: # ..snip..

.Internal(inspect(newDT)) # precisely the same object at this point
# @0000000003B7E2A0 19 VECSXP g0c7 [OBJ,NAM(2),ATT] (len=2, tl=100)
# @00000000040C2288 14 REALSXP g0c2 [NAM(2)] (len=2, tl=0) 1,2
# @00000000040C2250 14 REALSXP g0c2 [NAM(2)] (len=2, tl=0) 11,12
# ATTRIB: # ..snip..

tracemem(newDT)
# [1] "<0x0000000003b7e2a0>"

newDT$b[2] <- 200
# tracemem[0000000003B7E2A0 -> 00000000040ED948]:
# tracemem[00000000040ED948 -> 00000000040ED830]: .Call copy $<-.data.table $<-

.Internal(inspect(DT))
# @0000000003B7E2A0 19 VECSXP g0c7 [OBJ,NAM(2),TR,ATT] (len=2, tl=100)
# @00000000040C2288 14 REALSXP g0c2 [NAM(2)] (len=2, tl=0) 1,2
# @00000000040C2250 14 REALSXP g0c2 [NAM(2)] (len=2, tl=0) 11,12
# ATTRIB: # ..snip..

.Internal(inspect(newDT))
# @0000000003D97A58 19 VECSXP g0c7 [OBJ,NAM(2),ATT] (len=2, tl=100)
# @00000000040ED7F8 14 REALSXP g0c2 [NAM(2)] (len=2, tl=0) 1,2
# @00000000040ED8D8 14 REALSXP g0c2 [NAM(2)] (len=2, tl=0) 11,200
# ATTRIB: # ..snip..

Notice how even the a vector was copied (the different hex value indicates a new copy of the vector), even though a wasn't changed. Even the whole of b was copied, rather than just changing the elements that needed to change. That's important to avoid for large data, and it's why := and set() were introduced to data.table.

Now, with our copied newDT we can modify it by reference:

newDT
# a b
# [1,] 1 11
# [2,] 2 200

newDT[2, b := 400]
# a b # See FAQ 2.21 for why this prints newDT
# [1,] 1 11
# [2,] 2 400

.Internal(inspect(newDT))
# @0000000003D97A58 19 VECSXP g0c7 [OBJ,NAM(2),ATT] (len=2, tl=100)
# @00000000040ED7F8 14 REALSXP g0c2 [NAM(2)] (len=2, tl=0) 1,2
# @00000000040ED8D8 14 REALSXP g0c2 [NAM(2)] (len=2, tl=0) 11,400
# ATTRIB: # ..snip..

Notice that all 3 hex values (the vector of column pointers, and each of the 2 columns) remain unchanged. So it was truly modified by reference with no copies at all.

Or, we can modify the original DT by reference:

DT[2, b := 600]
# a b
# [1,] 1 11
# [2,] 2 600

.Internal(inspect(DT))
# @0000000003B7E2A0 19 VECSXP g0c7 [OBJ,NAM(2),ATT] (len=2, tl=100)
# @00000000040C2288 14 REALSXP g0c2 [NAM(2)] (len=2, tl=0) 1,2
# @00000000040C2250 14 REALSXP g0c2 [NAM(2)] (len=2, tl=0) 11,600
# ATTRIB: # ..snip..

Those hex values are the same as the original values we saw for DT above. Type example(copy) for more examples using tracemem and comparison to data.frame.

Btw, if you tracemem(DT) and then run DT[2, b := 600], you'll see one copy reported. That copy is of the first 10 rows, made by the print method. When wrapped with invisible(), or when called within a function or script, the print method isn't called.

All this applies inside functions too; i.e., := and set() do not copy on write, even within functions. If you need to modify a local copy, call x = copy(x) at the start of the function. But remember that data.table is intended for large data (as well as offering faster programming advantages for small data). We deliberately don't want to copy large objects, ever. As a result, we don't need to allow for the usual 3x working-memory rule of thumb. We try to need working memory only as large as one column (i.e. a working-memory factor of 1/ncol rather than 3).
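A minimal sketch of that pattern (function and column names here are illustrative, not from the original):

```r
library(data.table)

# := pierces the function boundary: this modifies the caller's table in place.
add_flag <- function(x) {
  x[, flag := a > 1L]
  x
}

# Safe variant: take a local deep copy first, leaving the caller's table intact.
add_flag_safe <- function(x) {
  x <- copy(x)          # := below now touches only the local copy
  x[, flag := a > 1L]
  x
}

DT <- data.table(a = 1:3)
add_flag_safe(DT)
"flag" %in% names(DT)   # FALSE: the original is untouched
add_flag(DT)
"flag" %in% names(DT)   # TRUE: modified through the function
```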

Is R data.table documented to pass by reference as argument?

I think what's surprising you is actually base R behavior, which is why it isn't specifically documented in data.table (though maybe it should be, as the implications matter more for data.table).

You were surprised that the object passed to a function had the same address, but this is the same for base R as well:

x = 1:10
address(x)
# [1] "0x7fb7d4b6c820"
(function(y) {print(address(y))})(x)
# [1] "0x7fb7d4b6c820"

What's being copied in the function environment is the pointer to x. Moreover, for base R, the parent x is immutable:

foo = function(y) {
  print(address(y))
  y[1L] = 2L
  print(address(y))
}
foo(x)
# [1] "0x7fb7d4b6c820"
# [1] "0x7fb7d4e11d28"

That is, as soon as we try to edit y, a copy is made. This is related to reference counting; see some of Luke Tierney's work on this, e.g. this presentation.

The difference for data.table is that data.table enables edit permissions for the parent object -- a double-edged sword as I think you know.
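For contrast, a sketch of a data.table crossing a function boundary and being edited in place (address() is exported by data.table; names here are illustrative):

```r
library(data.table)

DT <- data.table(a = 1:3)
address(DT)

bar <- function(y) {
  print(address(y))   # same address as the caller's DT: no copy was made
  y[, a := a * 10L]   # := writes through to the parent object
}
bar(DT)
DT$a                  # 10 20 30: the caller's table was changed
```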

Why is R data.table adding columns to another data table that I did not reference?

Yes, data.table changes its values by reference. If you'd like to retain a copy of the original, you should use copy:

library(data.table)         
DT1 <- data.table(x = 1:100)
DT2 <- DT1
identical(DT1, DT2)
#> [1] TRUE
DT1[, y := x + 1]
identical(DT1, DT2)
#> [1] TRUE
DT2 <- copy(DT1)
DT2[, y := x + 2]
identical(DT1, DT2)
#> [1] FALSE

Looking up data in another data.table from j

First, the join columns should be the same class, so we can either convert main_dt$End to integer, or main_dt$Start and lookup_dt$Year to numeric. I'll choose the first:

main_dt[, End := as.integer(End)]
main_dt
# Start End
# <int> <int>
# 1: 1 2
# 2: 2 2

From here, we can do a joining-assignment:

main_dt[, Amount := lookup_dt[.SD, sum(Amount), on = .(Year >= Start, Year <= End), by = .EACHI]$V1 ]
main_dt
# Start End Amount
# <int> <int> <num>
# 1: 1 2 30
# 2: 2 2 20

If you're somewhat familiar with data.table, note that .SD referenced is actually the contents of main_dt, so lookup_dt[.SD,...] is effectively "main_dt left join lookup_dt". From there, the on= should be normal, and sum(Amount) is what you want to aggregate. The only new thing introduced here is the use of by=.EACHI, which can be confusing; some links for that:

  • https://rdatatable.gitlab.io/data.table/reference/special-symbols.html
  • https://stackoverflow.com/a/27004566/3358272
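The answer assumes main_dt and lookup_dt already exist; a plausible reproducible setup (the data values below are assumptions chosen to be consistent with the printed results, not taken from the original question) might be:

```r
library(data.table)

# Hypothetical inputs consistent with the output shown above.
main_dt   <- data.table(Start = 1:2, End = c(2, 2))    # End starts as numeric
lookup_dt <- data.table(Year = 1:2, Amount = c(10, 20))

main_dt[, End := as.integer(End)]

# For each row of main_dt (by = .EACHI), sum Amount over Year in [Start, End].
main_dt[, Amount := lookup_dt[.SD, sum(Amount),
                              on = .(Year >= Start, Year <= End),
                              by = .EACHI]$V1]
main_dt
#    Start   End Amount
# 1:     1     2     30
# 2:     2     2     20
```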

r - Need a workaround for copying a data.table in a reference class

You should use data.table::copy instead of the default reference class copy method:

library(data.table)

Example <- setRefClass("Example",
  fields = list(
    data1 = "data.table"
  ),
  methods = list(
    tryToCopyData1 = function() {
      data_temp <- data.table::copy(data1)
    }
  )
)

example <- Example$new()

example$data1 <- data.table(x=rep(c("a","b","c"),each=3), y=c(1,3,6), v=1:9)

example$tryToCopyData1()

R data.table gets modified AFTER I've changed it?

Yes, it's intentional and very much by design. We hope new users will check out our vignettes (e.g. 1, 2, 3) to get an idea of the why.

data.table is designed with large data sets in mind (e.g. 1GB, 10GB, or 50GB). Wasting memory with such data can be the difference between analysis working and being impossible. You can see the impact of this in this benchmark -- several alternatives to data.table simply fail to complete tasks on a 50GB data set, even though the machine has plenty of memory (128GB).

The reference semantics you observe are a necessary tradeoff to achieve this without the need to spill to disk or parallelize to different machines.

My recommendation is to be aware of this behavior, and use it to be more careful in your analysis -- do you really need a new table that has the same number of rows? Can what you want be achieved in a different way?

copy and as.data.table are always available as a workaround when needed, but I think part of using data.table successfully entails a slight change in approach.
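A sketch of those workarounds (variable names are illustrative):

```r
library(data.table)

DT <- data.table(x = 1:3)

snapshot <- copy(DT)       # deep copy; later := on DT won't touch it
DT[, y := x^2]
"y" %in% names(snapshot)   # FALSE: snapshot kept the original columns

# as.data.table on a data.frame likewise copies the columns, so the
# new data.table can be modified by reference without altering DF.
DF  <- data.frame(x = 1:3)
DT2 <- as.data.table(DF)
DT2[, y := x + 1L]
"y" %in% names(DF)         # FALSE
```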

PS If there's anything arcane or hard to understand in the vignettes, or you otherwise have feedback, we would love to hear it -- feel free to file an Issue.

Find all NAs in R data.table

For the sake of completeness, here is a Minimal, Reproducible Example where only row 6 is complete, i.e., without any NA, and the columns are of different types:

library(data.table)
options(datatable.print.class = TRUE)
n <- 7
dt <- data.table(1:n, pi * as.numeric(1:n),
                 letters[1:n], rep(c(TRUE, FALSE), length.out = n),
                 factor(LETTERS[1:n]))

for (i in 1:ncol(dt)) set(dt, i, i, NA)
for (i in 1:ncol(dt)) set(dt, nrow(dt), i, NA)
dt
      V1        V2     V3     V4     V5
   <int>     <num> <char> <lgcl> <fctr>
1:    NA  3.141593      a   TRUE      A
2:     2        NA      b  FALSE      B
3:     3  9.424778   <NA>   TRUE      C
4:     4 12.566371      d     NA      D
5:     5 15.707963      e   TRUE   <NA>
6:     6 18.849556      f  FALSE      F
7:    NA        NA   <NA>     NA   <NA>

alodi's answer

works as expected:

dt[!complete.cases(dt)]
      V1        V2     V3     V4     V5
   <int>     <num> <char> <lgcl> <fctr>
1:    NA  3.141593      a   TRUE      A
2:     2        NA      b  FALSE      B
3:     3  9.424778   <NA>   TRUE      C
4:     4 12.566371      d     NA      D
5:     5 15.707963      e   TRUE   <NA>
6:    NA        NA   <NA>     NA   <NA>

clemenskuehn's answer

fails

dt[is.na(rowSums(dt))]
Error: 'x' must be numeric

because it assumes all columns of dt are numeric.

Count the NAs in each row

dt[rowSums(is.na(dt)) > 0]
      V1        V2     V3     V4     V5
   <int>     <num> <char> <lgcl> <fctr>
1:    NA  3.141593      a   TRUE      A
2:     2        NA      b  FALSE      B
3:     3  9.424778   <NA>   TRUE      C
4:     4 12.566371      d     NA      D
5:     5 15.707963      e   TRUE   <NA>
6:    NA        NA   <NA>     NA   <NA>

This displays all rows where at least one NA is found.
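If the goal is an actual per-row count of NAs rather than a filter, a short sketch (the column name nNA and the small example table are illustrative, not from the original):

```r
library(data.table)

dt2 <- data.table(a = c(1, NA, 3), b = c(NA, NA, "z"))

# rowSums over the logical matrix is.na(.SD) counts NAs per row;
# := attaches the count as a new column by reference.
dt2[, nNA := rowSums(is.na(.SD))]
dt2$nNA   # 1 2 0
```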


