How to Recreate the Same DocumentTermMatrix with New (Test) Data

How to recreate the same DocumentTermMatrix with new (test) data

If I understand correctly, you have made a DTM, and you want to make a new DTM from new documents that has the same columns (i.e. terms) as the first DTM. If that's the case, then it should be a matter of subsetting the second DTM by the terms in the first, perhaps something like this:

First set up some reproducible data...

This is your training data...

library(tm)
# make corpus for text mining (data comes from package, for reproducibility)
data("crude")
corpus1 <- Corpus(VectorSource(crude[1:10]))
# process text (your methods may differ)
skipWords <- function(x) removeWords(x, stopwords("english"))
funcs <- list(tolower, removePunctuation, removeNumbers,
              stripWhitespace, skipWords)
crude1 <- tm_map(corpus1, FUN = tm_reduce, tmFuns = funcs)
crude1.dtm <- DocumentTermMatrix(crude1, control = list(wordLengths = c(3,10)))

And this is your testing data...

corpus2 <- Corpus(VectorSource(crude[15:20]))  
# process text (your methods may differ)
skipWords <- function(x) removeWords(x, stopwords("english"))
funcs <- list(tolower, removePunctuation, removeNumbers,
              stripWhitespace, skipWords)
crude2 <- tm_map(corpus2, FUN = tm_reduce, tmFuns = funcs)
crude2.dtm <- DocumentTermMatrix(crude2, control = list(wordLengths = c(3,10)))

Here is the bit that does what you want:

Now we keep only the terms in the testing data that are present in the training data...

# convert to matrices for subsetting
crude1.dtm.mat <- as.matrix(crude1.dtm) # training
crude2.dtm.mat <- as.matrix(crude2.dtm) # testing

# subset testing data by colnames (i.e. terms) of training data
xx <- data.frame(crude2.dtm.mat[, intersect(colnames(crude2.dtm.mat),
                                            colnames(crude1.dtm.mat))])

Finally add to the testing data all the empty columns for terms in the training data that are not in the testing data...

# make an empty data frame with the colnames of the training data
yy <- read.table(textConnection(""), col.names = colnames(crude1.dtm.mat),
                 colClasses = "integer")

# add columns of NAs for terms absent in the testing data
# but present in the training data
# (following SchaunW's suggestion in the comments above)
library(plyr)
zz <- rbind.fill(xx, yy)

So zz is a data frame of the testing documents, but it has the same structure as the training documents (i.e. the same columns, though many of them contain NA, as SchaunW notes).
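If a downstream model needs counts rather than NAs, a minimal follow-up sketch (my addition, assuming the zz and crude1.dtm.mat objects from above, and that the term names survive data.frame's name checking unchanged) is to replace the NAs with zeros and put the columns into the training order:

# replace NA entries with zero counts
zz[is.na(zz)] <- 0
# reorder columns to match the training matrix exactly
zz <- zz[, colnames(crude1.dtm.mat)]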

Is that along the lines of what you want?

removeSparseTerms with training and testing set

library(tm)
library(Rstem)
data(crude)
set.seed(1)

spl <- runif(length(crude)) < 0.7
train <- crude[spl]
test <- crude[!spl]

controls <- list(
  tolower = TRUE,
  removePunctuation = TRUE,
  stopwords = stopwords("english"),
  stemming = function(word) wordStem(word, language = "english")
)

train_dtm <- DocumentTermMatrix(train, controls)

train_dtm <- removeSparseTerms(train_dtm, 0.8)

test_dtm <- DocumentTermMatrix(
  test,
  c(controls, dictionary = list(dimnames(train_dtm)$Terms))
)
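
The dictionary element of the control list is what restricts the test DTM to the training vocabulary. As a hedged aside (my addition, not part of the original answer), the same idea can be written with tm's Terms() accessor, which returns the vocabulary of an existing DTM:

# equivalent construction pulling the training vocabulary with Terms()
test_dtm <- DocumentTermMatrix(test, control = c(controls, list(dictionary = Terms(train_dtm))))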

## train_dtm
## A document-term matrix (13 documents, 91 terms)
##
## Non-/sparse entries: 405/778
## Sparsity : 66%
## Maximal term length: 9
## Weighting : term frequency (tf)

## test_dtm
## A document-term matrix (7 documents, 91 terms)
##
## Non-/sparse entries: 149/488
## Sparsity : 77%
## Maximal term length: 9
## Weighting : term frequency (tf)

## all(dimnames(train_dtm)$Terms == dimnames(test_dtm)$Terms)
## [1] TRUE

I had issues using the default stemmer, which is why wordStem from Rstem is used above. There is also a bounds option for the control list, but I couldn't reproduce the results of removeSparseTerms with it; I tried bounds = list(local = c(0.2 * length(train), Inf)), with floor and ceiling, with no luck.
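For reference, a minimal sketch of the bounds option as I understand it (an assumption on my part, not something from the original answer): bounds$global filters terms by the number of documents they occur in, which is close to, but not exactly, what removeSparseTerms(x, 0.8) keeps, so the two may never agree perfectly.

# keep terms occurring in at least ~20% of the training documents (illustrative threshold)
min_docs <- ceiling(0.2 * length(train))
train_dtm_b <- DocumentTermMatrix(train, c(controls, list(bounds = list(global = c(min_docs, Inf)))))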

quanteda: dtm with new text and old vocabulary

You want dfm_match(), before converting to data.frame.

library(quanteda)
## Package version: 2.1.2

mytext <- c(oldtext = "This is my old text")
dtm_old <- dfm(mytext)
dtm_old
## Document-feature matrix of: 1 document, 5 features (0.0% sparse).
##          features
## docs      this is my old text
##   oldtext    1  1  1   1    1

newtext <- c(newtext = "This is my new text")
dtm_new <- dfm(newtext)
dtm_new
## Document-feature matrix of: 1 document, 5 features (0.0% sparse).
##          features
## docs      this is my new text
##   newtext    1  1  1   1    1

To match them up, use dfm_match() to conform the new dfm to the feature set and order of the old one:

dtm_matched <- dfm_match(dtm_new, featnames(dtm_old))
dtm_matched
## Document-feature matrix of: 1 document, 5 features (20.0% sparse).
##          features
## docs      this is my old text
##   newtext    1  1  1   0    1

convert(dtm_matched, to = "data.frame")
##    doc_id this is my old text
## 1 newtext    1  1  1   0    1
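
One caveat worth adding (my note, not part of the original answer): in quanteda 3.0 and later, calling dfm() directly on a character vector is deprecated, so the construction step goes through tokens() first; dfm_match() itself is unchanged:

# quanteda >= 3.0 style: tokenize first, then build the dfm
dtm_old <- dfm(tokens(mytext))
dtm_new <- dfm(tokens(newtext))
dtm_matched <- dfm_match(dtm_new, featnames(dtm_old))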

Produce a DocumentTermMatrix that includes given terms in R

Here's one approach... does it work for your data? See further down for details that use the OP's own data.

# load text mining library    
library(tm)

# make first corpus for text mining (data comes from package, for reproducibility)
data("crude")
corpus1 <- Corpus(VectorSource(crude[1:10]))

# process text (your methods may differ)
skipWords <- function(x) removeWords(x, stopwords("english"))
# (note: MinDocFrequency is not a transformation function, so it can't go in tmFuns)
funcs <- list(tolower, removePunctuation, removeNumbers,
              stripWhitespace, skipWords)
crude1 <- tm_map(corpus1, FUN = tm_reduce, tmFuns = funcs)
crude1.dtm <- TermDocumentMatrix(crude1, control = list(wordLengths = c(3,10)))

# keep only the frequent terms (lowfreq threshold of 10; adjust as needed)
crude1.dtm.freq <- findFreqTerms(crude1.dtm, 10)

# prepare 2nd corpus
corpus2 <- Corpus(VectorSource(crude[11:20]))

# process text as above
skipWords <- function(x) removeWords(x, stopwords("english"))
funcs <- list(tolower, removePunctuation, removeNumbers, stripWhitespace, skipWords)
crude2 <- tm_map(corpus2, FUN = tm_reduce, tmFuns = funcs)
crude2.dtm <- TermDocumentMatrix(crude2, control = list(wordLengths = c(3,10)))

crude2.dtm.mat <- as.matrix(crude2.dtm)

# subset second corpus by words in first corpus
crude2.dtm.mat[rownames(crude2.dtm.mat) %in% crude1.dtm.freq, ]
        Docs
Terms    reut-00001.xml reut-00002.xml reut-00004.xml reut-00005.xml reut-00006.xml
  oil                 5             12              2              1              1
  opec                0             15              0              0              0
  prices              3              5              0              0              0
        Docs
Terms    reut-00007.xml reut-00008.xml reut-00009.xml reut-00010.xml reut-00011.xml
  oil                 7              4              3              5              9
  opec                8              1              2              2              6
  prices              5              1              2              1              9

UPDATE after data provided and comments: I think this is a bit closer to your question.

Here's the same process using document term matrices instead of TDMs (as I used above, a slight variation):

# load text mining library    
library(tm)

# make corpus for text mining (data comes from package, for reproducibility)
data("crude")
corpus1 <- Corpus(VectorSource(crude[1:10]))

# process text (your methods may differ)
skipWords <- function(x) removeWords(x, stopwords("english"))
funcs <- list(tolower, removePunctuation, removeNumbers, stripWhitespace, skipWords)
crude1 <- tm_map(corpus1, FUN = tm_reduce, tmFuns = funcs)
crude1.dtm <- DocumentTermMatrix(crude1, control = list(wordLengths = c(3,10)))

# recompute the frequent terms for this DTM (same lowfreq threshold as above)
crude1.dtm.freq <- findFreqTerms(crude1.dtm, 10)

corpus2 <- Corpus(VectorSource(crude[11:20]))

# process text (your methods may differ)
skipWords <- function(x) removeWords(x, stopwords("english"))
funcs <- list(tolower, removePunctuation, removeNumbers,
              stripWhitespace, skipWords)
crude2 <- tm_map(corpus2, FUN = tm_reduce, tmFuns = funcs)
crude2.dtm <- DocumentTermMatrix(crude2, control = list(wordLengths = c(3,10)))

crude2.dtm.mat <- as.matrix(crude2.dtm)
crude2.dtm.mat[, colnames(crude2.dtm.mat) %in% crude1.dtm.freq]

                Terms
Docs             oil opec prices
  reut-00001.xml   5    0      3
  reut-00002.xml  12   15      5
  reut-00004.xml   2    0      0
  reut-00005.xml   1    0      0
  reut-00006.xml   1    0      0
  reut-00007.xml   7    8      5
  reut-00008.xml   4    1      1
  reut-00009.xml   3    2      2
  reut-00010.xml   5    2      1
  reut-00011.xml   9    6      9

And here's a solution using the data added to the OP's question:

text <- c('saying text is good',
          'saying text once and saying text twice is better',
          'saying text text text is best',
          'saying text once is still ok',
          'not saying it at all is bad',
          'because text is a good thing',
          'we all like text',
          'even though sometimes it is missing')

validationText <- c("This has different words in it.",
                    "But I still want to count",
                    "the occurence of text",
                    "for example")

TextCorpus <- Corpus(VectorSource(text))
ValiTextCorpus <- Corpus(VectorSource(validationText))

Control = list(stopwords=TRUE, removePunctuation=TRUE, removeNumbers=TRUE, MinDocFrequency=5)

TextDTM = DocumentTermMatrix(TextCorpus, Control)
ValiTextDTM = DocumentTermMatrix(ValiTextCorpus, Control)

# find high frequency terms in TextDTM
(TextDTM.hifreq <- findFreqTerms(TextDTM, 5))
[1] "saying" "text"

# find out how many times each high freq word occurs in TextDTM
TextDTM.mat <- as.matrix(TextDTM)
colSums(TextDTM.mat[,TextDTM.hifreq])
saying text
6 9

Here are the key lines: subset the second DTM based on the list of high-frequency words from the first DTM. In this case I've used the intersect function, since the vector of high-frequency words includes a word that is not in the second corpus at all (and intersect handles that better than %in%).

# now look into second DTM
ValiTextDTM.mat <- as.matrix(ValiTextDTM)
common <- data.frame(ValiTextDTM.mat[, intersect(colnames(ValiTextDTM.mat), TextDTM.hifreq) ])
names(common) <- intersect(colnames(ValiTextDTM.mat), TextDTM.hifreq)
common
  text
1    0
2    0
3    1
4    0

How to find the total count of the high freq word(s) in the second corpus:

colSums(common)
text
1

Create Document Term Matrix with N-Grams in R

Unfortunately tm has some quirks that are annoying and not always clear. First of all, tokenizing doesn't seem to work on corpora created with Corpus; you need to use VCorpus for this.

So change the line data_corpus = Corpus(DataframeSource(data)) to data_corpus = VCorpus(DataframeSource(data)).

That is one issue tackled. The corpus will now work for tokenizing, but you will run into an issue with tokenize_ngrams and get the following error:

Input must be a character vector of any length or a list of character
vectors, each of which has a length of 1.

when you run this line: dtm_ngram = DocumentTermMatrix(data_corpus, control_list_ngram)

To solve this, and avoid a dependency on the tokenizers package, you can use the following function to tokenize the data.

NLP_tokenizer <- function(x) {
  unlist(lapply(ngrams(words(x), 1:3), paste, collapse = "_"), use.names = FALSE)
}

This uses the ngrams function from the NLP package, which is loaded when you load the tm package. 1:3 tells it to create n-grams of 1 to 3 words. So your control_list_ngram should look like this:

control_list_ngram = list(tokenize = NLP_tokenizer,
                          removePunctuation = FALSE,
                          removeNumbers = FALSE,
                          stopwords = stopwords("english"),
                          tolower = TRUE,
                          stemming = TRUE,
                          weighting = function(x) weightTf(x)
                          )

Personally, I would use the quanteda package for all of this work, but for now this should help you.
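As a hedged illustration of that alternative (assuming a character vector txt holding your documents; the object names here are illustrative, not from the original question), a 1- to 3-gram document-feature matrix in quanteda looks roughly like this:

library(quanteda)
# tokenize, form 1- to 3-grams joined by "_", then build the document-feature matrix
toks <- tokens(txt)
dfm_ngram <- dfm(tokens_ngrams(toks, n = 1:3, concatenator = "_"))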

tm: read in data frame, keep text id's, construct DTM and join to other dataset

First, some example data from https://stackoverflow.com/a/15506875/1036500

examp1 <- "When discussing performance with colleagues, teaching, sending a bug report or searching for guidance on mailing lists and here on SO, a reproducible example is often asked and always helpful. What are your tips for creating an excellent example? How do you paste data structures from r in a text format? What other information should you include? Are there other tricks in addition to using dput(), dump() or structure()? When should you include library() or require() statements? Which reserved words should one avoid, in addition to c, df, data, etc? How does one make a great r reproducible example?"
examp2 <- "Sometimes the problem really isn't reproducible with a smaller piece of data, no matter how hard you try, and doesn't happen with synthetic data (although it's useful to show how you produced synthetic data sets that did not reproduce the problem, because it rules out some hypotheses). Posting the data to the web somewhere and providing a URL may be necessary. If the data can't be released to the public at large but could be shared at all, then you may be able to offer to e-mail it to interested parties (although this will cut down the number of people who will bother to work on it). I haven't actually seen this done, because people who can't release their data are sensitive about releasing it any form, but it would seem plausible that in some cases one could still post data if it were sufficiently anonymized/scrambled/corrupted slightly in some way. If you can't do either of these then you probably need to hire a consultant to solve your problem"
examp3 <- "You are most likely to get good help with your R problem if you provide a reproducible example. A reproducible example allows someone else to recreate your problem by just copying and pasting R code. There are four things you need to include to make your example reproducible: required packages, data, code, and a description of your R environment. Packages should be loaded at the top of the script, so it's easy to see which ones the example needs. The easiest way to include data in an email is to use dput() to generate the R code to recreate it. For example, to recreate the mtcars dataset in R, I'd perform the following steps: Run dput(mtcars) in R Copy the output In my reproducible script, type mtcars <- then paste. Spend a little bit of time ensuring that your code is easy for others to read: make sure you've used spaces and your variable names are concise, but informative, use comments to indicate where your problem lies, do your best to remove everything that is not related to the problem. The shorter your code is, the easier it is to understand. Include the output of sessionInfo() as a comment. This summarises your R environment and makes it easy to check if you're using an out-of-date package. You can check you have actually made a reproducible example by starting up a fresh R session and pasting your script in. Before putting all of your code in an email, consider putting it on http://gist.github.com/. It will give your code nice syntax highlighting, and you don't have to worry about anything getting mangled by the email system."
examp4 <- "Do your homework before posting: If it is clear that you have done basic background research, you are far more likely to get an informative response. See also Further Resources further down this page. Do help.search(keyword) and apropos(keyword) with different keywords (type this at the R prompt). Do RSiteSearch(keyword) with different keywords (at the R prompt) to search R functions, contributed packages and R-Help postings. See ?RSiteSearch for further options and to restrict searches. Read the online help for relevant functions (type ?functionname, e.g., ?prod, at the R prompt) If something seems to have changed in R, look in the latest NEWS file on CRAN for information about it. Search the R-faq and the R-windows-faq if it might be relevant (http://cran.r-project.org/faqs.html) Read at least the relevant section in An Introduction to R If the function is from a package accompanying a book, e.g., the MASS package, consult the book before posting. The R Wiki has a section on finding functions and documentation"
examp5 <- "Before asking a technical question by e-mail, or in a newsgroup, or on a website chat board, do the following: Try to find an answer by searching the archives of the forum you plan to post to. Try to find an answer by searching the Web. Try to find an answer by reading the manual. Try to find an answer by reading a FAQ. Try to find an answer by inspection or experimentation. Try to find an answer by asking a skilled friend. If you're a programmer, try to find an answer by reading the source code. When you ask your question, display the fact that you have done these things first; this will help establish that you're not being a lazy sponge and wasting people's time. Better yet, display what you have learned from doing these things. We like answering questions for people who have demonstrated they can learn from the answers. Use tactics like doing a Google search on the text of whatever error message you get (searching Google groups as well as Web pages). This might well take you straight to fix documentation or a mailing list thread answering your question. Even if it doesn't, saying “I googled on the following phrase but didn't get anything that looked promising” is a good thing to do in e-mail or news postings requesting help, if only because it records what searches won't help. It will also help to direct other people with similar problems to your thread by linking the search terms to what will hopefully be your problem and resolution thread. Take your time. Do not expect to be able to solve a complicated problem with a few seconds of Googling. Read and understand the FAQs, sit back, relax and give the problem some thought before approaching experts. Trust us, they will be able to tell from your questions how much reading and thinking you did, and will be more willing to help if you come prepared. Don't instantly fire your whole arsenal of questions just because your first search turned up no answers (or too many). Prepare your question. Think it through. Hasty-sounding questions get hasty answers, or none at all. The more you do to demonstrate that having put thought and effort into solving your problem before seeking help, the more likely you are to actually get help. Beware of asking the wrong question. If you ask one that is based on faulty assumptions, J. Random Hacker is quite likely to reply with a uselessly literal answer while thinking Stupid question..., and hoping the experience of getting what you asked for rather than what you needed will teach you a lesson."

Put example data in a data frame...

df <- data.frame(ID = sapply(1:5, function(i) paste0(sample(letters, 5), collapse = "")),
                 txt = sapply(1:5, function(i) eval(parse(text = paste0("examp", i))))
                 )

Here is the answer to "Question 1: How do I convert this data frame into a corpus and get to keep ID information?"

Use DataframeSource and readerControl to convert data frame to corpus (from https://stackoverflow.com/a/15693766/1036500)...

require(tm)
m <- list(ID = "ID", Content = "txt")
myReader <- readTabular(mapping = m)
mycorpus <- Corpus(DataframeSource(df), readerControl = list(reader = myReader))

# Manually keep ID information from https://stackoverflow.com/a/14852502/1036500
for (i in 1:length(mycorpus)) {
  attr(mycorpus[[i]], "ID") <- df$ID[i]
}
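
As a hedged side note for newer tm versions (0.7 and later): DataframeSource expects the data frame to have columns named doc_id and text, and it then keeps the document IDs automatically, so the readTabular/attr steps above are no longer needed. A minimal sketch:

# newer tm (>= 0.7): doc_id / text columns carry the IDs automatically
df_ids <- data.frame(doc_id = df$ID, text = df$txt, stringsAsFactors = FALSE)
mycorpus <- VCorpus(DataframeSource(df_ids))
meta(mycorpus[[1]], "id")   # the first document's ID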

Now make the document-term matrix you will need before tackling your second question, following https://stackoverflow.com/a/15506875/1036500...

skipWords <- function(x) removeWords(x, stopwords("english"))
funcs <- list(content_transformer(tolower), removePunctuation, removeNumbers, stripWhitespace, skipWords)
a <- tm_map(mycorpus, FUN = tm_reduce, tmFuns = funcs)
mydtm <- DocumentTermMatrix(a, control = list(wordLengths = c(3,10)))
inspect(mydtm)

Make another example dataset to join to...

df2 <- data.frame(ID = df$ID,
                  date = seq(Sys.Date(), length.out = 5, by = "1 week"),
                  topic = sapply(1:5, function(i) paste0(sample(LETTERS, 3), collapse = "")),
                  sentiment = sample(c("+ve", "-ve"), 5, replace = TRUE)
                  )

Here is the answer to "Question 2: After getting a dtm, how can I join it with another data set by ID?"

Use merge to join the dtm to example dataset of dates, topics, sentiment...

mydtm_df <- data.frame(as.matrix(mydtm))
# merge by row.names from https://stackoverflow.com/a/7739757/1036500
merged <- merge(df2, mydtm_df, by.x = "ID", by.y = "row.names" )
head(merged)

     ID     date.x topic sentiment able actually addition allows also although
1 cpjmn 2013-11-07   XRT       -ve    0        0        2      0    0        0
2 jkdaf 2013-11-28   TYJ       -ve    0        0        0      0    1        0
3 jstpa 2013-12-05   SVB       -ve    2        1        0      0    1        0
4 sfywr 2013-11-14   OMG       -ve    1        1        0      0    0        2
5 ylaqr 2013-11-21   KDY       +ve    0        1        0      1    0        0
  always answer answering answers anything archives are arsenal ask asked asking
1      1      0         0       0        0        0   1       0   0     1      0
2      0      0         0       0        0        0   0       0   0     0      0
3      0      8         2       3        1        1   0       1   2     1      3
4      0      0         0       0        0        0   0       0   0     0      0
5      0      0         0       0        1        0   0       0   0     0      0

There, now you have:

  1. Answers to your two questions (normally this site is just one question per... question)
  2. Several kinds of example data that you can use when you ask your next question (makes your question a lot more engaging for folks who might want to answer)
  3. Hopefully a sense that the answers to your questions can already be found elsewhere on the stackoverflow r tag, if you can think of how to break your questions down into smaller steps.

If this doesn't answer your questions, ask another question and include code to reproduce your use case as exactly as you can. If it does answer your question, then you should mark it as accepted (at least until a better one comes along; e.g. Tyler might pop in with a one-liner from his impressive qdap package...).


