Make Dataframe of Top N Frequent Terms for Multiple Corpora Using Tm Package in R

Make dataframe of top N frequent terms for multiple corpora using tm package in R

Here's one way to find the top N terms in a document-term matrix. Briefly, you convert the dtm to a matrix, then sort by row sums:

# load text mining library    
library(tm)

# make corpus for text mining (data comes from package, for reproducibility)
data("crude")
corpus <- crude  # crude is already a VCorpus, so no conversion is needed

# process text (your methods may differ)
skipWords <- function(x) removeWords(x, stopwords("english"))
funcs <- list(tolower, removePunctuation, removeNumbers, stripWhitespace, skipWords)
a <- tm_map(corpus, FUN = tm_reduce, tmFuns = funcs)
a.dtm1 <- TermDocumentMatrix(a, control = list(wordLengths = c(3,10)))
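
(Note that on tm 0.6 and later, tm_map wants base-R functions wrapped in content_transformer(), otherwise the corpus can degrade to plain character vectors. Here's the same pipeline for a newer tm, as a sketch:)

# equivalent processing for tm >= 0.6
a <- tm_map(corpus, content_transformer(tolower))
a <- tm_map(a, removePunctuation)
a <- tm_map(a, removeNumbers)
a <- tm_map(a, stripWhitespace)
a <- tm_map(a, removeWords, stopwords("english"))
a.dtm1 <- TermDocumentMatrix(a, control = list(wordLengths = c(3, 10)))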

Here's the method in your Q; note that findFreqTerms(a.dtm1, N) returns every term occurring at least N times (not the top N), and in alphabetical order, which is not always very useful, as you note...

N <- 10
findFreqTerms(a.dtm1, N)

[1] "barrel" "barrels" "bpd" "crude" "dlrs" "government" "industry" "kuwait"
[9] "market" "meeting" "minister" "mln" "month" "official" "oil" "opec"
[17] "pct" "price" "prices" "production" "reuter" "saudi" "sheikh" "the"
[25] "world"

And here's what you can do to get the top N words in order of their abundance:

m <- as.matrix(a.dtm1)
v <- sort(rowSums(m), decreasing=TRUE)
head(v, N)

oil prices opec mln the bpd dlrs crude market reuter
86 48 47 31 26 23 23 21 21 20
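
If you need that as a data frame (terms in one column, counts in another) rather than a named vector, a minimal sketch:

top_terms <- head(v, N)
data.frame(term = names(top_terms), freq = unname(top_terms))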

For several document term matrices, you could do something like this:

# make a list of the dtms
dtm_list <- list(a.dtm1, b.dtm1, c.dtm1, d.dtm1)
# apply the rowsums function to each item of the list
lapply(dtm_list, function(x) sort(rowSums(as.matrix(x)), decreasing=TRUE))
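
And to keep only the top N from each:

lapply(dtm_list, function(x) head(sort(rowSums(as.matrix(x)), decreasing=TRUE), N))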

Is that what you want to do?

Hat-tip to Ian Fellows' wordcloud package where I first saw this method.

UPDATE: following the comment below, here's some more detail...

Here's some data to make a reproducible example with multiple corpora:

examp1 <- "When discussing performance with colleagues, teaching, sending a bug report or searching for guidance on mailing lists and here on SO, a reproducible example is often asked and always helpful. What are your tips for creating an excellent example? How do you paste data structures from r in a text format? What other information should you include? Are there other tricks in addition to using dput(), dump() or structure()? When should you include library() or require() statements? Which reserved words should one avoid, in addition to c, df, data, etc? How does one make a great r reproducible example?"

examp2 <- "Sometimes the problem really isn't reproducible with a smaller piece of data, no matter how hard you try, and doesn't happen with synthetic data (although it's useful to show how you produced synthetic data sets that did not reproduce the problem, because it rules out some hypotheses). Posting the data to the web somewhere and providing a URL may be necessary. If the data can't be released to the public at large but could be shared at all, then you may be able to offer to e-mail it to interested parties (although this will cut down the number of people who will bother to work on it). I haven't actually seen this done, because people who can't release their data are sensitive about releasing it any form, but it would seem plausible that in some cases one could still post data if it were sufficiently anonymized/scrambled/corrupted slightly in some way. If you can't do either of these then you probably need to hire a consultant to solve your problem"

examp3 <- "You are most likely to get good help with your R problem if you provide a reproducible example. A reproducible example allows someone else to recreate your problem by just copying and pasting R code. There are four things you need to include to make your example reproducible: required packages, data, code, and a description of your R environment. Packages should be loaded at the top of the script, so it's easy to see which ones the example needs. The easiest way to include data in an email is to use dput() to generate the R code to recreate it. For example, to recreate the mtcars dataset in R, I'd perform the following steps: Run dput(mtcars) in R Copy the output In my reproducible script, type mtcars <- then paste. Spend a little bit of time ensuring that your code is easy for others to read: make sure you've used spaces and your variable names are concise, but informative, use comments to indicate where your problem lies, do your best to remove everything that is not related to the problem. The shorter your code is, the easier it is to understand. Include the output of sessionInfo() as a comment. This summarises your R environment and makes it easy to check if you're using an out-of-date package. You can check you have actually made a reproducible example by starting up a fresh R session and pasting your script in. Before putting all of your code in an email, consider putting it on http://gist.github.com/. It will give your code nice syntax highlighting, and you don't have to worry about anything getting mangled by the email system."

examp4 <- "Do your homework before posting: If it is clear that you have done basic background research, you are far more likely to get an informative response. See also Further Resources further down this page. Do help.search(keyword) and apropos(keyword) with different keywords (type this at the R prompt). Do RSiteSearch(keyword) with different keywords (at the R prompt) to search R functions, contributed packages and R-Help postings. See ?RSiteSearch for further options and to restrict searches. Read the online help for relevant functions (type ?functionname, e.g., ?prod, at the R prompt) If something seems to have changed in R, look in the latest NEWS file on CRAN for information about it. Search the R-faq and the R-windows-faq if it might be relevant (http://cran.r-project.org/faqs.html) Read at least the relevant section in An Introduction to R If the function is from a package accompanying a book, e.g., the MASS package, consult the book before posting. The R Wiki has a section on finding functions and documentation"

examp5 <- "Before asking a technical question by e-mail, or in a newsgroup, or on a website chat board, do the following: Try to find an answer by searching the archives of the forum you plan to post to. Try to find an answer by searching the Web. Try to find an answer by reading the manual. Try to find an answer by reading a FAQ. Try to find an answer by inspection or experimentation. Try to find an answer by asking a skilled friend. If you're a programmer, try to find an answer by reading the source code. When you ask your question, display the fact that you have done these things first; this will help establish that you're not being a lazy sponge and wasting people's time. Better yet, display what you have learned from doing these things. We like answering questions for people who have demonstrated they can learn from the answers. Use tactics like doing a Google search on the text of whatever error message you get (searching Google groups as well as Web pages). This might well take you straight to fix documentation or a mailing list thread answering your question. Even if it doesn't, saying “I googled on the following phrase but didn't get anything that looked promising” is a good thing to do in e-mail or news postings requesting help, if only because it records what searches won't help. It will also help to direct other people with similar problems to your thread by linking the search terms to what will hopefully be your problem and resolution thread. Take your time. Do not expect to be able to solve a complicated problem with a few seconds of Googling. Read and understand the FAQs, sit back, relax and give the problem some thought before approaching experts. Trust us, they will be able to tell from your questions how much reading and thinking you did, and will be more willing to help if you come prepared. Don't instantly fire your whole arsenal of questions just because your first search turned up no answers (or too many). Prepare your question. Think it through. Hasty-sounding questions get hasty answers, or none at all. The more you do to demonstrate that having put thought and effort into solving your problem before seeking help, the more likely you are to actually get help. Beware of asking the wrong question. If you ask one that is based on faulty assumptions, J. Random Hacker is quite likely to reply with a uselessly literal answer while thinking Stupid question..., and hoping the experience of getting what you asked for rather than what you needed will teach you a lesson."

Now let's process the example text a little, in the usual way. First convert the character vectors to corpora.

library(tm)
list_examps <- list(examp1, examp2, examp3, examp4, examp5)
list_corpora <- lapply(list_examps, function(x) Corpus(VectorSource(x)))

Now remove stopwords, numbers, punctuation, etc.

skipWords <- function(x) removeWords(x, stopwords("english"))
funcs <- list(tolower, removePunctuation, removeNumbers, stripWhitespace, skipWords)
list_corpora1 <- lapply(list_corpora, function(x) tm_map(x, FUN = tm_reduce, tmFuns = funcs))

Convert the processed corpora to term-document matrices:

list_dtms <- lapply(list_corpora1, function(x) TermDocumentMatrix(x, control = list(wordLengths = c(3,10))))

Get the most frequently occurring words in each corpus:

top_words <- lapply(list_dtms, function(x) sort(rowSums(as.matrix(x)), decreasing=TRUE))

And reshape it into a dataframe according to the specified form:

library(plyr)
top_words_df <- t(ldply(top_words, function(x) head(names(x), 10)))
colnames(top_words_df) <- paste0("corpus", seq_along(list_dtms))
top_words_df

corpus1 corpus2 corpus3 corpus4 corpus5
V1 "example" "data" "code" "functions" "answer"
V2 "addition" "people" "example" "prompt" "help"
V3 "data" "synthetic" "easy" "relevant" "try"
V4 "how" "able" "email" "book" "question"
V5 "include" "actually" "include" "keywords" "questions"
V6 "what" "bother" "recreate" "package" "reading"
V7 "when" "consultant" "script" "posting" "answers"
V8 "are" "cut" "check" "read" "people"
V9 "avoid" "form" "data" "search" "search"
V10 "bug" "happen" "mtcars" "section" "searching"

Can you adapt that to work with your data? If not, please edit your question to more accurately show what your data look like.

R tm package: create matrix of N most frequent terms

The term-document matrices in tm are already stored as sparse matrices (simple_triplet_matrix objects from the slam package). Here, mydata.tdm$i and mydata.tdm$j are the index vectors of the matrix and mydata.tdm$v is the corresponding vector of frequencies, so you can create a sparse matrix (using sparseMatrix() from the Matrix package) by writing:

library(Matrix)
sparseMatrix(i = mydata.tdm$i, j = mydata.tdm$j, x = mydata.tdm$v,
             dims = c(mydata.tdm$nrow, mydata.tdm$ncol))

Then you can use rowSums and link the rows you're interested in back to the terms they stand for via mydata.tdm$dimnames$Terms.
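
Putting it together, a minimal sketch (assuming mydata.tdm is your TermDocumentMatrix):

library(Matrix)
m <- sparseMatrix(i = mydata.tdm$i, j = mydata.tdm$j, x = mydata.tdm$v,
                  dims = c(mydata.tdm$nrow, mydata.tdm$ncol))
freqs <- rowSums(m)
names(freqs) <- mydata.tdm$dimnames$Terms
head(sort(freqs, decreasing = TRUE), 10)  # top 10 terms, no dense matrix needed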

Extract top features by frequency per document from a dtm in R

Using quanteda:

library(quanteda)
txt <- c("hello world world fizz", "foo bar bar buzz")
dfm <- dfm(txt)
topfeatures(dfm, n = 2, groups = seq_len(ndoc(dfm)))
# $`1`
# world hello
# 2 1
#
# $`2`
# bar foo
# 2 1

You can also convert between DocumentTermMatrix and dfm.
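
For example (a sketch; convert() and as.dfm() are quanteda functions):

# quanteda dfm -> tm DocumentTermMatrix
dtm_from_dfm <- convert(my_dfm, to = "tm")
# tm DocumentTermMatrix -> quanteda dfm
dfm_from_dtm <- as.dfm(dtm_from_dfm)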

Or using the classical tm:

library(tm)
packageVersion("tm")
# [1] ‘0.7.1’
txt <- c(doc1="hello world world", doc2="foo bar bar fizz buzz")
dtm <- DocumentTermMatrix(Corpus(VectorSource(txt)))
n <- 5
(top <- findMostFreqTerms(dtm, n = n))
# $doc1
# world hello
# 2 1
#
# $doc2
# bar buzz fizz foo
# 2 1 1 1
do.call(rbind, lapply(top, function(x) { x <- names(x); length(x) <- n; x }))
# [,1] [,2] [,3] [,4] [,5]
# doc1 "world" "hello" NA NA NA
# doc2 "bar" "buzz" "fizz" "foo" NA

findMostFreqTerms is available since tm version 0.7-1.
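
If your tm is older than that, you can get a similar per-document list by densifying the dtm (fine for small matrices); a sketch:

m <- as.matrix(dtm)  # docs x terms
sapply(rownames(m), function(d) head(sort(m[d, m[d, ] > 0], decreasing = TRUE), n),
       simplify = FALSE)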

Counting words in a single document from corpus in R and putting it in dataframe

If your data are in a Document Term Matrix, you'd use tm::findFreqTerms to get the most used terms in a document. Here's a reproducible example:

require(tm)
data(crude)
dtm <- DocumentTermMatrix(crude)
dtm
A document-term matrix (20 documents, 1266 terms)

Non-/sparse entries: 2255/23065
Sparsity : 91%
Maximal term length: 17
Weighting : term frequency (tf)

# find terms occurring between 2 and 100 times across all 20 docs
findFreqTerms(dtm, 2, 100)

# find the doc names
dtm$dimnames$Docs
[1] "127" "144" "191" "194" "211" "236" "237" "242" "246" "248" "273" "349" "352" "353" "368" "489" "502"
[18] "543" "704" "708"

# find frequent words in a single doc
findFreqTerms(dtm[dtm$dimnames$Docs == "127", ], 2, 100)
[1] "crude" "cut" "diamond" "dlrs" "for" "its" "oil" "price"
[9] "prices" "reduction" "said." "that" "the" "today" "weak"

Here's how you'd find the most frequent words for each doc in the dtm, one document at a time:

# find freq words for each doc, one by one
list_freqs <- lapply(dtm$dimnames$Docs,
                     function(i) findFreqTerms(dtm[dtm$dimnames$Docs == i, ], 2, 100))

list_freqs
[[1]]
[1] "crude" "cut" "diamond" "dlrs" "for" "its" "oil" "price"
[9] "prices" "reduction" "said." "that" "the" "today" "weak"

[[2]]
[2] "\"opec" "\"the" "15.8" "ability" "above" "address" "agreement"
[8] "analysts" "and" "before" "bpd" "but" "buyers" "current"
[15] "demand" "emergency" "energy" "for" "has" "have" "higher"
[22] "hold" "industry" "its" "keep" "market" "may" "meet"
[29] "meeting" "mizrahi" "mln" "must" "next" "not" "now"
[36] "oil" "opec" "organization" "prices" "problem" "production" "said"
[43] "said." "set" "that" "the" "their" "they" "this"
[50] "through" "will"

[[3]]
[3] "canada" "canadian" "crude" "for" "oil" "price" "texaco" "the"

[[4]]
[4] "bbl." "crude" "dlrs" "for" "price" "reduced" "texas" "the" "west"

[[5]]
[5] "and" "discounted" "estimates" "for" "mln" "net" "pct" "present"
[9] "reserves" "revenues" "said" "study" "that" "the" "trust" "value"

[[6]]
[6] "ability" "above" "ali" "and" "are" "barrel."
[7] "because" "below" "bpd" "bpd." "but" "daily"
[13] "difficulties" "dlrs" "dollars" "expected" "for" "had"
[19] "has" "international" "its" "kuwait" "last" "local"
[25] "march" "markets" "meeting" "minister" "mln" "month"
[31] "official" "oil" "opec" "opec\"s" "prices" "producing"
[37] "pumping" "qatar," "quota" "referring" "said" "said."
[43] "sheikh" "such" "than" "that" "the" "their"
[49] "they" "this" "was" "were" "which" "will"

[[7]]
[7] "\"this" "and" "appears" "are" "areas" "bank"
[7] "bankers" "been" "but" "crossroads" "crucial" "economic"
[13] "economy" "embassy" "fall" "for" "general" "government"
[19] "growth" "has" "have" "indonesia\"s" "indonesia," "international"
[25] "its" "last" "measures" "nearing" "new" "oil"
[31] "over" "rate" "reduced" "report" "say" "says"
[37] "says." "sector" "since" "the" "u.s." "was"
[43] "which" "with" "world"

[[8]]
[8] "after" "and" "deposits" "had" "oil" "opec" "pct" "quotes"
[9] "riyal" "said" "the" "were" "yesterday."

[[9]]
[9] "1985/86" "1986/87" "1987/88" "abdul-aziz" "about" "and" "been"
[8] "billion" "budget" "deficit" "expenditure" "fiscal" "for" "government"
[15] "had" "its" "last" "limit" "oil" "projected" "public"
[22] "qatar," "revenue" "riyals" "riyals." "said" "sheikh" "shortfall"
[29] "that" "the" "was" "would" "year" "year's"

[[10]]
[10] "15.8" "about" "above" "accord" "agency" "ali" "among" "and"
[9] "arabia" "are" "dlrs" "for" "free" "its" "kuwait" "market"
[17] "market," "minister," "mln" "nazer" "oil" "opec" "prices" "producing"
[25] "quoted" "recent" "said" "said." "saudi" "sheikh" "spa" "stick"
[33] "that" "the" "they" "under" "was" "which" "with"

[[11]]
[11] "1.2" "and" "appeared" "arabia's" "average" "barrel." "because" "below"
[9] "bpd" "but" "corp" "crude" "december" "dlrs" "export" "exports"
[17] "february" "fell" "for" "four" "from" "gulf" "january" "january,"
[25] "last" "mln" "month" "month," "neutral" "official" "oil" "opec"
[33] "output" "prices" "production" "refinery" "said" "said." "saudi" "sell"
[41] "sources" "than" "the" "they" "throughput" "week" "yanbu" "zone"

[[12]]
[12] "and" "arab" "crude" "emirates" "gulf" "ministers" "official" "oil"
[9] "states" "the" "wam"

[[13]]
[13] "accord" "agency" "and" "arabia" "its" "nazer" "oil" "opec" "prices" "saudi" "the"
[12] "under"

[[14]]
[14] "crude" "daily" "for" "its" "oil" "opec" "pumping" "that" "the" "was"

[[15]]
[15] "after" "closed" "new" "nuclear" "oil" "plant" "port" "power" "said" "ship"
[11] "the" "was" "when"

[[16]]
[16] "about" "and" "development" "exploration" "for" "from" "help"
[8] "its" "mln" "oil" "one" "present" "prices" "research"
[15] "reserve" "said" "strategic" "the" "u.s." "with" "would"

[[17]]
[17] "about" "and" "benefits" "development" "exploration" "for" "from"
[8] "group" "help" "its" "mln" "oil" "one" "policy"
[15] "present" "prices" "protect" "research" "reserve" "said" "strategic"
[22] "study" "such" "the" "u.s." "with" "would"

[[18]]
[18] "1.50" "company" "crude" "dlrs" "for" "its" "lowered" "oil" "posted" "prices"
[11] "said" "said." "the" "union" "west"

[[19]]
[19] "according" "and" "april" "before" "can" "change" "efp"
[8] "energy" "entering" "exchange" "for" "futures" "has" "hold"
[15] "increase" "into" "mckiernan" "new" "not" "nymex" "oil"
[22] "one" "position" "prices" "rule" "said" "spokeswoman." "that"
[29] "the" "traders" "transaction" "when" "will"

[[20]]
[20] "1986," "1987" "billion" "cubic" "fiscales" "january" "mln"
[8] "pct" "petroliferos" "yacimientos"

If you want this output in a dataframe, you can do this:

# from here http://stackoverflow.com/a/7196565/1036500
L <- list_freqs
cfun <- function(L) {
  pad.na <- function(x, len) {
    c(x, rep(NA, len - length(x)))
  }
  maxlen <- max(sapply(L, length))
  do.call(data.frame, lapply(L, pad.na, len = maxlen))
}
# make dataframe of words (but probably you want words as rownames and cells with counts?)
tab_freqa <- cfun(L)
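
(If you do want counts, you can index the dense matrix by a document and that document's frequent terms; a quick sketch:)

# e.g. counts of doc "127"'s frequent terms
as.matrix(dtm)["127", list_freqs[[1]]]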

But if you want to plot 'doc 1 high freq terms vs doc 2 high freq terms', then we'll need a different approach...

# convert dtm to matrix
mat <- as.matrix(dtm)

# make data frame similar to "3 columns 'Terms',
# 'Series x', 'Series Y'. With series x and y
# having the number of times that word occurs"
cb <- data.frame(doc1 = mat['127',], doc2 = mat['144',])

# keep only words that are in at least one doc
cb <- cb[rowSums(cb) > 0, ]

# plot
require(ggplot2)
ggplot(cb, aes(doc1, doc2)) +
  geom_text(label = rownames(cb),
            position = position_jitter())

Or perhaps slightly more efficiently, we can make one big dataframe of all the docs and make plots from that:

# this is the typical method to turn a 
# dtm into a df...
df <- as.data.frame(as.matrix(dtm))
# and transpose for plotting
df <- data.frame(t(df))
# plot
require(ggplot2)
ggplot(df, aes(X127, X144)) +
  geom_text(label = rownames(df),
            position = position_jitter())

After you remove stopwords this will look better, but this is a good proof of concept. Is that what you were after?

[Image: jittered text plot of term frequencies, doc 127 vs doc 144]
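
One way to drop stopwords before building the dtm, as a sketch:

crude2 <- tm_map(crude, content_transformer(tolower))
crude2 <- tm_map(crude2, removeWords, stopwords("english"))
dtm2 <- DocumentTermMatrix(crude2)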

Find frequency of a custom word in R TermDocumentMatrix using TM package

Since you have not given a reproducible example, I will give one using the crude dataset available in the tm package.

You can do it in (at least) two different ways, but note that anything that turns a sparse matrix into a dense matrix can use a lot of memory. So I will give you two options: the first is more memory-friendly, as it makes use of the sparse tdm; the second first transforms the tdm into a dense matrix before creating a frequency vector.

library(tm)
data("crude")
crude <- as.VCorpus(crude)
crude <- tm_map(crude, stripWhitespace)
crude <- tm_map(crude, removePunctuation)
crude <- tm_map(crude, content_transformer(tolower))
crude <- tm_map(crude, removeWords, stopwords("english"))

tdm <- TermDocumentMatrix(crude)

# Making use of the fact that a tdm or dtm is a simple_triplet_matrix from slam
my_func <- function(data, word){
  slam::row_sums(data[data$dimnames$Terms == word, ])
}

my_func(tdm, "crude")
crude
21
my_func(tdm, "oil")
oil
85

# turn tdm into dense matrix and create frequency vector.
freq <- rowSums(as.matrix(tdm))
freq["crude"]
crude
21
freq["oil"]
oil
85

EDIT: As requested in a comment:

# all words starting with cru. Adjust regex to find what you need.
freq[grep("^cru", names(freq))]
crucial crude
2 21

# separate words
freq[c("crude", "oil")]
crude oil
21 85

