
Kay Cichini (MSc, Biology) is a nature protection expert from Tyrol, Austria, and the author of the theBioBucket blog, where he shares examples of data management, data processing, statistical analysis and report generation using R. Kay is a DZone MVB, not an employee of DZone, and has posted 28 posts at DZone.

Text Mining with R: Comparing Word Counts in Two Text Documents

11.29.2013

Here's what I came up with to compare word counts in two pieces of text. If you've got any ideas, I'd love to learn about alternatives!

## a function that compares word counts in two texts
wordcount <- function(x, y, stem = FALSE, minlen = 1, marg = FALSE) {

  require(tm)

  # tokenize: strip punctuation, split on whitespace, lowercase,
  # and drop words shorter than 'minlen'
  x_clean <- unlist(strsplit(removePunctuation(x), "\\s+"))
  y_clean <- unlist(strsplit(removePunctuation(y), "\\s+"))

  x_clean <- tolower(x_clean[nchar(x_clean) >= minlen])
  y_clean <- tolower(y_clean[nchar(y_clean) >= minlen])

  # optionally reduce words to their stems before counting
  if (stem) {
    x_clean <- stemDocument(x_clean)
    y_clean <- stemDocument(y_clean)
  }

  x_tab <- table(x_clean)
  y_tab <- table(y_clean)

  # cross-tabulate both texts against the union of their terms
  cnam <- sort(unique(c(names(x_tab), names(y_tab))))

  z <- matrix(0, nrow = 3, ncol = length(cnam) + 1,
              dimnames = list(c("x", "y", "rowsum"), c(cnam, "colsum")))
  z["x", names(x_tab)] <- x_tab
  z["y", names(y_tab)] <- y_tab
  z["rowsum", ] <- colSums(z)
  z[, "colsum"] <- rowSums(z)

  # marg = TRUE keeps the margin totals, otherwise they are dropped
  if (marg) t(z) else t(z[-nrow(z), -ncol(z)])
}
 
## example
x = "Hello new, new world, this is one of my nice text documents - I wrote it today"
y = "Good bye old, old world, this is a nicely and well written text document"
 
wordcount(x, y, stem = TRUE, minlen = 3, marg = TRUE)
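For comparison, the cross-tabulation idea can also be sketched in base R without the tm dependency (no stemming; `wordcount_base` and its simple whitespace tokenization are illustrative, not part of the original function):

```r
# Minimal base-R sketch: count words in two texts and cross-tabulate
# them against the union of their terms (one row per term)
wordcount_base <- function(x, y, minlen = 1) {
  tokenize <- function(s) {
    w <- tolower(unlist(strsplit(gsub("[[:punct:]]", "", s), "\\s+")))
    w[nchar(w) >= minlen]
  }
  x_tab <- table(tokenize(x))
  y_tab <- table(tokenize(y))
  terms <- sort(unique(c(names(x_tab), names(y_tab))))
  z <- matrix(0L, nrow = length(terms), ncol = 2,
              dimnames = list(terms, c("x", "y")))
  z[names(x_tab), "x"] <- x_tab
  z[names(y_tab), "y"] <- y_tab
  z
}

wordcount_base("new new world", "old old world", minlen = 3)
```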

Follow-Up:

Thanks a lot for the comments! As I'm not that much into text mining, I was trying to reinvent the wheel (in a rather amateurish manner) and missed the capabilities of existing packages. Here's the shortest code I could find that does the same thing (and that can yield much more, if desired).

x = "Hello new, new world, this is one of my nice text documents"
y = "Good bye old, old world, this is a text document"
z = "Good bye old, old world, this is a text document with WORDS for STEMMING  - BTW, what is the stem of irregular verbs like write, wrote, written?"
 
library(tm)

# Build a corpus with two or more documents. The nice thing here is that
# an (almost) unlimited number of documents can be cross-tabulated
# against the terms used, and the control list lets you do lots of
# preprocessing before tabulation (see ?termFreq, e.g.).

xyz_corp <- Corpus(VectorSource(c(x, y, z)))

cntr <- list(removePunctuation = TRUE, stemming = TRUE,
             wordLengths = c(3, Inf))

as.matrix(TermDocumentMatrix(xyz_corp, control = cntr))
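As one hedged example of getting more out of such a matrix, the document columns can be compared directly, e.g. with cosine similarity. The small matrix `m` and the helper `cosine_sim` below are illustrative stand-ins for a real term-document matrix, not tm functionality:

```r
# Stand-in for a term-document matrix: rows are terms, columns documents
m <- matrix(c(1, 0,
              1, 1,
              0, 2), nrow = 3, byrow = TRUE,
            dimnames = list(c("hello", "world", "old"), c("doc1", "doc2")))

# cosine similarity between two count vectors: dot product over norms
cosine_sim <- function(a, b) sum(a * b) / (sqrt(sum(a^2)) * sqrt(sum(b^2)))

cosine_sim(m[, "doc1"], m[, "doc2"])
```

A value near 1 means the two documents use terms in similar proportions; 0 means they share no terms.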


Published at DZone with permission of Kay Cichini, author and DZone MVB.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)