(This article was first published on **Anything but R-bitrary**, and kindly contributed to R-bloggers.)

…or your money back.

```r
author = "Ben Ogorek"
Twitter = "@baogorek"
email = paste0(sub("@", "", Twitter), "@gmail.com")
```

## Setup

Pretend this is Big Data:

```r
doc1 <- "Stray cats are running all over the place. I see 10 a day!"
doc2 <- "Cats are killers. They kill billions of animals a year."
doc3 <- "The best food in Columbus, OH is the North Market."
doc4 <- "Brand A is the best tasting cat food around. Your cat will love it."
doc5 <- "Buy Brand C cat food for your cat. Brand C makes healthy and happy cats."
doc6 <- "The Arnold Classic came to town this weekend. It reminds us to be healthy."
doc7 <- "I have nothing to say. In summary, I have told you nothing."
```

and this is the Big File System:

```r
doc.list <- list(doc1, doc2, doc3, doc4, doc5, doc6, doc7)
N.docs <- length(doc.list)
names(doc.list) <- paste0("doc", c(1:N.docs))
```

You have an information need that is expressed via the following text query:

`query <- "Healthy cat food"`

How will you meet your information need amidst all this unstructured text?

Jokes aside, we're going to use an old method, one that's tried and true, one that goes way back to the 1960s. We're going to implement the vector space model of information retrieval in R. In the process, you'll hopefully learn something about the *tm* package and about the analysis of unstructured data before it was Big.

## Text mining in R

### Fundamentals of the *tm* package

If you have not installed the *tm* [1][2] and *Snowball* [3] packages, please do so now.

```r
install.packages("tm")
install.packages("Snowball")
```

Load the *tm* package into memory.

`library(tm)`

In text mining and related fields, a corpus is a collection of texts, often with extensive manual annotation. Not surprisingly, the *Corpus* class is a fundamental data structure in *tm*.

```r
my.docs <- VectorSource(c(doc.list, query))
my.docs$Names <- c(names(doc.list), "query")

my.corpus <- Corpus(my.docs)
my.corpus
## A corpus with 8 text documents
```

Above we treated the query like any other document. It is, after all, just another string of text. Queries are not typically known a priori, but in the processing steps that follow, we will pretend like we knew ours in advance to avoid repeating steps.

### Standardizing

One of the nice things about the Corpus class is the *tm_map* function, which cleans and standardizes documents within a Corpus object. Below are some of the transformations.

```r
getTransformations()
## [1] "as.PlainTextDocument" "removeNumbers"        "removePunctuation"
## [4] "removeWords"          "stemDocument"         "stripWhitespace"
```

First, let's get rid of punctuation.

```r
my.corpus <- tm_map(my.corpus, removePunctuation)
my.corpus$doc1
## Stray cats are running all over the place I see 10 a day
```

Suppose we don't want to count “cats” and “cat” as two separate words. Then we will use the *stemDocument* transformation to implement the famous Porter Stemmer algorithm. To use this particular transformation, first load the *Snowball* package.

```r
library(Snowball)
my.corpus <- tm_map(my.corpus, stemDocument)
my.corpus$doc1
## Stray cat are run all over the place I see 10 a day
```

Finally, remove the numbers, convert everything to lowercase, and strip any extra whitespace.

```r
my.corpus <- tm_map(my.corpus, removeNumbers)
my.corpus <- tm_map(my.corpus, tolower)
my.corpus <- tm_map(my.corpus, stripWhitespace)
my.corpus$doc1
## stray cat are run all over the place i see a day
```

We applied all these standardization techniques without much thought. For instance, we sacrificed inflection in favor of fewer words. But at least the transformations make sense on a heuristic level, much like the similarity concepts to follow.

## The vector space model

### Document similarity

Here's a trick that's been around for a while: represent each document as a vector in \( \mathcal{R}^N \) (with \( N \) as the number of words) and use the angle \( \theta \) between the vectors as a similarity measure. Rank by the similarity of each document to the query and you have a search engine.

One of the simplest things we can do is count words within documents. This naturally forms a two-dimensional structure, the *term document matrix*, with rows corresponding to the words and the columns corresponding to the documents. As with any matrix, we may think of a term document matrix as a collection of column vectors existing in a space defined by the rows. The query lives in this space as well, though in practice we wouldn't know it beforehand.

```r
term.doc.matrix.stm <- TermDocumentMatrix(my.corpus)
inspect(term.doc.matrix.stm[0:14, ])
## A term-document matrix (14 terms, 8 documents)
##
## Non-/sparse entries: 21/91
## Sparsity           : 81%
## Maximal term length: 8
## Weighting          : term frequency (tf)
##
##           Docs
## Terms      doc1 doc2 doc3 doc4 doc5 doc6 doc7 query
##   all         1    0    0    0    0    0    0     0
##   and         0    0    0    0    1    0    0     0
##   anim        0    1    0    0    0    0    0     0
##   are         1    1    0    0    0    0    0     0
##   arnold      0    0    0    0    0    1    0     0
##   around      0    0    0    1    0    0    0     0
##   best        0    0    1    1    0    0    0     0
##   billion     0    1    0    0    0    0    0     0
##   brand       0    0    0    1    2    0    0     0
##   buy         0    0    0    0    1    0    0     0
##   came        0    0    0    0    0    1    0     0
##   cat         1    1    0    2    3    0    0     1
##   classic     0    0    0    0    0    1    0     0
##   columbus    0    0    1    0    0    0    0     0
```

### Sparsity and storage of the term document matrix

The matrices in *tm* are of type *Simple Triplet Matrix*, where only the triples \( (i, j, value) \) are stored for non-zero values. To work directly with these objects, you may install the *slam* [4] package. We bear some extra cost by making the matrix “dense” (i.e., storing all the zeros) below.

```r
term.doc.matrix <- as.matrix(term.doc.matrix.stm)

cat("Dense matrix representation costs", object.size(term.doc.matrix), "bytes.\n",
    "Simple triplet matrix representation costs", object.size(term.doc.matrix.stm),
    "bytes.")
## Dense matrix representation costs 6688 bytes.
##  Simple triplet matrix representation costs 5808 bytes.
```

### Variations on a theme

In *term.doc.matrix*, the dimensions of the document space are simple term frequencies. This is fine, but other heuristics are available. For instance, rather than a linear increase in the term frequency \( tf \), perhaps \( \sqrt{tf} \) or \( \log(tf) \) would provide more reasonable diminishing returns on word counts within documents.
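As a quick aside (a sketch with made-up counts, not part of the pipeline), here is how these sublinear transformations damp raw counts. Note that under log damping, doubling a term's count raises its weight by just one:

```r
tf <- c(1, 2, 4, 8, 16)  # raw term frequencies
sqrt(tf)                 # square-root damping: 1.00 1.41 2.00 2.83 4.00
1 + log2(tf)             # log damping: 1 2 3 4 5
```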

Rare words can also get a boost. The word “healthy” appears in only one document, whereas “cat” appears in four. A word's document frequency \( df \) is the number of documents that contain it, and a natural choice is to weight words inversely proportional to their \( df \)s. As with term frequency, we may use logarithms or other transformations to achieve the desired effect.
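To make this concrete, here is a self-contained sketch (using a made-up two-term matrix, not the corpus above) that computes document frequencies and a log-scaled inverse weighting:

```r
# Toy term-document counts: rows are terms, columns are documents
tdm <- matrix(c(1, 1, 1, 1,   # "cat" appears in all four documents
                1, 0, 0, 0),  # "healthy" appears in just one
              nrow = 2, byrow = TRUE,
              dimnames = list(c("cat", "healthy"), paste0("doc", 1:4)))

df  <- rowSums(tdm > 0)      # document frequencies: cat = 4, healthy = 1
idf <- log2(ncol(tdm) / df)  # cat gets weight 0; rare "healthy" gets log2(4) = 2
```

A term appearing in every document contributes nothing to ranking, so its inverse document frequency weight of zero is exactly what we want.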

The *tm* function *weightTfIdf* offers one variety of tfidf weighting, but below we build our own. Visit the Wikipedia page for the SMART Information Retrieval System for a brief history and a list of popular weighting choices.

Different weighting choices are often made for the query and the documents. For instance, Manning et al.'s worked example [5] uses \( idf \) weighting only for the query.

### Choice and implementation

For both the document and query, we choose tfidf weights of \( (1 + \log_2(tf)) \times \log_2(N/df) \), which are defined to be \( 0 \) if \( tf = 0 \). Note that whenever a term does not occur in a specific document, or when it appears in every document, its weight is zero.

We implement this weighting function across entire rows of the term document matrix, and therefore our tfidf function must take a term frequency vector and a document frequency scalar as inputs.

```r
get.tf.idf.weights <- function(tf.vec, df) {
    # Computes tfidf weights from a term frequency vector and a document
    # frequency scalar
    weight = rep(0, length(tf.vec))
    weight[tf.vec > 0] = (1 + log2(tf.vec[tf.vec > 0])) * log2(N.docs/df)
    weight
}

cat("A word appearing in 4 of 6 documents, occurring 1, 2, 3, and 6 times, respectively: \n",
    get.tf.idf.weights(c(1, 2, 3, 0, 0, 6), 4))
## A word appearing in 4 of 6 documents, occurring 1, 2, 3, and 6 times, respectively:
##  0.8074 1.615 2.087 0 0 2.894
```

Using *apply*, we run the tfidf weighting function on every row of the term document matrix. The document frequency is easily derived from each row by counting the non-zero entries (not including the query).

```r
get.weights.per.term.vec <- function(tfidf.row) {
    term.df <- sum(tfidf.row[1:N.docs] > 0)
    tf.idf.vec <- get.tf.idf.weights(tfidf.row, term.df)
    return(tf.idf.vec)
}

tfidf.matrix <- t(apply(term.doc.matrix, c(1), FUN = get.weights.per.term.vec))
colnames(tfidf.matrix) <- colnames(term.doc.matrix)

tfidf.matrix[0:3, ]
##       Docs
## Terms   doc1  doc2 doc3 doc4  doc5 doc6 doc7 query
##   all  2.807 0.000    0    0 0.000    0    0     0
##   and  0.000 0.000    0    0 2.807    0    0     0
##   anim 0.000 2.807    0    0 0.000    0    0     0
```

### Dot product geometry

A benefit of being in the vector space \( \mathcal{R}^N \) is the use of its dot product. For vectors \( a \) and \( b \), the geometric definition of the dot product is \( a \cdot b = \vert\vert a\vert\vert \, \vert\vert b \vert\vert \cos \theta \), where \( \vert\vert \cdot \vert\vert \) is the Euclidean norm (the root sum of squares) and \( \theta \) is the angle between \( a \) and \( b \).
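As a quick numerical sanity check (a sketch using two arbitrary vectors, not part of the retrieval pipeline), the algebraic dot product and the geometric definition agree:

```r
a <- c(1, 2, 3)
b <- c(4, 5, 6)

dot.product <- sum(a * b)                # algebraic dot product: 32
cos.theta   <- dot.product /
    (sqrt(sum(a^2)) * sqrt(sum(b^2)))    # cos(theta), about 0.975
theta       <- acos(cos.theta)           # the angle itself, in radians
```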

In fact, we can work directly with the cosine of \( \theta \). For \( \theta \) in the interval \( [-\pi/2, \pi/2] \), the endpoints are orthogonality (totally unrelated documents) and the center, zero, is complete collinearity (maximally similar documents). We can see that the cosine decreases from its maximum value of \( 1.0 \) as the angle departs from zero in either direction.

```r
angle <- seq(-pi, pi, by = pi/16)
plot(cos(angle) ~ angle, type = "b", xlab = "angle in radians",
     main = "Cosine similarity by angle")
```

We may furthermore normalize each column vector in our tfidf matrix so that its norm is one. Now the dot product *is* \( \cos \theta \).

```r
tfidf.matrix <- scale(tfidf.matrix, center = FALSE,
                      scale = sqrt(colSums(tfidf.matrix^2)))
tfidf.matrix[0:3, ]
##       Docs
## Terms    doc1   doc2 doc3 doc4   doc5 doc6 doc7 query
##   all  0.3632 0.0000    0    0 0.0000    0    0     0
##   and  0.0000 0.0000    0    0 0.3486    0    0     0
##   anim 0.0000 0.3923    0    0 0.0000    0    0     0
```

### Matrix multiplication: a dot product machine

Keeping the query alongside the other documents let us avoid repeating the same steps. But now it's time to pretend it was never there.

```r
query.vector <- tfidf.matrix[, (N.docs + 1)]
tfidf.matrix <- tfidf.matrix[, 1:N.docs]
```

With the query vector and the set of document vectors in hand, it is time to go after the cosine similarities. These are simple dot products as our vectors have been normalized to unit length.

Recall that matrix multiplication is really just a sequence of vector dot products. The matrix operation below returns values of \( \cos \theta \) for each document vector and the query vector.

`doc.scores <- t(query.vector) %*% tfidf.matrix`

With scores in hand, rank the documents by their cosine similarities with the query vector.

```r
results.df <- data.frame(doc = names(doc.list), score = t(doc.scores),
                         text = unlist(doc.list))
results.df <- results.df[order(results.df$score, decreasing = TRUE), ]
```

### The results

How did our search engine do?

```r
options(width = 2000)
print(results.df, row.names = FALSE, right = FALSE, digits = 2)
##  doc  score text
##  doc5 0.344 Buy Brand C cat food for your cat. Brand C makes healthy and happy cats.
##  doc6 0.183 The Arnold Classic came to town this weekend. It reminds us to be healthy.
##  doc4 0.177 Brand A is the best tasting cat food around. Your cat will love it.
##  doc3 0.115 The best food in Columbus, OH is the North Market.
##  doc2 0.039 Cats are killers. They kill billions of animals a year.
##  doc1 0.036 Stray cats are running all over the place. I see 10 a day!
##  doc7 0.000 I have nothing to say. In summary, I have told you nothing.
```

Our “best” document, at least in an intuitive sense, comes out ahead with a score nearly twice as high as its nearest competitor. Notice, however, that this next competitor has nothing to do with cats. This is due to the relative rareness of the word “healthy” in the documents and our choice to incorporate inverse document frequency weighting for both documents and query. Fortunately, the profoundly uninformative document 7 has been ranked dead last.

## Discussion

I joked at the beginning that our seven documents were “Big Data.” This is silly but not off the wall. Hadoop is the technology most synonymous with Big Data. What is Hadoop's most common example and use case?

Wait…

Wait for it…

It's counting words in text! Sorry if that disappoints you. To be fair, the programming paradigm used by Hadoop can be made to do many things, and even something as simple as counting words becomes troublesome with enough text and a pressing deadline (i.e., not 20 minutes). But sans deadline and gigabytes of text, we *were* counting words.

Though tfidf weighting and the vector space model may now be old school, their core concepts are still used in industrial search solutions built on Lucene. In more modern (and statistical) approaches based on probabilistic language modeling, documents are ranked by the probability that their underlying language model produced the query [6]. While there's nothing inherently statistical about the vector space model, a link to probabilistic language modeling has been demonstrated [7].

I hope you've enjoyed exploring the *tm* package and some of the clever ideas introduced by the information retrieval community. Whatever happens with “Big Data,” keep experimenting with text analysis in R!

*Acknowledgements*

The *markdown* [8] and *knitr* [9] packages, in conjunction with RStudio's IDE [10], were used to create this document. Thanks to Chris Nicholas and Shannon Terry for their comments and feedback. I first learned about information retrieval in Coursera's Stanford Natural Language Processing course taught by Dan Jurafsky and Christopher Manning.

## References

1. Ingo Feinerer and Kurt Hornik (2013). tm: Text Mining Package. R package version 0.5-8.3. http://CRAN.R-project.org/package=tm
2. Ingo Feinerer, Kurt Hornik, and David Meyer (2008). Text Mining Infrastructure in R. Journal of Statistical Software 25(5): 1-54. http://www.jstatsoft.org/v25/i05/
3. Kurt Hornik (2013). Snowball: Snowball Stemmers. R package version 0.0-8. http://CRAN.R-project.org/package=Snowball
4. Kurt Hornik, David Meyer, and Christian Buchta (2013). slam: Sparse Lightweight Arrays and Matrices. R package version 0.1-28. http://CRAN.R-project.org/package=slam
5. Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze (2008). Introduction to Information Retrieval. Cambridge University Press. http://www-nlp.stanford.edu/IR-book/
6. Hugo Zaragoza, Djoerd Hiemstra, and Michael Tipping (2003). "Bayesian extension to the language model for ad hoc information retrieval." Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval. ACM.
7. Thorsten Joachims (1996). A Probabilistic Analysis of the Rocchio Algorithm with TFIDF for Text Categorization. No. CMU-CS-96-118. Carnegie Mellon University, Pittsburgh, PA, Department of Computer Science.
8. JJ Allaire, Jeffrey Horner, Vicent Marti, and Natacha Porte (2012). markdown: Markdown rendering for R. R package version 0.5.3. http://CRAN.R-project.org/package=markdown
9. Yihui Xie (2012). knitr: A general-purpose package for dynamic report generation in R. R package version 0.6. http://CRAN.R-project.org/package=knitr
10. RStudio IDE for Windows. http://www.rstudio.com/ide/
11. R Core Team (2013). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0. http://www.R-project.org/
