(This article was first published on **The Schmitt-R**, and kindly contributed to R-bloggers)

The wordcloud package for R is great, but all the examples I found used the tm package to process a large amount of textual data (web pages, text files, Google Docs, etc.).

But what if you have normalized data where you already have each word and its frequency? Or what if you have whole phrases that you want in a wordcloud? One example is terms that users have entered into a web search.

I happen to be pulling from a data source via PHP and then outputting the data in CSV format, in descending order by frequency.

#### The relevant part of the PHP script (after populating the array `$terms`):

```php
<?php
// $terms is an associative array of term => frequency, populated earlier.
// $min_freq (the minimum frequency to include) is assumed to be set
// elsewhere in the script; defined here so the snippet runs standalone.
$min_freq = 1;

$cwd = getcwd();
$local_path = $cwd . '/csv/';
$filename = $local_path . 'searchterms.csv';

$fp = fopen($filename, 'w');
fputcsv($fp, array('term', 'freq'));

arsort($terms); // sort array by value, descending

$max_terms = 100;
$i = 0;
foreach ($terms as $q => $v) {
    $i++;
    if ($v > $min_freq) fputcsv($fp, array($q, $v));
    if ($i > $max_terms) break;
}

fclose($fp);
```

#### Here is the sample data:
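The original sample file isn't reproduced here, but based on the PHP export above, the CSV has a `term,freq` header followed by one row per search phrase, sorted by descending frequency. The values below are illustrative only:

```csv
term,freq
wordcloud,42
r programming,31
csv export,18
search term frequency,9
```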

#### The R script:

One consideration: if a search phrase is too long to fit, R will produce a warning and omit it from the resulting wordcloud, so you need to compensate with larger image dimensions. It may be possible to scale the image dynamically based on the string length of the highest-frequency result.
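The script itself didn't survive extraction, so here is a minimal sketch of this approach, assuming the CSV produced above is in the working directory; the file names, image dimensions, and tuning parameters are placeholders to adjust for your own data:

```r
# Sketch: build a wordcloud from pre-aggregated term/frequency data,
# skipping tm entirely since the counts already exist.
library(wordcloud)
library(RColorBrewer)

df <- read.csv("searchterms.csv", stringsAsFactors = FALSE)

# Use a generously sized PNG so long phrases fit and aren't dropped
png("searchterms.png", width = 1200, height = 800)
wordcloud(words = df$term, freq = df$freq,
          scale = c(4, 0.5),    # font size range, largest to smallest
          min.freq = 2,         # drop very rare terms
          max.words = 100,
          random.order = FALSE, # place highest-frequency terms in the center
          rot.per = 0.15,       # fraction of terms rotated 90 degrees
          colors = brewer.pal(8, "Dark2"))
dev.off()
```

Setting `random.order = FALSE` keeps the most frequent phrases in the middle, which helps long multi-word terms stay legible.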

#### Here is the resulting wordcloud:
