
Example 8.27: using regular expressions to read data with variable number of words in a field

[This article was first published on SAS and R, and kindly contributed to R-bloggers.]
A more or less anonymous reader commented on our last post, where we were reading data from a file with a varying number of fields. The format of the file was:

1 Las Vegas, NV --- 53.3 --- --- 1
2 Sacramento, CA --- 42.3 --- --- 2

The varying number of fields arose from the spaces in the city field, which could contain anywhere from one to three words.

The reader’s elegant solution took full advantage of R’s regular expressions: a powerful and concise language for processing text.
file <- readLines("http://www.math.smith.edu/r/data/ella.txt")
# wrap the city name (everything between the leading number and the comma) in quotes
file <- gsub("^([0-9]* )(.*),( .*)$", "\\1'\\2'\\3", file)
tc <- textConnection(file)
# "---" marks missing values in the file
processed <- read.table(tc, sep=" ", na.strings="---")
close(tc)

The main work is done by the gsub() function, which processes each line of the input file and puts the city value in quotes (so that it is treated as a single field when read.table() is run).

While not straightforward to parse, the regular expression pattern can be broken into parts. The string ^([0-9]* ) matches any digits (characters 0-9) at the beginning of the line (indicated by the "^"), followed by a space; the "*" means that zero or more such 0-9 characters may appear. The string (.*), matches any number of characters followed by a comma, while the last piece, ( .*)$, matches everything from the next space to the end of the line. The second argument to gsub() gives the replacement text: the "\\1", "\\2", and "\\3" refer back to the text captured by the first, second, and third parenthesized groups, and the single quotes around "\\2" are the characters inserted around the city name. Because the comma in the second clause "(.*)," sits outside the parentheses, it is matched but not captured, and so it is dropped from the result.
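As a concrete illustration of the backreferences, here is a small sketch applying the same pattern to the first line of the file (copied from the data shown above):

# group 1 captures "1 ", group 2 captures "Las Vegas", and group 3 captures
# " NV --- 53.3 --- --- 1"; the comma is matched but not captured, so it is dropped
line <- "1 Las Vegas, NV --- 53.3 --- --- 1"
gsub("^([0-9]* )(.*),( .*)$", "\\1'\\2'\\3", line)
# [1] "1 'Las Vegas' NV --- 53.3 --- --- 1"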

It may be slightly easier to understand the code if we note that the third clause is unnecessary and split the remaining clauses into two separate gsub() commands, as follows.
file <- readLines("http://www.math.smith.edu/r/data/ella.txt")
# insert an opening quote after the leading line number and space
file <- gsub("^([0-9]* )", "\\1'", file)
# replace the comma following the city name with a closing quote
file <- gsub("(.*),", "\\1'", file)
tc <- textConnection(file)
processed <- read.table(tc, sep=" ", na.strings="---")
close(tc)

The first two elements of the file vector become:
"1 'Las Vegas' NV --- 53.3 --- --- 1"        
"2 'Sacramento' CA --- 42.3 --- --- 2"     

The use of the na.strings option to read.table() is a more appropriate approach to recoding the missing values than the one we used previously. Overall, we're impressed with the commenter's use of regular expressions in this example, and are thinking more about Nolan and Temple Lang's focus on them as part of a modern statistical computing curriculum.
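For readers who haven't used the option, here is a minimal sketch (with a made-up three-line column) of the difference it makes: without na.strings, a column containing "---" is read as text and must be recoded afterwards; with it, the column comes in with NA in place of the placeholder.

# without na.strings, the "---" forces the whole column to be read as text
tc <- textConnection(c("5.3", "---", "4.1"))
str(read.table(tc))
close(tc)
# with na.strings, "---" becomes NA and the column is read as numeric
tc <- textConnection(c("5.3", "---", "4.1"))
str(read.table(tc, na.strings="---"))
close(tc)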
