How to check if a file exists with HTTP and R

September 1, 2010

(This article was first published on Gosset's student » R, and kindly contributed to R-bloggers)

So, there’s probably an easier way to do this (please let me know if you know it)…

Suppose you’re working with a system which creates (binary) files and posts them for download on a website. You know the names of the files that will be created. However, they may not have been made yet (they’re generated on the fly, and appear in a vaguely random order over time). There are several of them and you want to know which ones are there yet, and when there are enough uploaded, run an analysis.

I spent quite a bit of time trying to work this out, and eventually came up with the following solution:

require(RCurl)
newurl <- c("http://cran.r-project.org/web/packages/RCurl/RCurl.pdf",
            "http://cran.r-project.org/web/packages/RCurl/RCurl2.pdf")
for (n in seq_along(newurl)) {
   z <- raw(0)
   # failonerror makes getBinaryURL() signal an R error on HTTP errors
   # (e.g. 404); try() catches it so the loop carries on with the next file
   try(z <- getBinaryURL(newurl[n], failonerror = TRUE), silent = TRUE)
   if (length(z) > 0) {
      print(paste(newurl[n], " exists", sep = ""))
   } else {
      print(paste(newurl[n], " doesn't exist", sep = ""))
   }
}

This uses RCurl to download the file into the variable z; the check afterwards simply looks at whether z now holds the file's contents.

If the file doesn’t exist, getBinaryURL() throws an error, and an unprotected loop (if you are doing several files) would stop at that point. Wrapping getBinaryURL() in try() means the error won’t stop the loop from trying the next file (if you don’t trust me, run the above without the try() wrapper). You can see how a loop like this can quickly work through several files and download the ones that exist.

I’d really like to do this without actually downloading the whole file (e.g. fetch just the first 100 bytes) to see how many of the files of interest have been created, and only download them all once enough are available. I just can’t work out how yet – I tried the range option of getBinaryURL(), but that just crashed R. This would be useful if you are collecting data in real time and you know you need at least (say) 80% of the data to be available before you jump into a computationally expensive algorithm.
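For what it’s worth, byte-range requests are a standard libcurl feature (CURLOPT_RANGE), and RCurl passes curl options through to getBinaryURL(), so the documented usage looks like the sketch below. Treat it as exactly that – a sketch: I can’t vouch that it avoids the crash reported above, and whether you actually get only 100 bytes depends on the server honouring the Range header (a 206 Partial Content response) rather than ignoring it.

```r
require(RCurl)

url <- "http://cran.r-project.org/web/packages/RCurl/RCurl.pdf"

# Ask the server for only the first 100 bytes (libcurl's CURLOPT_RANGE).
# Servers that support ranges reply with 206 Partial Content; servers
# that don't will send the whole file anyway.
z <- raw(0)
try(z <- getBinaryURL(url, range = "0-99", failonerror = TRUE),
    silent = TRUE)
length(z)   # 100 if the server honoured the range; 0 if the file is missing
```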

So, there must be an easier way to do all this, but can I find it? …
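One candidate for the easier way, for the existence check at least: RCurl itself ships url.exists(), which issues a header-only (HEAD-style) request and returns TRUE or FALSE without transferring the file body, so no try() wrapper is needed. A minimal sketch, using the same URLs as above:

```r
require(RCurl)

newurl <- c("http://cran.r-project.org/web/packages/RCurl/RCurl.pdf",
            "http://cran.r-project.org/web/packages/RCurl/RCurl2.pdf")

# url.exists() asks only for the response headers, so nothing is
# downloaded; it returns TRUE for a 2xx response and FALSE otherwise
sapply(newurl, url.exists)
```

This also answers the “how many are there yet?” question cheaply: sum the logical vector and compare it against your 80% threshold before committing to the full downloads.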


Tagged: Curl, R, RCurl, statistics
