Dear R Programmers,

There is a new package, "colbycol", on CRAN, which makes our jobs easier when we have large files (i.e. more than a GB) to read into R, especially when we don't need all of the columns/variables for our analysis. Kudos to the author, Carlos J. Gil Bellosta.

I tried it on 1.72 GB of data with more than 300 columns and 500,000 rows, where my main interest was only a few of those columns. Since it is easy to determine how many columns exist by reading a few lines of the file (also refer to my earlier post http://costaleconomist.blogspot.in/2010/02/easy-way-of-determining-number-of.html and ?readLines), the R job of getting what I want was completed in a few lines as below (and in quicker time):
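As a minimal sketch of the column-counting step mentioned above (base R only, no colbycol needed): read just the first line of the CSV and count the separator-delimited fields. The file here is a small stand-in created for illustration, since the original 1.72 GB file is not available.

```r
# Create a tiny stand-in CSV for illustration; in practice you would
# point readLines() at the large file on disk.
tmp <- tempfile(fileext = ".csv")
writeLines(c("a,b,c,d", "1,2,3,4"), tmp)

# Read only the header line -- cheap even for multi-GB files --
# and count the comma-separated fields to get the column count.
first.line <- readLines(tmp, n = 1)
n.cols <- length(strsplit(first.line, split = ",")[[1]])
n.cols  # 4
```

This avoids loading the whole file just to learn its shape, which is exactly why it pairs well with selective readers like cbc.read.table.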

library(colbycol)

cbc.data.7.cols <- cbc.read.table("D:/XYZ/filename.csv", just.read = c(1, 3, 21, 34, 108, 205, 227), sep = ",")

nrow(cbc.data.7.cols)

colnames(cbc.data.7.cols)

# then one can simply convert to a data.frame as follows

train.data <- as.data.frame(cbc.data.7.cols, columns = 1:7, rows = 1:50000)

Also, refer to http://colbycol.r-forge.r-project.org/ for a quick intro by the author.

Happy programming with R.

To **leave a comment** for the author, please follow the link and comment on his blog: **Econometrics_Help**.
