How to bulk upload your data from R into Redshift


Amazon’s columnar database, Redshift, is a great companion for many data science tasks: it allows fast processing of very large datasets with a familiar query language (SQL).

There are two ways to load data into Redshift. The classic one, using the INSERT statement, works, but it is highly inefficient when loading big datasets. The other one, recommended in Redshift’s docs, consists of using the COPY statement.
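As a rough illustration, a COPY from S3 issued through DBI looks something like this (the bucket path, credential placeholders, and delimiter here are illustrative, not values from this post; `con` is assumed to be an open Redshift connection):

    library(DBI)
    dbExecute(con, "
      COPY mytable
      FROM 's3://mybucket/mydata/part'
      CREDENTIALS 'aws_access_key_id=<key>;aws_secret_access_key=<secret>'
      DELIMITER '|' GZIP
    ")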

One of the easiest ways to accomplish this, since we are already using Amazon’s infrastructure, is to load from S3. S3 loading requires that you upload your data to S3 and then run a COPY statement specifying where your data is.

Also, because Redshift is a distributed database, Amazon recommends splitting your data into a number of files that is a multiple of the number of slices in your cluster, so they can be loaded in parallel; you can also gzip your files for a faster upload.
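To pick that number of files, you can look up the slice count first; a minimal sketch, assuming an open connection `con` and that your user can read the stv_slices system view:

    n_slices <- dbGetQuery(con, "SELECT COUNT(*) AS n FROM stv_slices")$n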

Wait a second: to upload a big dataset quickly, we now have to:

  • Create a table in Redshift with the same structure as the data frame
  • Split the data into N parts
  • Convert the parts into a format readable by Redshift
  • Upload all the parts to Amazon S3
  • Run the COPY statement on Redshift
  • Delete the temporary files on Amazon S3
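Done by hand, the split-and-compress steps above might look roughly like this (a sketch with illustrative names, not the package’s actual internals):

    # Write a data frame as n_parts gzipped, pipe-delimited files
    write_parts <- function(df, n_parts, prefix) {
      # Assign rows round-robin to n_parts groups, then write each group
      groups <- split(df, rep_len(seq_len(n_parts), nrow(df)))
      for (i in seq_along(groups)) {
        gz <- gzfile(sprintf("%s_part%02d.gz", prefix, i), open = "w")
        write.table(groups[[i]], gz, sep = "|",
                    row.names = FALSE, col.names = FALSE)
        close(gz)
      }
    }

    write_parts(my_data, n_parts = 4, prefix = "mydata")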

That does seem like a lot of work, but don’t worry, I’ve got your back! I’ve created an R package which does exactly this: it’s redshiftTools! The code is available on GitHub.

To install the package, you’ll need to do:

    install.packages("aws.s3", repos = c(getOption("repos"), ""))

Afterwards, you’ll be able to use it like this:


    library(DBI)
    library(redshiftTools)

    con <- dbConnect(RPostgres::Postgres(), dbname='dbname',
                     host='', port='5439',
                     user='myuser', password='mypassword',
                     sslmode='require')

    rs_replace_table(my_data, dbcon=con, tableName='mytable', bucket="mybucket")
    rs_upsert_table(my_other_data, dbcon=con, tableName='mytable', bucket="mybucket", keys=c('id', 'date'))

rs_replace_table truncates the target table and then loads it entirely from the data frame; only do this if you don't care about the data it currently holds. rs_upsert_table, on the other hand, replaces rows whose keys coincide and inserts those that do not yet exist in the table.

Please open an issue on GitHub if you find any problems. Have fun!

