Azure Data Lake Store is an Apache Hadoop file system compatible with HDFS, hosted and managed in the Azure cloud. You can store and access data within it directly through the API, by connecting the file system to Azure HDInsight services, or via HDFS-compatible open-source applications. And for data science applications, you can also access the data directly from R, as this tutorial explains.
To interface with Azure Data Lake Analytics (ADLA), you'll use U-SQL, a SQL-like language extensible with C#. The R Extensions for U-SQL allow you to reference an R script from a U-SQL statement and pass data from Data Lake into that script. There's a 500 MB limit on the data passed to R, so the basic idea is to perform the main data-munging tasks in U-SQL and then pass the prepared data to R for analysis. Within R you can use any function from base R or from any R package. (Several common R packages are provided in the environment; you can also upload and install other packages directly, or use the checkpoint package to install everything you need.) The R engine used is R 3.2.2.
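To give a flavor of the pattern, here is a minimal sketch of inline R embedded in U-SQL. It assumes the R extension assembly is registered in your ADLA account as `ExtR`, and the input/output paths and column names are hypothetical; `inputFromUSQL` and `outputToUSQL` are the data frames the extension uses to exchange data with your R code.

```
REFERENCE ASSEMBLY [ExtR];  // U-SQL R extension assembly (assumed registered)

// Inline R: each partition arrives as the data frame inputFromUSQL;
// whatever is assigned to outputToUSQL is returned to U-SQL.
DECLARE @myRScript string = @"
outputToUSQL <- data.frame(
  Species = inputFromUSQL$Species[1],
  MeanSepalLength = mean(inputFromUSQL$SepalLength))
";

@iris =
    EXTRACT SepalLength double, SepalWidth double,
            PetalLength double, PetalWidth double, Species string
    FROM @"/my/inputs/iris.csv"       // hypothetical path
    USING Extractors.Csv();

@result =
    REDUCE @iris ON Species           // one R invocation per partition
    PRODUCE Species string, MeanSepalLength double
    USING new Extension.R.Reducer(command : @myRScript, rReturnType : "dataframe");

OUTPUT @result TO @"/my/outputs/means.csv" USING Outputters.Csv();
```

The `REDUCE ... ON` clause is what drives the partitioning: each distinct value of the reduce column yields a separate data frame passed to the R script, which is why the data-size limit applies per partition rather than to the whole input.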
The tutorial walks you through the following stages:
- Exercise 1 – Run inline R code.
- Exercise 2 – Deploy an R script as a resource.
- Exercise 3 – Check the R environment in ADLA; install the magrittr package.
- Exercise 4 – Use checkpoint to create a zip file containing the required packages and all their dependencies; deploy and use the dplyr package.
- Exercise 5 – Understand how partitioning works in ADLA, specifically the REDUCE operation.
- Exercise 6 – Use rReturnType of "charactermatrix".
- Exercise 7 – Save a trained model and deploy it for scoring.
- Exercise 8 – Use R scripts as combiners (using COMBINE and Extension.R.Combiner).
- Exercise 9 – Use rxDTree() from the RevoScaleR package.
- Advanced – More complex examples.
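As a taste of Exercises 2 and 4, the sketch below shows the deploy-as-resource pattern: the R script and a checkpoint-built package zip are deployed alongside the job, and the reducer references the script by file name instead of embedding it inline. All paths, file names, and columns here are hypothetical placeholders.

```
REFERENCE ASSEMBLY [ExtR];

// Deploy the R script and a zip of packages (built with checkpoint) as job resources
DEPLOY RESOURCE @"/my/scripts/myScript.R";       // hypothetical path
DEPLOY RESOURCE @"/my/scripts/my_packages.zip";  // hypothetical path

@input =
    EXTRACT UserId string, Duration double
    FROM @"/my/inputs/logs.csv"
    USING Extractors.Csv();

@result =
    REDUCE @input ON UserId
    PRODUCE UserId string, TotalDuration double
    USING new Extension.R.Reducer(scriptFile : "myScript.R", rReturnType : "dataframe");

OUTPUT @result TO @"/my/outputs/totals.csv" USING Outputters.Csv();
```

Inside myScript.R, the deployed zip can be unpacked into a local library directory and added to `.libPaths()` before calling `library()`, which is how packages not preinstalled in the ADLA R environment (such as dplyr in Exercise 4) become available to the script.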
Cortana Intelligence Gallery: Getting Started with using Azure Data Lake Analytics with R – A Tutorial