Advent of 2020, Day 11 – Using Azure Databricks Notebooks with R Language for data analytics

[This article was first published on R – TomazTsql, and kindly contributed to R-bloggers].

Series of Azure Databricks posts:

In the previous post we looked into the SQL language and how to get some basic data preparation done. Today we will look into R and how to get started with data analytics.

Creating a data.frame (or getting data from SQL Table)

Create a new notebook (Name: Day11_R_AnalyticsTasks, Language: R) and let’s go. Now we will get data from SQL tables and DBFS files.

We will be using the database from Day10 and the table called temperature. In an R notebook, a cell can run SQL directly when prefixed with the %sql magic command:

%sql
USE Day10;

SELECT * FROM temperature

To get the SQL query result into an R data.frame, we will use the SparkR package.


Getting Query results in R data frame (using SparkR R library)

library(SparkR)

temp_df <- sql("SELECT * FROM temperature")

With this temp_df SparkDataFrame we can start using R or SparkR functions, for example to view the contents of the data.frame.
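A minimal sketch of inspecting the result, assuming the temp_df from the cell above (head() and printSchema() are SparkR functions; display() is Databricks' built-in table renderer):

```r
# Show the first rows of the SparkDataFrame as a small R data.frame
head(temp_df)

# Print the column names and types Spark assigned
printSchema(temp_df)

# Render the result as an interactive table in the notebook
display(temp_df)
```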


This is a SparkR DataFrame. You can also create a standard R data.frame from it with the collect() function:

df <- collect(temp_df)

This creates a standard R data.frame that can be used with any other R package.
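Going the other way also works; a short sketch, assuming the local df created above, using SparkR's createDataFrame() to distribute it back to the cluster:

```r
# Convert a local R data.frame back into a distributed SparkDataFrame
sdf <- createDataFrame(df)

# Confirm the schema of the new SparkDataFrame
printSchema(sdf)
```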

Importing CSV file into R data.frame

Another way to get data into an R data.frame is to read it from a CSV file, and here the SparkR library again comes in handy. Once the data is in a data.frame, it can be used with other R libraries.

Day6 <- read.df("dbfs:/FileStore/Day6Data_dbfs.csv", source = "csv", header="true", inferSchema = "true")

Doing simple analysis and visualisations

Once the data is available in a data.frame, it can be used for analysis and visualisations. Let's load ggplot2 and plot the daily temperatures, faceted by city.

library(ggplot2)

p <- ggplot(df, aes(date, mean_daily_temp))
p <- p + geom_jitter() + facet_wrap(~city)

And make the graph smaller and give it a theme, for example theme_bw():

options(repr.plot.height = 500, repr.plot.res = 120)
p + geom_point(aes(color = city)) + geom_smooth() + theme_bw()

Once again, we can use other data wrangling packages; both dplyr and ggplot2 are preinstalled on a Databricks cluster.

When you load a library, nothing may be returned as a result; if there are warnings, Databricks will display them. The dplyr package can then be used like any other package, without limitations.

df %>%
  dplyr::group_by(city) %>%
  dplyr::summarise(
     n = dplyr::n()
    ,mean_pos = mean(as.integer(mean_daily_temp))
  )
# %>% dplyr::filter(as.integer(date) > "2020/12/01")

But note(!): dplyr functions might not work, due to a collision of function names with the SparkR library. SparkR exposes functions with the same names (arrange, between, coalesce, collect, contains, count, cume_dist, dense_rank, desc, distinct, explain, filter, first, group_by, intersect, lag, last, lead, mutate, n, n_distinct, ntile, percent_rank, rename, row_number, sample_frac, select, sql, summarize, union). To resolve this collision, either detach the dplyr package with detach("package:dplyr"), or qualify each call with the package name, e.g. dplyr::summarise instead of just summarise.
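A self-contained sketch of the qualified-call style, run on a small hypothetical stand-in for the temperature data rather than the real table:

```r
library(dplyr)

# Hypothetical stand-in for the temperature data.frame
df_local <- data.frame(
  city = c("Ljubljana", "Ljubljana", "Seattle", "Seattle"),
  mean_daily_temp = c(3, 5, 10, 12)
)

# Qualifying every call with dplyr:: sidesteps the SparkR name collision
res <- df_local %>%
  dplyr::group_by(city) %>%
  dplyr::summarise(
    n = dplyr::n(),
    mean_pos = mean(as.integer(mean_daily_temp))
  )

res
```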

Creating a simple linear regression

We can also use many other R packages for data analysis. In this case, I will run a simple regression, trying to predict the daily temperature with the lm() function.

model <- lm(mean_daily_temp ~ city + date, data = df)

And run the base R function summary() to get model insights.
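A minimal, self-contained sketch of the same call; the data.frame here is a hypothetical stand-in for the real temperature data (in the notebook, df comes from the temperature table):

```r
# Hypothetical stand-in for the temperature data.frame
sample_df <- data.frame(
  city = rep(c("Ljubljana", "Seattle"), each = 5),
  date = rep(1:5, 2),
  mean_daily_temp = c(3, 4, 6, 6, 8, 9, 10, 12, 12, 14)
)

# Same model formula as above: temperature explained by city and date
model <- lm(mean_daily_temp ~ city + date, data = sample_df)

# Coefficients, residuals, R-squared and p-values
summary(model)
```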


In addition, you can install any missing or needed package directly in the notebook (it must match the R engine and Databricks Runtime version). In this case, I am running the residualPlot() function from the additionally installed package car.
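A sketch of in-notebook installation, assuming the model object fitted above (residualPlot() comes from the car package):

```r
# Install and load the car package directly from the notebook cell
install.packages("car")
library(car)

# Plot residuals against fitted values for the fitted lm model
residualPlot(model)
```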


Azure Databricks will generate an RMarkdown notebook when R is the notebook's language. If you want an IPython notebook instead, set Python as the notebook language and use the %r magic command to switch cells to R. Both the RMarkdown notebook and an HTML file (with results included) are available on GitHub.

Tomorrow we will explore how to use Python for data engineering and, mostly, for data analysis tasks. So, stay tuned.

The complete set of code and notebooks will be available in the GitHub repository.

Happy Coding and Stay Healthy!
