Introduction to Rolling Volatility
This is the second post in our series on portfolio volatility, variance and standard deviation. If you missed the first post and want to start at the beginning with calculating portfolio volatility, have a look here – Introduction to Volatility. We will use three objects created in that previous post, so a quick peek is recommended.
Today we focus on two tasks:
1. Calculate the rolling standard deviation of SPY monthly returns.
2. Calculate the rolling standard deviation of monthly returns of a 5-asset portfolio consisting of the following:
- AGG (a bond fund) weighted 10%
- DBC (a commodities fund) weighted 10%
- EFA (a non-US equities fund) weighted 20%
- SPY (S&P500 fund) weighted 40%
- VGT (a technology fund) weighted 20%
First though, why do we care about rolling standard deviations when in our previous Notebook we calculated ‘the’ standard deviation of monthly returns for both SPY and the portfolio? In that Notebook, what we calculated was the standard deviation of monthly returns for our entire sample, which covered the four-year period 2013-2017. What we might miss, for example, is a 3-month or 6-month period where the volatility spiked or plummeted or did both. And the longer our sample period, the more likely we are to miss something important. If we had 10 or 20 years of data and calculated the standard deviation for the entire sample, we could fail to notice an entire year in which volatility was very high, and hence fail to ponder the probability that it could occur again.
Imagine a portfolio whose standard deviation of returns was 3% in every 6-month period and never changed. Now imagine a portfolio whose volatility fluctuated, every few 6-month periods, between 0% and 6%. We might find a 3% standard deviation of monthly returns over a 10-year sample for both of these, but those two portfolios are not exhibiting the same volatility. The rolling volatility of each would show us the differences, and then we could hypothesize about the past causes and future probabilities of those differences. We might also want to think about dynamically rebalancing our portfolio to better manage volatility if we are seeing large spikes in the rolling windows. We’ll look more into rebalancing as this series progresses.
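To make that concrete, here is a minimal simulation sketch. It is not drawn from our actual data: the object names, seed, and volatility levels are invented for illustration. Two simulated return streams end up with similar full-sample standard deviations, but one has constant volatility while the other alternates between calm and turbulent 6-month stretches, and only the rolling calculation reveals the difference.

```r
library(zoo)

set.seed(1)

# hypothetical portfolio 1: constant monthly volatility of about 3%
calm  <- rnorm(120, mean = 0, sd = 0.03)

# hypothetical portfolio 2: volatility alternating between ~2% and ~4% every 6 months
spiky <- rnorm(120, mean = 0, sd = rep(c(0.02, 0.04), each = 6, length.out = 120))

# the full-sample standard deviations look similar...
sd(calm)
sd(spiky)

# ...but the 6-month rolling standard deviations tell very different stories
range(rollapply(calm, 6, sd))
range(rollapply(spiky, 6, sd))
```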
For now, let’s load the .RDat file saved from our previous Notebook.

```r
load('portfolio-returns.RDat')
```
We now have 3 objects in our Global Environment:
- `spy_returns` - an `xts` object of SPY monthly returns
- `portfolio_component_monthly_returns_xts` - an `xts` object of returns of the 5 funds in our portfolio
- `weights` - a vector of portfolio weights
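A quick peek at those objects never hurts (this check is my addition, not part of the original post):

```r
# a quick look at the three loaded objects
head(spy_returns, 3)
head(portfolio_component_monthly_returns_xts, 3)
weights
```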
Our least difficult task is calculating the rolling standard deviation of SPY returns. We use `zoo::rollapply` for this and just need to choose a number of months for the rolling window.
```r
window <- 6

spy_rolling_sd <- na.omit(rollapply(spy_returns$SPY, window, 
                                    function(x) StdDev(x)))
```
We now have an `xts` object called `spy_rolling_sd` that contains the 6-month rolling standard deviation of returns of SPY. Keep in mind that the chosen window is important and can affect the results quite a bit. Soon we’ll wrap this work into a Shiny app where changing the window and visualizing the results will be easier.
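As a quick sanity check (my addition, not part of the original post), the first value in `spy_rolling_sd` should line up with the standard deviation computed directly over the first six monthly returns; the `align` argument of `rollapply` only affects which date that value gets stamped with.

```r
# standard deviation of the first six monthly SPY returns, computed directly
StdDev(spy_returns$SPY[1:6])

# should match the first value produced by rollapply
head(spy_rolling_sd, 1)
```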
Next, we calculate the rolling volatility of our weighted portfolio. The `rollapply` function doesn’t play nicely with the `weights` argument that we need to supply to `StdDev()`. We will craft our own version of `rollapply` to make this portfolio calculation, which we will use in conjunction with the `map_df()` function from `purrr`.
Before we do that, a slight detour from our substance. Below are two piped workflows to quickly convert from `xts` to data frame and back to `xts`. These rely heavily on the `as_tibble()` and `as_xts()` functions from the tidyquant package.
```r
# toggle from an xts object to a tibble
portfolio_component_monthly_returns_df <- portfolio_component_monthly_returns_xts %>% 
  as_tibble(preserve_row_names = TRUE) %>% 
  mutate(date = ymd(row.names)) %>% 
  select(-row.names) %>% 
  select(date, everything())

# toggle from a tibble back to xts
returns_xts <- portfolio_component_monthly_returns_df %>% 
  as_xts(date_col = date)
```
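As a quick check on that round trip (again, my addition), the underlying return data in `returns_xts` should be identical to what we started with; only the container class has changed.

```r
# the round trip xts -> tibble -> xts should preserve the return data
all.equal(coredata(portfolio_component_monthly_returns_xts),
          coredata(returns_xts),
          check.attributes = FALSE)
```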
Why did we take that detour? Because we will use `map_df()`, `mutate()` and `select()` when we apply our custom function with the `%>%` operator, and that will require a `tibble`/`data.frame`.
Before we step through the code of the custom function, let’s write out the goal and logic.
Our goal is to create a function that takes a `data.frame` of asset returns and calculates the rolling standard deviation based on a starting date index and a window, for a portfolio with specified weights for each asset. We will need to supply four arguments to the function, accordingly.
Here’s the logic I used to construct that function (feel free to eviscerate this logic and replace it with something better).
- Assign a start date and end date based on the `window` argument. If we set `window = 6`, we’ll be calculating 6-month rolling standard deviations.
- Use `filter()` to subset the original `data.frame` down to one window. I label the subsetted data frame as `interval_to_use`. In our example, that interval is a 6-month window of our original data frame.
- Now we want to pass that `interval_to_use` object to `StdDev()`, but it’s not an `xts` object. We need to convert it and label it `returns_xts`.
- Before we call `StdDev()`, we need weights. Create a weights object called `w` and give it the value from the argument we supplied to the function.
- Pass the `returns_xts` and `w` to `StdDev()`.
- We now have an object called `results_as_xts`. What is this? It’s the standard deviation of returns of the first 6-month window of our weighted portfolio.
- Convert it back to a `tibble` and return.
- We now have the standard deviation of returns for the 6-month period that started on the first date, because we default to `start = 1`. If we wanted the standard deviation for a 6-month period that started on the second date, we could set `start = 2`, etc.
```r
rolling_portfolio_sd <- function(returns_df, start = 1, window = 6, weights) {
  
  start_date <- returns_df$date[start]
  end_date   <- returns_df$date[c(start + window)]
  
  interval_to_use <- returns_df %>% filter(date >= start_date & date < end_date)
  
  returns_xts <- interval_to_use %>% as_xts(date_col = date)
  
  w <- weights
  
  results_as_xts <- StdDev(returns_xts, weights = w, portfolio_method = "single")
  
  results_to_tibble <- as_tibble(t(results_as_xts[, 1])) %>% 
    mutate(date = ymd(end_date)) %>% 
    select(date, everything())
}
```
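Before mapping over every start date, it can be worth testing the function on a single window; this test call is mine, not part of the original post. With `start = 1` we should get back a one-row tibble holding the standard deviation of the first 6-month window, stamped with the `end_date` used to close the window.

```r
# sanity check: the standard deviation of the first 6-month window
first_window_sd <- rolling_portfolio_sd(portfolio_component_monthly_returns_df,
                                        start = 1, window = 6, weights = weights)
first_window_sd
```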
We’re halfway there. We need to apply that function starting at the first date in our `portfolio_component_monthly_returns_df` object, and keep applying it to successive date indexes until the date that is 6 months before the final date. Why end there? Because there is no rolling 6-month standard deviation that starts only 1, 2, 3, 4 or 5 months ago!
We will invoke `map_df()` to apply our function to date 1, then save the result to a `data.frame`, then apply our function to date 2 and save to that same `data.frame`, and so on until we tell it to stop at the index that is 6 before the last date index.
```r
window <- 6

roll_portfolio_result <- map_df(1:(nrow(portfolio_component_monthly_returns_df) - window), 
                                rolling_portfolio_sd, 
                                returns_df = portfolio_component_monthly_returns_df, 
                                window = window, 
                                weights = weights) %>% 
  mutate(date = ymd(date)) %>% 
  select(date, everything()) %>% 
  as_xts(date_col = date) %>% 
  `colnames<-`("Rolling Port SD")

head(roll_portfolio_result)

##            Rolling Port SD
## 2013-08-01      0.02411152
## 2013-09-01      0.02604424
## 2013-10-01      0.02726744
## 2013-11-01      0.02887195
## 2013-12-01      0.02888576
## 2014-01-01      0.02051912
```
Have a look at the rolling standard deviations. Why is the first date August of 2013? Do any of the results stand out as unusual when compared to the SPY results? It’s hard to make the comparison until we chart them, and that’s what we do in the next post. See you then!
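If you would like a quick preview before then, here is a bare-bones sketch using base graphics; the polished charts come in the next post, and the titles and labels here are just placeholders.

```r
# a quick-and-dirty look at each rolling volatility series
plot.zoo(spy_rolling_sd, main = "Rolling 6-month SD of SPY",
         xlab = "Date", ylab = "Standard deviation")

plot.zoo(roll_portfolio_result, main = "Rolling 6-month SD of the portfolio",
         xlab = "Date", ylab = "Standard deviation")
```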