Volatility Risk Premium: Sharpe 2+, Return to Drawdown 3+


First, before starting this post, I’d like to give one last comment about my previous post:

I called Vanguard to inquire about the trading policies on VWEHX and VFISX, and there are two-month cooldown periods (aka frequent-trading policies) on those mutual funds. However, the HYG ETF does indeed pay dividends, so the adjusted ETF variant is most likely the closest performance an investor can expect. Still, a Sharpe ratio higher than 1.25 is nothing to scoff at. Of course, no transaction costs are assumed in any of my strategies, so make sure your broker isn't ripping you off if you actually intend to invest seriously in anything I publish on this blog (I hear Interactive Brokers charges $1 per transaction), and once again, remember that none of this constitutes official advice.

Now, onto this post:

Judging by the attention some of my previous volatility posts have garnered through my replication of SeekingAlpha strategies, today I am going to share a strategy whose statistics boggle my mind.

The strategy was presented by TradingTheOdds in this post. I had to replicate it for myself to be sure it worked as advertised, but unless I've done something horrendously incorrect, this strategy works…quite well. Here's the strategy:

Using the actual S&P 500 index, compute the two-day annualized historical volatility. Subtract that from the VXMT, which is the six-month expected volatility of the S&P 500 (prior to 2008, when VXMT data isn't available, use the actual VIX). Then, take the 5-day SMA of that difference. If this number is above 0, go long XIV; otherwise, go long VXX. In my replication, the strategy uses market-on-close orders (that is, observe "near" the close, buy at the close), so it should be taken with a small grain of salt. Here's the code:

require(downloader)            # for download()
require(quantmod)              # for getSymbols() and Cl(); also loads xts and TTR (runSD, SMA)
require(PerformanceAnalytics)  # for Return.calculate() and the performance statistics

download("https://dl.dropboxusercontent.com/s/jk6der1s5lxtcfy/XIVlong.TXT",
         destfile="longXIV.txt")

download("https://dl.dropboxusercontent.com/s/950x55x7jtm9x2q/VXXlong.TXT", 
         destfile="longVXX.txt")

xiv <- xts(read.zoo("longXIV.txt", format="%Y-%m-%d", sep=",", header=TRUE))
vxx <- xts(read.zoo("longVXX.txt", format="%Y-%m-%d", sep=",", header=TRUE))
vxmt <- xts(read.zoo("vxmtdailyprices.csv", format="%m/%d/%Y", sep=",", header=TRUE))

getSymbols("^VIX", from="2004-03-29")

vixvxmt <- merge(Cl(VIX), Cl(vxmt))
vixvxmt[is.na(vixvxmt[,2]),2] <- vixvxmt[is.na(vixvxmt[,2]),1]

getSymbols("^GSPC", from="1990-01-01")
spyRets <- diff(log(Cl(GSPC)))

spyVol <- runSD(spyRets, n=2)
annSpyVol <- spyVol*100*sqrt(252)

vols <- merge(vixvxmt[,2], annSpyVol, join='inner')
vols$smaDiff <- SMA(vols[,1] - vols[,2], n=5)
vols$signal <- vols$smaDiff > 0
vols$signal <- lag(vols$signal, k = 1)

xivRets <- Return.calculate(Cl(xiv))
vxxRets <- Return.calculate(Cl(vxx))
stratRets <- vols$signal*xivRets + (1-vols$signal)*vxxRets

The VXMT data is taken from the link I showed earlier. As I interpret it, this strategy seems to be stating the following:

Since we are subtracting the near-term historical volatility, which I suppose is meant to be a proxy for the "forecast of current volatility", from the long-term expected volatility (VXMT), the implied hypothesis seems to be that when that difference is positive, volatility is being overestimated by the VXMT, so we should go short volatility (for instance, with purely hypothetical numbers, a VXMT of 20 against a two-day historical volatility of 14 gives a spread of +6, a short-volatility signal). Conversely, if the near-term historical volatility is higher than the expected volatility, it means we should be long volatility instead. So, here's the punchline that is the equity curve:

charts.PerformanceSummary(stratRets)

Yes, you’re looking at that correctly–over approximately ten years (slightly longer), this strategy had a cumulative return of about $10,000 for every $1 invested. Just to put this into perspective, here’s a log-scale equity curve.
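The post doesn't show the plotting call behind that figure; a minimal sketch that produces an equivalent log-scale equity curve from stratRets (not necessarily the exact call used) would be:

logEquity <- log(cumprod(1 + na.omit(stratRets)))  # cumulative growth of $1, on a log scale
plot(logEquity, main = "Log-scale equity curve")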

There are a few noticeable dips which correspond to around 40% drawdowns on the regular scale. Now, let’s look at the usual statistics:

# Annualized return, maximum drawdown, annualized Sharpe, and MAR (return over max drawdown)
stats <- data.frame(cbind(Return.annualized(stratRets)*100, 
               maxDrawdown(stratRets)*100, 
               SharpeRatio.annualized(stratRets)))

colnames(stats) <- c("Annualized Return", "Max Drawdown", "Annualized Sharpe")
stats$MAR <- stats[,1]/stats[,2]

With the following result:

                  Annualized Return Max Drawdown Annualized Sharpe      MAR
Annualized Return          137.7875     45.64011          2.491509 3.019001

Risky, judging from maximum drawdown alone? Yes. But is there reward for the risk? Absolutely.
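As an aside, that single maximum-drawdown figure hides the individual episodes behind it; PerformanceAnalytics can list the worst drawdowns directly. A quick sketch (not part of the original analysis):

table.Drawdowns(na.omit(stratRets), top = 5)  # depth, start, trough, and recovery of the 5 worst drawdowns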

To put a lower bound on the performance of the strategy, here are the same diagrams and statistics with the signal lagged one more day. (The returns are computed close-to-close and work off of the closing prices, so the signal is already lagged by a day to avoid lookahead bias; lagging it by one more day means receiving the signal at the close of day t but not entering until the close of day t+1.)

In effect, this is the coding change:

vols$signal <- lag(vols$signal, k = 1)

Becomes

vols$signal <- lag(vols$signal, k = 2)
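The post only shows that one-line change; to reproduce the figures and statistics below, the downstream steps are simply re-run with the doubly lagged signal. A minimal sketch, assuming the script has been re-run from the signal construction onward (stratRetsLag2 and statsLag2 are just illustrative names, not from the original code):

stratRetsLag2 <- vols$signal * xivRets + (1 - vols$signal) * vxxRets
charts.PerformanceSummary(stratRetsLag2)

statsLag2 <- data.frame(cbind(Return.annualized(stratRetsLag2) * 100,
                              maxDrawdown(stratRetsLag2) * 100,
                              SharpeRatio.annualized(stratRetsLag2)))
colnames(statsLag2) <- c("Annualized Return", "Max Drawdown", "Annualized Sharpe")
statsLag2$MAR <- statsLag2[, 1] / statsLag2[, 2]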

Here are the results:

Equity curve/drawdowns:

So, the returns are still impressive, but now there are some very pronounced drawdowns: slightly larger and, more concerning, longer.

Log equity curve:

Again, slightly more pronounced dips.

Here are the statistics:

                  Annualized Return Max Drawdown Annualized Sharpe      MAR
Annualized Return          84.36546     56.77219          1.521165 1.486035

So still quite respectable for a strategy this understandable.

I'm fairly certain that there's still room for improvement on this strategy: anywhere there's a 5-day SMA, there's often room for better trend-following indicators, and there may be a better estimate of SPY's volatility than the 2-day rolling annualized metric (see the sketch below). But as it stands, I think that, based on its risk/reward characteristics, this strategy has a place as an aggressive, returns-generating component of a portfolio of strategies.
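For concreteness, here are two of the kinds of variations I have in mind; these are untested, illustrative sketches only (the EMA and the 5-day lookback are my own substitutions, not part of the original strategy):

# Variation 1: swap the 5-day SMA of the volatility spread for an exponential moving average
vols$emaDiff <- EMA(vols[,1] - vols[,2], n = 5)

# Variation 2: estimate S&P 500 volatility over a longer (5-day) window instead of 2 days
altSpyVol <- runSD(spyRets, n = 5) * 100 * sqrt(252)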

Thanks for reading.

Note: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.

