Leverage Up When You’re Down?


This post will investigate the idea of reducing leverage when drawdowns are small, and increasing leverage as losses accumulate. It’s based on the idea that whatever goes up must come down, and whatever comes down generally goes back up.

I originally came across this idea from this blog post.

So, first off, let's write a simple function that allows replication of this idea. Essentially, it takes several arguments:

First: the default leverage (that is, when your drawdown is zero, what's your exposure?). For reference, in the original post, it's 10%.

Next: the various leverage levels. In the original post, the leverage levels are 25%, 50%, and 100%.

And lastly, we need the corresponding drawdown thresholds at which to apply those leverage levels. In the original post, those thresholds are drawdowns of 20%, 40%, and 55%.

So, now we can create a function to implement that in R. The idea is to have R compute the drawdown series, and then use that information to set the leverage level day by day, as the drawdown crosses each threshold.

Here’s a quick piece of code to do so:

require(xts)
require(PerformanceAnalytics)

drawdownLev <- function(rets, defaultLev = .1, levs = c(.25, .5, 1), ddthresh = c(-.2, -.4, -.55)) {
  # compute the drawdown series (Drawdowns is not exported in some
  # versions of PerformanceAnalytics, hence the triple colon)
  dds <- PerformanceAnalytics:::Drawdowns(rets)
  
  # initialize leverage to the default level
  dds$lev <- defaultLev
  
  # change the leverage for every threshold
  for(i in 1:length(ddthresh)) {
    
    # as drawdowns fall through each successive threshold, step up leverage
    # (reference the drawdown column by position rather than a hardcoded name)
    dds$lev[dds[, 1] < ddthresh[i]] <- levs[i]
  }
  
  # compute the new strategy returns -- lag the leverage signal by two days,
  # so a signal observed at one close is traded at the next close and first
  # affects the following day's return
  out <- rets * lag(dds$lev, 2)
  
  # return the leverage and the new returns
  leverage <- dds$lev
  colnames(leverage) <- c("DDLev_leverage")
  return(list(leverage, out))
}

So, let’s replicate some results.

require(downloader)
require(xts)
require(PerformanceAnalytics)


download("https://dl.dropboxusercontent.com/s/jk6der1s5lxtcfy/XIVlong.TXT",
         destfile="longXIV.txt")


xiv <- xts(read.zoo("longXIV.txt", format="%Y-%m-%d", sep=",", header=TRUE))
xivRets <- Return.calculate(Cl(xiv))

xivDDlev <- drawdownLev(xivRets, defaultLev = .1, levs = c(.25, .5, 1), ddthresh = c(-.2, -.4, -.55))
compare <- na.omit(cbind(xivDDlev[[2]], xivRets))
colnames(compare) <- c("XIV_DD_leverage", "XIV")

charts.PerformanceSummary(compare['2011::2016'])

And our results look something like this:

[Figure: charts.PerformanceSummary of XIV_DD_leverage vs. XIV, 2011 through 2016]
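
For reference, the statistics below can presumably be reproduced with the same summary call used later in the post, restricted to the same window:

rbind(table.AnnualizedReturns(compare['2011::2016']),
      maxDrawdown(compare['2011::2016']),
      CalmarRatio(compare['2011::2016']))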

                          XIV_DD_leverage       XIV
Annualized Return               0.2828000 0.2556000
Annualized Std Dev              0.3191000 0.6498000
Annualized Sharpe (Rf=0%)       0.8862000 0.3934000
Worst Drawdown                  0.4870604 0.7438706
Calmar Ratio                    0.5805443 0.3436668

That said, what would happen if one were to extend the test to all available XIV data?
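
Presumably this just means rerunning the chart without the date subset (the statistics call below already operates on the full compare object):

charts.PerformanceSummary(compare)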

[Figure: charts.PerformanceSummary of XIV_DD_leverage vs. XIV, full sample]

> rbind(table.AnnualizedReturns(compare), maxDrawdown(compare), CalmarRatio(compare))
                          XIV_DD_leverage       XIV
Annualized Return               0.1615000 0.3319000
Annualized Std Dev              0.3691000 0.5796000
Annualized Sharpe (Rf=0%)       0.4375000 0.5727000
Worst Drawdown                  0.8293650 0.9215784
Calmar Ratio                    0.1947428 0.3601385

A different story.

In this case, I think the takeaway is that such a mechanism does well when the benchmark's drawdowns occur sharply, so that the lower exposure protects against them, and when the benchmark then spends much of its time in recovery mode, so that the increased exposure has time to earn outsized returns before the next drawdown. When the benchmark continues to fall after maximum leverage is reached, or continues to perform well when not in drawdown, such a mechanism falls behind quickly.
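
One quick way to check this intuition (my own addition, not from the original post) is to tabulate how many days the strategy spends at each leverage level:

# count the days spent at each leverage level over the full sample
table(as.numeric(xivDDlev[[1]]))

If most days sit at the default 10% exposure, the strategy forfeits most of the benchmark's upside in good times, which foreshadows the no-free-lunch point below.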

As always, there is no free lunch when it comes to drawdowns: lowering exposure in anticipation of a correction necessarily means forfeiting a painful amount of upside in the good times, at least as presented in the original post.

Thanks for reading.

NOTE: I am currently looking for my next full-time opportunity, preferably in New York City or Philadelphia, relating to the skills I have demonstrated on this blog. My LinkedIn profile can be found here. If you know of such opportunities, do not hesitate to reach out to me.

