The top 8 portfolio optimization problems

Stumbling blocks on the trek from theory to practical optimization in fund management.

Problem 1: portfolio optimization is too hard

If you are using a spreadsheet, then this is indeed a problem. Spreadsheets are dangerous when given a complex task, and portfolio optimization qualifies as complex in this context, not least in its data requirements.

If you are using a more appropriate computing environment, then it isn’t really all that hard.  There are a few issues that need to be dealt with, but taking them one at a time keeps the task from being overwhelming.

Solution

If you are using spreadsheets, my prescription is to switch to R.  When there is real money on the line, using a spreadsheet for portfolio optimization seems to me to be penny wise and pound foolish.

If you have other problems with optimization, read the rest of this post.

Problem 2: portfolio optimizers suggest too much trading

A major frustration with optimizers is that the turnover can be excessive.

Solution

All reasonable portfolio optimizers allow:

  • turnover constraints
  • transaction costs

Use either of these to reduce the turnover to a suitable amount.

We don’t often let cars roll uncontrolled down a hill.  We shouldn’t let optimizers run uncontrolled either.
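Here, for instance, is a minimal sketch of the transaction-cost route using the quadprog package (an illustrative assumption, not Portfolio Probe’s interface).  A quadratic cost penalty pulls the answer toward the current weights; raise the penalty and the turnover falls.  All of the inputs are made up.

library(quadprog)

## Made-up inputs: expected returns, variance matrix, current weights.
set.seed(42)
n  <- 6
mu <- rnorm(n, 0.05, 0.02)
V  <- crossprod(matrix(rnorm(n * n), n)) / n
w0 <- rep(1 / n, n)
lambda <- 4   # risk aversion
tc <- 25      # transaction-cost penalty: bigger means less trading

## Maximize mu'w - lambda w'Vw - tc (w - w0)'(w - w0).
## solve.QP minimizes 0.5 w'Dw - d'w, so D = 2(lambda V + tc I)
## and d = mu + 2 tc w0.
Dmat <- 2 * (lambda * V + tc * diag(n))
dvec <- mu + 2 * tc * w0
Amat <- cbind(rep(1, n), diag(n))   # fully invested; long-only
bvec <- c(1, rep(0, n))
w <- solve.QP(Dmat, dvec, Amat, bvec, meq = 1)$solution

sum(abs(w - w0))   # turnover: watch it fall as tc grows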

Problem 3: expected returns are needed

First off, this isn’t strictly true.  You can find minimum variance portfolios which  need a variance matrix but not expected returns.  The success of low volatility investing is a reason to go down this route.

But assuming that you are an active investor, you need expectations in some sense.  There are a number of techniques that don’t require numerical expected returns.

Solution

target portfolio

Anyone should be able to provide an ideal target portfolio — the portfolio that you would like to hold when all constraints are ignored.  Once you have the target portfolio, then you can get a portfolio that is “close” to the target but does obey the constraints.  One of those constraints should almost surely be turnover.

In Portfolio Probe you can get close to your target portfolio without either expected returns or a variance matrix.

Probably a better solution would be to minimize the tracking error to the target portfolio.  This does require a variance matrix.
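For instance, here is a minimal sketch with quadprog (again an illustrative assumption, not Portfolio Probe’s interface) of minimizing tracking error to a target that a position cap makes infeasible to hold exactly.  The target weights and the cap are made up.

library(quadprog)

set.seed(1)
n  <- 6
V  <- crossprod(matrix(rnorm(n * n), n)) / n   # made-up variance matrix
wt <- c(0.40, rep(0.12, 5))    # target holds 40% in asset 1
cap <- 0.25                    # but positions are capped at 25%

## Minimize (w - wt)' V (w - wt): solve.QP with D = 2V, d = 2 V wt.
Dmat <- 2 * V
dvec <- 2 * drop(V %*% wt)
Amat <- cbind(rep(1, n), diag(n), -diag(n))   # fully invested; long-only; cap
bvec <- c(1, rep(0, n), rep(-cap, n))
w <- solve.QP(Dmat, dvec, Amat, bvec, meq = 1)$solution

sqrt(drop(t(w - wt) %*% V %*% (w - wt)))   # tracking error achieved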

reverse optimization

The technique of reverse optimization (also called implied alpha) can be used iteratively to try to find a portfolio that looks like what you want in terms of the expected returns that are implied.  This avoids actually doing optimization, but it is labor-intensive and it depends on the constraints not spoiling the implied alphas (which is perhaps doubtful).

asset ranks

If you can order the assets in your universe in terms of expected returns, then it is feasible to produce expected returns to give to an optimizer.  Ranking assets is much easier than giving numerical estimates of returns.

A paper by Almgren and Chriss explains how to turn ranks into numerical expected returns.  The simple case just requires the use of the qnorm function in R.  That gives you relative sizes, but you will still want to scale them to match the variance matrix.
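A minimal sketch of the simple case is below.  The ranks and the 2% scale are made-up numbers; in practice the scale is what you tune to match the variance matrix.

## Turn ranks into relative expected returns via normal scores.
ranks <- c(A = 5, B = 2, C = 4, D = 1, E = 3)   # 5 = highest expected return
n <- length(ranks)
raw <- qnorm(ranks / (n + 1))    # symmetric scores centered at zero
alpha <- 0.02 * raw / sd(raw)    # rescale; 0.02 is an arbitrary choice
round(alpha, 4)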

Problem 4: mean-variance optimization is restrictive

There is a myth that mean-variance optimization is only useful when returns are normally distributed.  That’s backwards.  If returns are normally distributed, then mean-variance optimization is all that can be done — all other utilities will be equivalent.  See more at “Ancient portfolio theory”.

If the return distributions of any assets in the universe are not reasonably close to symmetric, then, yes, mean-variance optimization is restrictive and should not be used.  Examples of disruptive assets are bonds and options.

However, if the universe is just stocks, then mean-variance is a pretty good approximation to the best we can do.  Skewness and kurtosis could be added to the utility to account for the non-normality of returns.  The blog post “Predictability of skewness and kurtosis in S&P constituents” indicates that skewness is probably close to impossible to predict and the predictability of kurtosis is limited.

In 1999 lower partial moments and semi-variance were popular with tech stocks because they supposedly weren’t really risky: they only went up.  It turned out that there was symmetry in the returns of tech stocks after all; the downside just came later.

Solution

If indeed you are in a situation — including fixed income or options — where mean-variance optimization is not appropriate, then you should probably do scenario optimization.
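A minimal sketch of scenario optimization is below: pick the weights that maximize the average of a concave utility over return scenarios.  The CRRA utility, the softmax trick for long-only weights and the fabricated scenarios are all illustrative assumptions.

## Scenario optimization: maximize mean utility across scenarios.
set.seed(7)
R <- matrix(rnorm(500 * 4, 0.001, 0.02), 500, 4)   # made-up scenario returns

crra <- function(x, gamma = 5) (1 + x)^(1 - gamma) / (1 - gamma)

objective <- function(theta) {
  w <- exp(theta) / sum(exp(theta))   # long-only, fully invested
  -mean(crra(R %*% w))                # optim minimizes, so negate
}

theta <- optim(rep(0, 4), objective)$par
round(exp(theta) / sum(exp(theta)), 3)   # scenario-optimal weights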

Problem 5: portfolio optimization inputs are noisy estimates

Portfolio optimizers are stupid enough to believe what we tell them.  The optimizer gives us a solution as if we really knew the expected returns and the variance matrix.  In fact:

  • estimates of expected returns are almost total noise
  • estimates of the variance matrix are quite noisy

“almost total noise” applies to the best fund managers — the “almost” needs to be dropped for below-average fund managers.

Factor models of variance are often input to optimizers.  These are much better than sample variance matrices for large universes.  However, using a shrinkage estimate is probably better than either.

nominative error

We have a Whorfian problem with “portfolio optimization”: the name shapes how we think about it.  People think that we are optimizing the portfolio when we say that.  In fact we are really optimizing the trade.  For some purposes it doesn’t matter, but it does matter when we are thinking about what to do about noisy inputs.

Solution

Black-Litterman type operations

Some people think that doing something like Black-Litterman is a solution to this problem.  It isn’t.  If done intelligently, then it reduces — but does not eliminate — the noise in the expected returns.

robust optimization

The real solution to this problem goes by the name of robust optimization.  I find this term unfortunate since there are several uses of the term “robust” which can easily be confused with the meaning of getting good solutions to a trade optimization from noisy inputs.

There is a rather large selection of proposals for implementing solutions.  Most of them are quite complicated.

shrinking

There is a simple and easily implemented solution (though the exact amount of shrinkage probably needs to be found via experimentation).

Here’s the story (assuming we have an existing portfolio):

If the inputs we give to the optimizer are exactly true, then we should accept what the optimizer says.  We should do the suggested trade — remember we are optimizing the trade.

If the inputs we give to the optimizer are complete garbage, we should do nothing.  Our trade should be zero.

The reality is that our inputs are somewhere between exactly true and complete garbage, so our trade should be somewhere between the suggested trade and no trade.  We want to shrink the trade.

It is easy to shrink the trade either by imposing a (stronger) turnover constraint or by increasing the transaction costs.  How much to do that is an issue, of course, but the principle is simple.  A guess is likely to be better than not doing it at all.
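The crudest version of the idea can even be applied after the optimizer has spoken: do only a fraction of the suggested trade.  A minimal sketch follows; the weights and the 30% fraction are made up, and inside the optimizer you would get the same effect more gracefully via the turnover constraint or cost scaling.

## Shrink the trade: go only part way toward the suggestion.
shrink_trade <- function(w_current, w_suggested, fraction = 0.3) {
  w_current + fraction * (w_suggested - w_current)
}

w0  <- c(0.25, 0.25, 0.25, 0.25)   # current portfolio
wop <- c(0.40, 0.30, 0.20, 0.10)   # what the optimizer suggests
shrink_trade(w0, wop)              # do 30% of the suggested trade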

Problem 6: transaction costs are tricky

This is true.  Some of the costs are straightforward, but market impact is hard to pin down.

But there’s an even trickier bit: either the transaction costs need to be scaled to match the expected returns and variance, or the expected returns and variance need to be scaled to match the transaction costs.

The three entities all appear in the utility function, and scaling is necessary for the utility to make sense.
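In symbols, the utility is something like expected return, minus risk aversion times variance, minus the cost of the trade.  A minimal R rendering is below; costscale is the knob doing the scaling the text warns about, and the linear cost function is a placeholder assumption.

## The utility that ties the three together; costscale does the scaling.
utility <- function(w, w0, mu, V, lambda = 4, costscale = 1,
                    cost = function(trade) 0.001 * sum(abs(trade))) {
  sum(mu * w) - lambda * drop(t(w) %*% V %*% w) - costscale * cost(w - w0)
}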

Solution

The coward’s way out is just to impose a turnover constraint.

The other way is to work and think hard about trading costs.  And probably to use an optimizer that allows flexible specification of costs.

Problem 7: risk and alpha factor alignment trouble

There has been talk among the portfolio optimization literati about alpha eating and factor alignment.  The whole thing sounds seriously geeky (even to a nerd like me).

The gist of it is that if there are factors used in the expected returns that are not factors in the risk model, then the optimizer will think those factors are essentially riskless and use them too much.

Solution

One of the main “solutions” to this is to add the missing factors to the risk model.  This of course assumes that there are factors in the expected returns model.

I suspect that the real problem is that factor models are the wrong technology to use as the variance matrix in optimizers.  The solution, then, is better technology.  My suggestion is to use Ledoit-Wolf estimates which shrink towards equal correlation.

Problem 8: constraints get in the way

This is the invisible problem.  It doesn’t concern people because they don’t know they have it.

Constraints are in place so that the portfolio doesn’t do anything too stupid.  But how many have checked to see that the constraints are doing as intended?

Solution

You can directly investigate the effect of your constraints.

There might be a way to look for constraints that actually help optimization.
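One simple way, sketched below with quadprog and made-up inputs, is to solve the same problem with and without a constraint and look at the utility given up.

library(quadprog)

set.seed(9)
n  <- 8
mu <- rnorm(n, 0.06, 0.02)                     # made-up expected returns
V  <- crossprod(matrix(rnorm(n * n), n)) / n   # made-up variance matrix
lambda <- 4
Dmat <- 2 * lambda * V

solve_mv <- function(Amat, bvec) {
  w <- solve.QP(Dmat, mu, Amat, bvec, meq = 1)$solution
  sum(mu * w) - lambda * drop(t(w) %*% V %*% w)   # achieved utility
}

free   <- solve_mv(cbind(rep(1, n), diag(n)), c(1, rep(0, n)))
capped <- solve_mv(cbind(rep(1, n), diag(n), -diag(n)),
                   c(1, rep(0, n), rep(-0.2, n)))
free - capped   # utility given up to a 20% position cap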

Questions

The “the” in the title is of course huckstering nonsense — I don’t really know which problems are on top.  What other problems are in the running?

Appendix R

portfolio optimization in R

Many of the commercial portfolio optimizers have an R interface.

There are a number of more or less naive portfolio optimization implementations in R that have been contributed.  See the Empirical Finance task view for more details.

Ledoit-Wolf shrinkage

You can get a function that does Ledoit-Wolf shrinkage towards equal correlation by doing (in R):

install.packages("BurStFin", repos="http://www.burns-stat.com/R")

require(BurStFin)

You only need the first command once (per version of R); the second is needed in each R session in which you wish to use the function, which is called var.shrink.eqcor.

By default this ensures that the minimum eigenvalue is at least 0.001 times the largest eigenvalue.  This is a way of avoiding the factor alignment problem.  There is no scientific reason for that particular value of the limit — feel free to experiment and report back.

The BurStFin package also has factor.model.stat which estimates a statistical factor model.
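A minimal usage sketch, assuming a matrix of returns with assets in the columns (the returns here are fabricated):

require(BurStFin)

set.seed(3)
returns <- matrix(rnorm(120 * 10, 0, 0.05), 120, 10)   # made-up returns

lw  <- var.shrink.eqcor(returns)    # Ledoit-Wolf shrinkage estimate
fms <- factor.model.stat(returns)   # statistical factor model
range(eigen(lw)$values)             # smallest eigenvalue bounded away from 0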
