Did an Excel error bring down the London Whale?


When JP Morgan Chase announced it had lost more than 2 billion dollars on the capital markets back in May 2012, many pointed to the actions of rogue trader Bruno Iksil as the cause. But was the “London Whale” — the nickname he was given by other traders for his outsized positions — the victim not of hubris, but of a simple spreadsheet error?

James Kwak, associate professor at the University of Connecticut School of Law and co-founder of the Baseline Scenario blog, noted some interesting facts in JP Morgan Chase's post-mortem investigation of the losses. Specifically, that the Value at Risk (VaR) model that underpinned the hedging strategy

“operated through a series of Excel spreadsheets, which had to be completed manually, by a process of copying and pasting data from one spreadsheet to another”, and “that it should be automated” but never was.

This is a surprisingly common practice: through accretion and incremental advancements, an important statistical calculation somehow ends up being implemented as a convoluted series of Excel worksheets, connected by hundreds (or even thousands) of cell-reference formulas, all driven by a series of input parameters that must be entered manually. Not only does this introduce the risk of errors when cutting and pasting the inputs, it also makes the workbook extremely fragile. As anyone who's built a budget in Excel knows, it's very easy when editing the spreadsheet to find that formulas no longer extend to their expected ranges (ever missed the bottom row of a formula when adding new data?), or point to the wrong data entirely. And then there's the possibility of errors in the formulas themselves, which seems to have been an issue here as well:

“After subtracting the old rate from the new rate, the spreadsheet divided by their sum instead of their average, as the modeler had intended. This error likely had the effect of muting volatility by a factor of two and of lowering the VaR . . .”
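To make the scale of that mistake concrete, here is a minimal sketch in R using made-up rate values (the actual spreadsheet inputs are not public): dividing the rate change by the sum of the two rates, rather than by their average, yields exactly half the intended figure, which is consistent with volatility being muted by a factor of two.

# Hypothetical old and new rates, for illustration only
old_rate <- 0.0150
new_rate <- 0.0165

# Intended calculation: the change divided by the average of the two rates
intended <- (new_rate - old_rate) / mean(c(old_rate, new_rate))

# The reported error: the change divided by the sum of the two rates
buggy <- (new_rate - old_rate) / (old_rate + new_rate)

intended   # ~0.095
buggy      # ~0.048, exactly half of the intended value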

Excel is an excellent tool for many applications, but the intertwined cross-references of formulas make errors like this hard to detect, and hard to correct even if discovered. That's why a programming language designed for data analysis such as the R language is a better platform for building the computational engines behind VaR models and other financial systems. Not only can it automate the process of importing data and inputs from other systems (and thus eliminate cut-and-paste errors), it also provides a structured, maintainable environment for the computational logic, within a framework that promotes code review and unit testing to detect errors. Excel may still be the preferred vehicle for delivering the results, but use R to generate the analytics computations (or embed real-time R computations in Excel) instead of risking an implementation in Excel formulas.
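As an illustration of what that looks like in practice, here is a minimal sketch of a one-day historical-simulation VaR in R. The P&L series below is simulated placeholder data; in a real system it would be imported programmatically (for example with read.csv or a database query) rather than pasted in. The point is that the logic lives in a plain, testable function rather than in a web of cell references.

# Simulated daily P&L in dollars, standing in for data that would be
# imported programmatically from upstream systems
set.seed(42)
daily_pnl <- rnorm(500, mean = 0, sd = 1e6)

# One-day VaR by historical simulation: the loss threshold that daily
# P&L falls below only (1 - level) of the time
historical_var <- function(pnl, level = 0.99) {
  -quantile(pnl, probs = 1 - level, names = FALSE)
}

historical_var(daily_pnl, level = 0.99)

Because the calculation is an ordinary R function, it can be placed under version control, reviewed, and covered by unit tests (for example with the testthat package) — exactly the kind of safeguard a chain of manually maintained spreadsheets cannot offer.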

Read also: How Validus Re uses Revolution R Enterprise for risk management

The Baseline Scenario: The Importance of Excel (via Ben Lorica)
