Are new SEC rules enough to prevent another Flash Crash?


At 2:42PM on May 6, 2010, without warning, the Dow Jones Industrial Average plunged more than 1,000 points in just five minutes, the largest intra-day point decline in the index's history. The drop was short-lived, though: by the end of the day, the market had regained 600 points of the loss.


At the time, the cause of the 2010 Flash Crash (as it came to be known) was a mystery, but its effects were immediately apparent: besides spooking an already unstable market, millions of erroneous trades had to be unwound (at one point, shares in Accenture were selling for $0.01), and many investors lost millions on trades that, despite the crazy market activity, were nonetheless deemed legitimate.

Today, the cause is still not entirely clear. A 2010 SEC report suggested that automated high-frequency trading systems (also known as program trading or algorithmic trading) may have been a contributing factor: today, more than 50% of trades are generated not by humans but by computer algorithms that follow microscopic movements in the market and respond at sub-millisecond speeds. (Kevin Slavin notes in this must-see TED talk that some program-trading firms have set up shop as close to the Internet backbone as possible to further reduce their algorithms' response times.) Others have suggested that an unusually large trade from a single firm may have set off a chain reaction among these trading algorithms, but later academic studies have offered contrary opinions. Either way, such a spontaneous market disruption is something the SEC would prefer to avoid.

In response to the crash, the SEC instituted new “circuit breaker” rules: trading is now paused in any individual stock whose price suddenly moves 10 percent or more within a five-minute period. But it appears that the SEC never backtested the change: no one applied the new rules to historical trades (say, around the time of the Flash Crash) to see whether these circuit breakers would, in fact, have prevented a crash as intended. So three researchers (Casey King, Michael Kane and Richard Holowczak) used the R language to apply the new circuit-breaker rules to more than 15 trillion trades (though that is still only two years of intra-day data) to see what would have happened. Their conclusion, presented in a paper at the 2011 R/Finance conference and reported in Barron's, was that “circuit breakers would not have addressed significant sectors of the market and would have been insufficient in stemming broad and sudden loss”.
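
To make the new rule concrete, here is a minimal sketch (not the researchers' code) of how the 10-percent-in-five-minutes check might be written in R. It assumes a hypothetical data frame "quotes" of regular one-second price bars with columns "time" and "price"; real consolidated tick data is, of course, far messier.

    # Sketch only: flag times at which a stock's price has moved 10% or more
    # relative to its price five minutes (300 one-second bars) earlier.
    # The data frame "quotes" and its columns are illustrative assumptions.
    flag_circuit_breaker <- function(quotes, window = 300, threshold = 0.10) {
      n   <- nrow(quotes)
      ref <- quotes$price[1:(n - window + 1)]   # price ~5 minutes earlier
      cur <- quotes$price[window:n]             # price now
      hit <- abs(cur - ref) / ref >= threshold  # 10% move within the window
      quotes$time[window:n][hit]                # times trading would be paused
    }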

Doing this kind of backtesting analysis isn't easy: intra-day trade data is huge. Kane et al. relied on Revolution Analytics' open-source foreach package for R to divide and conquer the problem and distribute the computations across a grid, yet as reported in Barron's:

Analyzing three years of trading entails an enormous amount of processing. From 2008 to 2010, U.S.-listed stocks recorded over 24 billion separate trades. Applying the limit rules to every second of that trading record required 8,035 hours of computer processing across 60 processors in parallel.
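
That description suggests a classic divide-and-conquer setup: split the universe of stocks (or trading days) into independent chunks and run the rule check for each chunk on a separate worker. The snippet below only illustrates that foreach pattern under assumed helpers ("symbols" and load_trades() are placeholders); it is not the code used in the study.

    # Illustration of the foreach divide-and-conquer pattern, not the study's code.
    # "symbols" and load_trades() are placeholder names for the list of tickers
    # and a function returning one symbol's intraday quotes.
    library(foreach)
    library(doParallel)

    cl <- makeCluster(detectCores())   # the study ran across 60 processors on a grid
    registerDoParallel(cl)

    results <- foreach(sym = symbols, .combine = rbind) %dopar% {
      quotes <- load_trades(sym)                # one symbol's intraday data
      hits   <- flag_circuit_breaker(quotes)    # rule check sketched above
      data.frame(symbol = sym, triggers = length(hits))
    }

    stopCluster(cl)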

When you're dealing with Big Data like this, high-performance computing allows for a more in-depth analysis. Kane and King have returned to the problem, this time with the processing power of an IBM Netezza iClass appliance (with about 200 processors) integrated with Revolution R Enterprise at their disposal. In a webinar on Wednesday, September 28, presented in partnership with Revolution Analytics and IBM Netezza, they'll report on a new analysis of the effectiveness of the latest SEC circuit-breaker rules and address the question: will they be enough to prevent a recurrence of the 2010 Flash Crash? To learn more, register for the free webinar at the link below.

Revolution Analytics Webinars: Comparing Performance of Distributed Computing Platforms Using Applications in Backtesting FINRA's Limit Up/Down Rules
