# That’s Not How the “Law of Large Numbers” Works

March 12, 2012

(This article was first published on Confounded by Confounding » R, and kindly contributed to R-bloggers)

Breaking my dissertation and administrata induced silence for a small rant combining two of my favorite things – Apple Computer Inc., and simulation. Recently, the New York Times featured the article 'Apple Confronts the Law of Large Numbers'. The fundamental assertion? That Apple's earnings growth and stock price cannot continue their rapid rise. The justification? The Law of Large Numbers, and the idea that as Apple grows larger, each additional % increase in earnings, profit, etc. represents a bigger and bigger step in terms of the absolute dollar amount.

One problem: That’s not how the Law of Large Numbers works. More after the jump.

First, a definition of the Law of Large Numbers. From the article itself:

> the law states that a variable will revert to a mean over a large sample of results.

From Wolfram Alpha:

> A "law of large numbers" is one of several theorems expressing the idea that as the number of trials of a random process increases, the percentage difference between the expected and actual values goes to zero.

Both definitions agree, so it's not the definition of the Law that the article has gotten wrong, just its application. The most useful way (for me, at least) to see the Law in action is to visualize it. Let's use a very simple simulation of a fair coin flip. We know that the mean of a fair coin should be 0.50. Using some simple R code, we can simulate flipping a coin 10,000 times:

set.seed(807060)
n <- 10000
x <- sample(0:1, n, replace = TRUE)
s <- cumsum(x)
r <- s / (1:n)
plot(r, ylim = c(0.01, 0.60), type = "l")
lines(c(0, n), c(0.50, 0.50), col = "red")
round(cbind(x, s, r), 5)[1:10, ]
r[n]

This yields a plot that looks like this:

The black line is the cumulative average; the red line is the known mean the simulation should converge to over time. Notice how after 1, 2, 10, or even 1,000 coin tosses, the average isn't 0.5. After 10,000, it's considerably closer, but still not necessarily there. After many, many more? That is how the Law of Large Numbers works, and it's the underpinning of most simulation work: with enough simulations, you should converge on the actual expected value, even when that value isn't known in advance, or (unlike our coin-toss example) is hard or impossible to derive analytically.
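To make the convergence concrete, here is a small sketch that extends the same coin-flip simulation: it records how far the running average sits from the true mean of 0.5 at a few (illustrative, arbitrarily chosen) checkpoints. The gap should generally shrink as the number of flips grows, which is exactly the "percentage difference goes to zero" behavior the Wolfram definition describes.

```r
# Track how far the running average of fair coin flips sits from the
# true mean (0.5) at increasing sample sizes.
set.seed(807060)
n <- 100000
x <- sample(0:1, n, replace = TRUE)   # 100,000 fair coin flips
r <- cumsum(x) / (1:n)                # running (cumulative) average

# Absolute error at a few checkpoints; these particular values are
# arbitrary and just chosen to span several orders of magnitude.
checkpoints <- c(10, 100, 1000, 10000, 100000)
round(abs(r[checkpoints] - 0.5), 5)
```

The error is not guaranteed to fall at every step (randomness can briefly push the average away), but over orders of magnitude the trend toward zero is unmistakable.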

But this has nothing to do with Apple's performance, past, present, or future. The company's earnings, share price, and profits aren't repeated trials of a random process. That it becomes progressively harder to deliver the same % increase as a company grows larger isn't wrong, but it isn't the Law of Large Numbers. Just because 500 billion is a large number (and in dollar terms, it is a very large number) doesn't mean that's what's operating here. There are many, many reasons to wonder whether Apple's trajectory can continue. Has it lost its small, lean operating patterns now that it's the largest company in the U.S.? Can it continue without Steve Jobs's driving personality? Can it keep its track record going? After all, eggs aren't the only thing that comes out of a golden goose.

But none of that is the Law of Large Numbers, which is about random processes converging toward their expectation over a progressively larger number of trials. I'd also note that, years ago, when Mr. Dell was calling for Apple to be scrapped and the proceeds given to shareholders, and the stock was trading in the low double digits instead of the middling-high triple digits, no one was saying, "You know, the Law of Large Numbers will eventually drag Apple up. It's a sure thing!"

Disclosure: I own stock in Apple. And am rather fond of simulation.

Filed under: General, R, Simulation, Soapbox
