
Happy Pi Day! I don’t encounter $\pi$ very much in my area of statistics, so this post might seem a little forced… In this post, I’m going to show one way to estimate $\pi$.

The starting point is the integral identity

\begin{aligned} 4 \int_0^1 \sqrt{1 - x^2} dx = \pi. \end{aligned}

There are two ways to see why this identity is true. The first is that the integral is simply computing the area of a quarter-circle with unit radius. The second is by explicitly evaluating the integral:

\begin{aligned} \int_0^1 \sqrt{1 - x^2}dx &= \left[ \frac{1}{2} \left( x \sqrt{1 - x^2} + \arcsin x \right) \right]_0^1 \\ &= \frac{1}{2} \left[ \frac{\pi}{2} - 0 \right] \\ &= \frac{\pi}{4}. \end{aligned}
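As a quick sanity check, we can also evaluate the integral numerically with R's built-in `integrate()`:

```r
# Numerically integrate sqrt(1 - x^2) over [0, 1]: the area of a quarter-circle
quarter_area <- integrate(function(x) sqrt(1 - x^2), lower = 0, upper = 1)$value
4 * quarter_area  # approximately 3.141593
```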

If $X \sim \text{Unif}[0,1]$, then the integral identity means that

\begin{aligned} \mathbb{E} \left[ 4\sqrt{1 - X^2} \right] = \pi. \end{aligned}

Hence, if we take i.i.d. draws $X_1, \dots, X_n$ from the uniform distribution on $[0,1]$, it is reasonable to expect that

\begin{aligned} \frac{1}{n}\sum_{i=1}^n 4\sqrt{1 - X_i^2} \approx \pi. \end{aligned}

The code below shows how well this estimation procedure does for one run as the sample size goes from 1 to 200:

set.seed(1)
N <- 200
x <- runif(N)
samples <- 4 * sqrt(1 - x^2)
estimates <- cumsum(samples) / 1:N
plot(1:N, estimates, type = "l",
     xlab = "Sample size", ylab = "Estimate of pi",
     main = "Estimates of pi vs. sample size")
abline(h = pi, col = "red", lty = 2)


The next plot shows the relative error on the y-axis instead (the red dashed line represents 1% relative error):

rel_error <- abs(estimates - pi) / pi * 100
plot(1:N, rel_error, type = "l",
     xlab = "Sample size", ylab = "Relative error (%)",
     main = "Relative error vs. sample size")
abline(h = 0)
abline(h = 1, col = "red", lty = 2)
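The plots above show a single run, so the trajectory depends on the particular draws. To get a rough sense of run-to-run variability, one option (a small extension of the code above, using `replicate()`) is to repeat the whole procedure many times at the final sample size and look at the spread of the resulting estimates:

```r
# Repeat the estimation 1000 times, each with n = 200 uniform draws,
# and summarize the distribution of the resulting estimates of pi
set.seed(2)
ests <- replicate(1000, mean(4 * sqrt(1 - runif(200)^2)))
mean(ests)  # close to pi on average
sd(ests)    # typical deviation of a single n = 200 estimate
```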