Psycho dice and Monte Carlo

December 16, 2011

Following Pierre’s post on psycho dice, I want to work out here by what average margin repeated plays would have to deviate before one could claim they are influenced by the power of the mind. The rules are the following (excerpt from the novel Midnight in the Garden of Good and Evil, by John Berendt):

You take four dice and call out four numbers between one and six–for example, a four, a three, and two sixes. Then you throw the dice, and if any of your numbers come up, you leave those dice standing on the board. You continue to roll the remaining dice until all the dice are sitting on the board, showing your set of numbers. You’re eliminated if you roll three times in succession without getting any of the numbers you need. The object is to get all four numbers in the fewest rolls.

Simplify the game by forgetting the elimination step. Suppose first that one plays with a single fair die with 1/p faces. The probability that it shows the right face is p (for somebody with no psy power). Denote by X the time to first success with one die; by independence of the rolls, X follows a geometric distribution Geom(p) (with the convention that the support starts at 1). X has the following probability mass and cumulative distribution functions, with q=1-p:

f_X(k)=pq^{k-1},\quad F_X(k)=1-q^k.
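As a side note, R’s built-in geometric distribution counts the number of failures before the first success, so a shift of one recovers the functions above; a quick check:

p=1/6; q=1-p
k=1:10
all.equal(p*q^(k-1), dgeom(k-1,p)) # f_X(k) = p q^(k-1)
all.equal(1-q^k, pgeom(k-1,p))     # F_X(k) = 1 - q^k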

Now denote by Y the time to success in the game with n dice. This simultaneous case is the same as playing n times independently with one die and taking Y as the maximum of the individual times to success. So Y’s cdf is

F_Y(k)=F_X(k)^n=(1-q^k)^n.
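Since Y takes positive integer values, its moments can also be obtained directly from this cdf through the tail sums E(Y)=\sum_{k\ge 0}P(Y>k) and E(Y^2)=\sum_{k\ge 0}(2k+1)P(Y>k); truncating the sums gives a quick numerical reference in R:

# exact moments from the cdf, truncating the tail sums
p=1/6; q=1-p; n=4
k=0:1000
EY=sum(1-(1-q^k)^n)            # E(Y)   = sum_k P(Y > k)
EY2=sum((2*k+1)*(1-(1-q^k)^n)) # E(Y^2) = sum_k (2k+1) P(Y > k)
c(EY, sqrt(EY2-EY^2))          # to compare with the Monte Carlo estimates below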

Its pmf can be obtained either exactly by difference, or up to a normalizing constant C by differentiation:

f_Y(k)=Cq^k(1-q^k)^{n-1}.

As it is not too far from the Geom(p) pmf, one can use the latter as the proposal in a (self-normalized importance sampling) Monte Carlo estimate. If the X_i’s are N independent Geom(p) variables, then

E(Y) \approx \frac{\sum_i X_i(1-q^{X_i})^{n-1}}{\sum_i (1-q^{X_i})^{n-1}} and E(Y^2) \approx \frac{\sum_i X_i^2(1-q^{X_i})^{n-1}}{\sum_i (1-q^{X_i})^{n-1}}.

The following R lines produce the estimates \mu_Y=E(Y) = 11.4 and \sigma_Y=sd(Y) = 6.5.

p=1/6                              # probability of the right face for one die
q=1-p
n=4                                # number of dice
rgeom1=function(n,p){rgeom(n,p)+1} # Geom(p) with support starting at 1
h=function(x){(1-q^x)^(n-1)}       # importance weight, proportional to f_Y/f_X

N=10^6                             # number of Monte Carlo samples
X=rgeom1(N,p)
(C=1/mean(h(X)))                   # normalizing constant
(m1_Y=C*mean(X*h(X)))              # estimate of E(Y)
(m2_Y=C*mean(X^2*h(X)))            # estimate of E(Y^2)
(sd_Y=sqrt(m2_Y-m1_Y^2))           # estimate of sd(Y)
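
As a sanity check, one can also simulate the simplified game directly, each play being the maximum of n independent Geom(p) waiting times:

# direct simulation of the simplified game (no elimination rule)
Nsim=10^5
Y=apply(matrix(rgeom1(n*Nsim,p),nrow=Nsim),1,max)
mean(Y) # should be close to m1_Y
sd(Y)   # should be close to sd_Y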


Now it is possible to use a classical hypothesis test to estimate the average margin by which repeated games should deviate in order to give statistical evidence of psy power. We are interested in testing H_0\,:\,E(Y)=\mu_Y against H_1\,:\,E(Y)<\mu_Y, for repeated plays.

If the game is played k times, then one rejects H_0 if the sample mean \bar{Y} is less than \mu_Y -\frac{\sigma_Y}{\sqrt{k}}q_{.95}, where q_{.95} is the 95% standard normal quantile. To indicate the presence of a psy power, someone playing k=20 times should therefore finish, on average, about 2 rolls below the predicted value \mu_Y= 11.4 (about 1 roll below if playing k=80 times). I can’t wait, I’m going to grab some dice!
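Numerically, plugging in the estimates \mu_Y and \sigma_Y obtained above:

# rejection thresholds of the one-sided 5% test, using the estimates above
mu_Y=11.4
sigma_Y=6.5
k=c(20,80)
(margin=qnorm(0.95)*sigma_Y/sqrt(k)) # about 2.4 and 1.2 rolls
mu_Y-margin                          # reject H_0 if the observed mean falls below these values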

