# In three months, I’ll be in Vegas (trying to win against the house)

April 20, 2013

(This article was first published on Freakonometrics » R-english, and kindly contributed to R-bloggers)

In fact, I’m going there with my family and some friends, including two probabilists (I mean professionals; I am merely an amateur), with this incredible challenge: will I be able to convince the probabilists to go play at the casino?

Actually, I also want to study them carefully, to understand how we should play optimally. For example, I hope I can get them to play roulette. Roulette is simple. The French (or European) roulette is probably the simplest: if I bet on black, I win if one of the 18 black numbers comes up, and I lose if one of the 18 red numbers – or zero (which is green) – comes up. This gives a winning probability of 18/37, i.e. a 48.65% chance. But in Vegas, I think it is mostly American roulette that can be found in casinos, which has both a zero and a double zero (both favorable to the bank). Here, the probability of winning is 18/38, i.e. a 47.37% chance.
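In R, these two winning probabilities are one-liners:

```r
# probability of winning an even-money bet on black
18/37  # European roulette: 18 black pockets out of 37, about 0.4865
18/38  # American roulette: extra double zero, about 0.4737
```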

Now, let us discuss optimal strategy a little. For instance, suppose I go to Las Vegas with an initial wealth $s$ (say $100). The goal is to find the strategy which maximizes the probability of leaving Las Vegas with $2s$ (here $200). Should I play big, or small?

Assume that I can bet $x$ (which will be, here, for convenience, a fraction of $s$). With probability $p$, I will get $2x$ back, and with probability $1-p$, I will get $0$ (and lose my $x$). As mentioned above, $p$ is (a little) smaller than 50%: the casino must win (actually, we will see that this assumption has a very strong impact on the optimal strategy).

Suppose my goal is to double my initial sum, as mentioned in the introduction of this post. Maybe there is an optimal value of $x$ that maximizes the probability of doubling my wealth. To keep things simple, the game ends either when I go broke, or when I manage to double my initial wealth… Assume further that $x$ is fixed, and that I do not revise my bets. One can use Monte Carlo simulations to get an intuitive idea…

> bet=function(s=1,t=2*s,x=s/4,p=.4736,nsim=100000){
+     vp=rep(0,nsim)              # 1 if the target t was reached, 0 otherwise
+     for(i in 1:nsim){
+       w=s                       # current wealth
+       while((w>0)&(w<t)){
+          # win min(x,t-w) with probability p, lose x otherwise
+          ux=sample(c(min(x,t-w),-x),size=1,prob=c(p,1-p))
+          w=w+ux
+       }
+       vp[i]=(w>=t)}
+     return(mean(vp))
+ }

If we plot this probability as a function of $x/s$, we have the following

> BET=function(x) bet(x=x)
> vx=1/(1:20)
> px= Vectorize(BET)(vx)
> plot(vx,px,log="x")


Let us see if we can do the maths, and actually compute those probabilities.

For example, if $x = s$, I play everything I have, and I double with probability $p$. That one was simple. And indeed, on the graph above, the point on the far right is at probability $p$ (the red horizontal line).

Assume now that I bet $x = s / 2$; then I will play at least two rounds:

• with probability $(1-p)^2$, I will lose both rounds (and the game is over)
• with probability $p^2$, I will win both rounds and double my wealth (and the game is also over)
• with probability $2p(1-p)$, I will win once and lose once, and I will find myself with my initial wealth $s$ again, so the game starts over…

To make a long story short, the probability of doubling my initial wealth is

$p^2 + 2p(1-p)\big( p^2 + 2p(1-p)\big( \cdots \big)\big)$

which is

$p^2 \left(1 + 2p(1-p) + [2p(1-p)]^2 + \cdots \right) = \frac{p^2}{1-2p(1-p)}$
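As a quick sanity check (a small sketch, using the American-roulette value of $p$), the partial geometric sum does converge to this closed form:

```r
p <- 18/38                  # probability of winning one round (American roulette)
q <- 2*p*(1-p)              # probability of being back at s after two rounds
# partial sum of p^2 * (1 + q + q^2 + ...) versus the closed form
partial <- sum(p^2 * q^(0:200))
exact   <- p^2 / (1 - q)
c(partial, exact)           # both about 0.4475
```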

Let’s try something more general: I have initial wealth $s$, I can bet $x$ and the goal is to reach $2s$ (or, more generally, say, $t$). Now, the probability to reach $t$ from $s$ betting (always) $x$ is exactly the same as the probability to reach $t/x$ from $s/x$ betting only 1. Let $P_b(a)$ denote the probability to go from $a$ to $b$ betting 1 (let us use generic parameters). We can easily get the following equation

$P_b(a) = p\cdot P_b(a+1) + (1-p) \cdot P_b(a-1)$

Thus, we can write

$p\cdot (P_b(a+1)-P_b(a)) = (1-p)\cdot (P_b(a)-P_b(a-1))$

or equivalently

$(P_b(a+1)-P_b(a)) =\frac{1-p}{p}\cdot (P_b(a)-P_b(a-1))$

Iterating this relation down to $a=0$ gives

$P_b(a+1)-P_b(a)=\left(\frac{1-p}{p}\right)^a\cdot (P_b(1)-P_b(0))$

Now, observe that $P_b(0)=0$ (since I cannot win anything if I start with no money).

Let us write $P_b(a+1)-P_b(0)$ as a telescoping (“domino”) sum:

$[P_b(a+1)-P_b(a)]+[P_b(a)-P_b(a-1)]+\cdots+[P_b(1)-P_b(0)]$

i.e.

$\left(\frac{1-p}{p}\right)^a P_b(1)+\left(\frac{1-p}{p}\right)^{a-1} P_b(1)+\cdots+ \left(\frac{1-p}{p}\right)^0 P_b(1)$

so, summing this geometric series,

$P_b(a+1)=\left(1 -\left[\frac{1-p}{p}\right]^{a+1} \right) \left(1 -\frac{1-p}{p} \right)^{-1}\cdot P_b(1)$

Finally, we can write

$P_b(a)=\left(1 -\left[\frac{1-p}{p}\right]^{a} \right)\left(1 -\left[\frac{1-p}{p}\right] \right)^{-1}\cdot P_b(1)$

Here, $P_b(1)$ still has to be made explicit. The idea is to observe that $P_b(b)=1$ (once I reach the target, I have won), thus

$P_b(a)=\left(1 -\left[\frac{1-p}{p}\right]^{a} \right)\left(1 -\left[\frac{1-p}{p}\right]^{b} \right)^{-1}$

So finally,

$\mathbb{P}(\text{gain})=\left(1 -\left[\frac{1-p}{p}\right]^{s/x} \right)\left(1 -\left[\frac{1-p}{p}\right]^{2s/x} \right)^{-1}$
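A quick numerical check (a sketch, again with the American-roulette $p$) confirms that this formula reproduces the two special cases computed above:

```r
p <- 18/38
r <- (1-p)/p
# probability to go from wealth a to wealth b, betting 1 each round
P <- function(a, b) (1 - r^a) / (1 - r^b)
P(1, 2)   # betting everything (x = s): equals p
P(2, 4)   # betting x = s/2: equals p^2/(1-2p(1-p))
```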

Nice isn’t it? But to be honest, there is nothing new here. This is actually an old theorem discovered by Christiaan Huygens in 1657, then extended by Jacob Bernoulli in 1680 and finally properly established by Abraham de Moivre in 1711. It is possible to plot this graph, as a function of $x/s$,

> bet2=function(s=1,t=2*s,x=s/4,p=.4736){
+     # closed-form probability of reaching t from s, betting x each round
+     vp=(1-((1-p)/p)^(s/x))/(1-((1-p)/p)^(t/x))
+     return(vp)
+ }

The graph is the same as the one obtained with Monte Carlo simulation (hopefully). Observe, looking carefully at the function above, that the probability is increasing with $p$, which makes sense… Further, the probability is decreasing with $t$: the greedier I am, the smaller my chances of winning.

Now, the interesting part is what is plotted on the graphs above: the smaller $x$ (the size of the bet at each round), the smaller the chances of winning. If I want to win, it is important not to play like a small player: I should bet everything I have! Actually, the funny thing is that if the probability of winning were (slightly) larger than 1/2, then, on the contrary, I should bet as little as possible.
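To illustrate (a small sketch using the closed-form probability, with a hypothetical favorable game at $p=0.55$ for contrast):

```r
# probability of reaching t starting from s, betting x each round
P_win <- function(p, x, s=1, t=2*s){ r <- (1-p)/p; (1 - r^(s/x))/(1 - r^(t/x)) }
p <- 18/38
P_win(p, x=1)         # bold play, bet everything: equals p, about 0.474
P_win(p, x=1/100)     # timid play, tiny bets: essentially zero
# if the odds were (hypothetically) in my favor, the conclusion flips:
P_win(0.55, x=1)      # bold play: 0.55
P_win(0.55, x=1/100)  # timid play: almost certain to double
```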

So far, there is nothing new. Everything mentioned in this post can be related to a fundamental result of Lester Dubins and Leonard Savage, in “How to Gamble if You Must: Inequalities for Stochastic Processes” (published in 1965); see also Sudderth (1972). Of course, I could try another strategy, a little less reasonable I think, sometimes called the D’Alembert martingale. I believe more in luck than in coincidence, so when I win, I decrease my bet (do not tempt fate), but when I lose, I increase my bet (I must win someday). But let’s keep that for another post, someday…

Again, that’s the theory. I guess we should try it, and see how it works. I’ll try to upload pictures on the blog during the road trip, so if by the beginning of August nothing has been posted on the blog, please send a rescue team to save me at the Bellagio…

### Arthur Charpentier

Arthur Charpentier is a professor of Actuarial Science in Montréal. Formerly assistant professor at ENSAE ParisTech, associate professor at École Polytechnique, and assistant professor in Economics at Université de Rennes 1. Graduated from ENSAE, Master in Mathematical Economics (Paris Dauphine), PhD in Mathematics (KU Leuven), and Fellow of the French Institute of Actuaries.