First-Player Advantage in a Simple Two-Player Probability Game
Juspreet Singh Sandhu
September 27, 2018
1 Introduction
We begin by introducing a simple probabilistic game between 2 players who flip a fair coin independently in alternating turns, and we analyze it using a geometric series. Specifically, we compute the probability that the first player to flip a head wins. After that, we generalize the game to draws from the uniform distribution on $[k]$, where $k \in \mathbb{N}$. We then conduct the same analysis and measure (formally) the advantage the first player has over the second.
2 The simple case
The Game: Let 2 players A, B play a game by flipping a fair coin independently in a sequence of alternating moves. The first player to flip a head (H) wins.
Note: We could just as easily have fixed tails (T) as the winning flip. However, since the probabilities of flipping a head and a tail are equal, this does not affect the analysis.
We now define some basic notations and concepts:
$\Pr(H) = \Pr(T) = 1/2$.
$\Pr(A = i)$ = probability that player A draws $i \in \{H, T\}$.
$\Pr(A = i \mid C)$ = probability that player A draws $i \in \{H, T\}$, conditioned on the event $C$.
$\Pr(\bar{E})$ = probability that the event $E$ does not happen.
Assume, WLOG, that player A goes first. Then, at move 1, $\Pr(A \text{ wins}) = 1/2$. Now, at move 2, the only way player B can win is if player A did not win at move 1 and player B flips an H. So,
$$\Pr(B \text{ wins at move } 2) = \Pr((B = H) \cap \bar{A}) = \Pr(B = H \mid \bar{A}) \Pr(\bar{A}) = 1/2 \cdot 1/2 = 1/4.$$
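As a quick sanity check on this computation, here is a minimal sketch (plain Python; the variable names are ours, not from the text) that enumerates the four equally likely outcomes of the first two flips and confirms that B wins at move 2 in exactly one of them:

```python
from itertools import product

# Enumerate the 4 equally likely outcomes of the first two flips:
# move 1 is A's flip, move 2 is B's flip.
outcomes = list(product("HT", repeat=2))

# B wins at move 2 iff A did not flip H at move 1 and B flips H at move 2.
b_wins_at_move_2 = [o for o in outcomes if o[0] != "H" and o[1] == "H"]

print(len(b_wins_at_move_2) / len(outcomes))  # 1/4 = 0.25
```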
So, we can now make the following simple observation (assuming player A goes first):
Player A can only win on moves $i = 2k + 1$, where $k \in \mathbb{Z}_{\geq 0}$.
Player B can only win on moves $j = 2k$, where $k \in \mathbb{Z}_{\geq 1}$.
Generalizing our observation and using the independence of the flips:
$$\Pr(A \text{ wins}) = \sum_{i=0}^{n} \Pr\Big( (A = H \text{ at move } 2i+1) \cap \bigcap_{\ell=1}^{2i} \bar{H}_{\ell} \Big),$$
where $\bar{H}_{\ell}$ denotes the event that the flip at move $\ell$ is not a head.
We can expand that further as:
$$\sum_{i=0}^{n} \Pr(A = H) \cdot (1/2)^{2i} = \sum_{i=0}^{n} \frac{1}{2^{2i+1}}.$$
Now, to see the value to which this converges, we take the limit as $n \to \infty$:
$$\lim_{n \to \infty} \sum_{i=0}^{n} \frac{1}{2^{2i+1}} = \frac{1}{2} \cdot \sum_{i=0}^{\infty} \Big(\frac{1}{4}\Big)^{i} = \frac{1}{2} \cdot \frac{4}{3} = \frac{2}{3}.$$
Therefore, Pr(A wins) = 2/3.
As Pr(A wins) + Pr(B wins) = 1, we see that Pr(B wins) = 1/3.
A similar analysis of the same problem using the tree-diagram approach can be found in the text Mathematics for Computer Science by Lehman, Leighton, and Meyer, in Chapter 14, Section 5.
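As an empirical check on the 2/3 versus 1/3 split, here is a small Monte Carlo sketch (plain Python; the function name, seed, and trial count are our choices) that plays the game repeatedly and estimates each player's win probability:

```python
import random

def play_coin_game(rng: random.Random) -> str:
    """Play one game: A and B alternate fair coin flips; the first head wins."""
    players = ("A", "B")
    move = 0
    while True:
        if rng.random() < 0.5:          # the current player flips a head
            return players[move % 2]
        move += 1

rng = random.Random(0)
trials = 200_000
a_wins = sum(play_coin_game(rng) == "A" for _ in range(trials))
print("Pr(A wins) ~", a_wins / trials)          # should be close to 2/3
print("Pr(B wins) ~", 1 - a_wins / trials)      # should be close to 1/3
```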
3 The general case
We now move on to the more general case of this game, in which the two players A and B draw from a sample space $S = [k]$, for some $k \in \mathbb{Z}^{+}$. The distribution on $[k]$ is uniform, so $\Pr(A = i) = 1/k$ for every $i \in [k]$.
The game is otherwise the same (in that the particular value chosen as the winning draw does not matter). So, we say that a player wins if they draw the number $j$, for some fixed $j \in [k]$.
Let us, again, assume that player A goes first (WLOG). Then, at move 1, $\Pr(A \text{ wins}) = 1/k$. At move 2, the only way player B can win is if player A does not win at move 1 and player B draws $j$. So,
$$\Pr(B \text{ wins at move } 2) = \Pr((B = j) \cap \bar{A}) = \frac{k-1}{k} \cdot \frac{1}{k} = \frac{k-1}{k^{2}}.$$
Our previous observation that player A can only win on odd moves and player B can only win on even moves still holds. Further, we observe that the probability that a player wins exactly at level $m$ is $1/k^{m}$ times the number of winning sequences of length $m$. It is easy to count these with some elementary enumerative combinatorics; the numbers of winning sequences at successive levels are:
$$A: \quad 1,\ (k-1)^{2},\ (k-1)^{4},\ \ldots \qquad (\text{levels } 1, 3, 5, \ldots)$$
$$B: \quad (k-1),\ (k-1)^{3},\ (k-1)^{5},\ \ldots \qquad (\text{levels } 2, 4, 6, \ldots)$$
(A player wins exactly at level $m$ precisely when the first $m-1$ draws all miss $j$ and the $m$-th draw equals $j$, giving $(k-1)^{m-1}$ sequences.)
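These counts can be verified by brute force for small $k$. The sketch below (plain Python; the choices k = 4, j = 1 are ours) enumerates all sequences of draws of length m over [k] and counts those whose first draw of j occurs exactly at move m; the counts match $(k-1)^{m-1}$, i.e. $1, (k-1)^2, (k-1)^4, \ldots$ on odd levels and $(k-1), (k-1)^3, (k-1)^5, \ldots$ on even levels:

```python
from itertools import product

k, j = 4, 1          # sample space [k] = {1, ..., k}, fixed winning value j
for m in range(1, 6):
    # Sequences of length m whose first occurrence of j is at move m:
    # the first m-1 draws avoid j, the m-th draw equals j.
    count = sum(
        1
        for seq in product(range(1, k + 1), repeat=m)
        if seq[m - 1] == j and all(x != j for x in seq[:m - 1])
    )
    print(f"level {m}: {count} winning sequences; (k-1)^(m-1) = {(k - 1) ** (m - 1)}")
```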
Summing the probability that player A wins exactly at each odd level:
$$\Pr(A \text{ wins}) = \sum_{i=0}^{n} \Pr\Big( (A = j \text{ at move } 2i+1) \cap \bigcap_{\ell=1}^{2i} \bar{J}_{\ell} \Big),$$
where $\bar{J}_{\ell}$ is the event that the draw at move $\ell$ is not $j$.
We can expand that further as:
$$\sum_{i=0}^{n} \Pr(A = j) \cdot \Big(\frac{k-1}{k}\Big)^{2i} = \frac{1}{k} + \sum_{i=1}^{n} \frac{(k-1)^{2i}}{k^{2i+1}}.$$
Now, to see the value to which this converges, we take the limit as $n \to \infty$:
$$\lim_{n \to \infty} \Big( \frac{1}{k} + \sum_{i=1}^{n} \frac{(k-1)^{2i}}{k^{2i+1}} \Big) = \lim_{n \to \infty} \frac{1}{k} \sum_{i=0}^{n} \Big(\frac{k-1}{k}\Big)^{2i}.$$
Therefore, the probability that player A wins is:
$$\Pr(A \text{ wins}) = \frac{1}{k} \sum_{i=0}^{\infty} \Big(\frac{k-1}{k}\Big)^{2i}.$$
As $\frac{k-1}{k} < 1$ for all $k \in \mathbb{Z}^{+}$, we have $\big(\frac{k-1}{k}\big)^{2} < 1$. Thus, the series converges.
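As a quick numerical illustration of this convergence (plain Python; the choice k = 6 and the number of terms are ours), the partial sums of the series stabilize rapidly:

```python
k = 6
alpha = (k - 1) / k

partial = 0.0
for i in range(60):
    partial += (1 / k) * alpha ** (2 * i)   # i-th term of (1/k) * sum_i alpha^(2i)
    if i in (0, 5, 10, 20, 59):
        print(f"after {i + 1:2d} terms: {partial:.6f}")
```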
Let $\alpha = \frac{k-1}{k}$. Then,
$$\Pr(A \text{ wins}) = \frac{1}{k} \sum_{i=0}^{\infty} \alpha^{2i} = \frac{1}{k} \sum_{i=0}^{\infty} (\alpha^{2})^{i} = \frac{1}{k} \cdot \frac{1}{1 - \alpha^{2}}.$$
Now, $\alpha^{2} = \big(\frac{k-1}{k}\big)^{2} = \frac{k^{2} - 2k + 1}{k^{2}}$, so $1 - \alpha^{2} = \frac{2k - 1}{k^{2}}$. We then substitute this value of $\alpha^{2}$ to obtain our convergent limit:
$$\Pr(A \text{ wins}) = \frac{1}{k} \cdot \frac{1}{1 - \alpha^{2}} = \frac{1}{k} \cdot \frac{k^{2}}{2k - 1} = \frac{k}{2k - 1}.$$
The probability that player B wins, then, is:
$$\Pr(B \text{ wins}) = 1 - \Pr(A \text{ wins}) = \frac{k-1}{2k-1}.$$
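Both closed forms can be checked by simulating the general game directly. The sketch below (plain Python; the function name, seed, and parameter choices are ours) lets A and B alternately draw uniformly from {1, ..., k} until one of them hits the fixed value j, and compares the empirical frequencies with $k/(2k-1)$ and $(k-1)/(2k-1)$:

```python
import random

def play_uniform_game(k: int, j: int, rng: random.Random) -> str:
    """A and B alternately draw uniformly from {1, ..., k}; the first to draw j wins."""
    players = ("A", "B")
    move = 0
    while True:
        if rng.randint(1, k) == j:
            return players[move % 2]
        move += 1

k, j, trials = 5, 3, 200_000
rng = random.Random(0)
a_wins = sum(play_uniform_game(k, j, rng) == "A" for _ in range(trials))
print("empirical Pr(A wins):", a_wins / trials, "  exact:", k / (2 * k - 1))
print("empirical Pr(B wins):", 1 - a_wins / trials, "  exact:", (k - 1) / (2 * k - 1))
```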
It is critical to note that, although the difference between the probabilities of A winning and B winning tends to 0 as $k \to \infty$, it remains (marginally) greater for A for every finite value of $k$. This can be defined as the "advantage" that the first player has over the second in the game:
Definition (Advantage): The advantage a player $A_i$ has over a player $A_j$ is:
$$\mathrm{Adv}(A_i, A_j) = \Pr(A_i \text{ wins}) - \Pr(A_j \text{ wins}).$$
Note: This definition is slightly different from the statistical definition of advantage used in cryptography, where advantage is used to measure the similarity of a distribution to the uniform distribution (indirectly measuring how hard it is for an adversary to "guess" the distribution).
In our 2-player game with the uniform distribution $D$ on $[k]$, the advantage the first player has over the second is
$$\Pr(A \text{ wins}) - \Pr(B \text{ wins}) = \frac{k}{2k-1} - \frac{k-1}{2k-1} = \frac{1}{2k-1},$$
which means the first player always has an advantage (though it gets smaller as $k$ increases).
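To see how quickly this edge shrinks, a short loop (plain Python; the sample values of k are ours) tabulates $1/(2k-1)$ for a few values of $k$:

```python
for k in (2, 5, 10, 100, 1000):
    print(f"k = {k:5d}: first-player advantage = {1 / (2 * k - 1):.6f}")
```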
Now, while $\frac{1}{2k-1} \to 0$ as $k \to \infty$, it is critical to note that our distribution is not defined for $k \to \infty$. Even worse, there is nothing we can do to define such a distribution on a countably infinite set (if we want it to be uniform):
Lemma: For any countably infinite set $S$, there is no uniform distribution $D$ on $S$.
Proof: We use a simple proof by contradiction.
Assume such a $D$ exists, so that $\sum_{i=1}^{\infty} D(i) = 1$. As $D$ is uniform, $D(i) = a$ for all $i \in \mathbb{N}$, for some constant $a \geq 0$. If $a = 0$, then $\sum_{i=1}^{\infty} D(i) = 0 \neq 1$. If $a > 0$, then $\sum_{i=1}^{\infty} D(i) = \sum_{i=1}^{\infty} a = \infty \neq 1$. Either way we reach a contradiction, so no uniform distribution $D$ exists on $S$.
QED