Sunday, January 3, 2016

Kelly Criterion

Suppose someone offers you the following deal: you put down a certain amount of money, and a fair coin is tossed. Heads, you triple your money; tails, you lose the money wagered.

Loss Aversion

Q1: Do you take the deal?
A1: You probably should; the expected payout per dollar wagered is 0.5 * $2 + 0.5 * (-$1) = $0.50 > 0.

Q2: Suppose you have $1 million in life savings. Do you go all in?
A2: You probably shouldn't. There is a 50% chance your life savings will be wiped out! The downside doesn't justify the upside.
[Figure: payout based on fraction wagered for a single bet, assuming a starting amount of $1.]
Now say the terms of the deal are extended.

Q3: The deal will be offered, say, once every month for the rest of your life. What fraction \(f\) of your savings would you wager?

If \(f \approx 0\), you don't play the game or you bet small; you are probably not making the most of the opportunity offered. If \(f \approx 1\), you go all in, and a single loss will wipe you out entirely.

The intuitive answer seems to be somewhere between 0 and 1.

Further, let us suppose that this fraction is held constant over all the (monthly) bets.

Kelly Criterion

The Kelly criterion tells us the size of the bet, given the payouts and probabilities of the possible outcomes. From the Wikipedia article, if

  • \(f^*\) is the fraction to bet;
  • \(b\) is the net earnings per dollar wagered, if you win;
  • \(q\) is the probability of winning;
  • \(1-q\) is the probability of losing,
then the optimal fraction to bet is \[f^{*} = \frac{bq - (1-q)}{b} = \frac{q(b + 1) - 1}{b}.\] This fraction may be thought of as \[f^{*} = \frac{\text{expected net payout per dollar wagered}}{\text{net payout per dollar if you win}}.\]

We will derive this formula in a later post. For our deal, \(b = 2\) and \(q = 0.5\), so \[f^{*} = \frac{2 \times 0.5 - 0.5}{2} = 0.25.\]
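As a quick sanity check, here is a minimal Python sketch of the formula (the helper name `kelly_fraction` is my own):

```python
def kelly_fraction(b, q):
    """Kelly-optimal fraction of the bankroll to wager.

    b: net earnings per dollar wagered, if you win
    q: probability of winning
    """
    return (b * q - (1 - q)) / b

# The coin-flip deal: triple your money on heads (b = 2), fair coin (q = 0.5).
print(kelly_fraction(2, 0.5))  # 0.25
```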

Let's simulate a series of such bets. Suppose we start with an amount \(a_0\). After \(n\) bets, we are left with \(a_n = a_0 R_1 R_2 \cdots R_n\), or \[a_n = a_0 \prod_{i=1}^{n} R_i,\] where \(R_i\) is the amount we are left with, per dollar of bankroll, after bet \(i\): \(R_i = 1 + bf\) if we win bet \(i\), and \(R_i = 1 - f\) if we lose it.

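Here is a rough Python sketch of one such series (the function name `simulate`, the default parameters, and the fixed seed are my own choices):

```python
import random

def simulate(f, b=2, q=0.5, n=100, a0=1.0, seed=0):
    """Simulate n bets, wagering a fraction f of the bankroll each time.

    Each R_i is 1 + b*f on a win and 1 - f on a loss.
    Returns the bankroll after every bet.
    """
    rng = random.Random(seed)
    a, path = a0, []
    for _ in range(n):
        r = 1 + b * f if rng.random() < q else 1 - f
        a *= r
        path.append(a)
    return path

path = simulate(f=0.25)
print(path[-1])  # bankroll after 100 bets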
[Figure: starting with $1, betting \(f^* = 0.25\) of the bankroll on each of 100 bets.]
The best one can do is to win all bets, leading to a payout of \[a_{n} = a_0 (1 + bf)^n.\]

The worst one can do is to lose all bets, leading to a payout of \[a_{n} = a_0 (1-f)^n.\] Notice that in theory you never go bankrupt if \(f < 1\), since you are never wagering all your assets. However, in practice there is usually a limit, since fractions of a penny may be unacceptable wagers.
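For concreteness, with \(b = 2\), \(f = f^* = 0.25\), and \(n = 100\), these bounds work out to roughly \[a_{100} \le a_0 (1.5)^{100} \approx 4.1 \times 10^{17}\, a_0, \qquad a_{100} \ge a_0 (0.75)^{100} \approx 3.2 \times 10^{-13}\, a_0.\]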

Simulations

One series is an anecdote. Let's collect some data by repeating the experiment 10 times at different values of \(f\): 0.25 (optimal), 0.05 (smaller than optimal), and 0.6 (larger than optimal). A sketch of this experiment follows.
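This reuses the `simulate` function from the sketch above; the run count and seeds are arbitrary:

```python
import math

def summarize(f, runs=10, n=100):
    """Arithmetic and geometric means of the final bankrolls
    across several independent runs."""
    finals = [simulate(f, n=n, seed=s)[-1] for s in range(runs)]
    arith = sum(finals) / len(finals)
    geom = math.exp(sum(math.log(a) for a in finals) / len(finals))
    return arith, geom

for f in (0.25, 0.05, 0.6):
    print(f, summarize(f))
```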
[Figure: \(f = 0.25\). Black and blue lines are the arithmetic and geometric means, respectively.]
Notice that the arithmetic mean (the thick black line) is overly optimistic, and is heavily influenced by a lucky few outliers. The thick blue line is the geometric mean, which seems more "reasonable": geometric means weight positive outliers less strongly.

To see this important concept in a more salient way, assume you take a series of 10 bets and go all in (\(f = 1\)), with everything else about the problem unchanged. Unless you win every round (probability \(0.5^{10} = 1/1024\)) and end up with \(3^{10}\) dollars, you end up with zero (probability \(1023/1024\)). However, the mean payout is \(\langle a_n \rangle \approx 57.7\) dollars, which obscures reality, since the geometric mean and the median are essentially 0.

This is like playing a lottery.
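The numbers above are easy to verify with a few lines of Python (a sketch; enumerating the 1024 equally likely outcomes is my own framing):

```python
import statistics

# All-in (f = 1) for 10 bets: win every toss or go broke.
jackpot = 3 ** 10                  # 59049 dollars if every toss is heads
outcomes = [jackpot] + [0] * 1023  # one entry per equally likely branch
print(sum(outcomes) / len(outcomes))  # arithmetic mean ~= 57.7
print(statistics.median(outcomes))    # median = 0
```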

If we use a smaller \(f\), we trade return for smaller variance. We are being more conservative when we do this.

Let's pick \(f = 0.05 < f^*\) to demonstrate this. For ease of comparison, I've kept the scale of the plot the same as the previous one.
[Figure: \(f = 0.05\).]
Now \(f = 0.6 > f^*\) shows how the downside can catch up with us if we take on too much risk. Lots of disasters.
[Figure: \(f = 0.6\).]
This post has gone on too long, so I'll discuss the derivation of the criterion in a following post.
