
## On a Battle Markov Chain Model

About a month ago I befriended Dan, from Giant Battling Robots, online, as he was investigating Lanchester's differential equations to model the outcome of battles, particularly those of videogames. He sent me this very interesting link describing a Markov chain model for battles, since he saw that I was trying to figure out a way to coalesce Markov reasoning with Lanchesterian DEs. The pdf describes quite a different approach from the one I've been taking or investigating, but I thought it would be interesting nevertheless to attempt to illustrate it by example. The subject of this post is therefore expository, based on Dan's link, but the example and explanation are entirely mine, and so any error in concept is my own.

Suppose BLUE and GREEN are feuding - in fact, battling to the death - each possessing the same or differing technologies for their engagements. A particular technology translates to a probability of killing the other side's units. For example, GREEN may have bazooka technology that assures that, when a bazooka is launched, the probability of killing or disabling an enemy unit (perhaps a tank) is 0.45, while BLUE's tanks may kill bazooka-launchers with probability 0.55. These probabilities represent the fact that, if an encounter occurs at a time step, in 55 percent of encounters the tanker (BLUE) succeeds in killing or disabling GREEN's unit, while in the other 45 percent it is GREEN that succeeds.

For a battlefield in a given area, depending on hiding places like trees, forests, or caves (terrain), the probability that a tank and a bazooka-launcher encounter each other at a time step is an exponentially distributed random variable. What this means is basically this: a single BLUE and a single GREEN will find each other with a given (expected) probability at a given time step, and higher probabilities of encounter are less likely than lower ones at any time step (I may be having a conceptual problem here, since an exponentially distributed random variable has a domain from 0 to infinity, whereas of course a probability must lie between 0 and 1). In other words, the probability of a GREEN encountering a BLUE is, let's say, 0.87 in expectation. It's a fuzzy 0.87, because at the next time step it may be less or more, but it's 0.87 on average.

Now, since the probability of finding each other and the probability of winning an encounter (we assume 1-on-1) are independent, the probability of killing the opposing unit is simply the probability of encounter times the probability of succeeding in the kill. If BLUE has 4 units and GREEN 3 - our initial state (4,3) - the probability of transitioning to state (3,3) is about 0.39: 0.87, the (expected) encounter probability at the time step, times 0.45, the success probability of GREEN. Notice we allow for only ONE encounter (or NONE) at any time step: this makes sense if the time steps are sufficiently small and resolution of the encounter, if there is one, is instantaneous. The probability of transitioning to state (4,2) is about 0.48: the expected probability of encounter, 0.87, times the probability of success by BLUE, 0.55. Finally, the probability of staying in state (4,3) is 1 - 0.39 - 0.48 = 0.13.
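The arithmetic above can be sketched in a few lines of Python (the encounter and kill probabilities are the example's assumed values, not data from any real source):

```python
# One-step transition probabilities from a state (blue, green), using the
# example's assumed numbers: encounter probability 0.87, BLUE kill
# probability 0.55, GREEN kill probability 0.45.
P_ENCOUNTER = 0.87
P_BLUE_WINS = 0.55   # BLUE kills a GREEN unit in an encounter
P_GREEN_WINS = 0.45  # GREEN kills a BLUE unit in an encounter

def transition_probs(blue, green):
    """Return {next_state: probability} for one time step from (blue, green)."""
    if blue == 0 or green == 0:                 # battle over: absorbing state
        return {(blue, green): 1.0}
    p_blue_loses = P_ENCOUNTER * P_GREEN_WINS   # -> (blue - 1, green)
    p_green_loses = P_ENCOUNTER * P_BLUE_WINS   # -> (blue, green - 1)
    return {
        (blue - 1, green): p_blue_loses,
        (blue, green - 1): p_green_loses,
        (blue, green): 1.0 - p_blue_loses - p_green_loses,  # no encounter
    }

probs = transition_probs(4, 3)
# probs[(3, 3)] ≈ 0.39, probs[(4, 2)] ≈ 0.48, probs[(4, 3)] = 0.13
```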

Neglecting the natural variations in our expected encounter probability, we can now craft the following chart, which describes the transition probabilities to each state succinctly. Yellow cells are possible states. Directly to the left of a state is the probability that BLUE loses one unit, and directly below it the probability that GREEN loses one unit at the next time step. The diagonal probability is the probability of remaining in that state at the next time step:

The battle ends whenever one of the coordinates of our state is 0: for example, (0,3), or (0,1), or (2,0)... Clearly, state (0,0) is unreachable: at least one unit must remain in the end. Of course, in this model, who wins in the end depends not only on the technology constants (probabilities of kill or success), but also on numerical superiority: namely, the initial conditions or initial state coordinates.

It's difficult to order the states in any meaningful way when they are Cartesian coordinates, as in this example. I used a diagonal method to craft the transition probabilities in matrix form. Orange states are absorbing:
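A sketch of one way to assemble the matrix under a diagonal ordering (the specific sort key is my assumption; the spreadsheet may order the diagonals differently):

```python
# Assemble the transition matrix for the (4, 3) battle, ordering states by
# diagonals (sum of the coordinates). Uses the example's assumed probabilities.
import itertools

P_ENCOUNTER, P_BLUE_WINS, P_GREEN_WINS = 0.87, 0.55, 0.45

# All states (b, g) with 0 <= b <= 4, 0 <= g <= 3, except the unreachable (0, 0).
states = sorted(itertools.product(range(5), range(4)),
                key=lambda s: (s[0] + s[1], s[0]))   # diagonal ordering
states.remove((0, 0))
index = {s: i for i, s in enumerate(states)}

n = len(states)
T = [[0.0] * n for _ in range(n)]    # T[i][j] = P(state i -> state j)
for (b, g) in states:
    i = index[(b, g)]
    if b == 0 or g == 0:             # absorbing: the battle is over
        T[i][i] = 1.0
        continue
    T[i][index[(b - 1, g)]] = P_ENCOUNTER * P_GREEN_WINS  # BLUE loses a unit
    T[i][index[(b, g - 1)]] = P_ENCOUNTER * P_BLUE_WINS   # GREEN loses a unit
    T[i][i] = 1.0 - P_ENCOUNTER                           # no encounter
```

Each row sums to 1, as a transition matrix must, and the absorbing rows are unit rows.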

As in the usual Markov chain treatment, the nth power of this matrix gives the transition probabilities after n time steps. After many time steps, if the chain possesses certain properties (ours is an absorbing chain in which every state eventually leads to an absorbing state), the difference between the matrix at the nth step and at the (n+1)th becomes negligible: all entries are "stable." At the same time, the initial state vector (in our case 0 everywhere except a 1 at state (4,3)) converges, through several intermediate vectors, to a stable steady-state vector. This stable vector indicates the probability of each outcome after a very long time: it gives the proportion of battles GREEN or BLUE wins, considering every possible battle, even the very long ones; so it is indeed a true probability of winning. For example, the following chart

shows that, if we simulate battles with identical initial states and transition probabilities for up to 20 time steps, by the 20th time step BLUE is likely to have won the majority of simulations (by obliterating the opposition and remaining with 1, 2, 3, or 4 units at the end). Notice the fixed proportions of wins for BLUE and for GREEN. Or think about it this way. Let U represent that the battle is undecided at a time step, and B that BLUE has won by that time step. B will continue to be the state of the battle after it becomes B for the first time (the battle is now over, so B remains the state of the system). G indicates likewise that GREEN has won by that time step, and an infinite string of Gs after the first G indicates that G remains the state of the battle after GREEN has won:

Simulation 1:  UUBBBBBBBBBBB....

Simulation 2: UUUUUUUGGGGG.....

etc.

We will have strings of Us followed by either Bs or Gs, but not both, infinitely. If we count the proportion of Us at the first time step (first position of the string) across all simulations, this corresponds to the chart's proportions at step 1. If we count the proportions of Gs and Bs at the 20th step across all simulations, these correspond to the chart's proportions at step 20. The larger the time step, the closer these proportions come to the true probabilities of winning, once most battles are (long) over, and these probabilities are stable/convergent. In my particular example, most battles are over by the 20th time step, and BLUE has won most of them, no matter how short or long they took.
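These proportions can also be computed directly by pushing the initial distribution through the chain, without simulating individual battles; a sketch, again under the example's assumed numbers:

```python
# Push the initial distribution through many time steps and read off the
# probabilities of the absorbing states (example's assumed numbers).
P_ENC, P_BLUE, P_GREEN = 0.87, 0.55, 0.45

def step(dist):
    """One time step: move probability mass along the transition arrows."""
    new = {}
    for (b, g), p in dist.items():
        if b == 0 or g == 0:                      # absorbed: battle over
            new[(b, g)] = new.get((b, g), 0.0) + p
            continue
        for s, q in (((b - 1, g), P_ENC * P_GREEN),   # BLUE loses a unit
                     ((b, g - 1), P_ENC * P_BLUE),    # GREEN loses a unit
                     ((b, g), 1.0 - P_ENC)):          # no encounter
            new[s] = new.get(s, 0.0) + p * q
    return new

dist = {(4, 3): 1.0}                              # initial state
for _ in range(200):                              # "a very long time"
    dist = step(dist)

p_blue_wins = sum(p for (b, g), p in dist.items() if g == 0)
p_green_wins = sum(p for (b, g), p in dist.items() if b == 0)
print(round(p_blue_wins, 3), round(p_green_wins, 3))  # → 0.745 0.255
```

With these numbers BLUE wins about three battles in four: numerical superiority (4 vs. 3) and the better per-encounter kill probability compound.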

I have crafted an .xls file that might make things clearer. The items that you may want to modify are in grey, and the chart is there too, in a different sheet. For the (grey) state vector, put a 1 in the initial state and 0 everywhere else; otherwise make sure you enter proportions that add up to 1. I only did the chart with states up to (4,3), but you can probably extrapolate the idea to states with more than 4 BLUE units and 3 GREEN ones. You can probably use the chart for smaller-dimension games without much effort, too.

As always, comments or questions are welcome!

Enjoy!

MarkovBattle


## On the National Mexican Lottery, II

The jackpot this week is up at 300 million (MXP)! A few million more and this game turns into a fair or favorable game, how cool!

In my last post, I mentioned that some of my friends and students said that one could be really lucky and win the jackpot if one bought the first few tickets and won:

"Unless he got truly lucky and won the jackpot before he spent too much, as in buying the first few tickets, and then quit, they argued!"

And there wasn't really much I could say about it. It is true, after all, that one could be so lucky, with minuscule probability. However, "what is the average number of tickets you have to buy before you win for the first time, if you play the 6-choice/7-choice/etc. repeatedly?" was a question people kept asking me with some insistence. Although to me it seemed somewhat evident that you need to buy about $\binom{56}{6}$ tickets on average for the 6-choice, my friends and students weren't convinced until I showed them the mathematics supporting this.

It is really not difficult to calculate this if one understands what expected value is. So let us assume that the words

LLLLLLLLLW

LW

W

LLLLLLLLLLLLLLW,

etc., are Bernoulli-trial strings (there really are only two possibilities, the binary win or lose), and they are allowable if we stop after the first win. For the 6-choice, the word whose first win occurs at the nth trial has probability $(1 / \binom{56}{6}) \cdot (1 - (1 / \binom{56}{6}))^{n-1}$, because the first win is preceded by $n - 1$ losses.

The expected number of trials until the first win is:

$(1 / \binom{56}{6}) \sum_{n=1}^{\infty} n \cdot (1 - (1 / \binom{56}{6}))^{n-1}$

One recognizes this as the derivative of a convergent geometric series* (all probabilities are less than one, so they lie inside the radius of convergence), and thus the above equals

$(1 / \binom{56}{6}) \cdot \frac{1}{(1 / \binom{56}{6})^2} = \binom{56}{6}$,

the sum having been substituted adequately. Confirming my "far-out" claim (to me, really, unsurprisingly), you have to buy an average of $\binom{56}{6}$, or about thirty-two million, tickets before you'll see the first win.

This idea can be extended for the 7, 8, 9, 10-choice and so on.
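As a quick sanity check (not in the original post), Python's `math.comb` gives the exact count, and the expectation formula can be verified numerically on a smaller success probability, where the series is cheap to sum:

```python
from math import comb

print(comb(56, 6))   # → 32468436: about thirty-two million tickets on average

# Check sum_{n>=1} n * p * (1 - p)^(n-1) = 1/p on a small example (p = 0.2 is
# an arbitrary stand-in; summing ~32 million terms for the lottery's actual p
# would be slow, but the algebra is identical).
p = 0.2
partial = sum(n * p * (1 - p) ** (n - 1) for n in range(1, 500))
print(round(partial, 6))   # → 5.0, i.e. 1/p
```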

*NB

The series representation of the function

$\frac{1}{(1-x)^2} = \sum_{n=1}^{\infty} n \cdot x^{n-1}$ with radius of convergence $-1 < x < 1$.

This is obtainable by taking the derivative of the series representation for:

$\frac{1}{1-x} = \sum_{n=0}^{\infty} x^n$ with radius of convergence $-1 < x < 1$.


## On the National Mexican Lottery, I (a Cool Combinatorial Identity)

I sometimes help people prepare for any of the plethora of standardized tests required for everything academic. One of my students (now applying for a Fulbright), also a high school friend, while on the improbable topic of probability (it scarcely appears in the general exams), asked me about the likelihood of winning the most popular game of chance run by the National Mexican Lottery (Julio is naturally curious, but these hard times surely provide additional motivation!). He phrased it thusly: you have a piece of paper with fifty-six numbers and you can pick six of them. Then, at the lottery, if the six drawn balls match your chosen six, you win the grand prize. Naturally, I replied almost without thinking that the probability of winning was

$\frac{1}{\binom{56}{6}}$,

or about one in thirty-two million.

This follows from basic considerations in combinatorics and probability: suppose you fill six slots. In the first slot, you can place any one of the 56 numbers. Having placed a number in the first slot, there are 55 left that can go into the second slot, and so on... 54... 53, 52, and finally 51 remaining numbers can go in the last slot. By the counting principle, the number of ordered arrangements is simply $56 \cdot 55 \cdot ... \cdot 51 = \frac{56!}{50!}$. Since the ordering doesn't matter, and there are $6!$ ways in which any particular configuration of six can be ordered, the total must be divided by $6!$. Thus, we obtain $\frac{56!}{50!6!}$ possible outcomes. This is really the definition of the combinatorial operation "choose": $\binom{n}{s} = \frac{n!}{s! (n-s)!}$.

My friend Julio then told me that there was the option of buying an extra choice. In other words, rather than the 6 basic choices available, you could purchase 7 (the lottery would still draw 6 of 56 balls). In fact, he said, you can purchase 8, 9, and even 10 choices. He wanted to know by how much his probability of winning increased in each case. He was thinking of buying a ticket or several, especially because at the time (last week or so) the jackpot was 206 million pesos, or about 18 million dollars (now it's at 240 million MXP, or about 21 million USD, there having been no winner), and was interested in what strategy maximized his probability of winning. I thought to myself: "Hmm... lotteries aren't usually fair games... but let's try it out and see if we can figure something out with the power of mathematics." Admittedly, at first I was stumped... I had to think this through a bit! It wasn't as obvious as you might think from the first derivation. In the end I reasoned it as follows:

The fact that I can now purchase 7 options means, by the above reasoning, that I can choose seven numbers in $\binom{56}{7}$ ways, ordering not mattering. This is my sample space. Now, out of these seven, six are set or marked-for-win balls, so if I assume I have actually chosen them and will win, there remains one number that can be wrong, and it can be any of the 50 unchosen numbers. There are $\binom{50}{1}$ ways to choose it. My probability of winning is therefore $\frac{\binom{50}{1}}{\binom{56}{7}}$.

For eight options, a similar argument means that my sample space is $\binom{56}{8}$.  With six balls set to win, I have two balls that can be wrong, or $\binom{50}{2}$ ways in which I can be right.  The probability of winning in this case is $\binom{50}{2}/\binom{56}{8}$.

For nine options, the probability is $\binom{50}{3}/\binom{56}{9}$, and for ten it is $\binom{50}{4}/\binom{56}{10}$.

Generally speaking, and by the above argument, I thought, if I have a set of n balls to choose from, with s marked to win, and the possibility of choosing r, with $r \geq s$, the probability of winning must be:

$\binom{n-s}{r-s} / \binom{n}{r}$        (A)
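Formula (A) is easy to tabulate; a quick sketch with n = 56 and s = 6 as in the game. The improvement factors relative to the basic 6-choice fall out as exact integers:

```python
# Win probability for r = 6..10 choices via formula (A): C(n-s, r-s) / C(n, r).
from math import comb

n, s = 56, 6
base = 1 / comb(n, s)                 # the plain 6-choice probability
factors = []
for r in range(6, 11):
    p = comb(n - s, r - s) / comb(n, r)
    factors.append(round(p / base))   # improvement over the base game
print(factors)  # → [1, 7, 28, 84, 210]
```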

Happy at my apparent triumph, it never occurred to me that there was another way to argue the matter at all. In fact, I would soon discover there was a simpler way to think about it! I began concocting a table of probabilities to show Julio when my sister, as she does, meandered circuitously into my room, finally asking what I was doing.

A genius engineer, my sister tends to think of things in much simpler and more efficient ways than I ever can. I think it is a blessing to have someone like her. In so many ways she's very much like me, but also so dissimilar, and so she comes up with different considerations on a problem... such as sometimes lead to tiny discoveries, like the identity I'll be proving in a bit. Working on the problem of winning probabilities, she argued the following: there are $\binom{56}{6}$ possible outcomes of choosing six balls from fifty-six. If I have seven slots in which to place the six correct balls, then there are $\binom{7}{6}$ ways to do this. If there are eight slots, there are $\binom{8}{6}$ ways, and so on.

In general, my sister's argument for the probability of win can be expressed as:

$\binom{r}{s} / \binom{n}{s}$       (B)

The most interesting thing about this exchange is that it happened in a matter of minutes... such is opportunity.  Thrilling, amusing, and... evanescent.  Kind of like life.

The Pasquali-Pasquali combinatorial identity. I'm calling it this temporarily because, despite my efforts, I have been unable to find it explicitly in this form in either combinatorics or probability texts. Unfortunately, I don't have access to scholarly mathematics journals, so I would be much obliged if my readership would point me to a proper reference. In the meantime, it's nice that this identity has its motivation in a real-world combinatorial argument.

$\binom{n-s}{r-s} \cdot \binom{n}{s} = \binom{n}{r} \cdot \binom{r}{s}$, with $n \geq r \geq s \geq 0$.

Proof.

By the definition of the choice operation,

$\binom{n-s}{r-s} \cdot \binom{n}{s} = \frac{(n-s)!}{(r-s)!(n-r)!} \cdot \frac{n!}{s!(n-s)!} = \frac{n!}{(r-s)!(n-r)!s!} =$

$= \frac{n! r!}{r!(n-r)!s!(r-s)!} = \frac{n!}{r!(n-r)!} \cdot \frac{r!}{s!(r-s)!} = \binom{n}{r} \cdot \binom{r}{s} \verb| | \Box$
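A brute-force numeric check of the identity over a small range (a sketch using Python's built-in binomial, `math.comb`):

```python
# Verify C(n-s, r-s) * C(n, s) == C(n, r) * C(r, s) for all n >= r >= s >= 0
# up to n = 20.
from math import comb

ok = all(comb(n - s, r - s) * comb(n, s) == comb(n, r) * comb(r, s)
         for n in range(21)
         for r in range(n + 1)
         for s in range(r + 1))
print(ok)  # → True
```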

I am sure I have read this interesting datum about the National Mexican Lottery somewhere, but I cannot pinpoint exactly in which book: it is the number one (or two) source of income for the Mexican government. Everybody plays this game of chance, in the hopes of becoming millionaires from one day to the next; such is the Mexican inclination, such is the Mexican character: sensation-craving. A real desire to change an otherwise ordinary existence.

Although it is true, as will be seen in the file I'll be linking to, that the probability of winning increases by a factor of 7 if one purchases the 7th extra choice (compared to the base case of 6 choices), by 28 with the 8th choice, by 84 with the 9th, by 210 with the 10th, and so on... this is only leading or encouraging information (in fact, the National Lottery publishes these probabilities in the hopes of persuading people to play). Firstly, it is still extremely improbable to win. Secondly, what one must really focus on is expected profit. Negative expected profit means that, if you play repeatedly, on average you will lose money despite the occasional win (!). Such games are called "unfair," because money goes out of your pocket and into the coffers of the House. "Fair" games are those in which the expected profit is zero, and "favorable" games are those in which it is positive, as you are in effect winning money on average.

The point at which this particular game is "fair" is in reality determined by the cost of the ticket. The normal 6-choice ticket is 15 pesos, but the expected profit on the ticket is about -9 pesos:

$P = (1 / \binom{56}{6}) \cdot (206,000,000 - 15) + (1 - (1 / \binom{56}{6})) \cdot (-15)$

To be fair, the jackpot would have to be around 488 million pesos:

$0 = (1 / \binom{56}{6}) \cdot (J - 15) + (1 - (1 / \binom{56}{6})) \cdot (-15)$

Above 488 million, it is really to your advantage to buy as many (different-combination) tickets as possible.  My suggestion to Julio and some other very interested friends was to wait until the jackpot accumulated about 488 million pesos, so that they would have a real chance of earning some money.
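The fair-jackpot arithmetic is a one-liner once the expected profit is written down; a sketch in Python (ticket price and jackpot are the post's figures, in pesos):

```python
# Expected profit of the 15-peso 6-choice ticket, and the jackpot that makes
# the game fair (values in pesos, as in the post).
from math import comb

p = 1 / comb(56, 6)    # probability of winning
ticket = 15

def expected_profit(jackpot):
    # P = p*(J - ticket) + (1 - p)*(-ticket) = p*J - ticket
    return p * (jackpot - ticket) + (1 - p) * (-ticket)

print(round(expected_profit(206_000_000), 2))  # ≈ -8.66: about -9 pesos
fair_jackpot = ticket / p                      # solve 0 = p*J - ticket
print(round(fair_jackpot))                     # → 487026540, roughly 488 million
```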

Julio conceded, but my not very mathematically inclined friends and students complained.

First of all, of course it would never reach that much! It had never been 206 million (let alone the current 240 million MXP), and someone was SURE to buy tickets until they won. This was a golden opportunity, you see. I replied that they were forgetting what expected profit means: the individual in question would most probably spend more money than the jackpot before winning, and if he kept at it, with the same jackpot, then on average, no matter how many times he won, he would still be losing money. Unless he got truly lucky and won the jackpot before he spent too much, as in buying the first few tickets, and then quit, they argued! I had to concede, but I also offered another solution: boycott this particular game of chance until the ticket cost descends to a price that makes the game fair. In that case, the jackpot would have to remain at 206 million (or 240 million this week), and the ticket price would have to go down to about 6 (and something) pesos.

Everyone groaned!  I was in effect suggesting not to play the game, but that was not it at all.  I was merely suggesting playing the game when circumstances were more favorable, or to go into the game with the mindset of not winning.  "The game is a game of chance you are sure to lose.  Buy the ticket for social reasons, because your friends are doing it, because you like the thrill of choosing 6 numbers... or what have you.  But not because you think you're lucky and you are sure you will win, because in effect it's exactly the opposite.  The National Lottery is smart!"  My statement and somewhat lopsided grin allowed my friends a way out.

They played, and lost.

Would it have been better if they had purchased 7, 8, 9, or 10 options? The answer is no. The fair price of an extra-option ticket is determined by the size of the fair jackpot, in this case about 488 million (based on the 15-peso 6-choice ticket), and is proportional to the probability of winning:

Example, determining the 7-choice fair price at 488 million (based on the 15-peso 6-choice ticket):

$0 = (\binom{7}{6} / \binom{56}{6}) \cdot (488,000,000 - T_7) + (1 - (\binom{7}{6} / \binom{56}{6})) \cdot (-T_7)$

Buying 7, 8, 9, or 10 options has an escalating negative expected profit at 206 million (at anything less than 488 million, really). It's like being penalized more and more harshly for wanting to better your chances of winning!

Example, determining the expected profit of the 7-choice ticket, having solved for $T_7$ above:

$P = (\binom{7}{6} / \binom{56}{6}) \cdot (206,000,000 - T_7) + (1 - (\binom{7}{6} / \binom{56}{6})) \cdot (-T_7)$
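Solving the two equations above numerically (a sketch; $T_7$ is the fair 7-choice price implied by the 15-peso base ticket):

```python
# Fair 7-choice ticket price at the fair jackpot, then its expected profit at
# the actual 206-million-peso jackpot (values in pesos).
from math import comb

p6 = 1 / comb(56, 6)
p7 = comb(7, 6) / comb(56, 6)     # 7 times the base win probability
fair_jackpot = 15 / p6            # about 487 million, the post's ~488 million

# 0 = p7*(J - T7) + (1 - p7)*(-T7)  =>  T7 = p7 * J
T7 = p7 * fair_jackpot
print(round(T7, 2))               # → 105.0: exactly 7 times the 15-peso ticket

profit = p7 * 206_000_000 - T7    # expected profit at the real jackpot
print(round(profit, 2))           # ≈ -60.59: seven times the 6-choice loss
```

So each extra option scales both the win probability and the expected loss by the same factor, which is exactly the escalating penalty described above.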

So you really are better off, and losing less money, if you play the 6-choice for fun (and actually as infrequently as possible). Since you are going to lose anyway, better to lose less money than more, is what I say!
