# Poker Video: Misc/Other by sthief09 (Micro/Small Stakes)

## Applied Math: Episode Two


### Applied Math: Episode Two by sthief09

Sthief09 continues working on EV tables to develop strategies based on opponent tendencies. This video focuses on creating and analyzing ranges, and gives viewers a better understanding of the true meaning behind buzzwords such as "balance", "exploitable", and "polarized." This video applies to all games and structures.

You know the numbers behind poker are important, but don't know where to start. Or maybe you know the basics of poker math but aren't able to apply them to improve your game. If either of these describes you, you're in the right place. For this series, sthief09 will teach you the basics and help you transform them into a way to test theories, try out alternative lines, and get a better understanding of the numbers that are the driving force behind poker.

### Video Details

• Game:
• Stakes: Micro/Small Stakes
• 71 minutes long
• Posted almost 2 years ago

## Comments for Applied Math: Episode Two


#### soleztis

DC Dalai Lama
1019 posts
Joined 09/2010

> F4 worked! thank you! that drove me nuts.

The student becomes the teacher. Haha. I couldn't help that one. Glad I could give a little something back. Awesome stuff so far.

#### duvelman

7 posts
Joined 02/2009

> Hey Josh, I think you'll find that your 13-card river-bet game has a Nash Equilibrium in which both players play an optimal, non-exploitable strategy. Your uncertainty as to whether playing the Psychology strategy (Option 2) goes on forever or stops at some point amounts to asking whether the Nash Equilibrium is stable (in which case it stops asymptotically) or unstable (in which case it goes on forever). My guess is that for this game it is stable, so the players should converge on the optimal non-exploitable strategy. Of course, if they were that good, both of them would have started at that strategy...

My intuition tells me that there is no stable Nash Equilibrium for this game. But that is probably because I don't directly see what the equilibrium strategies would be, and so I assume they are fairly complex, such that you won't end up with them simply by maximizing EV.
Could you give any motivation for your guess that it would be stable? Just out of curiosity; you don't have to feel obliged to answer ;-).

Josh, very interesting series btw!

#### sthief09

2131 posts
Joined 07/2007

> Hey Josh, I think you'll find that your 13-card river-bet game has a Nash Equilibrium in which both players play an optimal, non-exploitable strategy. Your uncertainty as to whether playing the Psychology strategy (Option 2) goes on forever or stops at some point amounts to asking whether the Nash Equilibrium is stable (in which case it stops asymptotically) or unstable (in which case it goes on forever). My guess is that for this game it is stable, so the players should converge on the optimal non-exploitable strategy. Of course, if they were that good, both of them would have started at that strategy...
>
> Good stuff with the spreadsheet tips. Being familiar with both Excel and OOC myself, I was already using locked cells and putting formulae in your tables, with Hero's card as a variable. Working out the formulae with MIN and MAX functions was a little tricky, but it is guaranteed to produce fewer errors than the hand-entered values you have been demonstrating. I understand, however, the need to keep things simple for the video - we are learning poker, not spreadsheets, after all!
>
> My main request for the rest of the series is to use the results to demonstrate how to exploit typical weaknesses in opponents, and to put numbers on it. Something beyond "they call too much so I should value bet more" and "they fold too much so I should bluff more". The mathematics of barreling multiple streets, and the optimal frequencies both for doing that and for defending against it, would also be interesting - I think that may show something about polarized vs. balanced ranges, but I'd like to see it in hard numbers.

I'm sorry I didn't respond to this sooner but honestly it took me reading it several times to fully digest. It's a very intriguing post. I need to start reading around about game theory. I'm sort of learning about it in reverse. I'm not getting any formal education about it, but by running the numbers I'm starting to understand the inner workings of it.

#### duvelman

7 posts
Joined 02/2009

I couldn't resist, so I quickly wrote some lines of code to see what would happen if the psychology option goes on forever. Apparently, it quickly starts alternating between a polarized range and pure value betting. The specific range and average EV depend a bit on which strategy you follow when Betting and Checking are neutral. Villain always ends up iterating between calling with a 5 or higher to bluff-catch on the one hand, and calling only with a Q or higher on the other.

#### sthief09

2131 posts
Joined 07/2007

> I couldn't resist, so I quickly wrote some lines of code to see what would happen if the psychology option goes on forever. Apparently, it quickly starts alternating between a polarized range and pure value betting. The specific range and average EV depend a bit on which strategy you follow when Betting and Checking are neutral. Villain always ends up iterating between calling with a 5 or higher to bluff-catch on the one hand, and calling only with a Q or higher on the other.

Wow, how did you do this?

#### WrongName

4 posts
Joined 03/2010

A Nash equilibrium should look like this:
- Player 2 has to call with exactly 50% of his range to make player 1 indifferent to bluffing. So player 2 calls 100% with A-9 and 50% with an 8.
- Now player 1 can profitably value bet with A, K, Q, and to balance this he bluffs a 2 100% of the time and a 3 50% of the time (the strategy you described in the video).

Neither player can increase his EV by changing his strategy without becoming exploitable again. Is this correct, or did I miss something?

#### duvelman

7 posts
Joined 02/2009

> Wow, how did you do this?

Well, I basically did the same thing you did in the spreadsheet, but in Matlab. I automated it so that it calculates the EV given a fold/call range for villain and constructs a new bet/check range for hero. I've also written a method that calculates a new range for villain given a range for hero, so I can iterate this as many times as I want. I could also post the Matlab files if you want.
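For readers who want to reproduce this, here is a minimal Python sketch of the same best-response iteration. This is not duvelman's Matlab code (which isn't posted); it assumes the game as discussed in the thread: pot = 1, bet = 1, each player holds one card of the 13 ranks (no ties), hero bets or checks, villain calls or folds, and checks go to showdown.

```python
# Best-response iteration for the 13-card river game (assumed rules:
# pot = 1, bet = 1, one card each, no ties, check goes to showdown).
RANKS = range(13)          # 0 = deuce ... 12 = ace
POT, BET = 1.0, 1.0

def hero_best_response(call):
    """For each hero card, bet (1.0) if betting beats checking, else check."""
    bet = {}
    for h in RANKS:
        others = [v for v in RANKS if v != h]
        # EV of checking: win the pot at showdown against lower cards
        ev_check = sum(POT for v in others if v < h) / len(others)
        # EV of betting: villain folds -> win pot; calls -> win pot+bet or lose bet
        ev_bet = 0.0
        for v in others:
            won = POT + BET if h > v else -BET
            ev_bet += (1 - call[v]) * POT + call[v] * won
        ev_bet /= len(others)
        # tie-break choice matters, as the thread discusses: here, check when neutral
        bet[h] = 1.0 if ev_bet > ev_check else 0.0
    return bet

def villain_best_response(bet):
    """For each villain card, call (1.0) if calling beats folding (EV 0)."""
    call = {}
    for v in RANKS:
        others = [h for h in RANKS if h != v]
        weight = sum(bet[h] for h in others)
        if weight == 0:
            call[v] = 0.0
            continue
        ev_call = sum(bet[h] * (POT + BET if v > h else -BET)
                      for h in others) / weight
        call[v] = 1.0 if ev_call > 0 else 0.0   # fold when neutral
    return call

# start villain calling everything, then iterate best responses
call = {v: 1.0 for v in RANKS}
for i in range(8):
    bet = hero_best_response(call)
    call = villain_best_response(bet)
    print(i, sorted(h for h in RANKS if bet[h]), sorted(v for v in RANKS if call[v]))
```

With these rules, hero's first response to an always-calling villain is to value bet only the top half of the deck, and the ranges then cycle between value-heavy and polarized, as described above. Different tie-break rules for the neutral hands change the details of the cycle.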

#### duvelman

7 posts
Joined 02/2009

> A Nash equilibrium should look like this:
> - Player 2 has to call with exactly 50% of his range to make player 1 indifferent to bluffing. So player 2 calls 100% with A-9 and 50% with an 8.
> - Now player 1 can profitably value bet with A, K, Q, and to balance this he bluffs a 2 100% of the time and a 3 50% of the time (the strategy you described in the video).
>
> Neither player can increase his EV by changing his strategy without becoming exploitable again. Is this correct, or did I miss something?

Mmm, I'm confused now. Why do you add "without becoming exploitable again"? Player 2's strategy is exploitable: he calls with his top 50%, so player 1 can switch to value betting only (namely A, K, Q), and his EV will be higher than with the strategy you've given. Of course, this strategy can be exploited as well, but that is exactly why, imo, there is no simple Nash equilibrium and the players keep iterating between value betting and playing a polarized range.

#### WrongName

4 posts
Joined 03/2010

You are right, forget the "without becoming exploitable again" part. But still, player 2's strategy is not exploitable: if player 1 stops bluffing, his EV will not go up, because bluffing against player 2 has 0 EV. He does have to bluff with his weakest hands, though, to prevent being exploited himself.
Maybe you could run your code with my strategies as the initial strategies. If I am right, the strategies shouldn't change no matter how many iterations you do.

#### duvelman

7 posts
Joined 02/2009

Ok, I added it to the page. But as indicated in my previous post, although Hero's range is unexploitable, he can exploit Villain's range by starting to value bet only A, K, Q. That is the difference between option 2 (the psychology option) and option 3 from Josh's video. In the simulations I did the same thing as in the video series: find the strategy with the highest EV. It is of course also possible to choose the strategy which is not exploitable, i.e. find the strategy whose EV is highest assuming villain plays the optimal counter-strategy. If I did that, then the strategies would indeed stay the same, but you don't need iterations for that, because this strategy is independent of the other player's current strategy. It might actually be interesting to compare the EV of that strategy with the average EV from option 2.

#### WrongName

4 posts
Joined 03/2010

Well, I finally found my mistake. I thought player 2 was unexploitable if the EV of bluffing for player 1 was zero. But because checking has some EV, we need to find a calling range for player 2 where bluffing with a '3' has the same EV as checking it. If that is the case, player 1 can choose to bluff with it 50% of the time and check 50% of the time, and so play a balanced strategy.

- The calling range for player 2 looks like this: A,K,Q,J,T (100%) and 9 (50%).
- Now I calculated the optimal counter-strategy for player 1: BET(2), NEUTRAL(3), CHECK(4,5,6,7,8,9,T,J), BET(Q,K,A)
- To be balanced he chooses: bet A,K,Q,2 (100%) and 3 (50%).
- Calculating the EV for player 2 again: CALL(A,K,Q), NEUTRAL(J,T,9,8,7,6,5,4), FOLD(3,2)
- Player 2 is unable to improve his EV, because all that matters is calling with A,K,Q and folding 3,2 (which he already does)
- Neither player can improve his EV -> Nash equilibrium
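The indifference claim behind this construction (that against the calling range A,K,Q,J,T (100%) and 9 (50%), bluffing a 3 has exactly the same EV as checking it) can be verified numerically. A hypothetical Python check, assuming the game as discussed in the thread (pot = 1, bet = 1, one card each, no ties):

```python
# Check that bluffing and checking a 3 tie against the proposed calling range.
RANKS = range(13)                  # 0 = deuce ... 12 = ace
call = {v: 0.0 for v in RANKS}
for v in (12, 11, 10, 9, 8):       # A, K, Q, J, T always call
    call[v] = 1.0
call[7] = 0.5                      # the 9 calls half the time

hero = 1                           # hero holds a 3
others = [v for v in RANKS if v != hero]
# checking: win the pot (1) at showdown against lower cards (only the 2)
ev_check = sum(1.0 for v in others if v < hero) / len(others)
# betting: fold -> win pot (1); call -> win pot+bet (2) or lose bet (1)
ev_bluff = sum((1 - call[v]) * 1.0 + call[v] * (2.0 if hero > v else -1.0)
               for v in others) / len(others)
print(ev_check, ev_bluff)          # both 1/12: hero is indifferent
```

Both EVs come out to 1/12, so player 1 really can mix the 3 however he likes against this range.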

#### duvelman

7 posts
Joined 02/2009

Yes, you are correct, but given that player 2 plays CALL(A,K,Q), NEUTRAL(J,T,9,8,7,6,5,4), FOLD(3,2), player 1 (hero) can improve his EV by starting to value bet again. Since villain calls down to a 4 50% of the time, there is no value in having a polarized range, so he will stop bluffing with the 2 (and with the 3 50% of the time) and will bet only A-J, plus a T 50% of the time.

I also checked it with the Matlab code, and it says the same. After this, the same pattern arises as with the other starting ranges, again switching between value betting and a polarized range.

#### WrongName

4 posts
Joined 03/2010

> Yes, you are correct, but given that player 2 plays CALL(A,K,Q), NEUTRAL(J,T,9,8,7,6,5,4), FOLD(3,2), player 1 (hero) can improve his EV by starting to value bet again. Since villain calls down to a 4 50% of the time, there is no value in having a polarized range, so he will stop bluffing with the 2 (and with the 3 50% of the time) and will bet only A-J, plus a T 50% of the time.

Wait, why do you say villain calls down to a 4 50% of the time? Villain/player 2 uses the unexploitable strategy of calling A,K,Q,J,T (100%) and 9 (50%). NEUTRAL doesn't mean he calls 50% of the time. NEUTRAL means calling and folding have the same EV, so it doesn't matter what he does against that SPECIFIC strategy of player 1. But if he wants to be unexploitable, he must choose a specific strategy (always calling J,T, calling 9 half the time, and folding the rest of the NEUTRAL hands). Of course player 2 could use other strategies against player 1 that have the same EV, but he chooses the one which is balanced, in order to keep himself from being exploited.

You can test it yourself. Player 2 starts with the strategy above (calling A,K,Q,J,T (100%) and 9 (50%)). Player 1 now wants to find a strategy that maximizes his EV against that strategy. The answer is: BET(A,K,Q,2), CHECK(J,T,9,8,7,6,5,4), NEUTRAL(3). Because it doesn't matter what he does with the '3', an infinite number of strategies exists; he chooses the one that makes him unexploitable (betting the '3' half the time). Now player 2 cannot improve his EV against the new strategy of player 1, because he already plays perfectly against it. The strategy CALL100%(A,K,Q), CALL50%(J,T,9,8,7,6,5,4), FOLD(3,2) would have the same EV as the one he started with, but would make him exploitable (as you described in your post).

I don't know how to explain it better, and English is not my first language. Maybe I made a mistake somewhere, but I doubt it.
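The "test it yourself" step above can be run as a short script: compute hero's bet-vs-check EV for every card against the calling range A,K,Q,J,T (100%) and 9 (50%). A hypothetical Python sketch, again assuming pot = 1, bet = 1, one card each, no ties:

```python
# Label hero's best response to the calling range A,K,Q,J,T (100%), 9 (50%).
RANKS = range(13)                       # 0 = deuce ... 12 = ace
call = {v: 0.0 for v in RANKS}
for v in (12, 11, 10, 9, 8):            # A, K, Q, J, T always call
    call[v] = 1.0
call[7] = 0.5                           # the 9 calls half the time

labels = {}
for h in RANKS:
    others = [v for v in RANKS if v != h]
    # checking: win the pot (1) at showdown against lower cards
    ev_check = sum(1.0 for v in others if v < h) / len(others)
    # betting: fold -> win pot (1); call -> win pot+bet (2) or lose bet (1)
    ev_bet = sum((1 - call[v]) * 1.0 + call[v] * (2.0 if h > v else -1.0)
                 for v in others) / len(others)
    diff = ev_bet - ev_check
    labels[h] = ("BET" if diff > 1e-12 else
                 "CHECK" if diff < -1e-12 else "NEUTRAL")
print(labels)
```

The output matches the post: BET for A, K, Q and the 2, CHECK for J down to 4, NEUTRAL for the 3.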

#### duvelman

7 posts
Joined 02/2009

Yes, of course. That was silly of me; I was just confused by the NEUTRAL for a second. Sometimes it helps to think ;-). I too thought at first that the strategy you've given would give a Nash equilibrium, but the outcome of the simulations confused me: I forgot that for villain I chose to call 50% when neutral, which of course he doesn't have to.

Thanks for the explanation!

#### AlephOne

9 posts
Joined 02/2009

> My intuition tells me that there is no stable Nash Equilibrium for this game. But that is probably because I don't directly see what the equilibrium strategies would be, and so I assume they are fairly complex, such that you won't end up with them simply by maximizing EV.
> Could you give any motivation for your guess that it would be stable? Just out of curiosity; you don't have to feel obliged to answer ;-).

Interesting stuff between you (duvelman) and WrongName, based on simulation and the actual poker strategies. I am probably stronger at maths than at poker, but further thought on the subject leaves me convinced that the Nash Equilibrium (which definitely exists) is either stable or neutral. The intuition comes from the linearity of all the terms involved. If the equilibrium is actually neutral, simulations will run into trouble, because the simulation can induce oscillations that don't exist in theory (but which do, of course, exist in reality, because real players really are imperfect simulations of the theory). By the way, the linearity I am talking about disappears in real poker, because the ranking of hands is (non-linearly) altered from street to street.

Back at university I did some great simulations of simple predator-prey relationships, which showed beautiful boom-bust cycles that are (mathematically) chaotic in nature. Reminds me of poker a lot! Especially tournament poker :-(
