Glider
Jan Varho

7 Card Poker Hand Evaluation

Tags: BlitzMax, C, card, data, evaluation, graph, hand, poker

Over the past few days I've looked at poker hand evaluation, specifically 7-card evaluation for games like Texas Hold'em. What I did is based on three evaluators I've studied: Cactus Kev's 5-card evaluator, Paul Senzee's 7-card hash function and the "2+2 evaluator" by Ray Wotton et al.

First a little background: there are (52 choose 7) = 133,784,560, or about 134 million, 7-card combinations, but only 7462 distinct hand values as ranked by Kevin Suffecool in the above link. The problem at hand is getting from a representation of seven individual cards to a value that can be compared to other hands - and doing that fast.

Of the three evaluators, I especially like what Paul Senzee did: he created a perfect minimal hash function that maps a 52-bit integer with seven bits set to a unique index. This allows a single table lookup into a 134-million-element table of hand values. The downsides are that the table is huge and that the hash function itself is slow in comparison.
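I won't reproduce Senzee's actual function here, but the same kind of mapping can be sketched with the combinatorial number system, which also sends a 52-bit mask with seven bits set to a unique index in [0, (52 choose 7)). The function names below are mine, purely for illustration:

    #include <stdint.h>

    /* C(n, k) computed incrementally; every intermediate value is
       itself a binomial coefficient, so the division is exact. */
    static uint64_t choose(int n, int k)
    {
        if (k < 0 || k > n) return 0;
        uint64_t r = 1;
        for (int i = 1; i <= k; i++)
            r = r * (n - k + i) / i;
        return r;
    }

    /* Map a 52-bit mask with exactly seven bits set to a unique
       index in 0 .. 133,784,559. */
    uint64_t hand_index(uint64_t mask)
    {
        uint64_t idx = 0;
        int k = 1;
        for (int bit = 0; bit < 52; bit++)
            if (mask & (1ULL << bit))
                idx += choose(bit, k++);
        return idx;
    }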

The "Cactus Kev" evaluator is a 5-card evaluator, but can naively be used to find the maximum of the 21 5-card hands in a 7-card hand. That is of course even slower, but can be used in the table generation phase of another evaluator. His code can also most easily be pasted in to another project. The code files are available at the link I gave.

In practical use, the 2+2 evaluator works like magic: seven table lookups and you are done. However, the table generation code is a mess and the table is rather large at over 100 MB. In reality, the single table represents a directed acyclic graph of seven layers, with each link representing a card. This is as close as one can come to "solving" 7-card poker.
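For reference, the entire run-time side of the 2+2 evaluator is just this table walk; a sketch assuming the table HR[] is loaded in memory and cards are numbered 1..52 as in the original code:

    /* Seven lookups: each card follows one edge of the DAG. */
    int lookup_7cards(const int *HR, const int cards[7])
    {
        int p = HR[53 + cards[0]];
        for (int i = 1; i < 7; i++)
            p = HR[p + cards[i]];
        return p;   /* encodes both hand category and rank within it */
    }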

Constructing a graph for a card sequence would mean 52!/(52-n)! nodes at the nth layer - about 674 billion at layer 7. However, the order does not matter in poker hands. The idea is the same as behind Paul Senzee's method: the nodes representing Ah7s and 7sAh can be joined. Do the math and you again have (52 choose n) nodes at layer n.

However, there are also other nodes that are equivalent. The trick to the 2+2 evaluator is in the MakeID function of the code I linked to: suits don't always matter! For example, take the four-card hand AcKdQhJs. None of the suits matter, because a flush is no longer possible. Joining such nodes results in dramatic space savings.
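The test behind that observation is simple; something like the following decides whether suits can still matter (my own sketch, not the MakeID code itself):

    /* After 'seen' cards of an eventual 7-card hand, suits only matter
       if some suit can still reach the five cards needed for a flush. */
    int flush_still_possible(const int suit_count[4], int seen)
    {
        int to_come = 7 - seen;
        for (int s = 0; s < 4; s++)
            if (suit_count[s] + to_come >= 5)
                return 1;
        return 0;
    }

For AcKdQhJs the suit counts are 1/1/1/1 with three cards to come, so no suit can reach five and the function returns 0.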

If it were a 5-card evaluator, I believe the 2+2 evaluator (a bit tweaked) would generate an optimal DAG representation of the problem. As it is, however, it ignores some redundancy: take any made flush - the off-suit cards no longer matter. Similarly, some other made hands have redundancies. A more significant waste of memory comes from concatenating all the tables into one.

The way I would like to generate the DAG is this: first, create it with (52 choose n) nodes at layer n, taking all cards into account; second, apply a generic procedure to join any nodes that can be joined. This would again result in an optimal DAG, though the representation can still be improved. Keeping the layers as separate tables allows more efficient storage of those tables that don't require a full 32-bit integer for their values.
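The joining criterion is easy to state: two nodes in the same layer are equivalent if, card for card, their edges lead to successors that have already been merged. A naive pairwise pass might use a test like this (a sketch, assuming each node stores 52 successor indices):

    /* Two same-layer nodes can be merged when every card leads to the
       same (already canonicalized) successor node. */
    int nodes_equivalent(const int succ_a[52], const int succ_b[52])
    {
        for (int card = 0; card < 52; card++)
            if (succ_a[card] != succ_b[card])
                return 0;
        return 1;
    }

Comparing all pairs within a layer this way is exactly what makes the pass quadratic, as discussed below.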

Unfortunately, the final layer would at first require several gigabytes (12?), so that is not yet practical, and I had to keep the 2+2 suit hack in, ugly though it may be. After that, layers 1 to 5 can be optimized no further, but the last two can (on the order of ten percent). As far as I can tell, the algorithm for doing that has to be at least quadratic in the number of nodes. I did the straightforward thing, and it is indeed slow, but it only has to be run once.

Storing tables 2-6 using 32-bit Ints and table 7 using 16-bit Shorts, the total space taken is 80 MB, in contrast to over 120 MB for the original 2+2 evaluator. The first table simply contains the index multiplied by 52, so it isn't really needed. (It annoys me greatly that table 2 almost fits in Shorts.) The 7-zip compressed file I've attached contains the tables, which can be used from any programming language, and samples in C and BlitzMax. I'll publish the generating code once it looks presentable.
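With separate layers the lookup changes only slightly. The sketch below assumes 0-based card indices and that each stored entry is a premultiplied offset into the next layer, which is what lets table 1 reduce to a multiply; the array names are mine:

    /* Lookup with per-layer tables: t2..t6 hold 32-bit offsets,
       t7 holds the final 16-bit hand values. */
    unsigned short eval7(const unsigned *t2, const unsigned *t3,
                         const unsigned *t4, const unsigned *t5,
                         const unsigned *t6, const unsigned short *t7,
                         const int c[7])
    {
        unsigned p = (unsigned)c[0] * 52;   /* table 1 is just index*52 */
        p = t2[p + c[1]];
        p = t3[p + c[2]];
        p = t4[p + c[3]];
        p = t5[p + c[4]];
        p = t6[p + c[5]];
        return t7[p + c[6]];
    }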

I'm rather confident an array-based DAG is the fastest way to do an exhaustive iteration of hands in order - minimal testing shows all 134 million ordered 7-card hands can be evaluated in less than a second. However, I'm unsure whether such a heavily memory-dependent algorithm is the best choice for a Monte Carlo simulation. It would seem that a random evaluation order favors something with fewer lookups and thus fewer cache misses. After all, tables 5-7 are each too large to fit in current-generation processor caches of up to a few MB (my previous-generation dual-core has 3 MB total).
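The in-order iteration gets its speed from sharing lookups between hands with a common prefix: each partial result is computed once per prefix rather than once per hand. A sketch, using the same hypothetical per-layer tables as above:

    /* Enumerate all (52 choose 7) hands in order, reusing partial
       lookups; the values are just summed to keep the example short. */
    unsigned long long enumerate_all(const unsigned *t2, const unsigned *t3,
                                     const unsigned *t4, const unsigned *t5,
                                     const unsigned *t6,
                                     const unsigned short *t7)
    {
        unsigned long long sum = 0;
        for (int a = 0; a < 46; a++) {
            unsigned pa = (unsigned)a * 52;
            for (int b = a + 1; b < 47; b++) {
                unsigned pb = t2[pa + b];
                for (int c = b + 1; c < 48; c++) {
                    unsigned pc = t3[pb + c];
                    for (int d = c + 1; d < 49; d++) {
                        unsigned pd = t4[pc + d];
                        for (int e = d + 1; e < 50; e++) {
                            unsigned pe = t5[pd + e];
                            for (int f = e + 1; f < 51; f++) {
                                unsigned pf = t6[pe + f];
                                for (int g = f + 1; g < 52; g++)
                                    sum += t7[pf + g];
                            }
                        }
                    }
                }
            }
        }
        return sum;
    }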

Download: eval7.7z (7.8 MB 7-zip compressed archive)
License: public domain (none)