THE GTO LAB

Visualizing Nash Equilibrium through CFR

Total Iterations: 0
Simulation Speed: 10x

Current Average Strategy

Situation (Card + Context) | Pass / Fold | Bet / Call

Convergence Chart

Tracking how key decision points stabilize toward equilibrium

What am I looking at?

This simulation uses CFR (Counterfactual Regret Minimization). The AI plays millions of games against itself. At first, its play is purely random. After each game, it records "regret" for the actions it didn't take — how much better off it would have been had it chosen them. Over time, it minimizes this regret by shifting probability toward the actions that performed better, eventually settling on a Nash Equilibrium: a strategy that no opponent can exploit.
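The page doesn't show its implementation, but the loop it describes can be sketched in a few dozen lines. Below is a minimal, self-contained version of vanilla CFR with regret matching on Kuhn poker — a standard three-card toy game whose two actions map onto the Pass/Fold and Bet/Call columns above. The `Node` class, infoset keys, and iteration count are illustrative choices, not taken from the page.

```python
from itertools import permutations

NUM_ACTIONS = 2  # 0 = pass/check/fold, 1 = bet/call
nodes = {}       # infoset key (card + betting history) -> Node

class Node:
    """Accumulates regret and the running average strategy for one infoset."""
    def __init__(self):
        self.regret_sum = [0.0] * NUM_ACTIONS
        self.strategy_sum = [0.0] * NUM_ACTIONS

    def strategy(self, reach_weight):
        # Regret matching: play each action in proportion to its positive regret.
        pos = [max(r, 0.0) for r in self.regret_sum]
        total = sum(pos)
        strat = [p / total for p in pos] if total > 0 else [1.0 / NUM_ACTIONS] * NUM_ACTIONS
        for a in range(NUM_ACTIONS):
            self.strategy_sum[a] += reach_weight * strat[a]
        return strat

    def avg_strategy(self):
        # The *average* strategy over all iterations converges to equilibrium.
        total = sum(self.strategy_sum)
        return [s / total for s in self.strategy_sum] if total > 0 else [0.5, 0.5]

def cfr(cards, history, p0, p1):
    """Returns expected utility for the player to act; p0/p1 are reach probabilities."""
    player = len(history) % 2
    if len(history) > 1:  # terminal payoffs
        higher = cards[player] > cards[1 - player]
        if history[-1] == 'p':
            return (1 if higher else -1) if history == 'pp' else 1  # showdown / fold
        if history[-2:] == 'bb':
            return 2 if higher else -2  # called bet: showdown for 2
    node = nodes.setdefault(str(cards[player]) + history, Node())
    strat = node.strategy(p0 if player == 0 else p1)
    util, node_util = [0.0] * NUM_ACTIONS, 0.0
    for a in range(NUM_ACTIONS):
        nxt = history + ('p' if a == 0 else 'b')
        if player == 0:
            util[a] = -cfr(cards, nxt, p0 * strat[a], p1)
        else:
            util[a] = -cfr(cards, nxt, p0, p1 * strat[a])
        node_util += strat[a] * util[a]
    for a in range(NUM_ACTIONS):
        # Counterfactual regret: weighted by the opponent's reach probability.
        node.regret_sum[a] += (p1 if player == 0 else p0) * (util[a] - node_util)
    return node_util

def train(iterations):
    deals = list(permutations([1, 2, 3], 2))  # enumerate all Jack/Queen/King deals
    total = 0.0
    for _ in range(iterations):
        for cards in deals:
            total += cfr(cards, '', 1.0, 1.0) / len(deals)
    return total / iterations

value = train(5000)
print(f"average game value for player 1: {value:+.4f}")  # approaches -1/18
```

The known game value of Kuhn poker is -1/18 for the first player, so watching `value` drift toward -0.0556 is exactly the kind of convergence the chart above plots; likewise, `nodes['1b']` (Jack facing a bet) converges to always folding, a cell the strategy table would display.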