Approximation guarantees for fictitious play
Fictitious play is a simple, well-known, and often-used algorithm for playing (and, especially, learning to play) games. However, in general it does not converge to equilibrium; even when it does, we may not be able to run it to convergence. Still, we may obtain an approximate equilibrium. In this paper, we study the approximation properties that fictitious play obtains when it is run for a limited number of rounds. We show that if both players randomize uniformly over their actions in the first r rounds of fictitious play, then the result is an ε-equilibrium, where ε = (r + 1)/(2r). (Since we are examining only a constant number of pure strategies, we know that ε < 1/2 is impossible, due to a result of Feder et al.) We show that this bound is tight in the worst case; however, with an experiment on random games, we illustrate that fictitious play usually obtains a much better approximation. We then consider the possibility that the players fail to choose the same r. We show how to obtain the optimal approximation guarantee when both the opponent's r and the game are adversarially chosen (but there is an upper bound R on the opponent's r), using a linear program formulation. We show that if the action played in the ith round of fictitious play is chosen with probability proportional to 1 for i = 1 and to 1/(i − 1) for all 2 ≤ i ≤ R + 1, this gives an approximation guarantee of 1 − 1/(2 + ln R). We also obtain a lower bound of 1 − 4/ln R. This provides an actionable prescription for how long to run fictitious play. ©2009 IEEE.
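The procedure behind the first result can be sketched in code: run fictitious play for r rounds (each player best-responding to the opponent's empirical play so far), output the uniform mixture over each player's first r actions, and measure the resulting ε as the larger of the two players' regrets. This is an illustrative sketch, not the paper's own code; the function names, the random tie-free first action, and the [0, 1] payoff normalization (which the ε = (r + 1)/(2r) guarantee presumes) are assumptions of this sketch.

```python
import numpy as np

def fictitious_play(A, B, r, seed=0):
    """Run r rounds of fictitious play on the bimatrix game (A, B),
    where A[i, j] and B[i, j] are the row and column players' payoffs.
    Round 1 is an arbitrary (here: random) action; in each later round
    each player best-responds to the opponent's empirical mixture.
    Returns the uniform mixtures over the actions played in the r rounds."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_counts, col_counts = np.zeros(m), np.zeros(n)
    i, j = rng.integers(m), rng.integers(n)  # arbitrary first actions
    row_counts[i] += 1
    col_counts[j] += 1
    for _ in range(r - 1):
        # Best respond to the opponent's empirical distribution so far.
        i = int(np.argmax(A @ (col_counts / col_counts.sum())))
        j = int(np.argmax((row_counts / row_counts.sum()) @ B))
        row_counts[i] += 1
        col_counts[j] += 1
    return row_counts / r, col_counts / r

def epsilon(A, B, x, y):
    """Max gain either player gets by deviating from (x, y) to a best
    response; (x, y) is an eps-equilibrium for exactly this eps."""
    row_regret = np.max(A @ y) - x @ A @ y
    col_regret = np.max(x @ B) - x @ B @ y
    return max(row_regret, col_regret)

# Matching pennies with payoffs normalized to [0, 1].
A = np.array([[1.0, 0.0], [0.0, 1.0]])
B = 1.0 - A
x, y = fictitious_play(A, B, r=10)
print(epsilon(A, B, x, y))  # should not exceed (r + 1)/(2r) = 0.55
```

On games like this, the empirical ε is typically far below the worst-case bound, matching the random-game experiments described above.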
2009 47th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2009