
Coordinating Followers to Reach Better Equilibria: End-to-End Gradient Descent for Stackelberg Games

Publication, Conference
Wang, K; Xu, L; Perrault, A; Reiter, MK; Tambe, M
Published in: Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022
June 30, 2022

A growing body of work in game theory extends the traditional Stackelberg game to settings with one leader and multiple followers who play a Nash equilibrium. Standard approaches for computing equilibria in these games reformulate the followers' best response as constraints in the leader's optimization problem. These reformulation approaches can sometimes be effective, but they make limiting assumptions on the followers' objectives and on the equilibrium reached by the followers, e.g., uniqueness, optimism, or pessimism. To overcome these limitations, we run gradient descent to update the leader's strategy by differentiating through the equilibrium reached by the followers. Our approach generalizes to any stochastic equilibrium selection procedure that chooses among multiple equilibria: we compute a stochastic gradient by back-propagating through a sampled Nash equilibrium, and we use the solution to a partial differential equation to establish that this stochastic gradient is unbiased. Using the unbiased gradient estimate, we implement the gradient-based approach to solve three Stackelberg problems with multiple followers, where it consistently outperforms existing baselines and achieves higher utility for the leader.
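The abstract describes the leader's problem as end-to-end gradient descent through the followers' equilibrium. As a rough illustration of that idea only (not the paper's algorithm, which further handles stochastic equilibrium selection and proves the sampled gradient is unbiased), here is a minimal Python/JAX sketch for a toy two-follower quadratic game with made-up payoff coefficients, where autodiff back-propagates through the equilibrium computation:

```python
# Minimal sketch (illustrative only, not the authors' implementation):
# the leader's utility depends on the followers' Nash equilibrium, and the
# leader's gradient is obtained by differentiating through the equilibrium.
import jax
import jax.numpy as jnp

def follower_best_responses(y, x):
    """Best responses of two followers in a toy quadratic game.

    Each follower's cost is quadratic in its own action, so its best response
    is an affine function of the leader's incentive x and the other follower's
    action; the fixed point of these maps is the followers' Nash equilibrium.
    All coefficients below are arbitrary illustrative choices.
    """
    y1 = 0.5 * x[0] + 0.3 * y[1]
    y2 = 0.4 * x[1] + 0.2 * y[0]
    return jnp.array([y1, y2])

def nash_equilibrium(x, num_iters=100):
    """Compute the followers' equilibrium by iterating best responses.

    The iteration is written in pure JAX, so reverse-mode autodiff can
    back-propagate through it, yielding d(equilibrium)/d(leader strategy).
    """
    y = jnp.zeros(2)
    for _ in range(num_iters):
        y = follower_best_responses(y, x)
    return y

def leader_utility(x):
    """Leader's utility: benefit from the induced equilibrium actions
    minus a quadratic cost for the incentives handed out."""
    y = nash_equilibrium(x)
    return jnp.sum(y) - 0.1 * jnp.sum(x ** 2)

# End-to-end gradient ascent on the leader's strategy through the equilibrium.
grad_fn = jax.grad(leader_utility)
x = jnp.array([1.0, 1.0])
for step in range(200):
    x = x + 0.1 * grad_fn(x)

print("leader strategy:", x)
print("induced equilibrium:", nash_equilibrium(x))
```

In this toy game the best-response map is a contraction, so the fixed-point iteration converges and differentiating through the unrolled iterations gives the exact equilibrium gradient; the paper's setting is harder because the followers may have multiple equilibria selected stochastically.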


Published In

Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022

ISBN

9781577358763

Publication Date

June 30, 2022

Volume

36

Start / End Page

5219 / 5227
 

Citation

APA
Wang, K., Xu, L., Perrault, A., Reiter, M. K., & Tambe, M. (2022). Coordinating Followers to Reach Better Equilibria: End-to-End Gradient Descent for Stackelberg Games. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022 (Vol. 36, pp. 5219–5227).

Chicago
Wang, K., L. Xu, A. Perrault, M. K. Reiter, and M. Tambe. “Coordinating Followers to Reach Better Equilibria: End-to-End Gradient Descent for Stackelberg Games.” In Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022, 36:5219–27, 2022.

ICMJE
Wang K, Xu L, Perrault A, Reiter MK, Tambe M. Coordinating Followers to Reach Better Equilibria: End-to-End Gradient Descent for Stackelberg Games. In: Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022. 2022. p. 5219–27.

MLA
Wang, K., et al. “Coordinating Followers to Reach Better Equilibria: End-to-End Gradient Descent for Stackelberg Games.” Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022, vol. 36, 2022, pp. 5219–27.

NLM
Wang K, Xu L, Perrault A, Reiter MK, Tambe M. Coordinating Followers to Reach Better Equilibria: End-to-End Gradient Descent for Stackelberg Games. Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022. 2022. p. 5219–5227.
