Smooth Fictitious Play in Stochastic Games with Perturbed Payoffs and Unknown Transitions



Baudin, L and Laraki, R ORCID: 0000-0002-4898-2424
(2022) Smooth Fictitious Play in Stochastic Games with Perturbed Payoffs and Unknown Transitions. In: Thirty-sixth Conference on Neural Information Processing Systems: NeurIPS, 2022-11-28 - 2022-12-09.

PDF: LarakiBaudinNeurips2022.pdf - Submitted version

Abstract

Recent extensions to dynamic games (Leslie et al. [2020], Sayin et al. [2020], Baudin and Laraki [2022]) of the well-known fictitious play learning procedure in static games were proved to converge globally to stationary Nash equilibria in two important classes of dynamic games: zero-sum and identical-interest discounted stochastic games. However, those decentralized algorithms require the players to know the model exactly (the transition probabilities and their payoffs at every stage). To relax these strong assumptions, our paper introduces regularizations of the systems in Leslie et al. [2020] and Baudin and Laraki [2022] to construct a family of new decentralized learning algorithms which are model-free (players do not know the transitions, and their payoffs are perturbed at every stage). Our procedures can be seen as extensions to stochastic games of the classical smooth fictitious play learning procedures in static games, in which the players' best responses are regularized by a smooth, strictly concave perturbation of their payoff functions. We prove the convergence of our family of procedures to stationary regularized Nash equilibria in zero-sum and identical-interest discounted stochastic games. The proof uses the continuous-time smooth best-response dynamics counterparts together with stochastic approximation methods. When there is only one player, our problem is an instance of reinforcement learning, and our procedures are proved to converge globally to the optimal stationary policy of the regularized MDP. In that sense, they can be seen as an alternative to the well-known Q-learning procedure.
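To illustrate the static-game building block the abstract refers to, here is a minimal sketch of smooth fictitious play in a zero-sum matrix game, where an entropy perturbation of the payoffs yields a logit (softmax) best response. This is not the paper's stochastic-game algorithm; the payoff matrix, temperature `tau`, and iteration count are illustrative assumptions.

```python
import numpy as np

# Zero-sum matrix game (matching pennies): row player receives A[i, j],
# column player receives -A[i, j].
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def logit_br(payoffs, tau):
    """Smooth best response induced by an entropy perturbation (softmax)."""
    z = payoffs / tau
    z -= z.max()              # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

tau = 0.1                     # regularization temperature (assumed value)
x = np.array([1.0, 0.0])      # row player's empirical mixed strategy
y = np.array([1.0, 0.0])      # column player's empirical mixed strategy

for t in range(1, 20001):
    # Each player smooth-best-responds to the opponent's empirical average.
    bx = logit_br(A @ y, tau)
    by = logit_br(-(A.T @ x), tau)
    # Fictitious-play averaging with step size 1/t.
    x = x + (bx - x) / t
    y = y + (by - y) / t

# By symmetry, the regularized equilibrium of matching pennies is uniform,
# so both empirical strategies should approach (1/2, 1/2).
print(x, y)
```

In the stochastic-game setting of the paper, an analogous smooth best response is applied per state while continuation values are estimated online, but the logit map above captures the "smooth, strictly concave perturbation" mechanism in its simplest form.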

Item Type: Conference or Workshop Item (Unspecified)
Depositing User: Symplectic Admin
Date Deposited: 01 Nov 2022 08:53
Last Modified: 14 Jul 2023 22:08
URI: https://livrepository.liverpool.ac.uk/id/eprint/3165891