Robust Market Making via Adversarial Reinforcement Learning



Spooner, Thomas ORCID: 0000-0002-1732-7582 and Savani, Rahul ORCID: 0000-0003-1262-7831
(2020) Robust Market Making via Adversarial Reinforcement Learning. In: Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI-20), 2020-07-11 - 2020-07-17.


Abstract

We show that adversarial reinforcement learning (ARL) can be used to produce market making agents that are robust to adversarial and adaptively-chosen market conditions. To apply ARL, we turn the well-studied single-agent model of Avellaneda and Stoikov [2008] into a discrete-time zero-sum game between a market maker and adversary. The adversary acts as a proxy for other market participants that would like to profit at the market maker's expense. We empirically compare two conventional single-agent RL agents with ARL, and show that our ARL approach leads to: 1) the emergence of risk-averse behaviour without constraints or domain-specific penalties; 2) significant improvements in performance across a set of standard metrics, evaluated with or without an adversary in the test environment; and 3) improved robustness to model uncertainty. We empirically demonstrate that our ARL method consistently converges, and we prove for several special cases that the profiles that we converge to correspond to Nash equilibria in a simplified single-stage game.
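To illustrate the zero-sum setup the abstract describes, here is a minimal, hypothetical Python sketch of alternating adversarial training in a simplified Avellaneda-Stoikov-style market. All names and parameters (episode, value, fill_scale, the drift action, etc.) are illustrative assumptions, not the authors' implementation: the market maker adjusts a quoted half-spread to maximise PnL while the adversary picks the mid-price drift to minimise it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simplified, hypothetical Avellaneda-Stoikov-style episode: the market
# maker quotes a symmetric half-spread `delta` around the mid-price; the
# adversary chooses the mid-price drift `b` to hurt the market maker.
def episode(delta, b, T=100, sigma=1.0, fill_scale=1.5):
    cash, inventory, mid = 0.0, 0, 100.0
    for _ in range(T):
        mid += b + sigma * rng.normal()
        # Fill probability decays exponentially with the quoted
        # half-spread, in the spirit of the A-S arrival model.
        p_fill = np.exp(-fill_scale * delta)
        if rng.random() < p_fill:   # a buyer lifts our ask
            cash += mid + delta
            inventory -= 1
        if rng.random() < p_fill:   # a seller hits our bid
            cash -= mid - delta
            inventory += 1
    return cash + inventory * mid   # mark-to-market PnL

# Monte Carlo estimate of the market maker's expected PnL for a profile.
def value(delta, b, n_eval=200):
    return np.mean([episode(delta, b) for _ in range(n_eval)])

# Zero-sum alternating updates via crude finite-difference gradients:
# the market maker ascends the value, the adversary descends it.
delta, b = 0.5, 0.0
lr, eps = 0.01, 0.05
for _ in range(50):
    g_mm = (value(delta + eps, b) - value(delta - eps, b)) / (2 * eps)
    delta = float(np.clip(delta + lr * g_mm, 0.01, 2.0))
    g_adv = (value(delta, b + eps) - value(delta, b - eps)) / (2 * eps)
    b = float(np.clip(b - lr * g_adv, -0.5, 0.5))

print(f"half-spread={delta:.3f}, adversarial drift={b:.3f}")
```

In this toy version the two finite-difference steps stand in for the RL policy updates of the paper; under the paper's framing, the fixed point of such alternating play corresponds to a Nash equilibrium of the stage game in the special cases the authors analyse.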

Item Type: Conference or Workshop Item (Unspecified)
Depositing User: Symplectic Admin
Date Deposited: 18 Jan 2021 09:25
Last Modified: 15 Mar 2024 16:40
DOI: 10.24963/ijcai.2020/633
Open Access URL: https://www.ijcai.org/Proceedings/2020/0633.pdf
URI: https://livrepository.liverpool.ac.uk/id/eprint/3113894