Robust Temporal Difference Learning for Critical Domains



Klima, Richard, Bloembergen, Daan, Kaisers, Michael and Tuyls, Karl
(2019) Robust Temporal Difference Learning for Critical Domains. AAMAS '19: Proceedings of the 18th International Conference on Autonomous Agents and Multiagent Systems, 1, pp. 350-358.

1901.08021v1.pdf - Author Accepted Manuscript (682kB)
Abstract

We present a new Q-function operator for temporal difference (TD) learning methods that explicitly encodes robustness against significant rare events (SREs) in critical domains. The operator, which we call the $\kappa$-operator, allows a robust policy to be learned in a model-based fashion without actually observing the SRE. We introduce single- and multi-agent robust TD methods based on the $\kappa$-operator. Using the theory of Generalized Markov Decision Processes, we prove convergence of the operator to the optimal robust Q-function with respect to the model. In addition, we prove convergence to the optimal Q-function of the original MDP given that the probability of SREs vanishes. Empirical evaluations demonstrate the superior performance of $\kappa$-based TD methods both in the early learning phase and in the final converged stage. We further show the robustness of the proposed method to small model errors, as well as its applicability in a multi-agent context.
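For readers skimming the record, the sketch below (in Python, for a tabular Q-table) illustrates one way a $\kappa$-weighted TD backup of the kind described in the abstract can be written: the TD target blends the standard greedy backup with a pessimistic worst-case backup, weighted by $\kappa$. This is an illustrative reading of the abstract only; the function names, variables, and exact form are assumptions, not the paper's definitive formulation.

    import numpy as np

    def kappa_td_target(Q, next_state, reward, gamma, kappa):
        # Robust TD target: convex mix of the greedy (best-case) backup and a
        # pessimistic (worst-case) backup over next-state action values.
        # Illustrative reading of the kappa-operator; see the paper for the exact definition.
        best = np.max(Q[next_state])   # standard greedy backup
        worst = np.min(Q[next_state])  # worst-case backup encoding the rare event
        return reward + gamma * ((1.0 - kappa) * best + kappa * worst)

    def kappa_q_update(Q, state, action, reward, next_state, alpha, gamma, kappa):
        # One tabular Q-learning step using the robust target above (hypothetical helper).
        target = kappa_td_target(Q, next_state, reward, gamma, kappa)
        Q[state, action] += alpha * (target - Q[state, action])
        return Q

    # Example usage on a toy 3-state, 2-action table:
    Q = np.zeros((3, 2))
    Q = kappa_q_update(Q, state=0, action=1, reward=1.0, next_state=2,
                       alpha=0.1, gamma=0.95, kappa=0.2)

Setting kappa=0 recovers the ordinary greedy TD target in this sketch, while larger kappa weights the worst-case backup more heavily, which matches the abstract's claim that robustness is encoded without actually observing the SRE.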

Item Type: Article
Additional Information: AAMAS 2019
Uncontrolled Keywords: reinforcement learning, robust learning, multi-agent learning
Depositing User: Symplectic Admin
Date Deposited: 29 Jan 2019 16:23
Last Modified: 19 Jan 2023 01:05
Related URLs:
URI: https://livrepository.liverpool.ac.uk/id/eprint/3031861