Good-for-MDPs Automata for Probabilistic Analysis and Reinforcement Learning

Hahn, Ernst Moritz ORCID: 0000-0002-9348-7684, Perez, Mateo ORCID: 0000-0003-4220-3212, Schewe, Sven ORCID: 0000-0002-9093-9518, Somenzi, Fabio ORCID: 0000-0002-2085-2003, Trivedi, Ashutosh ORCID: 0000-0001-9346-0126 and Wojtczak, Dominik ORCID: 0000-0001-5560-0546
(2020) Good-for-MDPs Automata for Probabilistic Analysis and Reinforcement Learning. In: TACAS 2020, 25-30 April 2020, Dublin, Ireland.

Good-for-MDPs_Automata_for_Probabilistic_Analysis_and_Reinforcement_Learning.pdf - Author Accepted Manuscript

Abstract

We characterize the class of nondeterministic ω-automata that can be used for the analysis of finite Markov decision processes (MDPs). We call these automata ‘good-for-MDPs’ (GFM). We show that GFM automata are closed under classic simulation as well as under more powerful simulation relations that leverage properties of optimal control strategies for MDPs. This closure enables us to exploit state-space reduction techniques, such as those based on direct and delayed simulation, that guarantee simulation equivalence. We demonstrate the promise of GFM automata by defining a new class of automata with favorable properties—they are Büchi automata with low branching degree obtained through a simple construction—and show that going beyond limit-deterministic automata may significantly benefit reinforcement learning.

Item Type: Conference or Workshop Item (Unspecified)
Uncontrolled Keywords: 46 Information and Computing Sciences, 4602 Artificial Intelligence, 4611 Machine Learning
Date Deposited: 02 Mar 2020 13:57
Last Modified: 20 Jun 2024 20:24
DOI: 10.1007/978-3-030-45190-5_17
URI: https://livrepository.liverpool.ac.uk/id/eprint/3077440