Reward Shaping for Reinforcement Learning with Omega-Regular Objectives



Hahn, EM, Perez, M, Schewe, S ORCID: 0000-0002-9093-9518, Somenzi, F, Trivedi, A and Wojtczak, D ORCID: 0000-0001-5560-0546
(2020) Reward Shaping for Reinforcement Learning with Omega-Regular Objectives.

2001.05977v1.pdf - Submitted version


Abstract

Recently, good-for-MDPs automata (Büchi automata with a restricted form of nondeterminism) have been successfully exploited for model-free reinforcement learning; this class of automata subsumes good-for-games automata and the most widespread class of limit-deterministic automata. The foundation for using these Büchi automata is that, for good-for-MDPs automata, the Büchi condition can be translated to a reachability condition. The drawback of this translation is that rewards are, on average, reaped very late, which requires long episodes during the learning process. We devise a new reward shaping approach that overcomes this issue. We show that the resulting model is equivalent to a discounted payoff objective with a biased discount, which simplifies and improves on prior work in this direction.
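The biased-discount idea from the abstract can be illustrated with a minimal tabular Q-learning sketch. This is an assumption-laden toy, not the authors' exact construction: it uses a hypothetical 3-state product MDP, and the shaping rule that accepting (Büchi) transitions pay reward 1 − γ_B and are discounted by γ_B, while all other transitions pay 0 and are discounted by the ordinary γ. All names and parameter values (`gamma_b`, `ACCEPTING`, episode counts) are illustrative choices, not taken from the paper.

```python
import random

# Toy product MDP, a minimal sketch of the reward-shaping idea (NOT the
# paper's exact construction). States 0..2; action 0 = stay, action 1 =
# step right; the self-loop at state 2 is an accepting (Buchi) transition.
N_STATES, ACTIONS = 3, (0, 1)
ACCEPTING = {(2, 0), (2, 1)}   # transitions treated as accepting

def step(s, a):
    return min(s + a, N_STATES - 1), (s, a) in ACCEPTING

def q_learn(episodes=2000, gamma=0.95, gamma_b=0.9, alpha=0.1, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        for _ in range(30):
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[s][x])
            s2, acc = step(s, a)
            # Biased discount: an accepting transition pays 1 - gamma_b and
            # is discounted by gamma_b; any other transition pays 0 and is
            # discounted by the ordinary gamma. This keeps Q-values in [0, 1]
            # and rewards progress toward accepting transitions early.
            r, g = ((1 - gamma_b), gamma_b) if acc else (0.0, gamma)
            Q[s][a] += alpha * (r + g * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

Under this shaping the accepting self-loop has value (1 − γ_B)/(1 − γ_B) = 1, so the learned policy is driven toward the accepting transitions well before an episode ends, which is the point of shaping the late-reaped reachability reward.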

Item Type: Article
Uncontrolled Keywords: cs.LO, cs.LG
Depositing User: Symplectic Admin
Date Deposited: 26 Jan 2021 10:32
Last Modified: 18 Jan 2023 23:02
Related URLs:
URI: https://livrepository.liverpool.ac.uk/id/eprint/3114824