How to Play in Infinite MDPs (Invited Talk)



Kiefer, Stefan, Mayr, Richard, Shirmohammadi, Mahsa, Totzke, Patrick ORCID: 0000-0001-5274-8190 and Wojtczak, Dominik ORCID: 0000-0001-5560-0546
(2020) How to Play in Infinite MDPs (Invited Talk).

Full text: !!how-to-play-infinite-mdp.pdf - Published version (573 kB)

Abstract

Markov decision processes (MDPs) are a standard model for dynamic systems that exhibit both stochastic and nondeterministic behavior. For MDPs with a finite state space, it is known that for a wide range of objectives there exist optimal strategies that are memoryless and deterministic. In contrast, if the state space is infinite, optimal strategies may not exist, and optimal or ε-optimal strategies may require (possibly infinite) memory. In this paper we consider qualitative objectives: reachability, safety, (co-)Büchi, and other parity objectives. We aim to give an introduction to a collection of techniques that allow for the construction of strategies with little or no memory in countably infinite MDPs.
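
To illustrate the point that optimal strategies may not exist in countably infinite MDPs, consider a standard toy construction (an illustration for this summary, not necessarily the example used in the paper): states s_1, s_2, ..., a target and a losing sink, where in s_n the controller can either "try", reaching the target with probability 1 - 2^{-n} and the sink otherwise, or "wait", moving surely to s_{n+1}. Waiting forever yields probability 0, and trying at s_n yields 1 - 2^{-n}, so the reachability value is 1 but no strategy attains it, while ε-optimal memoryless deterministic strategies do exist. The short Python sketch below (the state names and the helper success_probability are ours) simply evaluates the memoryless deterministic strategies "wait until s_n, then try".

# Minimal sketch of a countably infinite MDP with a reachability objective
# in which the value is 1 but no optimal strategy exists.
# In state s_n the controller may either
#   - "try":  reach the target with probability 1 - 2^{-n}, else fall into
#             a losing sink, or
#   - "wait": move surely to s_{n+1}.

def success_probability(try_at: int) -> float:
    """Probability of reaching the target under the memoryless deterministic
    strategy that waits until state s_{try_at} and then plays 'try'."""
    return 1.0 - 2.0 ** (-try_at)

if __name__ == "__main__":
    for n in (1, 5, 10, 30):
        print(f"try at s_{n}: P(reach target) = {success_probability(n):.10f}")
    # The printed values approach 1 but never reach it: epsilon-optimal
    # strategies exist for every epsilon > 0, yet no optimal strategy does.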

Item Type: Conference or Workshop Item (Unspecified)
Additional Information: URN: urn:nbn:de:0030-drops-124103
Depositing User: Symplectic Admin
Date Deposited: 28 Aug 2020 09:12
Last Modified: 18 Jan 2023 23:36
DOI: 10.4230/LIPIcs.ICALP.2020.3
URI: https://livrepository.liverpool.ac.uk/id/eprint/3098914