Optimal and approximate Q-value functions for decentralized POMDPs



Oliehoek, Frans A. (ORCID: 0000-0003-4372-5055), Spaan, Matthijs T. J. and Vlassis, Nikos
(2008) Optimal and approximate Q-value functions for decentralized POMDPs. Journal of Artificial Intelligence Research, 32, pp. 289-353.

Full text: 1111.0062v1.pdf (537kB)

Abstract

Decision-theoretic planning is a popular approach to sequential decision making problems, because it treats uncertainty in sensing and acting in a principled way. In single-agent frameworks like MDPs and POMDPs, planning can be carried out by resorting to Q-value functions: an optimal Q-value function Q* is computed in a recursive manner by dynamic programming, and then an optimal policy is extracted from Q*. In this paper we study whether similar Q-value functions can be defined for decentralized POMDP models (Dec-POMDPs), and how policies can be extracted from such value functions. We define two forms of the optimal Q-value function for Dec-POMDPs: one that gives a normative description as the Q-value function of an optimal pure joint policy and another one that is sequentially rational and thus gives a recipe for computation. This computation, however, is infeasible for all but the smallest problems. Therefore, we analyze various approximate Q-value functions that allow for efficient computation. We describe how they relate, and we prove that they all provide an upper bound to the optimal Q-value function Q*. Finally, unifying some previous approaches for solving Dec-POMDPs, we describe a family of algorithms for extracting policies from such Q-value functions, and perform an experimental evaluation on existing test problems, including a new firefighting benchmark problem.
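As an aside on the single-agent procedure the abstract contrasts with the Dec-POMDP case, the following is a minimal illustrative sketch (not taken from the paper) of computing an optimal Q-value function Q* for a small, randomly generated MDP by dynamic programming and then extracting a greedy policy from it. All sizes, rewards, and transition probabilities below are invented for the example.

    # Illustrative sketch only: Q-value iteration for a tiny, made-up MDP,
    # followed by greedy policy extraction. Not code from the paper.
    import numpy as np

    n_states, n_actions, gamma = 3, 2, 0.95

    # T[s, a, s'] = transition probability, R[s, a] = expected immediate reward.
    rng = np.random.default_rng(0)
    T = rng.random((n_states, n_actions, n_states))
    T /= T.sum(axis=2, keepdims=True)          # normalize to valid distributions
    R = rng.random((n_states, n_actions))

    # Dynamic programming on the Bellman optimality equation:
    # Q*(s,a) = R(s,a) + gamma * sum_{s'} T(s,a,s') * max_{a'} Q*(s',a')
    Q = np.zeros((n_states, n_actions))
    for _ in range(1000):
        Q_new = R + gamma * (T @ Q.max(axis=1))
        if np.max(np.abs(Q_new - Q)) < 1e-8:   # converged
            Q = Q_new
            break
        Q = Q_new

    # An optimal policy is extracted by acting greedily with respect to Q*.
    policy = Q.argmax(axis=1)
    print("Q*:\n", Q)
    print("greedy policy:", policy)

In the decentralized setting studied in the paper, this simple recipe no longer applies directly, which is what motivates the optimal and approximate Q-value functions the abstract describes.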

Item Type: Article
Uncontrolled Keywords: cs.AI
Date Deposited: 21 Apr 2016 09:16
Last Modified: 16 Dec 2022 00:07
DOI: 10.1613/jair.2447
URI: https://livrepository.liverpool.ac.uk/id/eprint/3000372