Learning in POMDPs with Monte Carlo Tree Search



Katt, Sammie, Oliehoek, Frans A. (ORCID: 0000-0003-4372-5055) and Amato, Christopher
(2017) Learning in POMDPs with Monte Carlo Tree Search. In: 34th International Conference on Machine Learning (ICML 2017).

Text: Katt17ICML.pdf - Author Accepted Manuscript (489kB)
Access to this file is embargoed (end date unspecified).

Abstract

The partially observable Markov decision process (POMDP) is a powerful framework for reasoning under outcome and information uncertainty, but constructing an accurate POMDP model is difficult. Bayes-Adaptive Partially Observable Markov Decision Processes (BA-POMDPs) extend POMDPs to allow the model to be learned during execution. BA-POMDPs are a Bayesian RL approach that, in principle, allows for an optimal trade-off between exploitation and exploration. Unfortunately, BA-POMDPs are currently impractical to solve for any non-trivial domain. In this paper, we extend the Monte Carlo Tree Search method POMCP to BA-POMDPs and show that the resulting method, which we call BA-POMCP, is able to tackle problems that previous solution methods have been unable to solve. Additionally, we introduce several techniques that exploit the BA-POMDP structure to improve the efficiency of BA-POMCP, along with proofs of their convergence.
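To make the core idea concrete, below is a minimal Python sketch of a POMCP-style planner for a BA-POMDP, assembled from the abstract's description rather than from the paper itself: the unknown dynamics are represented by Dirichlet counts over (state, action) outcomes, each simulation samples transitions from those counts, and the counts evolve inside the lookahead so that planning accounts for what the agent will learn. The class name BAPOMCP, the reward_fn callback, and the belief_state_sampler argument are illustrative assumptions, not the authors' API, and this naive version is not their optimized algorithm.

import copy
import math
import random
from collections import defaultdict

class BAPOMCP:
    """Sketch of a POMCP-style planner for a Bayes-Adaptive POMDP (assumed interface)."""

    def __init__(self, states, actions, observations, discount=0.95, c_uct=1.0):
        self.states = states
        self.actions = actions
        self.observations = observations
        self.discount = discount
        self.c_uct = c_uct
        # Dirichlet counts chi[(s, a)][(s', o)], initialised to a uniform prior of 1.
        self.counts = defaultdict(lambda: defaultdict(lambda: 1.0))
        # UCT statistics keyed by simulation history (tuple of actions/observations).
        self.N = defaultdict(int)      # visit counts
        self.Q = defaultdict(float)    # mean returns

    def sample_step(self, s, a, counts):
        """Sample (s', o) in proportion to the Dirichlet counts for (s, a)."""
        outcomes = [(sp, o) for sp in self.states for o in self.observations]
        weights = [counts[(s, a)][out] for out in outcomes]
        r = random.random() * sum(weights)
        for out, w in zip(outcomes, weights):
            r -= w
            if r <= 0:
                return out
        return outcomes[-1]

    def simulate(self, s, history, depth, counts, reward_fn):
        """One rollout through the search tree, updating UCT statistics."""
        if depth == 0:
            return 0.0

        def ucb(a):
            h_a = history + (a,)
            if self.N[h_a] == 0:
                return float("inf")  # force one visit per action first
            return self.Q[h_a] + self.c_uct * math.sqrt(
                math.log(self.N[history] + 1) / self.N[h_a])

        a = max(self.actions, key=ucb)
        sp, o = self.sample_step(s, a, counts)
        r = reward_fn(s, a, sp)  # hypothetical reward callback
        # Bayes-adaptive step: the simulated posterior evolves inside the lookahead.
        counts[(s, a)][(sp, o)] += 1.0
        ret = r + self.discount * self.simulate(
            sp, history + (a, o), depth - 1, counts, reward_fn)
        h_a = history + (a,)
        self.N[history] += 1
        self.N[h_a] += 1
        self.Q[h_a] += (ret - self.Q[h_a]) / self.N[h_a]
        return ret

    def plan(self, belief_state_sampler, reward_fn, n_sims=500, depth=15):
        """Run simulations from sampled belief states; return the greedy root action."""
        for _ in range(n_sims):
            s = belief_state_sampler()
            sim_counts = copy.deepcopy(self.counts)  # fresh copy per simulation
            self.simulate(s, (), depth, sim_counts, reward_fn)
        return max(self.actions, key=lambda a: self.Q[(a,)])

Note that this sketch deep-copies the full count vectors for every simulation, which is exactly the kind of cost the paper's structure-exploiting techniques are said to reduce; the sketch illustrates the model-learning-during-planning idea, not an efficient implementation.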

Item Type: Conference or Workshop Item (Unspecified)
Uncontrolled Keywords: cs.AI, cs.LG
Depositing User: Symplectic Admin
Date Deposited: 13 Jun 2017 07:52
Last Modified: 19 Jan 2023 07:03
URI: https://livrepository.liverpool.ac.uk/id/eprint/3007954