On gradual-impulse control of continuous-time Markov decision processes with exponential utility



Guo, X, Kurushima, A, Piunovskiy, A ORCID: 0000-0002-9683-4856 and Zhang, Y ORCID: 0000-0002-3200-6306
(2021) On gradual-impulse control of continuous-time Markov decision processes with exponential utility. Advances in Applied Probability, 53 (2), pp. 301-334. ISSN 0001-8678, 1475-6064

AAP.pdf - Author Accepted Manuscript

Abstract

We consider a gradual-impulse control problem of continuous-time Markov decision processes, where the system performance is measured by the expectation of the exponential utility of the total cost. We show, under natural conditions on the system primitives, the existence of a deterministic stationary optimal policy out of a more general class of policies that allow multiple simultaneous impulses, randomized selection of impulses with random effects, and accumulation of jumps. After characterizing the value function using the optimality equation, we reduce the gradual-impulse control problem to an equivalent simple discrete-time Markov decision process, whose action space is the union of the sets of gradual and impulsive actions.
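To illustrate the kind of reduction the abstract describes, the following is a minimal, entirely hypothetical sketch (not from the paper): a toy two-state discrete-time MDP whose action set at each state is the union of a "gradual" and an "impulsive" action, solved by the multiplicative (risk-sensitive) Bellman recursion associated with an exponential utility of the total cost, V_n(x) = min_a exp(c(x,a)) Σ_y p(y|x,a) V_{n-1}(y) with V_0 = 1. All states, costs, and transition probabilities below are invented for illustration.

```python
import math

# Hypothetical toy example: a 2-state discrete-time MDP whose action set is
# the union of "gradual" and "impulsive" actions. The value V_n(x) below is
# the minimal E[exp(total n-step cost) | X_0 = x], computed via the
# multiplicative Bellman recursion V_n(x) = min_a exp(c(x,a)) * sum_y p(y|x,a) V_{n-1}(y).

STATES = [0, 1]
ACTIONS = {0: ["gradual", "impulse"], 1: ["gradual", "impulse"]}
COST = {  # one-step cost c(x, a), invented values
    (0, "gradual"): 1.0, (0, "impulse"): 2.0,
    (1, "gradual"): 0.5, (1, "impulse"): 1.5,
}
TRANS = {  # transition probabilities p(y | x, a), invented values
    (0, "gradual"): {0: 0.7, 1: 0.3},
    (0, "impulse"): {0: 0.0, 1: 1.0},
    (1, "gradual"): {0: 0.2, 1: 0.8},
    (1, "impulse"): {0: 1.0, 1: 0.0},
}

def value_iteration(horizon):
    """n-step value functions and greedy policy for the exponential utility."""
    v = {x: 1.0 for x in STATES}  # V_0 = exp(0) = 1
    policy = {}
    for _ in range(horizon):
        new_v = {}
        for x in STATES:
            best = None
            for a in ACTIONS[x]:
                q = math.exp(COST[(x, a)]) * sum(
                    p * v[y] for y, p in TRANS[(x, a)].items()
                )
                if best is None or q < best[0]:
                    best = (q, a)
            new_v[x] = best[0]
            policy[x] = best[1]
        v = new_v
    return v, policy

v, policy = value_iteration(horizon=5)
print(v, policy)
```

The resulting policy is deterministic and stationary in this finite toy setting, which is the flavor of the existence result stated in the abstract, though the paper's actual model is in continuous time with a far richer policy class.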

Item Type: Article
Uncontrolled Keywords: Continuous-time Markov decision processes, dynamic programming, gradual-impulse control, optimality equation
Depositing User: Symplectic Admin
Date Deposited: 14 Sep 2020 08:41
Last Modified: 21 Jan 2026 20:55
DOI: 10.1017/apr.2020.64
URI: https://livrepository.liverpool.ac.uk/id/eprint/3100817