Exploiting Locality of Interaction in Factored Dec-POMDPs

Frans Oliehoek, Matthijs Spaan, Shimon Whiteson and Nikos Vlassis

Abstract

Decentralized partially observable Markov decision processes (Dec-POMDPs) constitute a generic and expressive framework for multiagent planning under uncertainty, but solving them exactly is provably intractable. In this paper we demonstrate how their scalability can be improved by exploiting locality of interaction between agents via a factored representation. Factored Dec-POMDP representations have been proposed before, but only for Dec-POMDPs whose transition and observation models are fully independent. Such strong assumptions simplify the planning problem, but they result in models with limited applicability. In contrast, we consider general factored Dec-POMDPs, for which we analyze the model dependencies over space (locality of interaction) and time (horizon of the problem). We also present a formulation of decomposable optimal and approximate value functions for our model. Together, our results allow us to exploit problem structure as well as heuristics in a single framework based on collaborative graphical Bayesian games (CGBGs). Our experiments show a speedup of two orders of magnitude.

Book Title
AAMAS 2008: Proceedings of the Seventh International Joint Conference on Autonomous Agents and Multi-Agent Systems
Month
May
Pages
517–524
Year
2008