A more robust synchronization for MDPs?
Mahsa Shirmohammadi (ENS Cachan)
We consider synchronizing properties of Markov decision processes (MDPs), viewed as generators of sequences of probability distributions over states. A probability distribution is p-synchronized if a probability mass of at least p lies in some state (or group of states). A sequence of probability distributions is eventually p-synchronizing (resp. weakly p-synchronizing) if some distribution (resp. infinitely many distributions) in the sequence is p-synchronized.

For each synchronizing mode, an MDP can be (i) sure winning, if there is a strategy that produces a 1-synchronizing sequence; (ii) almost-sure winning, if there is a strategy that produces a sequence that is, for all epsilon > 0, (1-epsilon)-synchronizing; (iii) limit-sure winning, if for all epsilon > 0 there is a strategy that produces a (1-epsilon)-synchronizing sequence. For each synchronizing and winning mode, we study the problem of deciding whether an MDP is winning.

In this talk, we present the memory requirements of winning strategies in each mode, for both eventually and weakly synchronizing objectives. Moreover, we sketch the argument for PSPACE membership of deciding whether an MDP is limit-sure winning for the eventually synchronizing objective. We give examples and arguments showing that the winning regions of almost-sure and limit-sure eventually synchronizing do not coincide, whereas these regions do coincide for weakly synchronizing, yielding a more robust definition of synchronization. Part of the results of this talk was presented at FOSSACS 2014, and the remainder will be presented at CONCUR 2014.
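To make the definitions concrete, here is a minimal sketch (not from the talk; all function names are illustrative) that checks, on a finite prefix of a distribution sequence, whether the p-synchronization conditions hold:

```python
from typing import Dict, List, Optional, Sequence

def is_p_synchronized(dist: Dict[str, float], p: float,
                      group: Optional[List[str]] = None) -> bool:
    """A distribution is p-synchronized if probability mass >= p lies
    in some single state (or, if given, within the designated group)."""
    if group is not None:
        return sum(dist.get(s, 0.0) for s in group) >= p
    return max(dist.values()) >= p

def eventually_p_sync(seq: Sequence[Dict[str, float]], p: float) -> bool:
    """Eventually p-synchronizing: some distribution in the sequence
    is p-synchronized (here checked on a finite prefix)."""
    return any(is_p_synchronized(d, p) for d in seq)

def weak_sync_witnesses(seq: Sequence[Dict[str, float]], p: float) -> int:
    """Weakly p-synchronizing asks for infinitely many p-synchronized
    distributions; on a finite prefix we can only count witnesses."""
    return sum(is_p_synchronized(d, p) for d in seq)

# Example: mass drifts from a uniform distribution into state 'a'.
seq = [{'a': 0.5, 'b': 0.5}, {'a': 0.9, 'b': 0.1}, {'a': 0.95, 'b': 0.05}]
print(eventually_p_sync(seq, 0.9))   # True: the second distribution is 0.9-synchronized
print(weak_sync_witnesses(seq, 0.9)) # 2 witnesses in this prefix
```

The almost-sure vs. limit-sure distinction is about quantifier order: almost-sure fixes one strategy whose sequence is (1-epsilon)-synchronizing for every epsilon > 0, while limit-sure allows a different strategy for each epsilon.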