Computing Probabilistic Bisimilarity Distances via Policy Iteration
- 11:00, 4th May 2016 (Week 2, Trinity Term 2016), Room 441, Wolfson Building
A transformation mapping a labelled Markov chain to a simple stochastic game is presented. In the resulting simple stochastic game, each vertex corresponds to a pair of states of the labelled Markov chain. The value of a vertex of the simple stochastic game is shown to be equal to the probabilistic bisimilarity distance, a notion due to Desharnais, Gupta, Jagadeesan and Panangaden, of the corresponding pair of states of the labelled Markov chain. Bacci, Bacci, Larsen and Mardare introduced an algorithm to compute the probabilistic bisimilarity distances for a labelled Markov chain. A modification of a basic version of their algorithm is shown to be the policy iteration algorithm applied to the corresponding simple stochastic game. Furthermore, this algorithm is shown to take exponential time in the worst case. Finally, experimental results confirm that it nevertheless performs better in practice than other algorithms for computing the probabilistic bisimilarity distances.
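To give a feel for the kind of computation the talk discusses, here is a minimal, self-contained sketch of policy iteration on a simple stochastic game. The vertex kinds, the toy game, and all function names below are illustrative assumptions, not the construction from the talk: a MIN vertex's strategy is fixed and improved round by round, with the fixed-strategy values approximated by value iteration.

```python
# A minimal sketch of policy iteration on a tiny simple stochastic game.
# Vertex kinds: MAX (maximiser picks a successor), MIN (minimiser picks),
# AVG (moves to each listed successor with equal probability), SINK (fixed
# value). The game below is a toy example, not the one built from a
# labelled Markov chain in the talk.

def solve_fixed_policy(game, policy, iters=1000):
    """Approximate the values when the MIN player's choices are fixed."""
    values = {v: (spec[1] if spec[0] == "SINK" else 0.0)
              for v, spec in game.items()}
    for _ in range(iters):
        for v, (kind, succ) in game.items():
            if kind == "MAX":
                values[v] = max(values[w] for w in succ)
            elif kind == "MIN":
                values[v] = values[policy[v]]
            elif kind == "AVG":
                values[v] = sum(values[w] for w in succ) / len(succ)
    return values

def policy_iteration(game):
    # Start from an arbitrary MIN strategy: the first listed successor.
    policy = {v: spec[1][0] for v, spec in game.items() if spec[0] == "MIN"}
    while True:
        values = solve_fixed_policy(game, policy)
        # Improvement step: each MIN vertex switches to a successor of
        # minimal value under the current strategy.
        improved = {v: min(game[v][1], key=lambda w: values[w])
                    for v in policy}
        if improved == policy:
            return values
        policy = improved

# Toy game: a MIN vertex choosing between an AVG vertex (value 1/2)
# and a MAX vertex that can reach the 1-sink directly (value 1).
game = {
    "one":  ("SINK", 1.0),
    "zero": ("SINK", 0.0),
    "avg":  ("AVG", ["one", "zero"]),
    "max":  ("MAX", ["avg", "one"]),
    "min":  ("MIN", ["avg", "max"]),
}
print(policy_iteration(game)["min"])  # the MIN vertex settles on "avg"
```

The improvement step here mirrors the general scheme: solve the game under a fixed strategy, then let the strategic player deviate wherever a strictly better successor exists, terminating when no deviation helps.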
Joint work with Qiyi Tang.
Franck received his MSc from Eindhoven University of Technology and carried out his PhD research at the Dutch Centre for Mathematics and Computer Science and the Free University Amsterdam. Before joining York University, he spent some time at McGill University, the University of Pisa and the Danish Institute on Basic Research in Computer Science. In 2004-05, he was a visiting scholar at the University of Cambridge. In 2011-13, he was a Leverhulme visiting professor at the University of Oxford. His research interests include concurrency theory and verification.