Researchers win CAV 2025 Distinguished Paper Award for work on model checking
Posted: 4th August 2025
DPhil student Zainab Fatmi, Professor Stefan Kiefer and Professor David Parker, together with Professor Franck van Breugel from York University (Toronto, Canada), have won a Distinguished Paper Award at this year’s International Conference on Computer Aided Verification (CAV 2025). Their paper develops a new method for simplifying the probabilistic models used in formal verification.
Model checking is a way to automatically check that a system (hardware or software) meets a desired property. The model checker takes as input a model of the system, represented as an automaton or transition system, together with the property to be checked. A classic challenge in model checking is the state-space explosion problem: as a system grows, the number of possible states in its model can grow exponentially, making exhaustive checking infeasible. Models of modern systems can contain an enormous number of states and, in practice, a model checker may run out of time or memory before exploring them all.
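To see how quickly this blow-up bites, consider a toy system made of n independent on/off components: each extra component doubles the number of global states. The short Python sketch below is purely illustrative and is not code from the paper.

```python
from itertools import product

def enumerate_states(n):
    """Explicitly list every global state of a system of n independent
    on/off components (feasible only for small n)."""
    return list(product((0, 1), repeat=n))

print(len(enumerate_states(10)))  # 1,024 states: easy to explore exhaustively
print(2 ** 40)                    # 40 components: over 10^12 states, far more
                                  # than an explicit-state checker can store
```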
One method that is used to shrink the state space is to merge states that are probabilistically bisimilar (i.e. behave equivalently), yielding a minimised model that preserves the properties of the original. However, probabilistic bisimilarity has a subtle but serious drawback: it is not robust with respect to the transition probabilities of the model. In practice, these probabilities are often approximated, and even tiny changes to them can break the equivalence: two states may be probabilistically bisimilar, and thus behave identically, yet behave drastically differently once the probabilities shift slightly. When the transition probabilities are not known precisely, merging such states risks producing incorrect model checking results.
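This fragility is easy to see on a tiny example. The sketch below is illustrative only, using a naive partition-refinement check rather than the authors' algorithms: it builds a four-state Markov chain in which two states are probabilistically bisimilar, then shows that shifting one transition probability by just 1/1000 destroys the equivalence.

```python
from fractions import Fraction

def prob_bisim_classes(states, labels, P):
    """Coarsest probabilistic bisimulation of a labelled Markov chain,
    computed by naive partition refinement. P[s][t] is the probability
    of moving from state s to state t."""
    def signature(s, partition):
        # Probability mass that s sends into each block of the partition.
        return tuple(sum(P[s].get(t, Fraction(0)) for t in block)
                     for block in partition)

    # Start by grouping states that carry the same label.
    partition = [frozenset(s for s in states if labels[s] == lab)
                 for lab in sorted(set(labels.values()))]
    while True:
        refined = []
        for block in partition:
            groups = {}
            for s in block:
                groups.setdefault(signature(s, partition), set()).add(s)
            refined.extend(frozenset(g) for g in groups.values())
        if len(refined) == len(partition):  # no block was split: fixed point
            return refined
        partition = refined

states = ["s", "t", "goal", "fail"]
labels = {"s": "start", "t": "start", "goal": "goal", "fail": "fail"}
half = Fraction(1, 2)
P = {"s": {"goal": half, "fail": half},
     "t": {"goal": half, "fail": half},
     "goal": {"goal": Fraction(1)},
     "fail": {"fail": Fraction(1)}}
print([sorted(b) for b in prob_bisim_classes(states, labels, P)])
# -> 3 blocks, with s and t sharing one: they can safely be merged.

# Shift one probability by just 1/1000 and the equivalence vanishes.
d = Fraction(1, 1000)
P_shifted = {**P, "t": {"goal": half - d, "fail": half + d}}
print([sorted(b) for b in prob_bisim_classes(states, labels, P_shifted)])
# -> 4 singleton blocks: no states can be merged any more.
```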
To address this, the paper defines a new notion of equivalence called robust probabilistic bisimilarity. Intuitively, this ensures that if two states are deemed equivalent, they remain approximately equivalent under small changes to the probabilities (the toy check at the end of this post illustrates the idea). Robust probabilistic bisimilarity addresses a broader concern in formal verification: ensuring that small modelling errors do not cause major verification failures, which is increasingly important as systems become more complex and data-driven. Robustness is a hot topic across computing today; in AI, for example, small input variations can cause LLMs to generate dramatically incorrect responses (so-called hallucinations). In a similar spirit, the authors plan to continue this research to improve the correctness and reliability of the computer systems we depend on every day.
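As a final toy illustration of the idea, the hypothetical check below reuses prob_bisim_classes and the unshifted Markov chain from the earlier sketch. It declares two states stable only if they stay bisimilar under every tested shift of size at most eps; this is a brute-force caricature of quantifying over perturbations, not the definition or algorithm from the paper.

```python
# Reuses prob_bisim_classes, states, labels and the unshifted P from above.
def stays_bisimilar_under_shifts(s, t, eps, steps=10):
    """Brute-force probe: do s and t remain exactly bisimilar for every
    tested shift of magnitude at most eps in t's transition probabilities?"""
    for k in range(-steps, steps + 1):
        d = Fraction(k, steps) * eps
        Q = {**P, t: {"goal": P[t]["goal"] + d, "fail": P[t]["fail"] - d}}
        blocks = prob_bisim_classes(states, labels, Q)
        if not any(s in b and t in b for b in blocks):
            return False  # one small shift already separates s from t
    return True

print(stays_bisimilar_under_shifts("s", "t", Fraction(1, 100)))
# -> False: classical bisimilarity fails for every nonzero shift, which is
#    precisely the fragility that a robust notion is designed to guard against.
```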